  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
571

Higher Order Neural Networks and Neural Networks for Stream Learning

Dong, Yue January 2017 (has links)
The goal of this thesis is to explore some variations of neural networks. The thesis is split into two parts: a variation of the shaping functions in neural networks, and a variation of the learning rules. In the first part, we investigate polynomial perceptrons - perceptrons with a polynomial shaping function instead of a linear one. We prove the polynomial perceptron convergence theorem and illustrate the notion by showing empirically, with an implementation, that a higher-order perceptron can learn the XOR function. In the second part, we propose three models (SMLP, SA, SA2) for stream learning and anomaly detection in streams. The main technique that allows these models to perform at a level comparable to state-of-the-art stream-learning algorithms is the learning rule used: we employ mini-batch and stochastic gradient descent to speed up the models. In addition, multi-threaded parallel processing makes the proposed methods highly efficient on streaming data. Our analysis shows that all models have linear runtime and constant memory requirements. We also demonstrate empirically that the proposed methods feature a high detection rate, a low false-alarm rate, and fast response. The paper on the first two models (SMLP, SA) was published at the 29th Canadian AI Conference and won the best paper award. The invited journal paper on the third model (SA2), for Computational Intelligence, is under peer review.
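The higher-order perceptron idea in this abstract can be sketched in a few lines: lifting the XOR inputs into a degree-2 monomial feature space (bias, x1, x2, x1·x2) makes the classes linearly separable, so the ordinary perceptron learning rule converges. This is an illustrative sketch, not the thesis's implementation; the feature map, learning rate, and epoch count are assumptions.

```python
def poly_features(x1, x2):
    # Degree-2 monomial feature map: bias, x1, x2, x1*x2
    return [1.0, x1, x2, x1 * x2]

def train_poly_perceptron(data, epochs=50, lr=1.0):
    # Standard perceptron rule applied in the polynomial feature space
    w = [0.0] * 4
    for _ in range(epochs):
        for (x1, x2), target in data:
            phi = poly_features(x1, x2)
            pred = 1 if sum(wi * pi for wi, pi in zip(w, phi)) > 0 else 0
            err = target - pred
            if err:
                w = [wi + lr * err * pi for wi, pi in zip(w, phi)]
    return w

def predict(w, x1, x2):
    phi = poly_features(x1, x2)
    return 1 if sum(wi * pi for wi, pi in zip(w, phi)) > 0 else 0

# XOR is not linearly separable in input space, but is in the lifted space
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = train_poly_perceptron(xor_data)
```

Because the lifted data is separable, the perceptron convergence theorem guarantees a finite number of weight updates here.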
572

Hardware implementation of the complex Hopfield neural network

Cheng, Chih Kang 01 January 1995 (has links)
No description available.
573

META-LEARNING AND ENSEMBLE METHODS FOR DEEP NEURAL NETWORKS

Unknown Date (has links)
Deep neural networks have been widely applied in many different applications and achieve significant improvements over classical machine learning techniques. However, training a neural network usually requires a large amount of data, which is not guaranteed in some applications such as medical image classification. To address this issue, meta-learning and ensemble-learning techniques have been proposed to make deep learning more powerful. This thesis focuses on using deep learning equipped with meta learning and ensemble learning to study specific problems. We first propose a new deep learning based method for suggestion mining. The major challenges of suggestion mining include the cross-domain issue and the issues caused by unstructured and highly imbalanced data. To overcome these challenges, we propose to apply Random Multi-model Deep Learning (RMDL), which combines three different deep learning architectures (DNNs, RNNs, and CNNs) and automatically selects the optimal hyperparameters to improve the robustness and flexibility of the model. Our experimental results on the SemEval-2019 Task 9 competition data sets demonstrate that our proposed RMDL outperforms most existing suggestion mining methods. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
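The RMDL approach described above trains several models with randomly drawn hyperparameters and combines their predictions by majority vote. A minimal sketch of those two ingredients follows; the hyperparameter search space shown is a hypothetical placeholder, not RMDL's actual one, and the per-model training is elided.

```python
import random
from collections import Counter

def random_hyperparams(rng):
    # Hypothetical search space: each ensemble member draws its own
    # depth and width at random, as in RMDL's random model generation
    return {"layers": rng.randint(1, 4), "units": rng.choice([32, 64, 128])}

def majority_vote(predictions):
    # predictions: one list of predicted labels per model, all the same length;
    # the ensemble label for each sample is the most common per-model label
    ensemble = []
    for sample_preds in zip(*predictions):
        ensemble.append(Counter(sample_preds).most_common(1)[0][0])
    return ensemble
```

With three models voting, a single model's mistake on a sample is outvoted as long as the other two agree, which is the robustness argument behind the ensemble.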
574

Liver Cancer Risk Quantification through an Artificial Neural Network based on Personal Health Data

Unknown Date (has links)
Liver cancer is the sixth most common type of cancer worldwide and the third leading cause of cancer-related mortality. Several types of cancer can form in the liver. Hepatocellular carcinoma (HCC) makes up 75%-85% of all primary liver cancers and is a malignant disease with limited therapeutic options due to its aggressive progression. While the exact cause of liver cancer may not be known, habits and lifestyle may increase the risk of developing the disease. Several risk prediction models for HCC are available for individuals with hepatitis B and C virus infections, who are at high risk, but not for the general population. To address this challenge, an artificial neural network (ANN) was developed, trained, and tested using personal health data to predict liver cancer risk. Our results indicate that our ANN can be used to predict how liver cancer risk changes with lifestyle and may provide a novel approach to identify patients at higher risk who could benefit from early diagnosis. / Includes bibliography. / Thesis (PMS)--Florida Atlantic University, 2021. / FAU Electronic Theses and Dissertations Collection
575

Rozpoznávání ručně psaného textu pomocí hlubokých neuronových sítí / Deep Networks for Handwriting Recognition

Richtarik, Lukáš January 2020 (has links)
The work deals with the problem of handwritten text recognition using deep neural networks. It focuses on the sequence-to-sequence approach using an encoder-decoder model. It also includes the design of an encoder-decoder model for handwritten text recognition that uses a transformer instead of recurrent units, and a set of experiments performed on it.
576

Implementace struktur FPNN v C++ / C++ Implementation of FPNN Structures

Pánek, Richard January 2016 (has links)
This master's thesis deals with the design and C++ implementation of a simulator of Field Programmable Neural Networks (FPNNs). It briefly introduces the concept of artificial neural networks, as it is the base of the FPNN concept, and presents the FPNN formal definitions and calculation methods. The thesis also describes the special features of FPNNs and the differences between FPNNs and classic neural networks. Furthermore, it deals with models of fault-tolerant FPNNs. All the presented principles form the base of the developed implementation and the subsequent experiments.
577

An Analysis of Overfitting in Particle Swarm Optimised Neural Networks

van Wyk, Andrich Benjamin January 2014 (has links)
The phenomenon of overfitting, where a feed-forward neural network (FFNN) overtrains on training data at the cost of generalisation accuracy, is known to be specific to the training algorithm used. This study investigates overfitting within the context of particle swarm optimised (PSO) FFNNs. Two of the most widely used PSO algorithms are compared in terms of FFNN accuracy, and a description of the overfitting behaviour is established. Each of the PSO components is in turn investigated to determine its effect on FFNN overfitting. A study of the maximum velocity (Vmax) parameter is performed, and it is found that smaller Vmax values are optimal for FFNN training. The analysis is extended to the inertia and acceleration coefficient parameters, where it is shown that specific interactions among the parameters have a dominant effect on the resultant FFNN accuracy and may be used to reduce overfitting. Further, the significant effect of the swarm size on network accuracy is also shown, with a critical range being identified for the swarm size for effective training. The study is concluded with an investigation into the effect of the different activation functions. Given strong empirical evidence, a hypothesis is made that the gradient of the activation function significantly affects the convergence of the PSO. Lastly, the PSO is shown to be a very effective algorithm for the training of self-adaptive FFNNs, capable of learning from unscaled data. / Dissertation (MSc)--University of Pretoria, 2014. / tm2015 / Computer Science / MSc / Unrestricted
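The training setup studied in this dissertation — a particle swarm optimising FFNN weights, including the Vmax clamp whose size the author analyses — can be sketched as follows. The network shape (2-2-1 on XOR), the inertia and acceleration constants, and the swarm size are illustrative defaults, not the study's tuned values.

```python
import math
import random

def mse(weights, data):
    # MSE of a tiny 2-2-1 feed-forward net with tanh hidden units;
    # weights is a flat vector of length 9 (two hidden neurons + output)
    err = 0.0
    for (x1, x2), t in data:
        h1 = math.tanh(weights[0] * x1 + weights[1] * x2 + weights[2])
        h2 = math.tanh(weights[3] * x1 + weights[4] * x2 + weights[5])
        y = weights[6] * h1 + weights[7] * h2 + weights[8]
        err += (y - t) ** 2
    return err / len(data)

def pso_train(data, dim=9, swarm=20, iters=200,
              inertia=0.729, c1=1.49, c2=1.49, vmax=0.5):
    random.seed(0)  # deterministic for the sketch
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [mse(p, data) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))  # the Vmax clamp
                pos[i][d] += vel[i][d]
            f = mse(pos[i], data)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

xor = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
weights, loss = pso_train(xor)
```

Shrinking `vmax` restricts the step a particle can take per dimension per iteration, which is the mechanism behind the study's finding that smaller Vmax values reduce overfitting during FFNN training.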
578

RESOURCE MANAGEMENT IN EDGE COMPUTING FOR INTERNET OF THINGS APPLICATIONS

Galanis, Ioannis 01 December 2020 (has links)
The Internet of Things (IoT) computing paradigm has connected smart objects, "things", and has brought new services to the proximity of the user. Edge Computing (EC), a natural evolution of the traditional IoT, has been proposed to deal with the ever-increasing (i) number of IoT devices and (ii) amount of data traffic produced by IoT endpoints. EC promises to significantly reduce the unwanted latency imposed by multi-hop communication delays: instead of uploading all data to the remote cloud for further processing, it is beneficial to perform computation at the "edge" of the network, close to where the data is produced. However, bringing computation to the edge level has created numerous challenges, as edge devices struggle to keep up with growing application requirements (e.g. neural networks or video-based analytics). In this thesis, we adopt the EC paradigm and aim at addressing these open challenges. Our goal is to bridge the performance gap between the increased requirements of IoT applications and the capabilities of IoT platforms, and to provide latency- and energy-efficient computation at the edge. Our first step is to study the performance of IoT applications that are based on Deep Neural Networks (DNNs). The exploding need to deploy DNN-based applications on resource-constrained edge devices has created several challenges, mainly due to the complex nature of DNNs. DNNs are becoming deeper and wider in order to fulfill users' expectations of high accuracy, while they also become power hungry. For instance, executing a DNN on an edge device can drain the battery within minutes. Our solution to make DNNs more energy- and inference-friendly is a hardware-aware method that re-designs a given DNN architecture. Instead of proxy metrics, we measure DNN performance on real edge devices, capturing their energy and inference time.
Our method manages to find alternative DNN architectures that consume up to 78.82% less energy and are up to 35.71% faster than the reference networks. In order to achieve end-to-end optimal performance, we also need to manage the resources of the edge device that will execute a DNN-based application. Due to their unique characteristics, we distinguish edge devices into two categories: (i) neuromorphic platforms designed to execute Spiking Neural Networks (SNNs), and (ii) general-purpose edge devices suitable to host a DNN. For the first category, we train a traditional DNN and then convert it to a spiking representation. We target the SpiNNaker neuromorphic platform and develop a novel technique that efficiently configures the platform-dependent parameters in order to achieve the highest possible SNN accuracy. Experimental results show that our technique is 2.5× faster than an exhaustive approach and can reach up to 0.8% higher accuracy compared to a CPU-based simulation method. Regarding the general-purpose edge devices, we show that a DNN-unaware platform can result in sub-optimal DNN performance in terms of power and inference time. Our approach configures the frequency of the device components (GPU, CPU, memory) and achieves an average of 33.4% and up to 66.3% inference time improvement, and an average of 42.8% and up to 61.5% power savings, compared to the predefined configuration of an edge device. The last part of this thesis is the offloading optimization between the edge devices and the gateway. The offloaded tasks create contention effects on the gateway, which can lead to application slowdown. Our proposed solution configures (i) the number of application stages that are executed on each edge device, and (ii) the achieved utility in terms of Quality of Service (QoS) on each edge device. Our technique manages to (i) maximize the overall QoS, and (ii) simultaneously satisfy network constraints (bandwidth) and user expectations (execution time). In the case of multi-gateway deployments, we tackle the problem of unequal workload distribution. In particular, we propose a workload-aware management scheme that performs intra- and inter-gateway optimizations. The intra-gateway mechanism provides a balanced execution environment for the applications and achieves up to 95% performance-deviation improvement compared to un-optimized systems. The presented inter-gateway method manages to balance the workload among multiple gateways and is able to achieve a global performance threshold.
579

Exploration of hierarchical leadership and connectivity in neural networks in vitro.

Ham, Michael I. 12 1900 (has links)
Living neural networks are capable of processing information much faster than a modern computer, despite running at significantly lower clock speeds. Therefore, understanding the mechanisms neural networks utilize is an issue of substantial importance. Neuronal interaction dynamics were studied using histiotypic networks growing on microelectrode arrays in vitro. Hierarchical relationships were explored using bursting (when many neurons fire in a short time frame) dynamics, pairwise neuronal activation, and information theoretic measures. Together, these methods reveal that global network activity results from ignition by a small group of burst leader neurons, which form a primary circuit that is responsible for initiating most network-wide burst events. Phase delays between leaders and followers reveal information about the nature of the connection between the two. Physical distance from a burst leader appears to be an important factor in follower response dynamics. Information theory reveals that mutual information between neuronal pairs is also a function of physical distance. Activation relationships in developing networks were studied and plating density was found to play an important role in network connectivity development. These measures provide unique views of network connectivity and hierarchical relationship in vitro which should be included in biologically meaningful models of neural networks.
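A toy version of the burst-leader analysis described above: detect network-wide bursts as time windows in which several distinct neurons fire, then count how often each neuron fires first within a burst. The window length and minimum-neuron threshold below are hypothetical parameters, not the values used in the study.

```python
from collections import Counter

def find_bursts(spikes, window=0.1, min_neurons=3):
    # spikes: list of (time_s, neuron_id) pairs; a burst is a window-long
    # run of spikes involving at least min_neurons distinct neurons
    events = sorted(spikes)
    bursts, i = [], 0
    while i < len(events):
        t0 = events[i][0]
        group = [e for e in events[i:] if e[0] - t0 <= window]
        if len({n for _, n in group}) >= min_neurons:
            bursts.append(group)
            i += len(group)
        else:
            i += 1
    return bursts

def burst_leaders(spikes, **kw):
    # Count, per neuron, how many detected bursts it initiated
    counts = Counter()
    for burst in find_bursts(spikes, **kw):
        _, first_neuron = min(burst)  # earliest spike in the burst
        counts[first_neuron] += 1
    return counts
```

On real microelectrode-array recordings the burst detection would need to be far more careful (adaptive windows, firing-rate baselines), but the leader-counting logic is the same.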
580

Identifying dyslectic gaze pattern : Comparison of methods for identifying dyslectic readers based on eye movement patterns

Lustig, Joakim January 2016 (has links)
Dyslexia affects between 5-17% of all school children, making it the most common learning disability. It has been found to severely affect learning ability in school subjects as well as limit the choice of further education and occupation. Since research has shown that early intervention and support can mitigate the negative effects of dyslexia, it is crucial that the diagnosis of dyslexia is easily available and aimed at the right children. To make sure children who are experiencing problems reading and potentially could be dyslectic are investigated for dyslexia, an easy-access, systematic, and unbiased screening method would be helpful. This thesis therefore investigates the use of machine learning methods to analyze eye movement patterns for dyslexia classification. The results showed that it was possible to separate dyslectic from non-dyslectic readers with 83% accuracy, using non-sequential, feature-based machine learning methods. Equally good results for lower sample frequencies indicated that consumer-grade eye trackers can be used for the purpose. Furthermore, a sequential approach using Recurrent Neural Networks was also investigated, reaching an accuracy of 78%. The thesis is intended to be an introduction to what methods could be viable for identifying dyslexia and as an inspiration for researchers aiming to do larger studies in the area.
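A minimal sketch of the non-sequential, feature-based route the thesis describes: summarise each reading session as a fixed feature vector (mean fixation duration, mean saccade amplitude, regression rate) and classify with a simple nearest-centroid rule. The features and the classifier are illustrative stand-ins for the thesis's actual pipeline, and the sample values below are fabricated for demonstration only.

```python
import math
import statistics

def gaze_features(fixations):
    # fixations: list of (duration_ms, saccade_dx_px) per reading session;
    # a negative horizontal saccade is a regression (re-reading jump)
    durs = [d for d, _ in fixations]
    dxs = [dx for _, dx in fixations]
    regression_rate = sum(1 for dx in dxs if dx < 0) / len(dxs)
    return [statistics.mean(durs), statistics.mean(map(abs, dxs)), regression_rate]

def train_centroids(feature_vectors, labels):
    # One centroid per class: the per-feature mean of that class's vectors
    centroids = {}
    for label in set(labels):
        rows = [f for f, l in zip(feature_vectors, labels) if l == label]
        centroids[label] = [statistics.mean(col) for col in zip(*rows)]
    return centroids

def classify(centroids, features):
    # Assign the label whose centroid is nearest in Euclidean distance
    return min(centroids, key=lambda lab: math.dist(centroids[lab], features))
```

Longer fixations and more regressions are the kind of gaze statistics the thesis's feature-based methods rely on; a real study would train on many labelled sessions rather than the single session per class shown here.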
