About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Comparison of Bayesian learning and conjugate gradient descent training of neural networks

Nortje, W D 09 November 2004 (has links)
Neural networks are used in various fields to make predictions about the future value of a time series, or about the class membership of a given object. For the network to be effective, it needs to be trained on a set of training data combined with the expected results. Two aspects to keep in mind when considering a neural network as a solution are the required training time and the prediction accuracy. This research compares the classification accuracy of conjugate gradient descent neural networks and Bayesian learning neural networks. Conjugate gradient descent networks are known for their short training times, but they are not very consistent and their results depend heavily on initial training conditions. Bayesian networks are slower, but much more consistent. The two types of neural networks are compared, and some attempts are made to combine their strong points in order to achieve shorter training times while maintaining high classification accuracy. Bayesian learning outperforms the gradient descent methods by almost 1%, while the hybrid method achieves results between those of Bayesian learning and gradient descent. The drawback of the hybrid method is that it offers no speed improvement over Bayesian learning. / Dissertation (MEng (Electronics))--University of Pretoria, 2005. / Electrical, Electronic and Computer Engineering / unrestricted
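The trade-off described above can be sketched in code. Below is a minimal illustrative example, not the author's setup, of conjugate-gradient training of a tiny one-hidden-layer network using SciPy's general-purpose CG optimizer on the XOR problem; the network size, data, and random initialization are all assumptions of this sketch.

```python
# Sketch: training a small neural network with conjugate gradient descent.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

H = 4  # hidden units (illustrative choice)
n_params = 2 * H + H + H + 1  # W1 (2xH), b1 (H), w2 (H), b2 (1)

def unpack(theta):
    W1 = theta[:2 * H].reshape(2, H)
    b1 = theta[2 * H:3 * H]
    w2 = theta[3 * H:4 * H]
    b2 = theta[4 * H]
    return W1, b1, w2, b2

def loss(theta):
    """Mean squared error of the one-hidden-layer network."""
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ w2 + b2 - y) ** 2)

theta0 = rng.normal(scale=0.5, size=n_params)  # initial training conditions
res = minimize(loss, theta0, method='CG')      # conjugate gradient descent
print(res.fun < loss(theta0))
```

Rerunning with different initializations of `theta0` illustrates the sensitivity to initial training conditions that the dissertation reports for conjugate gradient descent.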
2

Adapting deep neural networks as models of human visual perception

McClure, Patrick January 2018 (has links)
Deep neural networks (DNNs) have recently been used to solve complex perceptual and decision tasks. In particular, convolutional neural networks (CNNs) have been extremely successful for visual perception. In addition to performing well on the trained object recognition task, these CNNs also model brain data throughout the visual hierarchy better than previous models. However, these DNNs are still far from completely explaining visual perception in the human brain. In this thesis, we investigated two methods with the goal of improving DNNs’ capabilities to model human visual perception: (1) deep representational distance learning (RDL), a method for driving representational spaces in deep nets into alignment with other (e.g., brain) representational spaces, and (2) variational DNNs that use sampling to perform approximate Bayesian inference. In the first investigation, RDL successfully transferred information from a teacher model to a student DNN. This was achieved by driving the student DNN’s representational distance matrix (RDM), which characterises the representational geometry, into alignment with that of the teacher. This led to a significant increase in test accuracy on machine learning benchmarks. In the future, we plan to use this method to simultaneously train DNNs to perform complex tasks and to predict neural data. In the second investigation, we showed that sampling during learning and inference using simple Bernoulli- and Gaussian-based noise improved a CNN’s representation of its own uncertainty for object recognition. We also found that sampling during learning and inference with Gaussian noise improved how well CNNs predict human behavioural data for image classification. While these methods alone do not fully explain human vision, they allow for training CNNs that better model several features of human visual perception.
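Representational distance learning rests on one concrete object, the representational distance matrix. The following numpy sketch shows an RDM and a simple student-teacher alignment loss; the function names and the Euclidean distance choice are assumptions of this sketch, not taken from the thesis.

```python
# Sketch: representational distance matrices (RDMs) and an alignment loss.
import numpy as np

def rdm(activations):
    """Pairwise Euclidean distances between stimulus representations."""
    diffs = activations[:, None, :] - activations[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1))

def rdl_loss(student_acts, teacher_acts):
    """Mean squared difference between the two RDMs (one possible choice)."""
    return np.mean((rdm(student_acts) - rdm(teacher_acts)) ** 2)

rng = np.random.default_rng(1)
teacher = rng.normal(size=(8, 16))  # 8 stimuli, 16-d representation
student = rng.normal(size=(8, 32))  # dimensionality may differ; RDMs are 8x8
print(rdm(teacher).shape, rdl_loss(student, teacher) >= 0.0)
```

Because the RDM compares stimuli to each other rather than raw units, the student and teacher layers need not have the same dimensionality, which is what makes this alignment across models (or to brain data) possible.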
3

Scaling Characteristics of Soil Hydraulic Parameters at Varying Spatial Resolutions

Belur Jana, Raghavendra May 2010 (has links)
This dissertation addresses the challenge of soil hydraulic parameter scaling in soil hydrology and related applications, and in particular the upscaling of these parameters to provide effective values at coarse scales. Soil hydraulic properties are required by many hydrological and ecological models at their representative scales, and the prediction accuracy of these models depends strongly on the quality of the input parameters. However, measuring parameter data at all required scales is impractical, as it would entail huge outlays of finance, time, and effort. Hence, alternative methods of estimating the soil hydraulic parameters at the scales of interest are necessary. Two approaches to bridging this gap between the measurement and application scales for soil hydraulic parameters are presented in this dissertation. The first is a stochastic approach based on artificial neural networks (ANNs) applied within a Bayesian framework. ANNs have been used before to derive soil hydraulic parameters from other, more easily measured soil properties at matching scales. Here, ANNs were applied with different training and simulation scales. This concept was further extended to work within a Bayesian framework in order to provide estimates of uncertainty in the parameter estimates. The use of ancillary information, such as elevation and vegetation data, in addition to the soil physical properties was also tested. These multiscale pedotransfer function methods were successfully tested in numerical and field studies at different locations and scales. Most upscaling efforts thus far ignore the effect of topography on the upscaled soil hydraulic parameter values. While this flat-terrain assumption is acceptable at coarse scales of a few hundred meters, at kilometer scales and beyond the influence of physical features cannot be ignored.
A new upscaling scheme that accounts for variations in topography within a domain was developed to upscale soil hydraulic parameters to hill-slope (kilometer) scales. The algorithm was tested on different synthetically generated topographic configurations with good results. Extending the methodology to field conditions with greater complexity also produced good results. A comparison of recently developed scaling schemes showed that at hill-slope scales, inclusion of topographic information produced better estimates of effective soil hydraulic parameters at that scale.
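As a point of reference for the upscaling discussion above, here is a toy numpy sketch of the simplest baseline: block-averaging a fine-resolution parameter field to a coarser grid under the flat-terrain assumption, with no topographic correction. The field and the averaging rule are illustrative assumptions, not the dissertation's algorithm.

```python
# Sketch: block-averaging a fine-scale parameter field to a coarse grid.
import numpy as np

def upscale(field, factor):
    """Average `factor` x `factor` blocks of a 2-D parameter field."""
    n, m = field.shape
    assert n % factor == 0 and m % factor == 0
    return field.reshape(n // factor, factor, m // factor, factor).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)  # synthetic fine-scale parameter field
coarse = upscale(fine, 2)             # 4x4 field -> 2x2 effective values
print(coarse)
```

Simple averaging preserves the domain mean but, as the dissertation argues, ignores how topography redistributes moisture at kilometer scales, which is what the topography-aware scheme corrects.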
4

Open Technological Standardization Processes Through Learning Networks / 学習ネットワークを用いたオープン型技術標準化過程に関する研究

Mina, Christakis 23 March 2010 (has links)
Kyoto University (京都大学) / 0048 / New-system doctoral program / Doctor of Engineering / 甲第15350号 / 工博第3229号 / 新制||工||1486(附属図書館) / 27828 / Kyoto University Graduate School of Engineering, Department of Urban Management (都市社会工学専攻) / (Chief examiner) Professor Kiyoshi Kobayashi, Professor Masashi Kawasaki, Professor Satoshi Fujii / Qualifies under Article 4, Paragraph 1 of the Degree Regulations
5

Bayesian Neural Networks for Short Term Wind Power Forecasting / Bayesianska neuronnät för korttidsprognoser för vindkraft

Mbuvha, Rendani January 2017 (has links)
In recent years, wind and other variable renewable energy sources have gained a rapidly increasing share of the global energy mix. In this context, the greatest concern facing renewable energy sources like wind is the uncertainty in production volumes, as their generation ability is inherently dependent on weather conditions. When providing forecasts for newly commissioned wind farms there is a limited amount of historical power production data, while the number of potential features from different weather forecast providers is vast. Bayesian regularization is therefore seen as a possible technique for reducing the model overfitting problems that may arise. This thesis investigates Bayesian Neural Networks in one-hour and day-ahead forecasting of wind power generation. Initial results show that Bayesian Neural Networks display predictive performance equivalent to Neural Networks trained by Maximum Likelihood in both one-hour and day-ahead forecasting. Models selected using maximum evidence were found to have statistically significantly lower test error than those selected based on minimum test error. Further results show that the Bayesian framework is able to identify irrelevant features through Automatic Relevance Determination, though without a statistically significant error reduction in predictive performance in one-hour-ahead forecasting. In day-ahead forecasting, removing irrelevant features based on Automatic Relevance Determination is found to yield statistically significant improvements in test error. / Under de senaste åren har vind och andra variabla förnybara energikällor fått en snabbt ökande andel av den globala energimixen. I detta sammanhang är den största oron för förnybara energikällor som vind osäkerheten i produktionsvolymerna, eftersom kraftverkens generationsförmåga i sig är beroende av väderförhållandena.
Vid prognoser för nybyggda vindkraftverk finns en begränsad mängd historisk kraftproduktionsdata, medan antalet potentiella mätvärden från olika väderprognosleverantörer är stort. Bayesiansk regularisering ses därför som en möjlig metod för att minska problem med den överanpassning av modellerna som kan uppstå. Denna avhandling undersöker Bayesianska Neurala Nätverk (BNN) för prognosticering en timme och en dag framåt av vindkraftproduktion. Resultat visar att BNN ger ekvivalent prediktiv prestanda jämfört med neurala nätverk bildade med användande av Maximum-likelihood för prognoser för en timme och dagsprognoser. Modeller som valts med användning av maximum evidence visade sig ha statistiskt signifikant lägre testfelprestanda jämfört med de som valts utifrån minimalt testfel. Ytterligare resultat visar att ett Bayesianskt ramverk kan identifiera irrelevanta särdrag genom automatisk relevansbestämning. För prognoser för en timme framåt resulterar detta emellertid inte i en statistiskt signifikant felreduktion i prediktiv prestanda. För 1-dagarsprognoser, när vi avlägsnar irrelevanta funktioner baserade på automatisk relevansbestämning, fås dock statistiskt signifikanta förbättringar av testfel.
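Automatic Relevance Determination, on which the results above hinge, can be illustrated on a linear model. The numpy sketch below is a simplification of my own (the thesis applies ARD inside neural networks): each input feature gets its own prior precision, and the standard evidence-approximation updates drive the precisions of irrelevant features to large values.

```python
# Sketch: Automatic Relevance Determination (ARD) for a linear model.
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -3.0, 0.0, 0.0, 0.0])  # features 2-4 are irrelevant
y = X @ w_true + 0.1 * rng.normal(size=n)

alpha = np.ones(d)   # per-feature prior precisions
beta = 100.0         # noise precision (assumed known here for simplicity)
for _ in range(50):  # fixed-point updates of the evidence approximation
    S = np.linalg.inv(beta * X.T @ X + np.diag(alpha))  # posterior covariance
    m = beta * S @ X.T @ y                              # posterior mean
    gamma = 1.0 - alpha * np.diag(S)                    # effective parameters
    alpha = gamma / (m ** 2 + 1e-12)

relevant = alpha < 100.0  # huge precision pins a weight to zero -> irrelevant
print(relevant)
```

Dropping the features flagged as irrelevant mirrors the feature-removal experiments described in the abstract.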
6

Personalized face and gesture analysis using hierarchical neural networks

Joshi, Ajjen Das 05 February 2019 (has links)
The video-based computational analyses of human face and gesture signals encompass a myriad of challenging research problems involving computer vision, machine learning and human-computer interaction. In this thesis, we focus on the following challenges: a) the classification of hand and body gestures along with the temporal localization of their occurrence in a continuous stream, b) the recognition of facial expressivity levels in people with Parkinson's disease using multimodal feature representations, c) the prediction of student learning outcomes in intelligent tutoring systems using affect signals, and d) the personalization of machine learning models, which can adapt to subject and group-specific nuances in facial and gestural behavior. Specifically, we first conduct a quantitative comparison of two approaches to the problem of segmenting and classifying gestures on two benchmark gesture datasets: a method that simultaneously segments and classifies gestures versus a cascaded method that performs the tasks sequentially. Second, we introduce a framework that computationally predicts an accurate score for facial expressivity and validate it on a dataset of interview videos of people with Parkinson's disease. Third, based on a unique dataset of videos of students interacting with MathSpring, an intelligent tutoring system, collected by our collaborative research team, we build models to predict learning outcomes from their facial affect signals. Finally, we propose a novel solution to a relatively unexplored area in automatic face and gesture analysis research: personalization of models to individuals and groups. We develop hierarchical Bayesian neural networks to overcome the challenges posed by group or subject-specific variations in face and gesture signals.
We successfully validate our formulation on the problems of personalized subject-specific gesture classification, context-specific facial expressivity recognition and student-specific learning outcome prediction. We demonstrate the flexibility of our hierarchical framework by validating the utility of both fully connected and recurrent neural architectures.
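The hierarchical personalization idea above can be shown in miniature with a conjugate Gaussian model rather than a neural network: each subject's parameter is drawn from a shared group-level prior, so subjects with little data are shrunk toward the group mean while data-rich subjects stay close to their own evidence. All numbers here are illustrative assumptions.

```python
# Sketch: hierarchical shrinkage toward a group-level prior.
import numpy as np

group_mean, group_var, noise_var = 0.0, 1.0, 0.5  # assumed group-level prior

def personalized_estimate(observations):
    """Posterior mean of a subject's parameter given its observations."""
    n = len(observations)
    precision = 1.0 / group_var + n / noise_var
    mean = (group_mean / group_var + observations.sum() / noise_var) / precision
    return mean

few = np.array([2.0])      # subject with a single observation
many = np.full(100, 2.0)   # subject with abundant data
print(personalized_estimate(few), personalized_estimate(many))
```

The data-poor subject's estimate sits between the group mean (0.0) and its own data (2.0), while the data-rich subject's estimate is pulled almost entirely to 2.0; the thesis applies the same principle to neural network weights.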
7

Bayesian Data-Driven Models for Irrigation Water Management

Torres-Rua, Alfonso F. 01 August 2011 (has links)
A crucial decision in the real-time management of today’s irrigation systems involves the coordination of diversions and delivery of water to croplands. Since most irrigation systems experience significant lags between when water is diverted and when it should be delivered, an important technical innovation in the next few years will involve improvements in short-term irrigation demand forecasting. The main objective of the research presented here was the development of three such critically important models: (1) potential evapotranspiration forecasting; (2) hydraulic model error correction; and (3) estimation of aggregate water demands. These tools are based on statistical machine learning, or data-driven modeling, which is widely applied in several areas of engineering analysis and can be used in irrigation system management to provide improved and timely information to water managers. The development of the models is based on a Bayesian data-driven algorithm called the Relevance Vector Machine (RVM), and an extension of it, the Multivariate Relevance Vector Machine (MVRVM). These types of learning machines have the advantages of avoiding model overfitting, high robustness in the presence of unseen data, and uncertainty estimates for the results (error bars). The models were applied to an irrigation system located in the Lower Sevier River Basin near Delta, Utah. The first model estimates future crop water demand up to four days in advance, using only daily air temperatures and the MVRVM as the mapping algorithm. The second model minimizes the lumped error occurring in hydraulic simulation models; the RVM is applied as an error modeler, providing estimates of the errors occurring during simulation runs. The third model estimates future water releases for an entire agricultural area up to two days in advance, based on local data and satellite imagery.
The results obtained indicate excellent accuracy, robustness, and stability, especially in the presence of unseen data. A comparison against the Multilayer Perceptron, another data-driven algorithm in wide engineering use, further validates the adequacy of the RVM and MVRVM for these types of processes.
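The "error bars" mentioned above are the predictive standard deviations a Bayesian model attaches to each estimate. Below is a hedged sketch using plain Bayesian linear regression, a simplification of the RVM (which additionally learns per-basis precisions); the data and precisions are illustrative assumptions.

```python
# Sketch: Bayesian predictive mean and error bars for a linear model.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.2 * rng.normal(size=50)

alpha, beta = 1.0, 25.0  # assumed prior and noise precisions
S = np.linalg.inv(alpha * np.eye(3) + beta * X.T @ X)  # posterior covariance
m = beta * S @ X.T @ y                                 # posterior mean

def predict(x):
    """Predictive mean and standard deviation (error bar) at input x."""
    mean = m @ x
    var = 1.0 / beta + x @ S @ x  # noise variance + parameter uncertainty
    return mean, np.sqrt(var)

mean, std = predict(np.array([1.0, 0.0, 0.0]))
print(std > np.sqrt(1.0 / beta))  # error bar exceeds the noise floor
```

The second variance term grows for inputs far from the training data, which is what makes these models robust indicators of their own reliability on unseen conditions.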
8

Learning Compact Architectures for Deep Neural Networks

Srinivas, Suraj January 2017 (has links) (PDF)
Deep neural networks with millions of parameters are at the heart of many state-of-the-art computer vision models. However, recent works have shown that models with a much smaller number of parameters can often perform just as well. A smaller model has the advantage of being faster to evaluate and easier to store, both of which are crucial for real-time and embedded applications. While prior work on compressing neural networks has looked at methods based on sparsity, quantization and factorization of neural network layers, we look at the alternate approach of pruning neurons. Training neural networks is often described as a kind of `black magic', as successful training requires setting the right hyper-parameter values (such as the number of neurons in a layer, the depth of the network, etc.). It is often not clear what these values should be, and these decisions often end up being either ad hoc or driven by extensive experimentation. It would be desirable to set some of these hyper-parameters automatically for the user, so as to minimize trial and error. Combining this objective with our earlier preference for smaller models, we ask the following question: for a given task, is it possible to come up with small neural network architectures automatically? In this thesis, we propose methods to achieve this. The work is divided into four parts. First, given a neural network, we look at the problem of identifying important and unimportant neurons. We look at this problem in a data-free setting, i.e., assuming that the data the neural network was trained on is not available. We propose two rules for identifying wasteful neurons and show that these suffice in such a data-free setting. By removing neurons based on these rules, we are able to reduce model size without significantly affecting accuracy. Second, we propose an automated learning procedure to remove neurons during the process of training.
We call this procedure ‘Architecture-Learning’, as it automatically discovers the optimal width and depth of neural networks. We empirically show that this procedure is preferable to trial-and-error-based Bayesian Optimization procedures for selecting neural network architectures. Third, we connect ‘Architecture-Learning’ to a popular regularizer called ‘Dropout’, and propose a novel regularizer which we call ‘Generalized Dropout’. From a Bayesian viewpoint, this method corresponds to a hierarchical extension of the Dropout algorithm. Empirically, we observe that Generalized Dropout corresponds to a more flexible version of Dropout, and works in scenarios where Dropout fails. Finally, we apply our procedure for removing neurons to the problem of removing weights in a neural network, and achieve state-of-the-art results in sparsifying neural networks.
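The data-free pruning idea in the first part can be sketched directly: if two hidden neurons have near-identical incoming weights, their activations coincide on every input, so one can be removed and its outgoing weight folded into the other without touching any data. The selection rule below is a simplified stand-in for the thesis's saliency rules, not the thesis's own code.

```python
# Sketch: data-free pruning by merging duplicate hidden neurons.
import numpy as np

def prune_duplicate_neuron(W_in, w_out):
    """Remove one neuron from the most similar incoming-weight pair."""
    h = W_in.shape[1]
    best, pair = np.inf, None
    for i in range(h):           # find the closest pair of incoming-weight columns
        for j in range(i + 1, h):
            d = np.sum((W_in[:, i] - W_in[:, j]) ** 2)
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    w_out = w_out.copy()
    w_out[i] += w_out[j]         # fold neuron j's contribution into neuron i
    keep = [k for k in range(h) if k != j]
    return W_in[:, keep], w_out[keep]

# Two of the three hidden neurons share the same incoming weights.
W_in = np.array([[1.0, 1.0, -2.0],
                 [0.5, 0.5,  3.0]])
w_out = np.array([0.7, 0.3, 1.0])
W2, w2 = prune_duplicate_neuron(W_in, w_out)
print(W2.shape, w2)
```

When the merged pair is an exact duplicate, as here, the pruned network computes exactly the same function as the original for every input.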
9

Bayesovské a neuronové sítě / Bayesian and Neural Networks

Hložek, Bohuslav January 2017 (has links)
This paper introduces Bayesian neural networks based on Occam's razor. Basic knowledge of neural networks and Bayes' rule is summarized in the first part of the paper, and the principles of Occam's razor and Bayesian neural networks are explained. A real use case, landslide prediction, is introduced. The second part of the paper shows how to construct a Bayesian neural network in Python and demonstrates such an application. Typical behaviour of Bayesian neural networks is demonstrated on example data.
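The Occam's razor behaviour the abstract refers to is the fact that the Bayesian evidence (marginal likelihood) automatically penalizes over-complex models. The self-contained numpy sketch below scores polynomial degrees on noisy linear data; the model, priors, and data are illustrative choices of this sketch, not the thesis's.

```python
# Sketch: Bayesian evidence as an automatic Occam's razor.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-1, 1, 30)
y = 2.0 * x + 0.1 * rng.normal(size=30)  # truly linear data

def log_evidence(degree, alpha=1.0, beta=100.0):
    """Log marginal likelihood of a Bayesian polynomial model."""
    Phi = np.vander(x, degree + 1, increasing=True)  # polynomial design matrix
    d = Phi.shape[1]
    A = alpha * np.eye(d) + beta * Phi.T @ Phi
    m = beta * np.linalg.solve(A, Phi.T @ y)         # posterior mean weights
    err = beta / 2 * np.sum((y - Phi @ m) ** 2) + alpha / 2 * m @ m
    return (d / 2 * np.log(alpha) + len(y) / 2 * np.log(beta) - err
            - 0.5 * np.linalg.slogdet(A)[1] - len(y) / 2 * np.log(2 * np.pi))

scores = {deg: log_evidence(deg) for deg in (1, 5, 9)}
best = max(scores, key=scores.get)
print(best)
```

The degree-9 model fits the training noise slightly better, but its evidence is lower: the complexity penalty in the log determinant term outweighs the small gain in fit, so the evidence selects the simpler adequate model.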
10

Deep Bayesian Neural Networks for Prediction of Insurance Premiums / Djupa Bayesianska neurala nätverk för prediktioner på fordonsförsäkringar

Olsgärde, Nils January 2021 (has links)
In this project, the problem concerns predicting insurance premiums, particularly vehicle insurance premiums. These predictions were made with the help of Bayesian Neural Networks (BNNs), a type of Artificial Neural Network (ANN). The central concept of BNNs is that the parameters of the network follow probability distributions rather than taking fixed point values. The modeling was done with the help of TensorFlow's Probability API, where a few models were built and tested on the data provided. The results show that predicting insurance premiums is possible; however, the output distributions obtained in this report were too wide to be usable. More data, both in volume and in the number of features, and better-structured data are needed. With better data, there is potential for building BNN and other machine learning (ML) models that could be useful for production purposes. / Detta projekt grundar sig i möjligheten att prediktera försäkringspremier, mer specifikt fordonsförsäkringspremier. Prediktioner har gjorts med hjälp av Bayesianska neurala nätverk, vilket är en typ av artificiella neurala nätverk. Det huvudsakliga konceptet med Bayesianska neurala nätverk är att parametrarna i nätverket följer distributioner, vilket har vissa fördelar och inte är fallet för vanliga artificiella neurala nätverk. Ett antal modeller har konstruerats med hjälp av TensorFlow Probability API:t som tränats och testats på given data. Resultatet visar att det finns potential att prediktera premier med hjälp av de egenskapspunkter ("features") som finns tillgängliga, men att resultaten inte är tillräckligt bra för att kunna användas i produktion. Med mer data, både till mängd och egenskapspunkter, samt bättre strukturerad data finns potential att skapa bättre modeller av intresse för produktion.
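The central concept above, network parameters that follow distributions, can be shown without any deep-learning framework: sampling the weights repeatedly yields a predictive distribution rather than a point estimate, and the width of that distribution is exactly what the report found too large. The toy one-layer model below is an illustration of this sketch, not the thesis's TensorFlow Probability code.

```python
# Sketch: predictive distribution from sampled network weights.
import numpy as np

rng = np.random.default_rng(4)
w_mean, w_std = np.array([1.5, -0.5]), np.array([0.1, 0.2])  # weight posteriors
b_mean, b_std = 0.3, 0.05

def predict_samples(x, n_samples=1000):
    """Draw weight samples and return the resulting prediction samples."""
    w = rng.normal(w_mean, w_std, size=(n_samples, 2))
    b = rng.normal(b_mean, b_std, size=n_samples)
    return w @ x + b  # one linear "layer" for illustration

x = np.array([2.0, 1.0])
samples = predict_samples(x)
print(samples.mean(), samples.std())
```

Here the spread of `samples` quantifies premium uncertainty; narrower weight posteriors (from more and better data) would directly narrow the output distribution.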
