1 |
Hardware implementation of autonomous probabilistic computers
Ahmed Zeeshan Pervaiz (7586213), 31 October 2019
Conventional digital computers are built from stable, deterministic units known as "bits". These computers have evolved into sophisticated machines, yet many classes of problems, such as optimization, sampling, and machine learning, still cannot be addressed efficiently with conventional computing. Quantum computing, which uses q-bits held in a delicate superposition of 0 and 1, is expected to perform some of these tasks efficiently. However, decoherence, the requirement for cryogenic operation, and limited many-body interactions pose significant challenges to scaled quantum computers. Probabilistic computing is another unconventional computing paradigm; it introduces the concept of a probabilistic bit, or "p-bit": a robust classical entity that fluctuates between 0 and 1 and can be interconnected electrically. The primary contribution of this thesis is the first experimental proof-of-concept demonstration of p-bits, built by slight modifications to magnetoresistive random-access memory (MRAM) and operating at room temperature. These p-bits are connected to form a clock-less autonomous probabilistic computer. We first set the stage by demonstrating a high-level emulation of p-bits, which establishes important rules of operation for autonomous p-computers. The experimental demonstration is then followed by a low-level emulation of MRAM-based p-bits, which allows further study of device characteristics and parameter variations required for proper operation of p-computers. We lastly demonstrate an FPGA-based scalable synchronous probabilistic computer that uses almost 450 digital p-bits to realize large p-circuits.
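As an illustration of the p-bit behavior described above, here is a minimal Python sketch under our own assumptions (not code from the thesis): a p-bit modeled as a binary stochastic neuron whose output fluctuates between +1 and -1 and whose average output follows a sigmoidal (tanh) function of its input.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_bit(input_bias, n_samples=10_000):
    """Sample a p-bit: outputs are +/-1 with mean tanh(input_bias).

    At zero bias the output fluctuates between -1 and +1 with
    50-50 probability; a nonzero bias tilts the statistics,
    giving the sigmoidal average response of a p-bit.
    """
    # sgn(tanh(I) - r), with r uniform in (-1, 1): the standard
    # binary-stochastic-neuron update rule used to describe p-bits.
    r = rng.uniform(-1.0, 1.0, size=n_samples)
    return np.sign(np.tanh(input_bias) - r)

for bias in (-2.0, 0.0, 2.0):
    print(bias, p_bit(bias).mean())  # mean tracks tanh(bias)
```

At zero bias the output is a 50-50 coin flip; biasing the input pins the output toward one state, which is the tunable-randomness property the MRAM-based p-bits exploit.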
2 |
Sparse Deep Learning and Stochastic Neural Network
Yan Sun (12425889), 13 May 2022
Deep learning has achieved state-of-the-art performance on many machine learning tasks, but deep neural network (DNN) models still suffer from a few issues. Over-parameterized neural networks generally have a better optimization landscape, but they are computationally expensive, hard to interpret, and usually cannot correctly quantify prediction uncertainty. On the other hand, small DNN models can suffer from local traps and are hard to optimize. In this dissertation, we tackle these issues from two directions: sparse deep learning and stochastic neural networks.
For sparse deep learning, we propose a Bayesian neural network (BNN) model with a mixture-of-normals prior. Theoretically, we establish posterior consistency and structure selection consistency, which ensure that the sparse DNN model can be consistently identified. We also demonstrate asymptotic normality of the prediction, which ensures that prediction uncertainty is correctly quantified. Computationally, we propose a prior annealing approach to optimize the posterior of the BNN. The proposed methods have computational complexity similar to standard stochastic gradient descent for training DNNs. Experimental results show that our model performs well on high-dimensional variable selection as well as neural network pruning.
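To make the prior concrete, here is a minimal Python sketch of a two-component mixture-of-normals prior; the variance and mixing values are hypothetical placeholders, not the settings used in the dissertation. A narrow "spike" component concentrates mass near zero, encouraging sparsity, while a wide "slab" component accommodates the weights that survive.

```python
import numpy as np
from scipy.stats import norm

def log_mixture_prior(w, sigma0=0.001, sigma1=1.0, lam=0.1):
    """Log density of a two-component normal mixture prior.

    lam    : prior probability that a weight is 'active' (slab)
    sigma0 : std of the spike component (concentrated near 0)
    sigma1 : std of the slab component (diffuse)
    (All three values are illustrative assumptions.)
    """
    log_spike = np.log(1 - lam) + norm.logpdf(w, 0.0, sigma0)
    log_slab = np.log(lam) + norm.logpdf(w, 0.0, sigma1)
    # logaddexp keeps the mixture computation numerically stable.
    return np.logaddexp(log_spike, log_slab)

# Small weights are dominated by the spike, large ones by the slab:
print(log_mixture_prior(np.array([0.0005, 0.5])))
```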
For stochastic neural networks, we propose the Kernel-Expanded Stochastic Neural Network, or K-StoNet for short. We reformulate the DNN as a latent variable model and incorporate support vector regression (SVR) as the first hidden layer. The latent variable formulation breaks training into a series of convex optimization problems, and the model can be easily trained using the imputation-regularized optimization (IRO) algorithm. We provide theoretical guarantees for the convergence of the algorithm and for its prediction uncertainty quantification. Experimental results show that the proposed model achieves good prediction performance and provides correct confidence regions for its predictions.
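The following is a loose, hypothetical Python sketch (using scikit-learn) of the architectural idea only: SVR units form the first hidden layer and feed a convex regression output layer. It does not implement the IRO algorithm; the noise-perturbed targets below are merely a crude stand-in for the latent-variable imputation step.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# First hidden layer: one SVR per hidden unit, each fit to a
# randomly perturbed copy of the response (a crude stand-in for
# the imputation step; the real model imputes latent variables).
n_hidden = 8
svrs = [SVR(kernel="rbf").fit(X, y + 0.2 * rng.normal(size=len(y)))
        for _ in range(n_hidden)]
H = np.column_stack([s.predict(X) for s in svrs])

# Output layer: an ordinary (convex) regression on the SVR features.
head = Ridge(alpha=1.0).fit(H, y)
print("train R^2:", head.score(H, y))
```

The design point the sketch tries to convey is that each stage is a convex problem (SVR, then ridge regression), so no stage suffers from the local traps of end-to-end nonconvex DNN training.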
3 |
Autonomous Probabilistic Hardware for Unconventional Computing
Rafatul Faria (8771336), 29 April 2020
In this thesis, we propose a new computing platform called probabilistic spin logic (PSL), based on probabilistic bits (p-bits) implemented with low-barrier nanomagnets (LBMs) whose thermal barrier is of the order of kT, unlike conventional memory and spin logic devices that rely on high-barrier magnets (40-60 kT) to retain stability. p-bits are tunable random number generators (TRNGs), analogous to binary stochastic neurons (BSNs) in artificial neural networks (ANNs): their output fluctuates between +1 and -1 states with 50-50 probability at zero input bias, and the stochastic output can be tuned by an applied input, producing a sigmoidal characteristic response. p-bits can be interconnected by a synapse or weight matrix [J] to build p-circuits for solving a wide variety of complex unconventional problems such as inference, invertible Boolean logic, sampling, and optimization. For proper operation, the p-bits must be updated sequentially, with each p-bit update informed of the states of the other p-bits it is connected to; in digital clocked hardware this requires sequencers. The unique feature of our probabilistic hardware is that it is autonomous: it runs without any clocks or sequencers. To ensure the necessary sequential informed updates in the autonomous hardware, the synapse delay must be much smaller than the neuron fluctuation time (a sketch of this informed sequential update appears after this abstract).

We have demonstrated this autonomous hardware through SPICE simulations of different low-barrier-nanomagnet-based p-circuit designs, covering both symmetrically connected Boltzmann networks and directed acyclic Bayesian networks. Interestingly, Bayesian networks require a specific parent-to-child update order, which calls for a specific design rule so that the autonomous probabilistic hardware naturally enforces this order without any clocks. To address the scalability of this autonomous hardware, we have also proposed compact models for two different hardware designs, benchmarked them against SPICE simulation, and shown that the compact models faithfully mimic the dynamics of the real hardware.
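As a sketch of the informed sequential update just described, here is a hypothetical Python illustration with made-up [J] values (not the SPICE models from the thesis): each p-bit computes its input through the synapse matrix [J] and then flips with tanh-shaped probability, always seeing the latest states of its neighbors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric synapse matrix [J] and bias vector h for a toy
# 3-p-bit Boltzmann network (values are illustrative assumptions).
J = np.array([[ 0.0, 1.0, -0.5],
              [ 1.0, 0.0,  0.8],
              [-0.5, 0.8,  0.0]])
h = np.zeros(3)
m = rng.choice([-1, 1], size=3)  # initial p-bit states

def sequential_sweep(m):
    """One informed sweep: each p-bit updates using the *latest*
    states of the others, the ordering the autonomous hardware
    must reproduce without a clock (fast synapse, slow neuron)."""
    for i in range(len(m)):
        I = h[i] + J[i] @ m  # synapse output seen by p-bit i
        # Flip to +1 with probability (1 + tanh(I)) / 2.
        m[i] = 1 if rng.uniform(-1, 1) < np.tanh(I) else -1
    return m

for _ in range(5):
    print(sequential_sweep(m))
```

In the autonomous hardware, this ordering is not enforced by a loop or a clock; it emerges naturally when the synapse responds much faster than the neurons fluctuate.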