1 |
Memristive Probabilistic Computing. Alahmadi, Hamzah, 10 1900 (has links)
In the era of Internet of Things and Big Data, unconventional techniques are rising
to accommodate the large size of data and the resource constraints. New computing
structures are advancing based on non-volatile memory technologies and different
processing paradigms. Additionally, the intrinsic resiliency of current applications
leads to the development of creative computational techniques. In such applications,
approximate computing is a natural fit, optimizing energy efficiency
at the cost of some accuracy. In this work, we build probabilistic adders
based on stochastic memristors. These adders are analyzed with respect to the
stochastic behavior of the underlying memristors. Multiple adder implementations
are investigated and compared. The memristive probabilistic adder provides a different
approach from typical approximate CMOS adders. Furthermore, it allows
large area savings and design flexibility in trading performance against power.
At a performance level similar to that of approximate CMOS adders, the memristive
adder achieves 60% power savings. An image-compression application is investigated using the memristive probabilistic adders, evaluating the trade-off between performance and energy.
|
2 |
Hardware implementation of autonomous probabilistic computers. Ahmed Zeeshan Pervaiz (7586213), 31 October 2019 (has links)
<pre><p>Conventional digital computers are built using stable deterministic units known as "bits".
These conventional computers have greatly evolved into sophisticated machines,
however there are many classes of problems such as optimization, sampling and
machine learning that still cannot be addressed efficiently with conventional
computing. Quantum computing, which uses q-bits, that are in a delicate
superposition of 0 and 1, is expected to perform some of these tasks
efficiently. However, decoherence, requirements for cryogenic operation and
limited many-body interactions pose significant challenges to scaled quantum
computers. Probabilistic computing is another unconventional computing paradigm
which introduces the concept of a probabilistic bit or "p-bit"; a robust
classical entity that fluctuates between 0 and 1 and can be interconnected
electrically. The primary contribution of this thesis is the first experimental
proof-of-concept demonstration of p-bits built by slight modifications to the
magnetoresistive random-access memory (MRAM) operating at room temperature.
These p-bits are connected to form a clock-less autonomous probabilistic
computer. We first set the stage by demonstrating a high-level emulation of
p-bits, which establishes important rules of operation for autonomous
p-computers. The experimental demonstration is then followed by a low-level
emulation of MRAM-based p-bits, which allows further study of device
characteristics and parameter variations for proper operation of p-computers.
We lastly demonstrate an FPGA-based scalable synchronous probabilistic computer
which uses almost 450 digital p-bits to demonstrate large p-circuits.</p>
</pre>
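The p-bit behavior described above is commonly modeled with a tanh activation: each p-bit output fluctuates between two states with a probability set by its input, and p-bits are updated in random order to mimic clock-less autonomous operation. A minimal sketch, assuming the standard bipolar (±1) p-bit update rule rather than any particular MRAM implementation:

```python
import math, random

def pbit_sample(J, h, n_steps=20000, seed=1):
    """Clock-less p-bit network sketch: bipolar (+/-1) p-bits updated in
    random order; each output is drawn with a probability set by a tanh
    of its synaptic input. A behavioral model, not the MRAM device."""
    rng = random.Random(seed)
    n = len(h)
    m = [rng.choice([-1, 1]) for _ in range(n)]
    corr = 0.0
    for _ in range(n_steps):
        i = rng.randrange(n)                       # random update order
        I = h[i] + sum(J[i][j] * m[j] for j in range(n))
        m[i] = 1 if rng.random() < 0.5 * (1 + math.tanh(I)) else -1
        corr += m[0] * m[1]
    return corr / n_steps

# Ferromagnetic coupling makes the two p-bits fluctuate together
J = [[0.0, 1.0], [1.0, 0.0]]
assert pbit_sample(J, h=[0.0, 0.0]) > 0.3
```

Flipping the sign of the coupling makes the two p-bits anticorrelate instead, which is the basic mechanism p-circuits build on.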
|
3 |
Evaluation of Stochastic Magnetic Tunnel Junctions as Building Blocks for Probabilistic Computing. Orchi Hassan (9862484), 17 December 2020 (has links)
<p>Probabilistic
computing has been proposed as an attractive alternative for bridging the computational
gap between the classical computers of today and the quantum computers of
tomorrow. It promises to accelerate the solution of many combinatorial
optimization and machine learning problems of interest today, motivating the
development of dedicated hardware. Similar to the ‘bit’ of classical computing
or ‘q-bit’ of quantum computing, the probabilistic bit or ‘p-bit’ serves as a
fundamental building block for probabilistic hardware. p-bits are robust
classical quantities that fluctuate rapidly between their two states, envisioned as
three-terminal devices whose stochastic output is controlled by their input. It is
possible to implement fast and efficient hardware p-bits by modifying
present-day magnetic random access memory (MRAM) technology. In this
dissertation, we evaluate the design and performance of low-barrier magnet
(LBM) based p-bit realizations.<br> </p>
<p>LBMs
can be realized from perpendicular magnets designed to be close to the in-plane
transition or from circular in-plane magnets. Magnetic tunnel junctions (MTJs) built
using these LBMs as free layers can be integrated with standard transistors to
implement the three-terminal p-bit units. A crucial parameter that determines
the response of these devices is the correlation-time of magnetization. We show
that for magnets with low energy barriers (Δ ≤ k<sub>B</sub>T) the circular
disk magnets with in-plane magnetic anisotropy (IMA) can lead to
correlation-times on <i>sub-ns</i> timescales, two orders of magnitude smaller
than for magnets having perpendicular magnetic anisotropy (PMA). We show
that this striking difference is due to a novel precession-like fluctuation mechanism
that is enabled by the large demagnetization field in mono-domain circular disk
magnets. Our predictions on fast fluctuations in LBM magnets have recently
received experimental confirmation as well.<br></p>
<p>We
provide a detailed energy-delay performance evaluation of the stochastic MTJ
(s-MTJ) based p-bit hardware. We analyze the hardware using benchmarked SPICE
multi-physics modules and classify the necessary and sufficient conditions for
designing them. We connect our device performance analysis to systems-level
metrics by emphasizing problem and substrate independent figures-of-merit such
as flips per second and dissipated energy per flip that can be used to classify
probabilistic hardware. </p>
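<p>The ‘flips per second’ figure-of-merit mentioned above can be estimated directly from a binary time trace. The sketch below uses a toy random-telegraph-noise model of an s-MTJ; the sampling interval and flip probability are illustrative assumptions, not measured device values.</p>

```python
import random

def telegraph(n, p_flip, seed=0):
    """Random telegraph noise: the state flips with probability p_flip
    per sample -- a toy stand-in for an s-MTJ hopping between its
    parallel and anti-parallel states."""
    rng = random.Random(seed)
    s, out = 0, []
    for _ in range(n):
        if rng.random() < p_flip:
            s ^= 1
        out.append(s)
    return out

def flips_per_second(trace, dt):
    """'Flips per second' figure-of-merit from a trace sampled every dt
    seconds (dt and the trace here are illustrative)."""
    flips = sum(a != b for a, b in zip(trace, trace[1:]))
    return flips / (len(trace) * dt)

# 1 ns sampling with a 10% flip chance per sample -> about 1e8 flips/s
rate = flips_per_second(telegraph(100_000, p_flip=0.1), dt=1e-9)
assert 0.8e8 < rate < 1.2e8
```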
|
4 |
Learning, probabilistic, and asynchronous technologies for an ultra efficient datapath. Marr, Bo, 17 November 2009 (has links)
A novel microarchitecture and circuit design techniques are presented for an asynchronous datapath that not only exhibits an extremely high rate of performance but is also energy efficient. A 0.5 um chip containing test circuits for the asynchronous datapath was fabricated and tested. Results show an adder and multiplier design that, owing to 2-dimensional bit-pipelining techniques, speculative completion, dynamic asynchronous circuits, and bit-level reservation stations and reorder buffers, can commit 16-bit additions and multiplications at 1 giga-operation per second (GOPS). The synchronicity simulator is also shown, simulating the same architecture at more modern transistor nodes and showing adder and multiplier performance of up to 11.1 GOPS in a commercially available 65 nm process. Compared to other designs and results, these prove to be some of the fastest, if not the fastest, adders and multipliers to date. The chip was also tested at supply voltages below threshold, making it extremely energy efficient. The asynchronous architecture also enables more exotic technologies, which are presented. Learning digital circuits are presented, whereby the current supplied to a digital gate can be dynamically updated with floating-gate technology. Probabilistic digital signal processing is also presented, where the probabilistic operation arises from the statistical delay through the asynchronous circuits. Results show successful image processing with probabilistic operation in the least significant bits of the datapath, resulting in large performance and energy gains.
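The idea of confining probabilistic operation to the least significant bits can be sketched as follows; the error model (independent bit flips in the low-order result bits) is a hypothetical stand-in for the statistical-delay effects described above.

```python
import random

def lsb_noisy_add(a, b, noisy_bits=3, p_err=0.2, rng=None):
    """Exact addition followed by independent bit flips confined to the
    `noisy_bits` least significant result bits -- a hypothetical model of
    probabilistic operation restricted to the LSBs of a datapath."""
    rng = rng or random.Random(0)
    s = a + b
    for i in range(noisy_bits):
        if rng.random() < p_err:
            s ^= 1 << i        # flip a low-order bit
    return s

# The deviation is always bounded by the noisy region: |error| < 2**noisy_bits
rng = random.Random(42)
errs = [abs(lsb_noisy_add(x, x + 1, rng=rng) - (2 * x + 1)) for x in range(1000)]
assert max(errs) < 2 ** 3
```

Because errors are confined to the LSBs, the worst-case deviation is bounded, which is why image quality degrades gracefully rather than catastrophically.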
|
5 |
SPINTRONIC DEVICES FROM CONVENTIONAL AND EMERGING 2D MATERIALS FOR PROBABILISTIC COMPUTING. Vaibhav R Ostwal (9751070), 14 December 2020 (has links)
<p>Novel
computational paradigms based on non-von Neumann architectures are being
extensively explored for modern data-intensive applications and big-data
problems. One direction in this context is to harness the intrinsic physics of
spintronics devices for the implementation of nanoscale and low-power building
blocks of such emerging computational systems. For example, a Probabilistic
Spin Logic (PSL) that consists of networks of p-bits has been proposed for
neuromorphic computing, Bayesian networks, and optimization
problems. In my work, I will discuss two types of device components required
for PSL: (i) p-bits mimicking binary stochastic neurons (BSN) and (ii) compound
synapses for implementing weighted interconnects between p-bits. Furthermore, I
will also show how the integration of recently discovered van der Waals
ferromagnets in spintronics devices can reduce the current densities required
by orders of magnitude, paving the way for future low-power spintronics
devices.</p>
<p>First, a
spin-device with input-output isolation and stable magnets capable of
generating tunable random numbers, similar to a BSN, was demonstrated. In this
device, spin-orbit torque pulses are used to initialize a nano-magnet with
perpendicular magnetic anisotropy (PMA) along its hard axis. After removal of
each pulse, the nano-magnet can relax back to either of its two stable states,
generating a stream of binary random numbers. By applying a small Oersted field
using the input terminal of the device, the probability of obtaining 0 or 1 in
binary random numbers (P) can be tuned electrically. Furthermore, our work
shows that when two stochastic devices are connected in series, “P”
of the second device is a function of “P” of the first p-bit and the weight of
the interconnection between them. Such control over correlated probabilities of
stochastic devices using interconnecting weights is the working principle of
PSL.</p>
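<p>The series connection described above, where “P” of the second device depends on “P” of the first and the interconnect weight, can be sketched with a simple two-device model. The tanh coupling is an illustrative assumption, not the measured device response:</p>

```python
import math, random

def chained_pbits(p1, weight, n=50000, seed=3):
    """Two stochastic devices in series: the first outputs 1 with
    probability p1; its bipolar output, scaled by the interconnect
    weight, sets the bias of the second. The tanh coupling is an
    illustrative assumption."""
    rng = random.Random(seed)
    ones = 0
    for _ in range(n):
        m1 = 1 if rng.random() < p1 else -1          # first p-bit
        p2 = 0.5 * (1 + math.tanh(weight * m1))      # bias of second
        ones += rng.random() < p2
    return ones / n

# A stronger interconnect weight pulls "P" of the second device
# closer to "P" of the first
assert chained_pbits(0.9, weight=2.0) > chained_pbits(0.9, weight=0.5)
```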
<p>Next, my
work focused on compact and energy efficient implementations of p-bits and
interconnecting weights using modified spin-devices. It was shown that unstable
in-plane magnetic tunnel junctions (MTJs), i.e. MTJs with a low energy
barrier, naturally fluctuate between two states (parallel and anti-parallel)
without any external excitation, in this way generating binary random numbers.
Furthermore, spin-orbit torque of tantalum is used to control the time spent by
the in-plane MTJ in either of its two states, i.e. “P” of the device. In this
device, the READ and WRITE paths are separated since the MTJ state is read by
passing a current through the MTJ (READ path) while “P” is controlled by
passing a current through the tantalum bar (WRITE path). Hence, a BSN/p-bit is
implemented without energy-consuming hard axis initialization of the magnet and
Oersted fields. Next, probabilistic switching of stable magnets was utilized to
implement a novel compound synapse, which can be used for weighted
interconnects between p-bits. In this experiment, an ensemble of nano-magnets
was subjected to spin-orbit torque pulses such that each nano-magnet has a
finite probability of switching. Hence, when a series of pulses are applied,
the total magnetization of the ensemble gradually increases with the number of
pulses applied, similar to the
potentiation and depression curves of synapses. Furthermore, it was shown that
a modified pulse scheme can improve the linearity of the synaptic behavior,
which is desired for neuromorphic computing. By implementing both neuronal and
synaptic devices using simple nano-magnets, we have shown that PSL can be
realized using a modified Magnetic Random Access Memory (MRAM) technology. Note
that MRAM technology exists in many current foundries.</p>
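<p>The compound-synapse experiment described above can be sketched as an ensemble of nanomagnets, each with a finite switching probability per pulse; the ensemble magnetization then traces a gradual, potentiation-like curve. The ensemble size and switching probability below are illustrative, not the experimental values:</p>

```python
import random

def compound_synapse(n_magnets=100, n_pulses=30, p_switch=0.1, seed=7):
    """Compound synapse sketch: an ensemble of nanomagnets, each with a
    finite probability of switching per spin-orbit-torque pulse. The
    ensemble magnetization rises gradually with pulse count, like a
    potentiation curve. All parameters are illustrative."""
    rng = random.Random(seed)
    switched = [False] * n_magnets
    curve = []
    for _ in range(n_pulses):
        for i in range(n_magnets):
            if not switched[i] and rng.random() < p_switch:
                switched[i] = True                 # magnet flips and stays
        curve.append(sum(switched))
    return curve

curve = compound_synapse()
# Gradual, monotonic rise toward saturation at n_magnets
assert all(a <= b for a, b in zip(curve, curve[1:]))
assert curve[0] < curve[-1] <= 100
```

The saturating shape of this curve is exactly why a modified pulse scheme is needed to improve linearity, as the abstract notes.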
<p>To further
reduce the current densities required for spin-torque devices, we have
fabricated heterostructures consisting of a 2-dimensional semiconducting
ferromagnet (Cr<sub>2</sub>Ge<sub>2</sub>Te<sub>6</sub>) and a metal with
strong spin-orbit coupling (tantalum). Because of properties such as clean
interfaces, perfect crystalline nanomagnet structure, sustained magnetic
moments down to the monolayer limit, and low current shunting, 2D ferromagnets
require orders of magnitude lower current densities for spin-orbit torque
switching than conventional metallic ferromagnets such as CoFeB.</p>
|
6 |
On Spin-inspired Realization of Quantum and Probabilistic Computing. Brian Matthew Sutton (7551479), 30 October 2019 (has links)
The decline of Moore's law has catalyzed a significant effort to identify beyond-CMOS devices and architectures for the coming decades. A multitude of classical and quantum systems have been proposed to address this challenge, and spintronics has emerged as a promising approach for these post-Moore systems. Many of these architectures are tailored specifically for applications in combinatorial optimization and machine learning. Here we propose the use of spintronics for such applications by exploring two distinct but related computing paradigms. First, the use of spin currents to manipulate and control quantum information is investigated, with demonstrated high-fidelity gate operation. This control is accomplished through repeated entanglement and measurement of a stationary qubit with a flying spin through spin-torque-like effects. Secondly, by transitioning from single-spin quantum bits to larger spin ensembles, we explore the use of stochastic nanomagnets to realize a probabilistic system that is intrinsically governed by Boltzmann statistics. The nanomagnets explore the search space at rapid speeds and can be used in a wide range of applications, including optimization and quantum emulation, by encoding the solution to a given problem as the ground state of the equivalent Boltzmann machine. These applications are demonstrated through hardware emulation using an all-digital autonomous probabilistic circuit.
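The second paradigm, encoding a problem's solution as the ground state of a Boltzmann machine explored by stochastic nanomagnets, can be sketched with a small Ising model sampled by software p-bits. This is a minimal Gibbs-sampling sketch, not the hardware emulation used in the thesis:

```python
import math, random

def ising_ground_state(J, beta=2.0, n_steps=2000, seed=5):
    """Gibbs-sampling sketch: software p-bits explore an Ising energy
    landscape; with Boltzmann statistics the ground state is the most
    probable configuration. Illustrative parameters throughout."""
    rng = random.Random(seed)
    n = len(J)

    def energy(s):
        return -sum(J[i][j] * s[i] * s[j]
                    for i in range(n) for j in range(i + 1, n))

    s = [rng.choice([-1, 1]) for _ in range(n)]
    best, best_e = list(s), energy(s)
    for _ in range(n_steps):
        i = rng.randrange(n)
        I = beta * sum(J[i][j] * s[j] for j in range(n) if j != i)
        s[i] = 1 if rng.random() < 0.5 * (1 + math.tanh(I)) else -1
        e = energy(s)
        if e < best_e:
            best, best_e = list(s), e
    return best, best_e

# Three ferromagnetically coupled spins: the ground state is all-aligned
J = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
state, e = ising_ground_state(J)
assert e == -3 and len(set(state)) == 1
```

Hardware nanomagnets perform the same random exploration physically, which is what makes the approach fast.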
|
7 |
Harnessing resilience: biased voltage overscaling for probabilistic signal processing. George, Jason, 26 October 2011 (has links)
A central component of modern computing is the idea that computation requires
determinism. Contrary to this belief, the primary contribution of this work shows that
useful computation can be accomplished in an error-prone fashion. Focusing on low-power
computing and the increasing push toward energy conservation, the work seeks to sacrifice
accuracy in exchange for energy savings.
Probabilistic computing forms the basis for this error-prone computation by diverging from the requirement of determinism and allowing for randomness within computing.
Implemented as probabilistic CMOS (PCMOS), the approach realizes enormous energy savings in applications that require probability at an algorithmic level. Extending probabilistic
computing to applications that are inherently deterministic, the biased voltage overscaling
(BIVOS) technique presented here constrains the randomness introduced through PCMOS.
Doing so, BIVOS is able to limit the magnitude of any resulting deviations and realizes
energy savings with minimal impact to application quality.
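The BIVOS idea can be sketched with a toy error model: uniform voltage overscaling lets every bit fail with some probability, while a biased profile concentrates failures in the low-order bits, bounding the magnitude of any deviation. The error probabilities below are hypothetical, not measured PCMOS values:

```python
import random

def bivos_add(a, b, p_profile, n_bits=16, rng=None):
    """Adder sketch in which result bit i flips with probability
    p_profile[i]. A biased profile (noisy LSBs, clean MSBs) models
    BIVOS; a flat profile models uniform voltage overscaling.
    Error probabilities are hypothetical."""
    rng = rng or random.Random(0)
    s = (a + b) & ((1 << n_bits) - 1)
    for i, p in enumerate(p_profile):
        if rng.random() < p:
            s ^= 1 << i
    return s

rng = random.Random(1)
biased = [0.3] * 4 + [0.0] * 12    # failures confined to the 4 LSBs
uniform = [0.075] * 16             # same average flip rate, all bits exposed

def mean_err(profile):
    return sum(abs(bivos_add(x, x, profile, rng=rng) - 2 * x)
               for x in range(500)) / 500

# BIVOS bounds the deviation magnitude; uniform overscaling does not
assert mean_err(biased) < mean_err(uniform)
```

The biased profile caps the worst-case error at the value of the noisy bits, which is why BIVOS degrades quality gracefully at the same average error rate.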
Implemented for a ripple-carry adder, array multiplier, and finite-impulse-response (FIR)
filter; a BIVOS solution substantially reduces energy consumption and does so with improved error rates compared to an energy-equivalent reduced-precision solution. When
applied to H.264 video decoding, a BIVOS solution is able to achieve a 33.9% reduction in
energy consumption while maintaining a peak signal-to-noise ratio of 35.0 dB (compared to
14.3 dB for a comparable reduced-precision solution).
While the work presented here focuses on a specific technology, the technique realized
through BIVOS has far broader implications. It is the departure from the conventional
mindset that useful computation requires determinism that represents the primary innovation of this work. With applicability to emerging and yet-to-be-discovered technologies,
BIVOS has the potential to contribute to computing in a variety of fashions.
|
8 |
Probabilistic Computing: From Devices to Systems. Jan Kaiser (8346969), 22 April 2022 (has links)
<p>Conventional computing is based on the concept of bits which are classical entities that are either 0 or 1 and can be represented by stable magnets. The field of quantum computing relies on qubits which are a complex linear combination of 0 and 1. Recently, the concept of probabilistic computing with probabilistic (<em>p-</em>)bits was introduced where <em>p-</em>bits are robust classical entities that fluctuate between 0 and 1. <em>P-</em>bits can be naturally represented by low-barrier nanomagnets. Probabilistic computers (<em>p-</em>computers) based on <em>p-</em>bits are domain-based hardware accelerators for Monte Carlo algorithms that can efficiently address probabilistic tasks like sampling, optimization and machine learning. </p>
<p>In this dissertation, starting from the intrinsic physics of nanomagnets, we show that a compact hardware implementation of a <em>p-</em>bit based on stochastic magnetic tunnel junctions (s-MTJs) can operate at high speeds, on the order of nanoseconds, a prediction that has recently received experimental support.</p>
<p>We then move to the system level and illustrate by simulation and by experiment how multiple interconnected <em>p-</em>bits can be utilized to train a Boltzmann machine built with hardware <em>p-</em>bits. We observe that even non-ideal s-MTJs can be utilized for probabilistic computing when combined with hardware-aware learning.</p>
<p>Finally, we show how to build a <em>p-</em>computer to accelerate a wide variety of problems ranging from optimization and sampling to quantum computing and machine learning. The common theme for all these applications is the underlying Monte Carlo and Markov chain Monte Carlo algorithms and their parallelism enabled by a unique <em>p-</em>computer architecture.</p>
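<p>Hardware-aware training of a Boltzmann machine, as described above, can be sketched for a single coupling between two p-bits: the weight is nudged by the gap between the target (data) correlation and the correlation measured from samples. The learning rate and sample counts are illustrative assumptions:</p>

```python
import math, random

def train_coupling(target_corr, lr=0.1, epochs=200, n_samples=2000, seed=9):
    """Boltzmann-machine learning sketch for one coupling J between two
    p-bits: nudge J by the gap between the target (data) correlation and
    the correlation measured from samples -- hardware-aware in spirit,
    since only sampled statistics are used. Values are illustrative."""
    rng = random.Random(seed)
    J = 0.0
    for _ in range(epochs):
        corr, m = 0.0, [1, 1]
        for _ in range(n_samples):               # Gibbs-sample the 2-spin model
            i = rng.randrange(2)
            p = 0.5 * (1 + math.tanh(J * m[1 - i]))
            m[i] = 1 if rng.random() < p else -1
            corr += m[0] * m[1]
        corr /= n_samples
        J += lr * (target_corr - corr)           # contrastive update
    return J

# For this two-spin model <m0*m1> = tanh(J), so J should settle near 1.0
J = train_coupling(target_corr=math.tanh(1.0))
assert 0.6 < J < 1.4
```

Because the update uses only sampled statistics, the same rule works even when the samplers (here, software p-bits; in the thesis, non-ideal s-MTJs) deviate from the ideal model.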
|
9 |
Quantum Emulation with Probabilistic Computers. Shuvro Chowdhury (14030571), 31 October 2022 (has links)
<p>The recent groundbreaking demonstrations of quantum supremacy in the noisy intermediate-scale quantum (NISQ) computing era have triggered intense activity in establishing finer boundaries between classical and quantum computing. In this dissertation, we use established techniques based on quantum Monte Carlo (QMC) to map quantum problems onto probabilistic networks in which the fundamental unit of computation, the p-bit, is inherently probabilistic and can be tuned to fluctuate between ‘0’ and ‘1’ with a desired probability. We can view this mapped network as a Boltzmann machine whose states each represent a Feynman path leading from an initial configuration of q-bits to a final configuration. Each such path, in general, has a complex amplitude ψ, which can be associated with a complex energy. The real part of this energy can be used to generate samples of Feynman paths in the usual way, while the imaginary part is accounted for by treating the samples as complex entities, unlike ordinary Boltzmann machines where samples are positive. This mapping of a quantum circuit onto a Boltzmann machine with complex energies should be particularly useful in view of the advent of special-purpose hardware accelerators known as Ising machines, which can obtain a very large number of samples per second through massively parallel operation. We also demonstrate this acceleration on a recently studied quantum problem, speeding up its QMC simulation by a factor of ∼1000× compared to a highly optimized CPU program. Although this speed-up has been demonstrated using a graph-colored architecture on an FPGA, we project another ∼100× improvement with an architecture that utilizes clock-less analog circuits. We believe that this will contribute significantly to the growing efforts to push the boundaries of the simulability of quantum circuits with classical/probabilistic resources and to compare them with NISQ-era quantum computers.</p>
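<p>The complex-energy sampling scheme described above can be sketched on a toy state space: the real part of the energy drives ordinary Boltzmann sampling, and the imaginary part rides along as a phase on each sample, whose average recovers the complex quantity. All functions and values here are illustrative, not the thesis's QMC mapping:</p>

```python
import cmath, math, random

def complex_boltzmann_estimate(E_real, E_imag, states, n_samples=20000, seed=11):
    """Estimate sum_x exp(-E_real(x) - i*E_imag(x)) by Boltzmann-sampling
    states from exp(-E_real)/Z and averaging the leftover complex phase.
    Toy sketch of sampling with complex energies."""
    rng = random.Random(seed)
    weights = [math.exp(-E_real(x)) for x in states]
    total = sum(weights)
    acc = 0 + 0j
    for _ in range(n_samples):
        r, x = rng.random() * total, states[-1]
        for st, w in zip(states, weights):   # draw x with probability w/total
            r -= w
            if r <= 0:
                x = st
                break
        acc += cmath.exp(-1j * E_imag(x))    # treat the sample as complex
    return (acc / n_samples) * total

# Verify against the exact complex sum on a tiny state space
states = [0, 1, 2, 3]
exact = sum(cmath.exp(-0.5 * x - 0.3j * x) for x in states)
est = complex_boltzmann_estimate(lambda x: 0.5 * x, lambda x: 0.3 * x, states)
assert abs(est - exact) < 0.05
```

Only the real part shapes the sampling distribution, so fast Boltzmann samplers such as Ising machines can be reused unchanged; the complex bookkeeping happens on the collected samples.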
|