1

Powering Next-Generation Artificial Intelligence by Designing Three-dimensional High-Performance Neuromorphic Computing System with Memristors

An, Hongyu 17 September 2020 (has links)
Human brains can complete numerous intelligent tasks, such as pattern recognition, reasoning, control, and movement, with remarkable energy efficiency (20 W). In contrast, a typical computer can recognize only about 1,000 different objects while consuming about 250 W of power [1]. This significant performance difference stems from the intrinsically different structures of human brains and digital computers. The latest discoveries in neuroscience indicate that the capabilities of human brains are attributed to three unique features: (1) neural network structure; (2) spike-based signal representation; and (3) synaptic plasticity and associative memory learning [1, 2]. In this dissertation, the next-generation platform of artificial intelligence is explored by utilizing memristors to design a three-dimensional high-performance neuromorphic computing system. The low-variation memristors (fabricated by Virginia Tech), which incorporate heat dissipation layers, significantly improve the learning accuracy of the system. Moreover, three emerging neuromorphic architectures are proposed, showing a path to realizing the next-generation platform of artificial intelligence with self-learning capability and high energy efficiency. Finally, an Associative Memory Learning System is exhibited that reproduces associative memory learning by remembering and correlating two concurrent events (the pronunciation and shape of digits). / Doctor of Philosophy
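The associative memory behavior described above — remembering and correlating two concurrent events — can be illustrated with a plain Hebbian outer-product memory. This is a minimal conceptual sketch, not the dissertation's memristor circuit; the binary codes standing in for a digit's shape and pronunciation are made up for illustration.

```python
import numpy as np

def hebbian_associate(pattern_a, pattern_b):
    """Outer-product Hebbian weights correlating two concurrent patterns."""
    return np.outer(pattern_b, pattern_a)

def recall(weights, cue, threshold=0.5):
    """Recall pattern B given pattern A as the cue."""
    return (weights @ cue >= threshold).astype(int)

# Hypothetical binary codes for the "shape" and "pronunciation" of a digit.
shape = np.array([1, 0, 1, 1, 0])
sound = np.array([0, 1, 1, 0])

W = hebbian_associate(shape, sound)          # learn the pairing once
assert np.array_equal(recall(W, shape, threshold=1), sound)
```

Presenting the shape alone then retrieves the associated sound code, the same one-event-recalls-the-other behavior the abstract describes.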
2

Memcapacitive Reservoir Computing Architectures

Tran, Dat Tien 03 June 2019 (has links)
In this thesis, I propose novel brain-inspired and energy-efficient computing systems. Designing such systems has been the forefront goal of neuromorphic scientists over the last few decades. The results from my research show that it is possible to design such systems with emerging nanoscale memcapacitive devices. Technological development has advanced greatly over the years with the conventional von Neumann architecture. The current architectures and materials, however, will inevitably reach their physical limitations. While conventional computing systems have achieved great performance in general tasks, they are often not power-efficient in performing tasks with large input data, such as natural image recognition and tracking objects in streaming video. Moreover, in the von Neumann architecture, all computations take place in the Central Processing Unit (CPU) and the results are saved in the memory. As a result, information is shuffled back and forth between the memory and the CPU for processing, which creates a bottleneck due to the limited bandwidth of data paths. Adding cache memory and using general-purpose Graphics Processing Units (GPUs) do not completely resolve this bottleneck. Neuromorphic architectures offer an alternative to the conventional architecture by mimicking the functionality of a biological neural network. In a biological neural network, neurons communicate with each other through a large number of dendrites and synapses. Each neuron (a processing unit) locally processes the information that is stored in its input synapses (memory units). Distributing information to neurons and localizing computation at the synapse level alleviate the bottleneck problem and allow for the processing of a large amount of data in parallel. Furthermore, biological neural networks are highly adaptable to complex environments, tolerant of system noise and variations, and capable of processing complex information with extremely low power.
Over the past five decades, researchers have proposed various brain-inspired architectures to perform neuromorphic tasks. IBM's TrueNorth is considered the state-of-the-art brain-inspired architecture. It has 10^6 CMOS neurons with 256 x 256 programmable synapses and consumes about 60 nW/neuron. Even though TrueNorth is power-efficient, its number of neurons and synapses is small compared to a human brain, which has 10^11 neurons, each with, on average, 7,000 synaptic connections to other neurons, while consuming only 2.3 nW/neuron. The memristor brought neuromorphic computing one step closer to the human-brain target. A memristor is a passive nano-device with memory: its resistance changes with applied voltage, similar to the function of a synapse. Memristors have been the prominent option for designing low-power systems with high area density. In fact, Truong and Min reported that an improved memristor-based crossbar performed a neuromorphic task with a 50% reduction in area and 48% power savings compared to CMOS arrays. However, memristive devices are, by their nature, still resistors, and their power consumption is bounded by their resistance. Here, a memcapacitor offers a promising alternative. My initial work indicated that memcapacitive networks performed complex tasks with performance equivalent to memristive networks, but with much higher energy efficiency. A memcapacitor is also a two-terminal nano-device whose capacitance varies with applied voltage, again similar to the function of a synapse. The memcapacitor is a storage device and does not consume static energy, and its switching energy is small due to its small capacitance (nF to pF range). As a result, networks of memcapacitors have the potential to perform complex tasks with much higher power efficiency.
Several memcapacitive synaptic models have been proposed as artificial synapses. Pershin and Di Ventra illustrated that a memcapacitor with two diodes has the functionality of a synapse. Flak suggested that a memcapacitor behaves as a synapse when it is connected with three CMOS switches in a Cellular Nanoscale Network (CNN). Li et al. demonstrated that four identical memcapacitors connected in a bridge network also characterize the function of a synapse. Reservoir Computing (RC) has been used to explain higher-order cognitive functions and the interaction of short-term memory with other cognitive processes. Rigotti et al. observed that a dynamic system with short-term memory is essential in defining the internal brain states of a test agent. Although both traditional Recurrent Neural Networks (RNNs) and RC are dynamical systems, RC has a great benefit over RNNs: its learning process is simple, since only the output layer is trained. RC harnesses the computing nature of a random network of nonlinear devices, such as memcapacitors. Appeltant et al. showed that RC with a simplified reservoir structure is sufficient to perform speech recognition; a few nonlinear units connected in a delay feedback loop provide enough dynamic response for RC, and fewer units in a reservoir mean fewer connections and inputs, and therefore lower power consumption. As Goudarzi and Teuscher indicated, RC architectures still have inherent challenges that need to be addressed. First, theoretical studies have shown that both regular and random reservoirs achieve similar performance for particular tasks; a random reservoir, however, is more appropriate for unstructured networks of nanoscale devices. What is the role of network structure in RC for solving a task (Q1)? Secondly, the nonlinear characteristics of nanoscale devices contribute directly to the dynamics of a physical network, which influences the overall performance of an RC system.
To what degree is a mixture of nonlinear devices able to improve the performance of reservoirs (Q2)? Thirdly, modularity, as with CMOS circuits in a digital building block, is an essential key to building a complex system from fundamental blocks. Are hierarchical RCs able to solve complex tasks? What network topologies/hierarchies will lead to optimal performance? What is the learning complexity of such a system (Q3)? My research goal is to address the above RC challenges by exploring memcapacitive reservoir architectures. The analysis of memcapacitive monolithic reservoirs addresses both questions Q1 and Q2 by showing that the Small-World Power-Law (SWPL) structure is an optimal topology for RCs performing time-series prediction (NARMA-10), temporal recognition (Isolated Spoken Digits), and a spatial task (MNIST) with minimal power consumption. On average, the SWPL reservoirs significantly reduce power consumption, by factors of 1.21x, 31x, and 31.2x compared to the regular, random, and small-world reservoirs, respectively. Further analysis of SWPL structures underlines that high locality α and low randomness β decrease the cost to the systems in terms of wiring and nanowire dissipated power, but do not guarantee optimal reservoir performance. With a genetic algorithm to refine the network structure, SWPL reservoirs with optimal network parameters achieve comparable performance with less power. Compared to the regular reservoirs, the SWPL reservoirs consume less power by factors of 1.3x, 1.4x, and 1.5x; similarly, compared to the random topology, they save power by factors of 4.8x, 1.6x, and 2.1x, respectively. The simulation results of mixed-device reservoirs (memristive and memcapacitive) provide evidence that combining memristive and memcapacitive devices potentially enhances the nonlinear dynamics of reservoirs on three tasks: NARMA-10, Isolated Spoken Digits, and MNIST.
In addressing the third question (Q3), kernel quality measurements show that hierarchical reservoirs have better dynamic responses than monolithic reservoirs. The improved dynamic responses allow hierarchical reservoirs to achieve comparable performance on the Isolated Spoken Digits task with less power consumption, by factors of 1.4x, 8.8x, 9.5x, and 6.3x for delay-line, delay-line feedback, simple cycle, and random structures, respectively. Similarly, on the CIFAR-10 image task, hierarchical reservoirs attain higher performance with less power, by factors of 5.6x, 4.2x, 4.8x, and 1.9x. The results suggest that hierarchical reservoirs have better dynamics than monolithic reservoirs for solving sufficiently complex tasks. Although the performance of deep mem-device reservoirs is low compared to state-of-the-art deep Echo State Networks, the initial results demonstrate that deep mem-device reservoirs are able to solve a high-dimensional and complex task such as polyphonic music modeling. The performance of deep mem-device reservoirs can be further improved with better settings of network parameters and architectures. My research illustrates the potential of novel memcapacitive systems with SWPL structures that are brain-inspired and energy-efficient in performing tasks. It offers novel memcapacitive systems applicable to low-power applications, such as mobile devices and the Internet of Things (IoT), and provides an initial design step toward incorporating nano memcapacitive devices into future applications of nanotechnology.
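The core RC property this abstract relies on — a fixed random reservoir whose output layer alone is trained — can be sketched in a few lines. This is a generic echo-state-style simulation on a toy one-step prediction task, not the thesis's memcapacitive SWPL reservoirs; all sizes and scalings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # reservoir size (illustrative)
W_in = rng.uniform(-0.5, 0.5, (N, 1))          # fixed, untrained input weights
W = rng.uniform(-0.5, 0.5, (N, N))             # fixed, untrained recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence; the reservoir is never trained."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict sin(t) one step ahead.
t = np.linspace(0, 20, 500)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
# Only this readout is learned, via ridge regression on the collected states.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ W_out
print("one-step MSE:", float(np.mean((pred - y) ** 2)))
```

The simplicity of this training step, a single linear solve, is exactly the advantage over full RNN training that the abstract cites.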
3

A Non-destructive Crossbar Architecture of Multi-Level Memory-Based Resistor

Sahebkarkhorasani, Seyedmorteza 01 May 2015 (has links)
Nowadays, researchers are trying to shrink the memory cell in order to increase the capacity of the memory system and reduce hardware costs. In recent years, there has been a revolution in electronics, using fundamentals of physics to build a new memory for computer applications that increases capacity and decreases power consumption. Increasing the capacity of the memory causes a growth in chip area. From 1971 to 2012, the semiconductor manufacturing process improved from 6 µm to 22 nm. In May 2008, S. Williams stated that "it is time to stop shrinking". In his paper, he declared that the process of shrinking memory elements has recently become very slow and that it is time to use an alternative in order to create memory elements [9]. In this project, we present a new design of a memory array using the new element named the memristor [3]. The memristor is a two-terminal passive electrical element that relates charge and magnetic flux to each other. The device remained largely unexplored after 1971, when it was postulated by Chua and introduced as the fourth fundamental passive element alongside the capacitor, inductor, and resistor [3]. The memristor has a dynamic resistance and can retain its previous value even after the power supply is disconnected. Due to this interesting behavior, the memristor could be a good replacement for all Non-Volatile Memories (NVMs) in the near future. Combining this newly introduced element with the nanowire crossbar architecture yields a powerful structure called the memristor crossbar. Some frameworks utilizing memristor crossbar arrays have recently been introduced in the literature, but there are many challenges to implementing them due to fabrication and device limitations. In this work, we propose a simple design of a memristor crossbar array architecture which uses input feedback in order to preserve its data after each read operation.
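The device behavior described above — a dynamic resistance that is retained after the power supply is disconnected — can be sketched with the well-known linear ion-drift memristor model. The parameter values below are illustrative textbook-style numbers, not figures from this thesis.

```python
import numpy as np

# Linear ion-drift memristor model in the spirit of Strukov et al. (2008);
# all parameter values are illustrative.
R_ON, R_OFF, D, MU = 100.0, 16e3, 10e-9, 1e-14  # ohm, ohm, m, m^2/(s*V)

def simulate(v, dt, w0=0.5):
    """Integrate the internal state w under a voltage waveform v;
    returns the memristance history."""
    w = w0 * D
    history = []
    for v_t in v:
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance
        i = v_t / m
        w = min(max(w + MU * R_ON / D * i * dt, 0.0), D)  # bounded state drift
        history.append(m)
    return np.array(history)

dt = 1e-3
m_sine = simulate(np.sin(2 * np.pi * np.arange(0, 2, dt)), dt)  # biased
m_hold = simulate(np.zeros(500), dt)                            # no bias

# The state drifts under bias but is retained with the supply disconnected.
assert m_sine.max() > m_sine.min()
assert np.allclose(m_hold, m_hold[0])
```

The zero-bias run is the non-volatility the abstract highlights: with no applied voltage, no current flows and the resistance state is preserved.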
4

Study, fabrication and characterization of electro-grafted organic memristors as nanosynapses for neuro-inspired circuits

Cabaret, Théo 09 September 2014 (has links)
This PhD project takes place in the context of the study of neuromorphic circuits using memristive devices as synapses. The main objective is to evaluate the merits of a new class of organic memories developed at LICSEN (CEA Saclay/IRAMIS) and, in particular, their compatibility with the learning rules and the implementation strategy proposed by the NanoArchi group at IEF (Univ. Paris-Sud, Orsay). These memristors are based on the electro-grafting of thin films of redox organic complexes to form robust and scalable metal/molecules/metal junctions. In addition to memristor fabrication, this work includes detailed electrical characterization (speed, non-volatility, scalability, robustness, etc.) aiming, on the one hand, to study the switching mechanisms in these new organic memristive materials and, on the other hand, to evaluate their potential as synapses. This work also presents a preparatory study for a mixed neural-network circuit demonstrator combining nano-memristors and conventional electronics (programmability of the devices in pulsed mode, fabrication of assemblies of devices, variability). Moreover, the demonstration of the compatibility of these memristors with the STDP (Spike Timing Dependent Plasticity) property, and of the learning of a "conditioned reflex", opens the way to future unsupervised learning studies.
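The STDP property demonstrated for these devices has a standard pair-based form: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one, and weakened otherwise. A minimal sketch using generic textbook constants, not measured device values:

```python
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Constants are illustrative, not fitted to the electro-grafted devices."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)   # pre before post: potentiation
    return -a_minus * math.exp(dt_ms / tau)      # post before pre: depression

# Causal pairings strengthen the synapse, anti-causal ones weaken it,
# and the effect decays as the spikes move further apart in time.
assert stdp_dw(10) > 0
assert stdp_dw(-10) < 0
```

A conditioned reflex emerges when repeated causal pairings of a neutral input with a rewarded one potentiate the pathway until the neutral input alone triggers the response.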
5

Neuro-inspired architectures for nano-circuits

Chabi, Djaafar 09 March 2012 (has links)
Novel manufacturing techniques, such as nanoscale self-assembly or nanoimprint lithography, allow a cost-efficient way to fabricate high-density crossbar matrices (up to 10^12 nanodevices/cm²). However, these technologies are expected to be accompanied by a significant increase in defects and in the dispersion of device characteristics. Programming these crossbars therefore requires new computational techniques that tolerate such variations. In this context, approaches based on neural networks are promising for configuring nanodevices, since they provide a natural way to tolerate low yields and device variations. The main objective of this thesis is to explore such a neural-network approach, examining its efficiency and reliability, using a memristor crossbar architecture, or neurocrossbar (NC). We introduce algorithms for learning logic functions on the NC, and the tolerance of the NC to static defects (stuck devices) and to dispersion in device properties is discussed. Probabilistic analytical models for predicting the convergence of the NC are proposed and compared with Monte Carlo simulations; they take into account the impact of each type of defect and dispersion. These analytical models can be extrapolated to study very large NCs. Finally, the effectiveness of the proposed methods is experimentally demonstrated through the learning of logic functions by a real NC made of Optically Gated Carbon Nanotube Field-Effect Transistors (OG-CNTFETs).
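Learning a logic function on a neurocrossbar amounts to iteratively adjusting device conductances until the thresholded output matches a truth table. The following is a hypothetical software analogy using a simple perceptron rule; it is not the thesis's actual learning algorithm or device model.

```python
import numpy as np

def train_logic(truth_table, epochs=50, lr=0.1):
    """Perceptron-style learning of a linearly separable logic function,
    loosely analogous to tuning memristor conductances on one crossbar row."""
    w = np.zeros(3)                        # two inputs + bias term
    for _ in range(epochs):
        for x1, x2, target in truth_table:
            x = np.array([x1, x2, 1.0])
            y = 1 if w @ x > 0 else 0      # thresholded output neuron
            w += lr * (target - y) * x     # weight (conductance) update
    return w

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_logic(AND)
assert all((1 if w @ np.array([a, b, 1.0]) > 0 else 0) == t for a, b, t in AND)
```

Because the rule only nudges weights in the direction of the error, it keeps working when individual weights are noisy or a few are stuck, which is the fault-tolerance argument the abstract makes for neural approaches.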
6

Complete Design Methodology of a Massively Parallel and Pipelined Memristive Stateful IMPLY Logic Based Reconfigurable Architecture

Rahman, Kamela Choudhury 06 June 2016 (has links)
Continued dimensional scaling of CMOS processes is approaching fundamental limits; therefore, alternative new devices and microarchitectures are being explored to address the growing need for area scaling and performance gains. New nanotechnologies, such as the memristor, have emerged. Memristors can be used to perform stateful logic with nanowire crossbars, which allows for the implementation of very large binary networks that can be easily reconfigured. This research involves the design of a memristor-based massively parallel datapath for various applications, specifically SIMD (Single Instruction Multiple Data)-like architectures and parallel pipelines. The dissertation develops a new system-level model of massively parallel memristor-CMOS hybrid datapath architectures, as well as a complete methodology to design them. One innovation of the proposed approach is that the datapath design is based on space-time diagrams that use stateful IMPLY gates built from binary memristors. This notation aids in circuit minimization during logic design, in calculating delay and memristor costs, and in sneak-path avoidance. Another innovation of the proposed methodology is a general new architecture model, MsFSMD (Memristive stateful Finite State Machine with Datapath), that has two interacting subsystems: (1) a controller composed of a memristive RAM, MsRAM, acting as a pulse generator, along with a finite state machine realized in CMOS, a CMOS counter, CMOS multiplexers, and CMOS decoders; (2) a massively parallel, pipelined datapath realized with a new variant of a CMOL-like nanowire crossbar array, MsCMOL (Memristive stateful CMOL), with binary stateful memristor-based IMPLY gates. The next contribution of the dissertation is a new type of FPGA. In contrast to the previous memristor-based FPGA (mrFPGA), the proposed MsFPGA (Memristive Stateful Logic Field Programmable Gate Array) uses memristors for memory, for programming connections, and for combinational logic implementation.
With its regular structure of square abutting blocks of memristive nanowire crossbars and their short connections, the proposed architecture is highly reconfigurable. As an example of using the proposed new FPGA to realize biologically inspired systems, the detailed design of a pipelined Euclidean distance processor is presented and its various applications are discussed. Euclidean distance calculation is widely used by many neural network and associative-memory-based algorithms.
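Stateful IMPLY logic, the primitive behind this datapath, computes material implication in place: the result overwrites the state of the second memristor. A Boolean sketch showing how NAND, a universal gate, follows from two IMPLY steps and a work device initialized to 0 (a standard construction, shown here abstractly rather than at the circuit level):

```python
def imply(p, q):
    """Material implication p IMP q = (NOT p) OR q — the operation a
    stateful memristor pair computes in one step (result overwrites q)."""
    return int((not p) or q)

def nand(p, q):
    """NAND via two IMPLY steps on a work memristor s initialized to 0."""
    s = 0
    s = imply(p, s)   # s becomes NOT p
    s = imply(q, s)   # s becomes (NOT q) OR (NOT p) = NAND(p, q)
    return s

# Exhaustive check over all four input combinations.
for p in (0, 1):
    for q in (0, 1):
        assert nand(p, q) == int(not (p and q))
```

Since NAND is functionally complete, any combinational logic the MsFPGA implements can in principle be compiled down to sequences of such IMPLY steps.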
7

The Design of a Simple, Spiking Sparse Coding Algorithm for Memristive Hardware

Woods, Walt 11 March 2016 (has links)
Calculating a sparse code for signals with high dimensionality, such as high-resolution images, takes substantial time on a traditional computer architecture. Memristors present the opportunity to combine storage and computing elements into a single, compact device, drastically reducing the area required to perform these calculations. This work focused on the analysis of two existing sparse coding architectures, one of which utilizes memristors, as well as the design of a new, third architecture that employs a memristive crossbar. These architectures implement either a non-spiking or spiking variety of sparse coding based on the Locally Competitive Algorithm (LCA) introduced by Rozell et al. in 2008. Each architecture receives an arbitrary number of input lines and drives an arbitrary number of output lines. Training of the dictionary used for the sparse code was implemented through external control signals that approximate Oja's rule. The resulting designs were capable of representing input in real time: no resets would be needed between frames of a video, for instance, though some settling time would be needed. The proposed spiking architecture is novel, emphasizing simplicity to achieve lower power than existing designs. The architectures presented were tested for their ability to encode and reconstruct 8 x 8 patches of natural images. The proposed network reconstructed patches with a normalized root-mean-square error of 0.13, while a more complicated CMOS-only approach yielded 0.095 and a non-spiking approach yielded 0.074. Allowing several outputs to compete for representation of the input was shown to improve reconstruction quality and preserve more subtle components in the final encoding; the proposed algorithm lacks this feature. Steps to address this were proposed for future work: scaling input spikes according to the current expected residual, without adding much complexity.
The architectures were also tested with the MNIST digit database, passing a sparse code onto a basic classifier. The proposed architecture scored 81% on this test, a CMOS-only spiking variant scored 76%, and the non-spiking algorithm scored 85%. Power calculations were made for each design and compared against other publications. The overall findings showed great promise for spiking memristor-based ASICs, consuming only 28% of the power used by non-spiking architectures and 6.6% as much power as a CMOS-only spiking architecture on this task. The spike-based nature of the novel design was also parameterized into several intuitive parameters that could be adjusted to prefer either performance or power efficiency. The design and analysis of architectures for sparse coding should greatly reduce the amount of future work needed to implement an end-to-end classification pipeline for images or other signal data. When lower power is a primary concern, the proposed architecture should be considered as it surpassed other published algorithms. These pipelines could be used to provide low-power visual assistance, highlighting objects within high-definition video frames in real-time. The technology could also be used to help self-driving cars identify hazards more quickly and efficiently.
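The LCA that these architectures implement lets output units compete through lateral inhibition until only a few remain active. A compact software sketch of the non-spiking dynamics under a hard threshold, with a random toy dictionary and illustrative parameters (not the thesis's hardware or test setup):

```python
import numpy as np

def lca(x, phi, lam=0.1, tau=10.0, steps=200):
    """Locally Competitive Algorithm (after Rozell et al., 2008): units
    compete via lateral inhibition to yield a sparse code a with x ≈ phi @ a."""
    n = phi.shape[1]
    gram = phi.T @ phi - np.eye(n)     # lateral inhibition between units
    drive = phi.T @ x                  # feed-forward drive
    u = np.zeros(n)                    # membrane potentials
    for _ in range(steps):
        a = np.where(np.abs(u) > lam, u, 0.0)   # thresholded activations
        u += (drive - u - gram @ a) / tau       # leaky competitive dynamics
    return np.where(np.abs(u) > lam, u, 0.0)

rng = np.random.default_rng(1)
phi = rng.normal(size=(16, 32))
phi /= np.linalg.norm(phi, axis=0)     # unit-norm dictionary columns
a_true = np.zeros(32)
a_true[[3, 17]] = 1.0                  # a 2-sparse ground-truth code
x = phi @ a_true
a = lca(x, phi)
print("active units:", np.count_nonzero(a))
```

The inhibition term is what implements the competition for representation discussed above: a unit that captures part of the input suppresses overlapping units, driving the code toward sparsity.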
8

Understanding Security Threats of Emerging Computing Architectures and Mitigating Performance Bottlenecks of On-Chip Interconnects in Manycore NTC System

Rajamanikkam, Chidhambaranathan 01 May 2019 (has links)
Emerging computing architectures, such as neuromorphic computing and third-party intellectual property (3PIP) cores, have attracted significant attention in the recent past. Neuromorphic computing introduces an unorthodox, non-von Neumann architecture that mimics the abstract behavior of neuron activity in the human brain. Such architectures can execute complex applications, such as image processing and object recognition, more efficiently in terms of performance and energy than traditional microprocessors. However, the hardware security aspects of neuromorphic computing remain largely unexamined at this nascent stage. 3PIP cores, on the other hand, may contain covertly inserted malicious functionality that can inflict a range of harms at the system/application level. This dissertation examines the impact of various threat models that emerge from neuromorphic architectures and 3PIP cores. Near-Threshold Computing (NTC) serves as an energy-efficient paradigm by aggressively operating all computing resources at a supply voltage close to the threshold voltage, at the cost of performance. The system is therefore scaled out to a many-core NTC system to reclaim the lost performance. However, interconnect performance in a many-core NTC system poses a significant bottleneck that hinders overall system performance. This dissertation analyzes the interconnect performance and, further, proposes a novel technique to boost the interconnect performance of many-core NTC systems.
9

Analysis of quantum transport mechanisms in nanomaterial-based memristive devices

Samardžić, Nataša 24 December 2016 (has links)
Research in this doctoral dissertation aims to contribute to a deeper understanding of the physical mechanisms that drive resistive switching in memristors, as the literature still leaves open questions about the key process that induces the memristive effect in a material. Titanium dioxide was chosen as the functional material for the valence-change memories on which the memristive effect is investigated, since it has already proven to be a good candidate for resistive-switching memories. Experimental results show a conductance quantization effect in TiO2-based memristors, which requires the development and application of a ballistic transport model to describe the electrical characteristics of the devices.
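Conductance quantization in a ballistic point contact means the device conductance steps in integer multiples of the conductance quantum G0 = 2e²/h. Its value follows directly from the CODATA constants:

```python
# Conductance quantum G0 = 2e^2/h, the step size of ballistic conductance.
E = 1.602176634e-19   # elementary charge, C (exact, 2019 SI)
H = 6.62607015e-34    # Planck constant, J*s (exact, 2019 SI)

G0 = 2 * E**2 / H
print(round(G0 * 1e6, 2))  # ≈ 77.48 microsiemens
```

In an I-V measurement of such a device, observed conductance plateaus near n·G0 (about n × 77.48 µS, i.e., resistances near 12.9 kΩ/n) are the signature that motivates the ballistic transport model.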
10

Brain-inspired Stochastic Models and Implementations

Al-Shedivat, Maruan 12 May 2015 (has links)
One of the approaches to building artificial intelligence (AI) is to decipher the principles of brain function and to employ similar mechanisms for solving cognitive tasks, such as visual perception or natural language understanding, using machines. The recent breakthrough, named deep learning, demonstrated that large multi-layer networks of artificial neural-like computing units attain remarkable performance on some of these tasks. Nevertheless, such artificial networks remain only loosely inspired by the brain, whose rich structures and mechanisms may further suggest new algorithms or even new paradigms of computation. In this thesis, we explore brain-inspired probabilistic mechanisms, such as neural and synaptic stochasticity, in the context of generative models. The two questions we ask here are: (i) what kind of models can describe a neural learning system built of stochastic components? and (ii) how can we implement such systems efficiently? To give specific answers, we consider two well-known models and the corresponding neural architectures: the naive Bayes model implemented with a winner-take-all spiking neural network, and the Boltzmann machine implemented in a spiking or non-spiking fashion. We propose and analyze an efficient neuromorphic implementation of the stochastic neural firing mechanism and study the effects of synaptic unreliability on learning generative energy-based models implemented with neural networks.
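The stochastic-unit models studied here can be illustrated with a restricted Boltzmann machine sampled by block Gibbs updates, where every unit fires probabilistically from its sigmoid activation. A generic sketch with small random weights, not a trained model or the thesis's neuromorphic implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_step(v, W, b, c):
    """One block-Gibbs step in a restricted Boltzmann machine with
    stochastic binary units: each unit samples its state from a sigmoid
    probability, the kind of neural stochasticity mapped onto spiking hardware."""
    p_h = 1 / (1 + np.exp(-(v @ W + c)))           # P(h_j = 1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = 1 / (1 + np.exp(-(h @ W.T + b)))         # P(v_i = 1 | h)
    v = (rng.random(p_v.shape) < p_v).astype(float)
    return v, h

W = rng.normal(0, 0.1, (6, 4))          # visible-hidden weights (illustrative)
b, c = np.zeros(6), np.zeros(4)         # visible and hidden biases
v = rng.integers(0, 2, 6).astype(float)
for _ in range(10):                     # run the Markov chain for a few steps
    v, h = gibbs_step(v, W, b, c)
print("sampled visible state:", v)
```

Replacing the pseudo-random draws with unreliable synapses or stochastic neuron firing is, in spirit, the substitution the thesis analyzes: the hardware noise itself supplies the randomness the sampler needs.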
