  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Memcapacitive Reservoir Computing Architectures

Tran, Dat Tien 03 June 2019 (has links)
In this thesis, I propose novel brain-inspired and energy-efficient computing systems. Designing such systems has been a foremost goal of neuromorphic scientists over the last few decades. The results from my research show that it is possible to design such systems with emerging nanoscale memcapacitive devices. Technological development has advanced greatly over the years with the conventional von Neumann architecture. The current architectures and materials, however, will inevitably reach their physical limitations. While conventional computing systems achieve great performance on general tasks, they are often not power-efficient at tasks with large input data, such as natural image recognition and tracking objects in streaming video. Moreover, in the von Neumann architecture, all computations take place in the Central Processing Unit (CPU) and the results are saved in memory. As a result, information is shuffled back and forth between the memory and the CPU for processing, which creates a bottleneck due to the limited bandwidth of the data paths. Adding cache memory and using general-purpose Graphics Processing Units (GPUs) do not completely resolve this bottleneck. Neuromorphic architectures offer an alternative by mimicking the functionality of a biological neural network. In a biological neural network, neurons communicate with each other through a large number of dendrites and synapses. Each neuron (a processing unit) locally processes the information that is stored in its input synapses (memory units). Distributing information to neurons and localizing computation at the synapse level alleviate the bottleneck problem and allow a large amount of data to be processed in parallel. Furthermore, biological neural networks are highly adaptable to complex environments, tolerant of system noise and variations, and capable of processing complex information with extremely low power.
Over the past five decades, researchers have proposed various brain-inspired architectures to perform neuromorphic tasks. IBM's TrueNorth is considered the state-of-the-art brain-inspired architecture. It has 10^6 CMOS neurons with 256 x 256 programmable synapses and consumes about 60 nW/neuron. Even though TrueNorth is power-efficient, its number of neurons and synapses is small compared to the human brain, which has 10^11 neurons, each with, on average, 7,000 synaptic connections to other neurons, and which consumes only about 2.3 nW/neuron. The memristor brought neuromorphic computing one step closer to this target. A memristor is a passive nano-device with memory: its resistance changes with applied voltage, much like the weight of a synapse. Memristors have been the prominent option for designing low-power systems with high area density. In fact, Truong and Min reported that an improved memristor-based crossbar performed a neuromorphic task with a 50% reduction in area and 48% power savings compared to CMOS arrays. However, memristive devices are, by their nature, still resistors, and their power consumption is bounded by their resistance. Here, a memcapacitor offers a promising alternative. My initial work indicated that memcapacitive networks performed complex tasks with performance equivalent to memristive networks but with much higher energy efficiency. A memcapacitor is also a two-terminal nano-device; its capacitance varies with applied voltage, again mirroring the function of a synapse. As a storage device, a memcapacitor consumes no static energy, and its switching energy is small due to its small capacitance (nF to pF range). As a result, networks of memcapacitors have the potential to perform complex tasks with much higher power efficiency.
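The energy argument above can be made concrete with a back-of-envelope comparison. The device values below (1 pF memcapacitance, 1 MΩ memristance, 1 V bias, 1 µs dwell time) are illustrative assumptions, not measured figures from the thesis:

```python
# A memcapacitive synapse dissipates roughly E = 1/2 * C * V^2 per switching
# event and no static power, while a memristive synapse dissipates
# P = V^2 / R continuously while biased. Values below are assumed examples.

def memcapacitor_switch_energy(capacitance_f, voltage_v):
    """Energy (J) to charge a memcapacitor of C farads to V volts."""
    return 0.5 * capacitance_f * voltage_v ** 2

def memristor_static_energy(resistance_ohm, voltage_v, dwell_s):
    """Energy (J) dissipated by a memristor held at V volts for dwell_s seconds."""
    return voltage_v ** 2 / resistance_ohm * dwell_s

# Assumed: 1 pF memcapacitor, 1 Mohm memristor, 1 V bias, 1 us per event.
e_mc = memcapacitor_switch_energy(1e-12, 1.0)   # 0.5 pJ
e_mr = memristor_static_energy(1e6, 1.0, 1e-6)  # 1 pJ
print(f"memcapacitor: {e_mc:.1e} J, memristor: {e_mr:.1e} J")
```

Even in this crude model the memcapacitor's event-driven energy cost undercuts the memristor's resistive dissipation, and the gap widens as dwell times grow.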
Several memcapacitive synaptic models have been proposed as artificial synapses. Pershin and Di Ventra illustrated that a memcapacitor with two diodes has the functionality of a synapse. Flak suggested that a memcapacitor behaves as a synapse when connected to three CMOS switches in a Cellular Nanoscale Network (CNN). Li et al. demonstrated that four identical memcapacitors connected in a bridge network likewise characterize the function of a synapse. Reservoir Computing (RC) has been used to explain higher-order cognitive functions and the interaction of short-term memory with other cognitive processes. Rigotti et al. observed that a dynamic system with short-term memory is essential in defining the internal brain states of a test agent. Although both traditional Recurrent Neural Networks (RNNs) and RC are dynamical systems, RC has a great benefit over RNNs because its learning process is simple: only the output layer is trained. RC harnesses the computing power of a random network of nonlinear devices, such as memcapacitors. Appeltant et al. showed that RC with a simplified reservoir structure is sufficient to perform speech recognition: a few nonlinear units connected in a delayed feedback loop provide enough dynamic response for RC. Fewer units in reservoirs mean fewer connections and inputs, and therefore lower power consumption. As Goudarzi and Teuscher indicated, RC architectures still have inherent challenges that need to be addressed. First, theoretical studies have shown that both regular and random reservoirs achieve similar performance for particular tasks; a random reservoir, however, is more appropriate for unstructured networks of nanoscale devices. What is the role of network structure in RC for solving a task (Q1)? Second, the nonlinear characteristics of nanoscale devices contribute directly to the dynamics of a physical network, which influences the overall performance of an RC system.
To what degree is a mixture of nonlinear devices able to improve the performance of reservoirs (Q2)? Third, modularity, as with CMOS circuits in digital design, is essential to building a complex system from fundamental blocks. Are hierarchical RCs able to solve complex tasks? What network topologies/hierarchies will lead to optimal performance? What is the learning complexity of such a system (Q3)? My research goal is to address the above RC challenges by exploring memcapacitive reservoir architectures. The analysis of memcapacitive monolithic reservoirs addresses both Q1 and Q2 by showing that the Small-World Power-Law (SWPL) structure is an optimal topology for RCs performing time-series prediction (NARMA-10), temporal recognition (Isolated Spoken Digits), and a spatial task (MNIST) with minimal power consumption. On average, the SWPL reservoirs significantly reduce power consumption, by factors of 1.21x, 31x, and 31.2x compared to the regular, random, and small-world reservoirs, respectively. Further analysis of SWPL structures underlines that high locality α and low randomness β decrease the cost to the system in terms of wiring and nanowire dissipated power but do not guarantee optimal reservoir performance. With a genetic algorithm to refine network structure, SWPL reservoirs with optimal network parameters achieve comparable performance with less power: compared to the regular reservoirs, the SWPL reservoirs consume less power by factors of 1.3x, 1.4x, and 1.5x, and compared to the random topology, they save power by factors of 4.8x, 1.6x, and 2.1x, respectively. The simulation results of mixed-device reservoirs (memristive and memcapacitive) provide evidence that combining memristive and memcapacitive devices potentially enhances the nonlinear dynamics of reservoirs in three tasks: NARMA-10, Isolated Spoken Digits, and MNIST.
In addressing the third question (Q3), kernel quality measurements show that hierarchical reservoirs have better dynamic responses than monolithic reservoirs. The improved dynamic response allows hierarchical reservoirs to achieve comparable performance on the Isolated Spoken Digits task with less power consumption, by factors of 1.4x, 8.8x, 9.5x, and 6.3x for the delay-line, delay-line feedback, simple cycle, and random structures, respectively. Similarly, for the CIFAR-10 image task, hierarchical reservoirs attain higher performance with less power, by factors of 5.6x, 4.2x, 4.8x, and 1.9x. The results suggest that hierarchical reservoirs have better dynamics than monolithic reservoirs for solving sufficiently complex tasks. Although the performance of deep mem-device reservoirs is low compared to state-of-the-art deep Echo State Networks, the initial results demonstrate that deep mem-device reservoirs are able to solve a high-dimensional and complex task such as polyphonic music modeling. Their performance can be further improved with better settings of network parameters and architectures. My research illustrates the potential of novel memcapacitive systems with SWPL structures that are brain-inspired and energy-efficient. It offers memcapacitive systems applicable to low-power applications, such as mobile devices and the Internet of Things (IoT), and provides an initial design step toward incorporating nano memcapacitive devices into future applications of nanotechnology.
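The defining RC simplification, training only a linear readout on a fixed random reservoir, can be sketched in software on the NARMA-10 benchmark named above. This is a generic echo state network with assumed hyperparameters (100 units, spectral radius 0.9, ridge regression), not the thesis's memcapacitive device models:

```python
import numpy as np

rng = np.random.default_rng(0)

# NARMA-10 benchmark series with input u drawn uniformly from [0, 0.5].
# A saturating tanh keeps the series bounded (a common stabilization;
# the standard NARMA-10 definition omits it).
T = 2000
u = rng.uniform(0, 0.5, T)
y = np.zeros(T)
for t in range(9, T - 1):
    y[t + 1] = np.tanh(0.3 * y[t] + 0.05 * y[t] * y[t - 9:t + 1].sum()
                       + 1.5 * u[t] * u[t - 9] + 0.1)

# Fixed random reservoir: input and recurrent weights are never trained.
N = 100
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

X = np.zeros((T, N + 1))
X[:, -1] = 1.0                                    # bias column for the readout
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)
    X[t, :N] = x

# Only the linear readout is trained (ridge regression) -- the key
# simplification of RC over full recurrent-network training.
warm, split = 200, 1500
A = X[warm:split]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N + 1), A.T @ y[warm:split])

pred = X[split:] @ W_out
nrmse = np.sqrt(np.mean((pred - y[split:]) ** 2) / np.var(y[split:]))
print(f"test NRMSE: {nrmse:.3f}")
```

Swapping the tanh units for memcapacitive device models, as the thesis does, changes the reservoir dynamics but leaves this training pipeline untouched.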
242

The Method Of Brackets And The Bernoulli Symbol

January 2016 (has links)
Symbolic computation has been widely applied to combinatorics, number theory, and other fields, and many reliable and fast algorithms, with corresponding implementations, are now established. Using the tools of experimental mathematics, especially mathematical software such as Mathematica, we can visualize data and experiment with algorithms and implementations. The work presented here, based on symbolic computation, consists of two parts. The first part introduces a systematic integration method called the Method of Brackets. It consists of a small number of simple and direct rules arising from the Schwinger parametrization of Feynman diagrams; verification of each rule makes the method rigorous. A necessary theorem then shows that different series representations of the integrand, though they lead to different computations, do not affect the result. Examples of application lead to further discussions on analytic continuation, especially of the Pochhammer symbol, on divergent series, and on the connection of the Method of Brackets to the Mellin transform. Finally, a comparison with other integration methods and a Mathematica package manual are presented. The second part provides a symbolic approach to the study of Bernoulli numbers and their generalizations. The Bernoulli symbol $\mathcal{B}$ originates in umbral calculus as a formal approach to Sheffer sequences. Recently, a probabilistic proof put it on rigorous footing as a random variable whose density is a shifted hyperbolic trigonometric function. This approach, combined with general methods for random variables, yields a variety of results on generalized Bernoulli polynomials, multiple zeta functions, and other related topics. / Lin Jiu
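The umbral rule behind the Bernoulli symbol, replacing powers $\mathcal{B}^k$ by Bernoulli numbers $B_k$, yields the classical recurrence $\sum_{k=0}^{m} \binom{m+1}{k} B_k = 0$ for $m \ge 1$. A short sketch with exact rational arithmetic (the abstract works in Mathematica; Python is used here only for illustration):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers B_0..B_n (convention B_1 = -1/2),
    from the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1,
    i.e. the umbral identity (B + 1)^{m+1} = B^{m+1} with B^k -> B_k."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-s, m + 1))
    return B

B = bernoulli(8)
print(B)  # B_1 = -1/2, B_2 = 1/6, odd ones past B_1 vanish, B_8 = -1/30
```

The same symbolic substitution, extended to the random-variable interpretation mentioned above, is what drives the part-two results on generalized Bernoulli polynomials.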
243

Solving a Class of Higher-Order Equations over a Group Structure

Andrei, Ştefan, Chin, Wei Ngan 01 1900 (has links)
In recent years, symbolic and constraint-solving techniques have been making major advances and are continually being deployed in new business and engineering applications. A major push behind this trend has been the development and deployment of sophisticated methods that are able to comprehend and evaluate important sub-classes of symbolic problems (such as those in polynomial, linear inequality and finite domains). However, relatively little has been explored in higher-order domains, such as equations with unknown functions. This paper proposes a new symbolic method for solving a class of higher-order equations with an unknown function over the complex domain. Our method exploits the closure property of group structure (for functions) in order to allow an equivalent system of equations to be expressed and solved in the first-order setting. Our work is an initial step towards the relatively unexplored realm of higher-order constraint-solving, in general; and higher-order equational solving, in particular. We shall provide some theoretical background for the proposed method, and also prototype an implementation under Mathematica. We hope that our foray will help open up more sophisticated applications, as well as encourage work towards new methods for solving higher-order constraints. / Singapore-MIT Alliance (SMA)
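The core idea, turning an equation with an unknown function into first-order constraints by restricting attention to a structured family of functions, can be shown on a toy example. This is a deliberately simplified illustration with an affine ansatz, not the paper's group-theoretic method (the paper targets the complex domain and a general class of higher-order equations):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# Toy higher-order equation: find f with f(f(x)) = 4*x + 3.
# Restricting f to the family of affine maps f(x) = a*x + b (closed under
# composition, like the group structures exploited in the paper) turns the
# unknown-function equation into first-order algebraic constraints.
f = a * x + b
composed = f.subs(x, f)                 # f(f(x)) = a*(a*x + b) + b
eqs = sp.Poly(sp.expand(composed - (4 * x + 3)), x).all_coeffs()
solutions = sp.solve(eqs, [a, b])
print(solutions)                        # two affine solutions
```

Matching coefficients gives a^2 = 4 and a*b + b = 3, so f(x) = 2x + 1 and f(x) = -2x - 3 both satisfy the original equation.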
244

Control Algorithms for Chaotic Systems

Bradley, Elizabeth 01 March 1991 (has links)
This paper presents techniques that actively exploit chaotic behavior to accomplish otherwise-impossible control tasks. The state space is mapped by numerical integration at different system parameter values, and trajectory segments from several of these maps are automatically combined into a path between the desired system states. A fine-grained search and high computational accuracy are required to locate appropriate trajectory segments, piece them together, and cause the system to follow this composite path. The sensitivity of a chaotic system's state-space topology to the parameters of its equations, and of its trajectories to the initial conditions, makes this approach rewarding in spite of its computational demands.
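The search-and-splice step can be illustrated in one dimension with the logistic map. The paper works with continuous-time chaotic systems and numerical integration; this discrete-map sketch only demonstrates mapping the state space at two parameter values and finding a switch point where trajectory segments can be joined:

```python
# Map the state space under two parameter values of the logistic map, then
# search for a pair of nearby states at which a composite path can switch
# from one parameter regime to the other.

def logistic_orbit(r, x0, n):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

orbit_a = logistic_orbit(3.9, 0.1, 1000)   # chaotic regime, r = 3.9
orbit_b = logistic_orbit(3.7, 0.2, 1000)   # chaotic regime, r = 3.7

# Fine-grained search for the closest pair of states across the two maps.
gap, i_a, i_b = min(
    (abs(xa - xb), i, j)
    for i, xa in enumerate(orbit_a)
    for j, xb in enumerate(orbit_b)
)
print(f"switch point: orbit_a[{i_a}] ~ orbit_b[{i_b}], gap = {gap:.2e}")

# Composite path: follow r = 3.9 up to the switch point, then r = 3.7.
composite = orbit_a[:i_a + 1] + orbit_b[i_b + 1:]
```

Because chaotic orbits densely fill regions of state space, even short orbits yield switch points with tiny gaps, which is what makes segment splicing viable despite the brute-force search cost.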
245

Type-omega DPLs

Arkoudas, Konstantine 16 October 2001 (has links)
Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation. Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed as user-defined methods, which are "proof recipes" that take arguments and dynamically perform appropriate deductions. Methods arise naturally via parametric abstraction over type-alpha proofs. In that light, the evaluation of a method call can be viewed as a computation that carries out a type-alpha deduction. The type-alpha proof "unwound" by such a method call is called the "certificate" of the call. Certificates can be checked by exceptionally simple type-alpha interpreters, and thus they are useful whenever we wish to minimize our trusted base. Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities, in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in a style that is disciplined enough to ensure soundness yet fluid enough to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules. 
We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.
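The dynamic-soundness idea, rules that consult a runtime assumption base instead of a static type system, can be caricatured in a few lines. This is a loose toy in Python, not NDL-omega: real DPLs have a bifurcated deduction/computation syntax, and the names below are invented for illustration:

```python
# Toy evaluator in the spirit of type-alpha DPLs: primitive inference rules
# are functions that consult a dynamic assumption base, and a "method" is an
# ordinary function that chains them. Soundness is enforced dynamically: a
# rule raises an error unless its premises are in the assumption base.

class ProofError(Exception):
    pass

def claim(ab, p):
    """Primitive rule: reiterate p if it is in the assumption base ab."""
    if p not in ab:
        raise ProofError(f"not in assumption base: {p}")
    return p

def modus_ponens(ab, p, q):
    """Primitive rule: from p and ('if', p, q) in ab, conclude q."""
    claim(ab, p)
    claim(ab, ('if', p, q))
    return q

def both(ab, p, q):
    """Primitive rule: conjunction introduction."""
    return ('and', claim(ab, p), claim(ab, q))

# A derived rule ("method"): chain two modus ponens steps. The intermediate
# conclusion extends the assumption base before the second step.
def mp_twice(ab, p, q, r):
    ab = ab | {modus_ponens(ab, p, q)}
    return modus_ponens(ab, q, r)

ab = {'A', ('if', 'A', 'B'), ('if', 'B', 'C')}
print(mp_twice(ab, 'A', 'B', 'C'))  # 'C'
```

The trace of primitive-rule applications performed by `mp_twice` plays the role of the "certificate" described above: a detailed deduction that a minimal checker could re-verify.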
246

Computing 3-D Motion in Custom Analog and Digital VLSI

Dron, Lisa 28 November 1994 (has links)
This thesis examines a complete design framework for a real-time, autonomous system with specialized VLSI hardware for computing 3-D camera motion. In the proposed architecture, the first step is to determine point correspondences between two images. Two processors, a CCD array edge detector and a mixed analog/digital binary block correlator, are proposed for this task. The report is divided into three parts. Part I covers the algorithmic analysis; part II describes the design and test of a 32x32 CCD edge detector fabricated through MOSIS; and part III compares the design of the mixed analog/digital correlator to a fully digital implementation.
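The correspondence step can be modeled in software: binary edge maps from two images are compared block-by-block, counting agreeing bits (equivalently, block size minus the XOR population count), which is the operation the binary block correlator implements in hardware. The block size, search range, and synthetic test images below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def best_match(edges_a, edges_b, y, x, block=8, search=4):
    """Find the offset (dy, dx) maximizing bit agreement of a block."""
    ref = edges_a[y:y + block, x:x + block]
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = edges_b[y + dy:y + dy + block, x + dx:x + dx + block]
            score = int(np.sum(ref == cand))   # count of agreeing bits
            if best is None or score > best[0]:
                best = (score, dy, dx)
    return best

# Synthetic binary edge maps: the second image is the first shifted by (2, -1).
edges_a = rng.integers(0, 2, (64, 64), dtype=np.uint8)
edges_b = np.roll(edges_a, shift=(2, -1), axis=(0, 1))

score, dy, dx = best_match(edges_a, edges_b, 20, 20)
print(dy, dx)  # -> 2 -1 (recovers the known shift)
```

Bit-counting correlation of this kind needs only XOR gates and a counter per candidate offset, which is why it maps well onto the compact mixed analog/digital hardware the thesis proposes.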
247

On the Computation of Heterogeneous Agent Models and Its Applications

Feng, Zhigang 24 April 2009 (has links)
This thesis has two parts, each with a different subject. Part 1 studies the macroeconomic implications of alternative health care reforms. Part 2 studies the computation and simulation of dynamic competitive equilibria in models with heterogeneous agents and market frictions. In 2007, 44.5 million non-elderly people in the U.S. did not have health insurance coverage. Empirical studies suggest that there are serious negative consequences associated with uninsurance. Consequently, there is wide agreement that reforming the current health care system is desirable, and several proposals have been discussed among economists and in the political arena. However, little attention has been paid to quantifying the macroeconomic consequences of reforming the health insurance system in the U.S. The objective of this section is to develop a theoretical framework to evaluate a broad set of health care reform plans. I build a model that is capable of reproducing a set of key facts of health expenditure and insurance demand patterns, as well as key macroeconomic conditions of the U.S. during the last decade. Then, I use this model to derive the macroeconomic implications of alternative reforms and alternative ways of funding these reforms. The second part of this thesis studies the computation and simulation of dynamic competitive equilibria in models with heterogeneous agents and market frictions. Models of this type have been of considerable interest in macroeconomics and finance for analyzing the effects of various macroeconomic policies, the evolution of the wealth and income distribution, and the variability of asset prices. However, no reliable algorithm has been available to compute their equilibria. We develop a theoretical framework for the computation and simulation of dynamic competitive market economies with heterogeneous agents and market frictions. We apply these methods to some macroeconomic models and find important improvements over traditional methods.
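A basic ingredient of computing such equilibria is solving each agent's dynamic program. A minimal sketch, value function iteration for a single agent's consumption-savings problem on a grid, is given below; the parameters are toy assumptions, and the thesis's equilibria with market frictions require substantially more machinery (aggregation, market clearing, distributional dynamics):

```python
import numpy as np

beta, r, w = 0.95, 0.03, 1.0            # discount factor, interest rate, wage
a_grid = np.linspace(0.0, 10.0, 200)    # asset grid; borrowing constraint at 0

def utility(c):
    # Log utility; infeasible consumption gets a large penalty.
    return np.where(c > 1e-10, np.log(np.maximum(c, 1e-10)), -1e10)

# consumption[i, j] = consumption when holding a_i today and choosing a_j
# tomorrow: c = (1 + r) a_i + w - a_j.
cash = (1 + r) * a_grid[:, None] + w
U = utility(cash - a_grid[None, :])

# Value function iteration: a contraction with modulus beta, so it converges.
V = np.zeros(len(a_grid))
for it in range(1000):
    V_new = np.max(U + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = a_grid[np.argmax(U + beta * V[None, :], axis=1)]
print(f"converged in {it} iterations; savings chosen at a=0: {policy[0]:.3f}")
```

In a heterogeneous-agent equilibrium, a solver like this runs inside an outer loop that adjusts prices (here `r` and `w`) until markets clear given the implied distribution of agents over the asset grid.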
248

A General Framework for Multiparty Computations

Reistad, Tord Ingolf January 2012 (has links)
Multiparty computation is a computation among multiple players who want to compute a common function of their private inputs. It was first proposed over 20 years ago and has since matured into a well-established science. The goal of this thesis has been to develop efficient protocols for different operations used in multiparty computation and to propose uses for multiparty computation in real-world systems. This thesis therefore gives the reader an overview of multiparty computation from the simplest primitives to the current state of software frameworks for multiparty computation, and provides ideas for future applications. Included in this thesis is a proposed model of multiparty computation based on a model of communication complexity. This model provides a good foundation for the included papers and for measuring the efficiency of multiparty computation protocols. In addition to this model, a more practical approach is also included, which examines different secret sharing schemes and how they are used as building blocks for basic multiparty computation operations. This thesis identifies five basic multiparty computation operations: sharing, recombining, addition, multiplication and negation, and shows how these five operations can be used to create more complex operations. In particular, two operations, “less-than” and “bitwise decomposition”, are examined in detail in the included papers. “Less-than” computes the “<” operator on two secret-shared values with a secret-shared result, and “bitwise decomposition” transforms a secret-shared value into a vector of secret-shared bitwise values. The overall goal of this thesis has been to create efficient methods for multiparty computation so that it might be used for practical applications in the future.
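Several of the basic operations named above can be sketched with additive secret sharing over a prime field: sharing, recombining, addition, and negation are all local operations. Multiplication of two shared values requires interaction (e.g. via Beaver triples) and is omitted. The scheme and field modulus here are illustrative choices, not necessarily those used in the thesis:

```python
import secrets

P = 2**61 - 1   # a Mersenne prime as the field modulus (an assumed choice)

def share(secret, n=3):
    """Sharing: split secret into n additive shares mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def recombine(shares):
    """Recombining: the shares sum to the secret mod P."""
    return sum(shares) % P

def add_shares(xs, ys):
    """Addition: each party adds its own two shares -- no communication."""
    return [(x + y) % P for x, y in zip(xs, ys)]

def negate_shares(xs):
    """Negation: each party negates its share locally."""
    return [(-x) % P for x in xs]

a, b = 1234, 5678
assert recombine(add_shares(share(a), share(b))) == (a + b) % P
assert recombine(negate_shares(share(a))) == (-a) % P
print("share / recombine / add / negate OK")
```

Any n-1 of the shares are uniformly random and reveal nothing about the secret, which is what lets the composite operations (such as “less-than”) run without any party seeing the inputs.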
249

Hermite form computation of matrices of differential polynomials

Kim, Myung Sub 24 August 2009 (has links)
Given a matrix A in F(t)[D;\delta]^{n\times n} over the ring of differential polynomials, we first prove the existence of the Hermite form H of A over this ring. Then we determine degree bounds on U and H such that UA=H. Finally, based on the degree bounds on U and H, we compute the Hermite form H of A by reducing the problem to solving a linear system of equations over F(t). The algorithm requires a polynomial number of operations in F in terms of the input sizes: n, deg_{D} A, and deg_{t} A. When F=Q it requires time polynomial in the bit-length of the rational coefficients as well.
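The differential-polynomial setting F(t)[D;δ] is specialized, but the shape of a Hermite form can be illustrated over the integers: unimodular row operations, built from extended gcds on leading entries, triangularize the matrix while preserving the determinant. This is a simplified sketch of the triangularization step only (it omits the normalization that makes the form canonical), not the thesis's algorithm:

```python
from math import gcd

def _bezout(a, b):
    """Extended Euclid: return (s, t) with s*a + t*b = gcd(a, b)."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_s, old_t

def triangularize(M):
    """Upper-triangularize an integer matrix by unimodular row operations.
    Each 2x2 update matrix [[s, t], [-b//g, a//g]] has determinant 1,
    so the determinant of M is preserved exactly."""
    A = [row[:] for row in M]
    n = len(A)
    for col in range(n):
        for row in range(col + 1, n):
            a, b = A[col][col], A[row][col]
            if b == 0:
                continue
            g = gcd(a, b)
            s, t = _bezout(a, b)
            new_pivot = [s * x + t * y for x, y in zip(A[col], A[row])]
            new_other = [(-b // g) * x + (a // g) * y
                         for x, y in zip(A[col], A[row])]
            A[col], A[row] = new_pivot, new_other
    return A

H = triangularize([[4, 7, 2], [6, 2, 8], [2, 5, 5]])
# H is upper triangular with the same determinant as the input.
```

Over F(t)[D;δ] the same outline applies with degree in D playing the role of absolute value, but, as the abstract notes, the real work lies in the degree bounds on U and H that reduce the computation to linear algebra over F(t).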
