  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Enabling high-performance, mixed-signal approximate computing

St Amant, Renee Marie 07 July 2014 (has links)
For decades, the semiconductor industry enjoyed exponential improvements in microprocessor power and performance with the device scaling of successive technology generations. Scaling limitations at sub-micron technologies, however, have ceased to provide these historical performance improvements within a limited power budget. While device scaling provides a larger number of transistors per chip, for the same chip area a growing percentage of the chip must be powered off at any given time due to power constraints. As such, the architecture community has focused on energy-efficient designs and is looking to specialized hardware to provide gains in performance. A focus on energy efficiency, along with increasingly less reliable transistors due to device scaling, has led to research in approximate computing, where accuracy is traded for energy efficiency when precise computation is not required. There is a growing body of approximation-tolerant applications that, for example, compute on noisy or incomplete data, such as real-world sensor inputs, or make approximations to decrease the computational load when analyzing cumbersome data sets. These applications span domains such as machine learning, image processing, robotics, and financial analysis, among others.

Since the advent of the modern processor, computing models have largely presumed the attribute of accuracy. A willingness to relax accuracy requirements with the goal of gaining energy efficiency, however, warrants re-investigating the potential of analog computing. Analog hardware offers the opportunity for fast, low-power computation, but it presents challenges in the form of accuracy. Where analog compute blocks have been applied to solve fixed-function problems, general-purpose computing has relied on digital hardware implementations that provide generality and programmability. The work presented in this thesis aims to answer the following questions: Can analog circuits be successfully integrated into general-purpose computing to provide performance and energy savings? And what is required to address the historical analog challenges of inaccuracy, limited programmability, and lack of generality to enable such an approach?

This thesis investigates a neural approach as a means to address those challenges and to enable the use of analog circuits in general-purpose, high-performance computing. The first part investigates the use of analog circuits at the microarchitecture level in the form of an analog neural branch predictor. The task of branch prediction can tolerate imprecision, as roll-back mechanisms correct for branch mispredictions and application-level accuracy remains unaffected. We show that analog circuits enable the implementation of a highly accurate neural-prediction algorithm that is infeasible to implement in the digital domain. The second part presents a neural accelerator that targets approximation-tolerant code. Analog neural acceleration provides an application speedup of 3.3x and energy savings of 12.1x, with a quality loss of less than 10% for all but one approximation-tolerant benchmark. These results show that, using a neural approach, analog circuits can be applied to provide performance and energy efficiency in high-performance, general-purpose computing. / text
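As a concrete illustration of the neural-prediction approach this abstract refers to, the sketch below implements a simple perceptron-style branch predictor, the family of algorithms such neural predictors build on. The history length, training threshold, and table size here are illustrative assumptions rather than parameters from the thesis; in the analog design the abstract describes, the dot-product evaluation would be carried out in the analog domain rather than with the digital arithmetic shown.

```python
# Minimal perceptron-style branch predictor (illustrative sketch, assumed parameters).
# Each branch address indexes a weight vector; the prediction is the sign of the
# dot product between that weight vector and the global branch history.
import random

HISTORY_LEN = 16   # assumed history length
THRESHOLD = 32     # assumed training threshold (keep training while |output| is small)
NUM_ENTRIES = 1024 # assumed number of weight vectors

weights = [[0] * (HISTORY_LEN + 1) for _ in range(NUM_ENTRIES)]
history = [1] * HISTORY_LEN  # +1 = taken, -1 = not taken

def predict(pc):
    w = weights[pc % NUM_ENTRIES]
    # Bias weight plus weighted sum of the branch history.
    output = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return output >= 0, output

def train(pc, taken, output):
    w = weights[pc % NUM_ENTRIES]
    t = 1 if taken else -1
    # Train on a misprediction or while confidence is below the threshold.
    if (output >= 0) != taken or abs(output) <= THRESHOLD:
        w[0] += t
        for i in range(HISTORY_LEN):
            w[i + 1] += t * history[i]
    # Update the global history register.
    history.pop(0)
    history.append(t)

# Toy usage: a single branch that is taken about 90% of the time.
correct = 0
for _ in range(1000):
    taken = random.random() < 0.9
    pred, out = predict(0x400)
    correct += (pred == taken)
    train(0x400, taken, out)
print(f"prediction accuracy: {correct / 1000:.2f}")
```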
222

Temporal pattern identification in a self-organizing neural network with an application to data compression

Goodman, Stephen D. 08 1900 (has links)
No description available.
223

Learning in large-scale spiking neural networks

Bekolay, Trevor January 2011 (has links)
Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn; neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal; reinforcement learning, in which the agent gets sparse feedback; and unsupervised learning, in which the agent has no explicit environmental feedback.

In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network, and we show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks.

In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task.

In unsupervised learning, we show that the BCM rule, a learning rule commonly used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes. The simulations suggest that BCM is a more general rule than STDP.

Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.
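For context on the unsupervised-learning result above, here is a minimal rate-based sketch of the BCM rule the abstract mentions. The learning rate, time constant, weight bounds, and random inputs are illustrative assumptions; the thesis adapts the rule to spiking neurons rather than the rate-based form shown here.

```python
# Minimal rate-based BCM learning rule (illustrative sketch, assumed parameters).
# Weight change depends on presynaptic rate x, postsynaptic rate y, and a sliding
# modification threshold theta that tracks recent postsynaptic activity.
import random

ETA = 0.01         # assumed learning rate
TAU_THETA = 100.0  # assumed time constant (in steps) for the sliding threshold

w = [0.5, 0.5]     # two synapses with assumed initial weights
theta = 0.5        # sliding modification threshold

for step in range(10_000):
    # Assumed input: random presynaptic firing rates.
    x = [random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)]
    # Postsynaptic rate as a weighted sum of inputs (linear rate neuron).
    y = sum(wi * xi for wi, xi in zip(w, x))

    # BCM update: potentiate when y > theta, depress when y < theta.
    w = [wi + ETA * xi * y * (y - theta) for wi, xi in zip(w, x)]
    # Clamp weights to an assumed range to keep the toy simulation bounded.
    w = [min(2.0, max(0.01, wi)) for wi in w]

    # The sliding threshold tracks the recent average of y squared.
    theta += (y * y - theta) / TAU_THETA

print(f"final weights: {w}, theta: {theta:.2f}")
```

Because theta depends only on the neuron's own recent activity, the rule keeps no explicit memory of individual spike times, which is the property the abstract contrasts with STDP.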
224

Analysis of the central pattern generator for peristalsis in a caterpillar

Plavac, Nick. January 2007 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Department of Systems Science and Industrial Engineering, Thomas J. Watson School of Engineering and Applied Science, 2007. / Includes bibliographical references.
225

Ion channel dynamics in interneuron models of the cricket cercal sensory system

Eaton, Carrie Elizabeth Diaz. January 2004 (has links) (PDF)
Thesis (M.A.) in Mathematics--University of Maine, 2004. / Includes vita. Includes bibliographical references (leaves 40-42).
226

Structural and functional characterization of scaffold protein par-3

Wu, Hao. January 2008 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2008. / Includes bibliographical references (leaves 229-257). Also available in electronic version.
227

Analysis of electrocardiograms using artificial neural networks

Hedén, Bo. January 1997 (has links)
Thesis (doctoral)--Lund University, 1997. / Added t.p. with thesis statement inserted.
228

Expression and function of EphA4 and ephrin-As in avian trunk neural crest migration

McLennan, Rebecca, January 2004 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2004. / Typescript. Vita. Includes bibliographical references (leaves 179-221). Also available on the Internet.
229

Expression and function of EphA4 and ephrin-As in avian trunk neural crest migration

McLennan, Rebecca, January 2004 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2004. / Typescript. Vita. Includes bibliographical references (leaves 179-221). Also available on the Internet.
230

Self-organized cortical map formation by guiding connections

Lam, Yiu Man. January 2004 (has links)
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2005. / Includes bibliographical references (leaves 68-71). Also available in electronic version. Access restricted to campus users.
