551

Radial-Basis-Function Neural Network Optimization of Microwave Systems

Murphy, Ethan Kane 13 January 2003 (has links)
An original approach to microwave optimization is presented: a neural network procedure combined with the full-wave 3D electromagnetic simulator QuickWave-3D, which implements a conformal FDTD method. The radial-basis-function network is trained on simulated frequency characteristics of S-parameters and the geometric data of the corresponding system. The high accuracy and computational efficiency of the procedure are illustrated for a waveguide bend, a waveguide T-junction with a post, and a slotted waveguide as a radiating element.
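To illustrate the underlying idea (a generic sketch only, not the author's implementation), a radial-basis-function network is a weighted sum of Gaussian basis functions whose output weights can be fitted by least squares; here a synthetic one-parameter response stands in for the simulator output:

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian radial basis functions evaluated for each sample/center pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Toy training data: one geometry parameter -> one frequency-response value
# (a synthetic stand-in for full-wave simulation results)
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(40, 1))
y_train = np.sin(2 * np.pi * X_train[:, 0])

centers = X_train[::4]                      # every 4th sample as an RBF center
Phi = rbf_design(X_train, centers, width=0.15)
# Ridge-regularized least squares for the output-layer weights
w = np.linalg.solve(Phi.T @ Phi + 1e-8 * np.eye(len(centers)), Phi.T @ y_train)

y_pred = Phi @ w
rmse = float(np.sqrt(np.mean((y_pred - y_train) ** 2)))
```

Once trained, evaluating the network is a cheap matrix product, which is what makes such surrogates attractive next to repeated full-wave simulation.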
552

Recurrent computation in brains and machines

Cueva, Christopher January 2019 (has links)
There are more neurons in the human brain than seconds in a lifetime. Given this incredible number, how can we hope to understand the computations carried out by the full ensemble of neural firing patterns? And neural activity is not the only substrate available for computation. The incredible diversity of function found within biological organisms is matched by an equally rich reservoir available for computation. If we are interested in the metamorphosis of a caterpillar into a butterfly, we could explore how DNA expression changes the cell. If we are interested in developing therapeutic drugs, we could explore receptors and ion channels. And if we are interested in how humans and other animals interpret incoming streams of sensory information and process them to make moment-by-moment decisions, then perhaps we can understand much of this behavior by studying the firing rates of neurons. This is the level and approach we take in this thesis. Given this diversity of potential reservoirs for computation, combined with limitations in recording technologies, it can be difficult to conclude satisfactorily that we are studying the full set of neural dynamics involved in a particular task. To overcome this limitation, we augment the study of neural activity with the study of artificial recurrent neural networks (RNNs) trained to mimic the behavior of humans and other animals performing experimental tasks. The inputs to the RNN are time-varying signals representing experimental stimuli, and we adjust the parameters of the RNN so that its time-varying outputs are the desired behavioral responses. In these artificial RNNs we have complete information about the network connectivity and moment-by-moment firing patterns, and we know, by design, that these are the only computational mechanisms being used to solve the tasks.
If the artificial RNN and electrode recordings of real neurons have the same dynamics, we can be more confident that we are studying a sufficient set of biological dynamics involved in the task. This is important if we want to make claims about the types of dynamics required, and observed, for various computational tasks, as is the case in Chapter 2 of this thesis. In Chapter 2 we develop tests to identify several classes of neural dynamics. The specific neural dynamic regimes we focus on are interesting because they each have different computational capabilities, including the ability to keep track of time or to preserve information robustly against the flow of time (working memory). We then apply these tests to electrode recordings from nonhuman primates and to artificial RNNs to understand how neural networks are able to simultaneously keep track of time and remember previous experiences in working memory. To accomplish both computational goals the brain is thought to use distinct neural dynamics: stable neural trajectories can be used as a clock to coordinate cognitive activity, whereas attractor dynamics provide a stable mechanism for memory storage at the cost of losing all timing information. To identify these neural regimes we decode the passage of time from neural data. Additionally, to encode the passage of time, stabilized neural trajectories can be either high-dimensional, as is the case for randomly connected recurrent networks (chaotic reservoir networks), or low-dimensional, as is the case for artificial RNNs trained with backpropagation through time. To disambiguate these models we compute the cumulative dimensionality of the neural trajectory as it evolves over time. Recurrent neural networks can also be used to generate hypotheses about neural computation.
In Chapter 3 we use RNNs to generate hypotheses about the diverse set of neural response properties seen during spatial navigation, in particular grid cells and other spatial correlates, including border cells and band-like cells. The approach we take is to 1) pick a task that requires navigation (spatial or mental), 2) create an RNN to solve the task, and 3) adjust the task or the constraints on the neural network such that grid cells and other spatial response patterns emerge naturally as the network learns to perform the task. We trained RNNs to perform navigation tasks in 2D arenas based on velocity inputs. We find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. Surprisingly, the order in which grid-like and border cells emerge during network training is also consistent with observations from developmental studies. Together, our results suggest that the grid cells, border cells and other spatial correlates observed in the entorhinal cortex of the mammalian brain may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits. All the tasks we have considered so far in this thesis require memory, but in Chapter 4 we explicitly explore the interactions between multiple memories in a recurrent neural network. Memory is the hallmark of recurrent neural networks, in contrast to standard feedforward neural networks, where all signals travel in one direction from inputs to outputs and the network retains no memory of previous experiences. A recurrent neural network, as the name suggests, contains feedback loops, giving the network the computational power of memory.
In this chapter we train an RNN to perform a human psychophysics experiment and find that, in order to reproduce human behavior, noise must be added to the network, causing the RNN to use more stable discrete memories to constrain less stable continuous memories.
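The cumulative-dimensionality analysis described above can be sketched as follows (a minimal illustration on synthetic data, not the thesis's code): for growing time windows of a neural trajectory, count how many principal components are needed to capture most of the variance so far. A trajectory confined to a plane should saturate at two dimensions:

```python
import numpy as np

def cumulative_dimensionality(trajectory, var_threshold=0.9):
    """Number of principal components needed to capture `var_threshold`
    of the variance of the trajectory up to each time point."""
    dims = []
    for t in range(2, trajectory.shape[0] + 1):
        window = trajectory[:t] - trajectory[:t].mean(axis=0)
        s = np.linalg.svd(window, compute_uv=False)   # singular values
        var = s ** 2 / (s ** 2).sum()                 # variance per component
        dims.append(int(np.searchsorted(np.cumsum(var), var_threshold)) + 1)
    return dims

# A circular trajectory confined to a 2D plane inside a 20-unit "population"
rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.standard_normal((20, 2)))[0].T   # two orthonormal directions
t = np.linspace(0.0, 2.0 * np.pi, 100)
traj = np.column_stack([np.cos(t), np.sin(t)]) @ basis
dims = cumulative_dimensionality(traj)
```

A high-dimensional chaotic reservoir trajectory would instead show `dims` growing steadily with the window length, which is what makes the measure a useful discriminator between the two model classes.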
553

Exoplanet transit modelling : three new planet discoveries, and a novel artificial neural network treatment for stellar limb darkening

Hay, Kirstin January 2018 (has links)
The first part of this thesis concerns the discovery and parameter determination of three hot Jupiter planets, first detected by the SuperWASP collaboration; their planetary nature is confirmed by modelling of radial velocity measurements and further ground-based transit lightcurves. WASP-92b, WASP-93b and WASP-118b are all hot Jupiters with short orbital periods of 2.17, 2.73 and 4.05 days respectively. The analysis in this thesis finds WASP-92b to have R_p = 1.461 ± 0.077 R_J and M_p = 0.805 ± 0.068 M_J; WASP-93b to have R_p = 1.597 ± 0.077 R_J and M_p = 1.47 ± 0.029 M_J; and WASP-118b to have R_p = 1.440 ± 0.036 R_J and M_p = 0.514 ± 0.020 M_J. The second part of this thesis presents three novel approaches to modelling the effect of stellar limb darkening when fitting exoplanet transit lightcurves. The first method trains a Gaussian process to interpolate between pre-calculated limb darkening coefficients for the non-linear limb darkening law, using existing knowledge of the stellar atmosphere parameters to constrain the limb darkening coefficients determined for the host star of the transiting exoplanet system. The second method deploys an artificial neural network to model limb darkening without requiring a parametric approximation of the form of the limb profile. The neural network is trained for a specific bandpass directly on the outputs of stellar atmosphere models, allowing the stellar intensity to be predicted at a given position on the stellar surface for given values of T_eff, log g and [Fe/H]. The efficacy of the method is demonstrated by accurately fitting a lightcurve of the transit of Venus, and a single transit lightcurve of TrES-2b. The final limb darkening modelling method proposes an adjustment to the neural network model to account for the fact that the stellar radius is not constant across wavelengths.
The method also allows the full variation in light at the edge of the star to be modelled by not assuming a sharp boundary at the limb.
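The non-linear limb darkening law mentioned above is commonly the four-coefficient form I(mu)/I(1) = 1 - sum_k c_k (1 - mu^(k/2)), k = 1..4, where mu is the cosine of the angle from the disc centre. A minimal sketch, with purely illustrative coefficients not taken from any stellar atmosphere model:

```python
import numpy as np

def nonlinear_limb_darkening(mu, c):
    """Four-coefficient non-linear law: I(mu)/I(1) = 1 - sum_k c_k (1 - mu**(k/2))."""
    mu = np.asarray(mu, dtype=float)
    return 1.0 - sum(ck * (1.0 - mu ** (k / 2.0)) for k, ck in enumerate(c, start=1))

# Illustrative coefficients only (not derived from a stellar model)
c = [0.5, -0.2, 0.4, -0.1]
mu = np.linspace(0.01, 1.0, 50)           # mu = cos(angle from disc centre)
intensity = nonlinear_limb_darkening(mu, c)
```

By construction the profile is normalized to 1 at disc centre (mu = 1) and falls off toward the limb (small mu); the neural network approach described above replaces this fixed parametric form with a learned intensity function.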
554

Reconhecimento automático do locutor com redes neurais pulsadas. / Automatic speaker recognition using pulse coupled neural networks.

Antonio Pedro Timoszczuk 22 March 2004 (has links)
As Redes Neurais Pulsadas são objeto de intensa pesquisa na atualidade. Neste trabalho é avaliado o potencial de aplicação deste paradigma neural, na tarefa de reconhecimento automático do locutor. Após uma revisão dos tópicos considerados importantes para o entendimento do reconhecimento automático do locutor e das redes neurais artificiais, é realizada a implementação e testes do modelo de neurônio com resposta por impulsos. A partir deste modelo é proposta uma nova arquitetura de rede com neurônios pulsados para a implementação de um sistema de reconhecimento automático do locutor. Para a realização dos testes foi utilizada a base de dados Speaker Recognition v1.0, do CSLU – Center for Spoken Language Understanding do Oregon Graduate Institute - E.U.A., contendo frases gravadas a partir de linhas telefônicas digitais. Para a etapa de classificação foi utilizada uma rede neural do tipo perceptron multicamada e os testes foram realizados no modo dependente e independente do texto. A viabilidade das Redes Neurais Pulsadas para o reconhecimento automático do locutor foi constatada, demonstrando que este paradigma neural é promissor para tratar as informações temporais do sinal de voz. / Pulsed Neural Networks have received a lot of attention from researchers. This work aims to verify the capability of this neural paradigm when applied to a speaker recognition task. After a description of the automatic speaker recognition and artificial neural networks fundamentals, a spike response model of neurons is tested. A novel neural network architecture based on this neuron model is proposed and used in a speaker recognition system. Text dependent and independent tests were performed using the Speaker Recognition v1.0 database from CSLU – Center for Spoken Language Understanding of Oregon Graduate Institute - U.S.A. A multilayer perceptron is used as a classifier. 
The Pulsed Neural Networks demonstrated their capability to deal with temporal information, and the use of this neural paradigm in a speaker recognition task is promising.
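As a rough sketch of pulsed-neuron dynamics, here is a simple leaky integrate-and-fire neuron, a simplified stand-in for the spike response model actually implemented in the thesis:

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: integrates its input with a leak and
    emits a binary spike whenever the membrane potential crosses threshold."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)      # leaky membrane integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                 # reset after emitting a spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant suprathreshold drive yields a regular spike train
spikes = lif_spikes(np.full(1000, 1.5))
```

The appeal of this paradigm for speech is visible even in the toy model: information is carried in spike timing, so the temporal structure of the input signal is preserved in the output rather than being collapsed into a single activation value.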
555

Energy Efficient Hardware Design of Neural Networks

January 2018 (has links)
abstract: Hardware implementation of deep neural networks is gaining significant importance. Deep neural networks are mathematical models that use learning algorithms inspired by the brain. Numerous deep learning algorithms, such as multi-layer perceptrons (MLP), have demonstrated human-level recognition accuracy in image and speech classification tasks. These networks are built from multiple layers of processing elements, called neurons, with many connections between them, called synapses. The resulting operations exhibit a high level of parallelism, making the networks computationally and memory intensive. Constrained by computing resources and memory, most applications require a neural network that uses less energy. Energy-efficient implementation of these computationally intense algorithms on neuromorphic hardware demands many architectural optimizations. One such optimization is reducing network size through compression; several studies have investigated compression by introducing element-wise or row-/column-/block-wise sparsity via pruning and regularization. Additionally, numerous recent works have concentrated on reducing the precision of activations and weights, some down to a single bit. However, combining various sparsity structures with binarized or very-low-precision (2-3 bit) neural networks has not been comprehensively explored. Output activations in these deep neural network algorithms are typically non-binary, making it difficult to exploit sparsity. On the other hand, biologically realistic models such as spiking neural networks (SNN) closely mimic the operations in biological nervous systems and explore new avenues for brain-like cognitive computing. These networks deal with binary spikes, and they can exploit input-dependent sparsity or redundancy to dynamically scale the amount of computation, in turn leading to energy-efficient hardware implementation.
This work discusses a configurable spiking neuromorphic architecture that supports multiple hidden layers by exploiting hardware reuse. It also presents design techniques for minimum-area/-energy DNN hardware with minimal degradation in accuracy. Area, performance and energy results of the DNN and SNN hardware are reported for the MNIST dataset. The neuromorphic hardware designed for the SNN algorithm in 28nm CMOS demonstrates high classification accuracy (>98% on MNIST) and low energy (51.4-773 nJ per classification). The optimized DNN hardware designed in 40nm CMOS, combining 8X structured compression and 3-bit weight precision, showed 98.4% accuracy at 33 nJ per classification. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
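The two compression techniques discussed above, magnitude-based pruning and low-bit weight quantization, can be sketched as follows. This is a generic software illustration of the idea, not the thesis's hardware mapping; the 75% sparsity and 3-bit precision are chosen to echo the numbers in the abstract:

```python
import numpy as np

def prune_and_quantize(w, sparsity=0.75, n_bits=3):
    """Magnitude-based pruning followed by symmetric uniform quantization."""
    flat = np.sort(np.abs(w).ravel())
    thresh = flat[int(sparsity * flat.size)]        # magnitude cutoff
    mask = np.abs(w) >= thresh                      # surviving weights
    pruned = w * mask
    levels = 2 ** (n_bits - 1) - 1                  # positive levels: 3 for 3 bits
    scale = np.abs(pruned).max() / levels
    q = np.round(pruned / scale).astype(int)        # integer codes in [-levels, levels]
    return q, scale, mask

rng = np.random.default_rng(2)
w = rng.standard_normal((8, 8))                     # a toy weight matrix
q, scale, mask = prune_and_quantize(w)
```

In hardware, the zeroed weights can be skipped entirely and the surviving ones stored as small integers plus one shared scale factor, which is where the area and energy savings come from.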
556

Iterative cerebellar segmentation using convolutional neural networks

Gerard, Alex Michael 01 December 2018 (has links)
Convolutional neural networks (ConvNets) have quickly become the most widely used tool for image perception and interpretation tasks over the past several years. The single most important resource needed for training a ConvNet that will successfully generalize to unseen examples is an adequately sized labeled dataset. In many interesting medical imaging cases, the necessary size or quality of training data is not suitable for directly training a ConvNet. Furthermore, access to the expertise to manually label such datasets is often infeasible. To address these barriers, we investigate a method for iterative refinement of the ConvNet training. Initially, unlabeled images are attained, minimal labeling is performed, and a model is trained on the sparse manual labels. At the end of each training iteration, full images are predicted, and additional manual labels are identified to improve the training dataset. In this work, we show how to utilize patch-based ConvNets to iteratively build a training dataset for automatically segmenting MRI images of the human cerebellum. We construct this training dataset using a small collection of high-resolution 3D images and transfer the resulting model to a much larger, much lower resolution, collection of images. Both T1-weighted and T2-weighted MRI modalities are utilized to capture the additional features that arise from the differences in contrast between modalities. The objective is to perform tissue-level segmentation, classifying each volumetric pixel (voxel) in an image as white matter, gray matter, or cerebrospinal fluid (CSF). We will present performance results on the lower resolution dataset, and report achieving a 12.7% improvement in accuracy over the existing segmentation method, expectation maximization. 
Further, we will present example segmentations from our iterative approach that demonstrate its ability to detect white matter branching near the outer regions of the anatomy, which agrees with the known biological structure of the cerebellum and has typically eluded traditional segmentation algorithms.
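The iterative refinement loop described above can be sketched as follows, with a toy nearest-centroid "model" and synthetic 2D data standing in for the ConvNet and the MRI volumes (all names and the labeling budget are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for unlabeled voxels: two well-separated "tissue" clusters
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y_true = np.array([0] * 200 + [1] * 200)

labeled = [0, 1, 200, 201]          # sparse initial "manual" labels, two per class

def train_and_predict(X, labeled, y_true):
    # Nearest-centroid "model" trained only on the currently labeled samples
    cents = np.array([X[[i for i in labeled if y_true[i] == c]].mean(0) for c in (0, 1)])
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(1), d

accuracies = []
for _ in range(5):                  # iterative refinement rounds
    pred, d = train_and_predict(X, labeled, y_true)
    accuracies.append((pred == y_true).mean())
    # Flag the most ambiguous predictions and "manually" label 8 of them
    margin = np.abs(d[:, 0] - d[:, 1])
    unlabeled = np.setdiff1d(np.arange(400), labeled)
    labeled.extend(unlabeled[np.argsort(margin[unlabeled])[:8]].tolist())
```

The key property, mirrored here, is that each round spends the scarce manual-labeling effort only where the current model is least certain, so the training set grows where it helps most.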
557

Applications of artificial neural networks in epidemiology : prediction and classification

Black, James Francis Patrick, 1959- January 2002 (has links)
Abstract not available
558

Forecasting the Stock Market : A Neural Network Approach

Andersson, Magnus, Palm, Johan January 2009 (has links)
Forecasting the stock market is a complex task, partly because of the random walk behavior of the stock price series. The task is further complicated by the noise, outliers and missing values that are common in financial time series. Despite this, the subject receives a fair amount of attention, which can probably be attributed to the potential rewards that follow from being able to forecast the stock market.

Since artificial neural networks are capable of exploiting non-linear relations in the data, they are suitable for forecasting the stock market. In addition, they are able to outperform the classic autoregressive linear models.

The objective of this thesis is to investigate whether the stock market can be forecasted using the so-called error correction neural network. This is accomplished through the development of a method aimed at finding the optimum forecast model.

The results of this thesis indicate that the developed method can be applied successfully when forecasting the stock market. All five stocks forecasted in this thesis using models based on the developed method generated positive returns. This suggests that the stock market can be forecasted using neural networks.
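The defining feature of an error correction neural network is that the previous prediction error is fed back into the state update. A structural sketch of that recurrence, with untrained random weights on a synthetic series (a real ECNN would learn the matrices A, B, C, D by backpropagation through time; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "price" series: a random walk with small drift
y = np.cumsum(0.001 + 0.01 * rng.standard_normal(300))

n_state = 8
# Illustrative random weights; training would fit these to the data
A = 0.1 * rng.standard_normal((n_state, n_state))   # state transition
B = 0.1 * rng.standard_normal(n_state)              # external input
C = 0.1 * rng.standard_normal(n_state)              # readout
D = 0.1 * rng.standard_normal(n_state)              # error feedback

s = np.zeros(n_state)
preds = []
for t in range(len(y) - 1):
    y_hat = C @ s                       # one-step-ahead forecast for time t
    err = y[t] - y_hat                  # model error on the current observation
    # The state update feeds the error back in -- the "error correction" term
    s = np.tanh(A @ s + B * y[t] + D * err)
    preds.append(C @ s)                 # forecast for t + 1
```

Feeding the error back gives the network an explicit signal about what it is currently getting wrong, which is the property that motivates the architecture for noisy financial series.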
559

Probabilistic computation in stochastic pulse neuromime networks

Hangartner, Ricky Dale 11 February 1994 (has links)
Graduation date: 1994
