31 |
Neural models of temporal sequences. Taylor, Neill Richard, January 1998 (has links)
No description available.
|
32 |
Analysing rounding data using radial basis function neural networks model. Triastuti Sugiyarto, Endang, January 2007 (has links)
Unspecified counting practices used in data collection may create rounding to certain ‘based’ numbers, which can have serious consequences for data quality. Statistical methods for analysing missing data are commonly used to deal with the issue, but they can actually aggravate the problem. Rounded data are not missing data; rather, some observations were systematically lumped to certain based numbers, reflecting the rounding process or counting behaviour. A new method to analyse rounded data would therefore be academically valuable. The neural network model developed in this study fills the gap and serves the purpose by complementing and enhancing the conventional statistical methods. The model detects, analyses, and quantifies the existence of periodic structures in a data set caused by rounding. The robustness of the model is examined using simulated data sets containing specific rounding numbers at different levels. The model is also subjected to theoretical and numerical tests to confirm its validity before being used in real applications. Overall, the model performs very well, making it suitable for many applications. The assessment results show the importance of using the right best fit in rounding detection. The detection power and cut-off point estimation also depend on the data distribution and the rounding base numbers. Detecting rounding to prime numbers is easier than to non-prime numbers because of the unique characteristics of the former: the bigger the prime, the easier the detection. This is in complete contrast with non-prime numbers, where the bigger the number, the more “factor” numbers there are to distract rounding detection. Using a uniform best fit on uniform data produces the best result and the lowest cut-off point; the consequence of using a wrong best fit on uniform data is, however, also the worst.
The model performs best on data containing 10-40% rounding levels, as lower or higher rounding levels produce an unclear rounding pattern or distort the rounding detection, respectively. The modulo-test method suffers from the same problem. Real-data applications on religious census data confirm the modulo-test finding that the data contain rounding to base 5, while applications on cigarettes-smoked and alcohol-consumed data show good detection results. The cigarette data appear to contain rounding to base 5, while the alcohol consumption data indicate no rounding pattern, which may be attributed to the ways the two data sets were collected. The modelling applications can be extended to other areas in which rounding is common and can have significant consequences. The model can be refined to include a data-smoothing process and to make it user-friendly as an online modelling tool. This will maximize the model's potential use.
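The modulo test referenced in this abstract can be sketched as follows. This is an illustrative reconstruction of the general idea, not the thesis's actual procedure: the sample data, candidate bases, and the factor-of-two detection threshold are all assumptions for demonstration.

```python
def modulo_test(data, base):
    """Fraction of observations that are exact multiples of `base`.

    With no rounding, roughly 1/base of integer observations would be
    multiples of `base` by chance; a markedly larger fraction suggests
    that respondents rounded to that base.
    """
    hits = sum(1 for x in data if x % base == 0)
    return hits / len(data)

def detect_rounding(data, bases=(2, 5, 10, 25)):
    """For each candidate base, return (observed rate, chance rate,
    flagged) -- flagged when the observed rate clearly exceeds chance
    (here, by a factor of two; the threshold is illustrative)."""
    results = {}
    for base in bases:
        observed = modulo_test(data, base)
        expected = 1.0 / base
        results[base] = (observed, expected, observed > 2 * expected)
    return results

# Illustrative data: counts with heavy lumping at multiples of 5.
sample = [5, 10, 10, 15, 20, 20, 20, 7, 10, 25, 30, 12, 20, 10, 5]
```

On this sample, `detect_rounding` flags bases 5 and 10 but not 2 or 25, mirroring the kind of base-5 pattern the abstract reports in the census and cigarette data.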
|
33 |
TRANSIENT REDUCTION ANALYSIS using NEURAL NETWORKS (TRANN). Larson, P. T.; Sheaffer, D. A., October 1992 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / Our telemetry department has an application requiring data categorization/compression of a high-speed transient signal in a short period of time. Categorization of the signal reveals important system performance, and compression is required because of the terminal nature of our telemetry testing. Until recently, hardware for a system of this type did not exist. A new exploratory device from Intel has the capability to meet these extreme
requirements. This integrated circuit is an analog neural network capable of performing 2
billion connections per second. The two main advantages of this chip over traditional
hardware are the obvious computation speed of the device and the ability to compute a
three layer feed-forward neural network classifier. The initial investigative development
work using the Intel chip has been completed. The results from this proof of concept will
show data categorization/compression performed on the neural network integrated circuit
in real time. We will propose a preliminary design for a transient measurement system
employing the Intel integrated circuit.
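The three-layer feed-forward classifier the chip computes can be sketched as a plain forward pass. The layer sizes, weights, and activation functions below are illustrative assumptions; the abstract does not specify the network's dimensions or training.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass of a three-layer (input-hidden-output)
    feed-forward classifier -- the per-sample computation that the
    analog chip performs in hardware."""
    h = np.tanh(W1 @ x + b1)                    # hidden layer
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))    # sigmoid class scores
    return y

rng = np.random.default_rng(0)
n_in, n_hidden, n_classes = 64, 16, 4           # illustrative sizes
W1 = 0.1 * rng.normal(size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_classes, n_hidden))
b2 = np.zeros(n_classes)

# Classify one digitized transient window into a category.
scores = forward(rng.normal(size=n_in), W1, b1, W2, b2)
category = int(np.argmax(scores))
```

The chip's advantage is doing the two matrix-vector products in analog hardware at around 2 billion connections per second, rather than in software as here.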
|
34 |
Speech features and their significance in speaker recognition. Schuy, Lars, January 2002 (has links)
This thesis addresses the significance of speech features within the task of speaker recognition. Motivated by the perception of simple attributes like ‘loud', ‘smooth' and ‘fast', more than 70 new speech features are developed. A set of basic speech features like pitch, loudness and speech speed is combined with these new features into one feature set per utterance. A neural network classifier is used to evaluate the significance of these features by creating a speaker recognition system and analysing the behaviour of successfully trained single-speaker networks. An in-depth analysis of network weights allows a rating of significance and feature contribution. A subjective listening experiment validates and confirms the results of the neural network analysis. The work starts with an extended sentence analysis; ten sentences are uttered by 630 speakers. The extraction of 100 speech features is outlined and a 100-element feature vector for each utterance is derived. Some of the features, and the methods of analysing them, have been used elsewhere, for example pitch, sound pressure level, spectral envelope, loudness, speech speed and glottal-to-noise excitation. However, more than 70 of the 100 features are derivatives of these basic features and have not been described or used before in speaker recognition research, especially not within a rating of feature significance. These derivatives include histograms, 3rd and 4th moments, function approximation, and other statistical analyses applied to the basic features. The first approach to assessing the significance of features and their possible use in a recognition system is based on a probability analysis. The analysis rests on the assumption that, within a speaker's ten utterances, single feature values have a small deviation and cluster around that speaker's mean value.
The presented features indeed cluster into groups and show significant differences between speakers, thus enabling a clear separation of voices when applied to a small database of fewer than 20 speakers. The recognition and assessment of individual feature contribution becomes impossible, however, when the database is extended to 200 speakers. To ensure continuous validation of feature contribution it is necessary to consider a different type of classifier. These limitations are overcome with the introduction of neural network classifiers. A separate network is assigned to each speaker, resulting in the creation of 630 networks. All networks are of standard feed-forward backpropagation type and have a 100-input, 20-hidden-node, one-output architecture. The 6300 available feature vectors are split into a training, validation and test set in the ratio 5:3:2. The networks are initially trained with the same 100-feature input database. Successful training was achieved within 30 to 100 epochs per network. The speaker related to the network with the highest output is declared the speaker represented by the input. The achieved recognition rate for 630 speakers is approximately 49%. A subsequent exclusion of features with minor significance raises the recognition rate to 57%. The analysis of the network weight behaviour reveals two major points. First, a definite ranking order of significance exists between the 100 features. Many of the newly introduced derivatives of pitch, brightness, spectral voice patterns and speech speed contribute intensely to recognition, whereas feature groups related to glottal-to-noise excitation ratio and sound pressure level play a less important role. The significance of features is rated by the training, testing and validation behaviour of the networks under data sets with reduced information content, the post-trained weight distribution, and the standard deviation of the weight distribution within networks. The findings match the results of a subjective listening experiment.
Second, the analysis shows that there are large differences between speakers in the significance of features, i.e. not all speakers use the same feature set to the same extent. The speaker-related networks exhibit key features by which they are uniquely identifiable, and these key features vary from speaker to speaker. Some features, like pitch, are used by all networks; other features, like sound pressure level and glottal-to-noise excitation ratio, are used by only a few distinct classifiers. Again, the findings correspond with the results of a subjective listening experiment. This thesis presents more than 70 new features which have never been used before in speaker recognition. A quantitative ranking order of 100 speech features is introduced. Such a ranking order has not been documented elsewhere and is comparatively new to the area of speaker recognition. This ranking order is further extended to describe the extent to which a classifier uses or omits single features, depending solely on the characteristics of the voice sample. Such a separation has not yet been documented and is a novel contribution. The close correspondence between the subjective listening experiment and the findings of the network classifiers shows that it is plausible to model the behaviour of human speech recognition with an artificial neural network. Again, such a validation is original in the area of speaker recognition.
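The idea of rating feature significance from post-trained weights can be sketched with a simple proxy: sum the absolute input-to-hidden weights feeding out of each input feature. This is an illustrative stand-in for the thesis's fuller analysis (which also uses retraining on reduced data and weight standard deviations); the toy weight matrix is an assumption.

```python
import numpy as np

def feature_significance(W_input_hidden):
    """Rank input features by the total absolute weight each one
    feeds into the hidden layer -- a crude proxy for how much a
    trained network relies on that feature.

    W_input_hidden: array of shape (n_hidden, n_features).
    Returns feature indices, most significant first.
    """
    importance = np.abs(W_input_hidden).sum(axis=0)
    return np.argsort(importance)[::-1]

# Toy example: feature 2 carries far larger weights than the others,
# so it should be ranked first.
W = np.array([[0.1, 0.0, 2.0],
              [0.2, 0.1, 1.5]])
ranking = feature_significance(W)
```

Applied per speaker network, such a ranking would also expose the speaker-to-speaker differences the thesis reports, since each network's weight mass concentrates on that speaker's key features.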
|
35 |
Neural network training for modelling and control. McLoone, Sean Francis, January 1996 (has links)
No description available.
|
36 |
Implementation and capabilities of layered feed-forward networks. Richards, Gareth D., January 1990 (has links)
No description available.
|
37 |
Information recovery from rank-order encoded images. Sen, Basabdatta B., January 2008 (has links)
The time to detection of a visual stimulus by the primate eye is recorded at 100-150 ms. This near-instantaneous recognition occurs in spite of the considerable processing required by the several stages of the visual pathway to recognise and react to a visual scene. How this is achieved is still a matter of speculation. Rank-order codes have been proposed as a means of encoding by the primate eye in the rapid transmission of the initial burst of information from the sensory neurons to the brain. We study the efficiency of rank-order codes in encoding perceptually important information in an image. VanRullen and Thorpe built a model of the ganglion cell layers of the retina to simulate and study the viability of rank-order as a means of encoding by retinal neurons. We validate their model and quantify the information retrieved from rank-order encoded images in terms of the visually important information recovered. Towards this goal, we apply the ‘perceptual information preservation algorithm' proposed by Petrovic and Xydeas, after slight modification. We observe a low information recovery due to losses suffered during the rank-order encoding and decoding processes. We propose to minimise these losses to recover maximum information in minimum time from rank-order encoded images. We first maximise information recovery by using the pseudo-inverse of the filter-bank matrix to minimise losses during rank-order decoding. We then apply the biological principle of lateral inhibition to minimise losses during rank-order encoding; in doing so, we propose the Filter-overlap Correction algorithm. To test the performance of rank-order codes in a biologically realistic model, we design and simulate a model of the foveal-pit ganglion cells of the retina, keeping close to biological parameters. We use this as a rank-order encoder and analyse its performance relative to VanRullen and Thorpe's retinal model.
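The core of rank-order coding can be sketched in a few lines: only the firing order of the neurons is transmitted, and the decoder assigns each successive rank a geometrically decreasing weight. The decay factor and toy response vector below are illustrative assumptions, not parameters from the thesis or the VanRullen-Thorpe model.

```python
import numpy as np

def rank_order_encode(responses):
    """Return neuron indices sorted by decreasing response magnitude.
    Only this firing ORDER is transmitted, not the analogue values."""
    return np.argsort(-np.abs(responses))

def rank_order_decode(order, n, decay=0.8):
    """Reconstruct relative magnitudes from rank alone by assigning
    geometrically decreasing weights to successive ranks -- this
    rank-to-weight mapping is where decoding losses arise."""
    decoded = np.zeros(n)
    decoded[order] = decay ** np.arange(len(order))
    return decoded

# Toy filter responses: neuron 1 is strongest, so it fires first.
responses = np.array([0.1, 0.9, 0.4, 0.7])
order = rank_order_encode(responses)
recon = rank_order_decode(order, len(responses))
```

The reconstruction preserves the ordering of the responses but not their exact values; the pseudo-inverse decoding and Filter-overlap Correction described above are aimed at recovering more than this rank-only sketch can.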
|
38 |
Development of the neuromuscular junction in the embryo of Drosophila melanogaster. Broadie, Kendal Scot, January 1993 (has links)
No description available.
|
39 |
Acetylcholine and GABA receptors in insect CNS. Lummis, S. C. R., January 1984 (has links)
No description available.
|
40 |
Problem solving with optimization networks. Gee, Andrew Howard, January 1993 (has links)
No description available.
|