1

A Novel Hybrid Learning Algorithm For Artificial Neural Networks

Ghosh, Ranadhir January 2003 (has links)
The last few decades have witnessed the use of artificial neural networks (ANNs) in many real-world applications, and ANNs have offered an attractive paradigm for a broad range of adaptive complex systems. In recent years ANNs have enjoyed a great deal of success and have proven useful in a wide variety of pattern recognition and feature extraction tasks; examples include optical character recognition, speech recognition and adaptive control, to name a few. To keep pace with the huge demand in diversified application areas, researchers have proposed many different kinds of ANN architecture and learning type to meet varying needs. This thesis proposes a novel hybrid learning approach for training a feed-forward ANN. The approach combines evolutionary algorithms with matrix solution methods, such as singular value decomposition and Gram-Schmidt orthogonalisation, to achieve optimum weights for the hidden and output layers: an evolutionary algorithm is applied to the first layer and a least squares (LS) method to the second layer of the ANN. The methodology also finds the optimum number of hidden neurons using a hierarchical combination structure for weights and architecture. A learning algorithm has many facets that can make it suitable for a particular application area; often there are trade-offs between classification accuracy and time complexity, and the problem of memory complexity remains. This research explores all these facets of the proposed algorithm in terms of classification accuracy, convergence properties, generalisation ability, and time and memory complexity.
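The core of such a hybrid is straightforward to express in code: evolve candidate hidden-layer weight matrices, and for each candidate solve the output layer exactly by least squares. The sketch below is a minimal illustration of that scheme, not the thesis's implementation; the population size, Gaussian mutation, tanh hidden units and toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(W_hidden, X, y):
    """MSE after solving the output layer exactly by least squares."""
    H = np.tanh(X @ W_hidden)                      # hidden-layer activations
    W_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # LS solution for layer two
    return np.mean((H @ W_out - y) ** 2)

def evolve(X, y, n_hidden=8, pop=20, gens=100, sigma=0.1):
    """Truncation-selection EA over hidden-layer weight matrices."""
    population = [rng.normal(size=(X.shape[1], n_hidden)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda W: fitness(W, X, y))
        parents = population[: pop // 2]           # keep the better half
        children = [W + rng.normal(scale=sigma, size=W.shape) for W in parents]
        population = parents + children            # elitist replacement
    return min(population, key=lambda W: fitness(W, X, y))

# Toy usage: fit y = sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)
W_best = evolve(X, y)
print("final MSE:", fitness(W_best, X, y))
```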
2

An Automated Rule Refinement System

Andrews, Robert January 2003 (has links)
Artificial neural networks (ANNs) are essentially a 'black box' technology. The lack of an explanation component prevents the full and complete exploitation of this form of machine learning. During the mid-1990s the field of 'rule extraction' emerged. Rule extraction techniques attempt to derive a human-comprehensible explanation structure from a trained ANN. Andrews et al. (1995) proposed the following reasons for extending the ANN paradigm to include a rule extraction facility:
* provision of a user explanation capability
* extension of the ANN paradigm to 'safety critical' problem domains
* software verification and debugging of ANN components in software systems
* improving the generalization of ANN solutions
* data exploration and induction of scientific theories
* knowledge acquisition for symbolic AI systems
An allied area of research is 'rule refinement'. In rule refinement an initial rule base (i.e. what may be termed 'prior knowledge') is inserted into an ANN by prestructuring some or all of the network architecture, weights, activation functions, learning rates, etc. The rule refinement process then proceeds in the same way as normal rule extraction: (1) train the network on the available data set(s); and (2) extract the 'refined' rules. Very few ANN techniques have the capability to act as a true rule refinement system. Existing techniques, such as KBANN (Towell & Shavlik, 1993), are limited in that the rule base used to initialize the network must be nearly complete, and the refinement process is limited to modifying antecedents. These limitations severely restrict their applicability to real-world problem domains. Ideally, a rule refinement technique should be able to deal with incomplete initial rule bases, modify antecedents, remove inaccurate rules, and add new knowledge by generating new rules. The motivation for this research project was to develop such a rule refinement system and to investigate its efficacy when applied to both nearly complete and incomplete problem domains. The premise behind rule refinement is that the refined rules represent the actual domain theory better than the initial domain theory used to initialize the network. The hypotheses tested in this research are that the utilization of prior domain knowledge will:
* speed up network training,
* produce smaller trained networks,
* produce more accurate trained networks, and
* bias the learning phase towards a solution that 'makes sense' in the problem domain.
Geva, Malmstrom and Sitte (1998) described the Local Cluster (LC) neural net and showed that the LC network was able to learn/approximate complex functions to a high degree of accuracy. The hidden layer of the LC network is comprised of basis functions (the local cluster units) that are composed of sigmoid-based 'ridge' functions. In the general form of the LC network the ridge functions can be oriented in any direction. We describe RULEX, a technique designed to provide an explanation component for its underlying Local Cluster ANN through the extraction of symbolic rules from the weights of the local cluster units of the trained ANN. RULEX exploits a feature of the Restricted Local Cluster (Geva, Andrews & Geva 2002), namely axis-parallel ridge functions, which allows hyper-rectangular rules of the form
IF ∀ 1 ≤ i ≤ n : x_i ∈ [x_i_lower, x_i_upper] THEN pattern belongs to the target class
to be easily extracted from the local functions that comprise the hidden layer of the LC network.
RULEX is tested on 14 applications available in the public domain. RULEX results are compared with a leading machine learning technique, See5, with RULEX generally performing as well as See5 and in some cases outperforming it in predictive accuracy. We describe RULEIN, a rule refinement technique that allows symbolic rules to be converted into the parameters that define local cluster functions. RULEIN allows existing domain knowledge to be captured in the architecture of an LC ANN, thus facilitating the first phase of the rule refinement paradigm. RULEIN is tested on a variety of artificial and real-world problems. Experimental results indicate that RULEIN satisfies the first requirement of a rule refinement technique by correctly translating a set of symbolic rules into an LC ANN that has the same predictive behaviour as the set of rules from which it was constructed. Experimental results also show that where a strong domain theory exists, initializing an LC network using RULEIN generally speeds up network training and produces smaller, more accurate trained networks, with the trained network properly representing the underlying domain theory. Where only a weak domain theory exists the same results are not always apparent. Experiments with the RULEIN / LC / RULEX rule refinement method show that the method is able to remove inaccurate rules from the initial knowledge base, modify rules in the initial knowledge base that are only partially correct, and learn new rules not present in the initial knowledge base. The combination of RULEIN / LC / RULEX is thus shown to be an effective rule refinement technique for use with a Restricted Local Cluster network.
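The hyper-rectangular rule form above has a direct computational reading: a pattern belongs to the target class exactly when every attribute falls inside the rule's interval. A minimal sketch, with illustrative interval values rather than ones extracted from a trained LC network:

```python
import numpy as np

def matches(rule, x):
    """IF for all i: lower_i <= x_i <= upper_i THEN pattern is in the target class."""
    lower, upper = rule
    return bool(np.all((x >= lower) & (x <= upper)))

# One rule over two attributes: x0 in [0.2, 0.6], x1 in [0.1, 0.9]
rule = (np.array([0.2, 0.1]), np.array([0.6, 0.9]))
print(matches(rule, np.array([0.4, 0.5])))  # True  -> target class
print(matches(rule, np.array([0.7, 0.5])))  # False -> not covered by this rule
```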
3

The application of artificial neural networks to combustion and heat exchanger systems

Payne, Russell January 2005 (has links)
The operation of large industrial-scale combustion systems, such as furnaces and boilers, is increasingly dictated by emissions legislation and requirements for improved efficiency. However, it can be exceedingly difficult and time-consuming to gather the information required to improve original designs. Mathematical modelling techniques have led to the development of sophisticated furnace representations that are capable of simulating combustion parameters. Whilst such data is ideal for design purposes, the current power of computing systems tends to produce simulation times that are too long for the models to be embedded in online control strategies. The work presented in this thesis offers the possibility of replacing such mathematical models with suitably trained artificial neural networks (ANNs), since a trained network can compute the same outputs in a fraction of the model's run time, suggesting that ANNs could provide an ideal alternative in online control strategies. Furthermore, artificial neural networks can approximate and extrapolate, making them robust when encountering conditions not previously met. In addition to improving operational procedures, another approach to increasing furnace system efficiency is to minimise the waste heat energy produced during the combustion process. One very successful method involves installing a heat exchanger in the exiting flue gas stream, since this is the predominant source of heat loss. It can be exceptionally difficult to determine which heat exchanger is best suited to a particular application, and an even more arduous task to control it effectively. Furthermore, many factors alter the performance characteristics of a heat exchanger throughout its operational life, such as fouling or unexpected systematic faults. This thesis investigates the modelling of an experimental heat exchanger system via artificial neural networks with a view to aiding the design and selection process. Moreover, the work presented offers a means of controlling heat exchangers subject to varying operating conditions more effectively, thus promoting savings in both waste energy and time.
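The surrogate idea described here — train an ANN on input/output pairs from a slow model, then query the network online — can be sketched as follows. The analytic "simulation", network size and sampling scheme are stand-ins for illustration, not details from the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_simulation(x):
    """Cheap analytic stand-in for an expensive furnace/CFD model."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))   # sampled operating conditions
y = slow_simulation(X)                  # offline runs of the slow model

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)

# Online use: millisecond-scale prediction instead of a long simulation run
print(surrogate.predict(np.array([[0.3, -0.2]])))
```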
4

Development and structuring of commercial mortgage-backed securities in Australia

Chikolwa, Bwembya C January 2008 (has links)
According to the Reserve Bank of Australia (2006), the increased supply of Commercial Mortgage-Backed Securities (CMBS), with a range of subordination levels, has broadened the investor base in real estate debt markets and reduced the commercial property sector's dependence on bank financing. The CMBS market has been one of the most dynamic and fastest-growing sectors in the capital markets, for a market which was virtually non-existent prior to 1990. Global CMBS issuance, which stood at AU$5.1 billion (US$4 billion) in 1990, had grown to AU$380 billion (US$299 billion) by the end of 2006. In Australia, over 60 CMBS issues with nearly 180 tranches totalling over AU$17.4 billion had been made by December 2006, from their introduction in 1999. To date few studies have been done on Australian CMBSs outside credit rating agency circles, and these are predominantly practitioner focused (Jones Lang LaSalle 2001; Richardson 2003; Roche 2000, 2002); O'Sullivan (1998) and Simonovski (2003) are the only academic studies. As such, this thesis examines issues relating to the development of Australian CMBSs and quantitatively and qualitatively analyses their structuring. In assessing the growth of the Australian CMBS market, an interpretive historical approach (Baumgarter & Hensley 2005) is adopted to provide a cogent review and explanation of the features of international and Australian CMBSs. This aids understanding of the changing nature of the market, provides a better understanding of the present, and suggests possible future directions. The Australian CMBS market has matured in comparison with the larger US and EU CMBS markets, as seen in the diversity of asset classes and transaction types backing the issues, tightening spreads, and record issuance volumes.
High property market transparency (Jones Lang LaSalle 2006b) and the predominance of Listed Property Trusts (LPTs) as CMBS issuers (Standard & Poor's 2005b), which are legally required to report their activities and underlying collateral performance to regulatory regimes such as the Australian Stock Exchange (ASX) and the Australian Securities and Investments Commission (ASIC) as well as to their equity partners, have contributed to the success of the Australian CMBS market. Furthermore, the positive commercial real estate market outlook should support future CMBS issuance, with LPTs continuing their dominance as issuers. In investigating property risk assessment in Australian CMBSs, all CMBS presale reports issued over the six-year period 2000 to 2005 were obtained from Standard & Poor's Ratings Direct database to identify and review how property risk factors were addressed, both across all issues and within specific property asset classes, following the delineation of property risk by Adair and Hutchinson (2005). Adequate assessment and reporting of property risk is critical to the success of CMBS issues. The proposed framework shows that property risk in Australian CMBSs, which are primarily backed by direct property assets, can readily be assessed and reported under the headings of investment quality risk, covenant strength risk, and depreciation and obsolescence risk. The framework should prove useful to rating agencies, bond issuers and institutional investors: rating agencies can adopt a more systematic and consistent approach to reporting assessed property risk in CMBSs, while issuers and institutional investors can examine the consistency and appropriateness of the rating assigned to a CMBS issue from the inferences it provides concerning property risk assessment.
The ultimate goal of structuring CMBS transactions is to obtain a high credit rating, as this affects the obtainable yield and the success of the issue. The credit rating process involves highly subjective assessment of both qualitative and quantitative factors of a particular company as well as pertinent industry-level or market-level variables (Huang et al. 2004), with the final rating assigned by a credit committee via voting (Kwon et al. 1997). As such, credit rating agencies state that researchers cannot replicate their ratings quantitatively, since the ratings reflect each agency's opinion about an issue's potential default risk and rely heavily on a committee's analysis of the issuer's ability and willingness to repay its debt. However, researchers have replicated bond ratings on the premise that financial ratios contain a large amount of information about a company's credit risk. In this study, a quantitative analysis of the determinants of CMBS credit ratings issued by Standard & Poor's from 2000 to 2006 is undertaken using ANNs and ordinal regression (OR), together with a qualitative analysis, through mail surveys of arrangers and issuers, of the factors considered necessary to obtain a high credit rating and of the pricing issues necessary for the success of an issue. Of the quantitative variables propagated by credit rating agencies as being important to CMBS rating, only the loan-to-value ratio (LTV) is found to be statistically significant under OR, with the other variables statistically insignificant. This leads to the conclusion that the statistical approaches used in corporate bond rating studies have limited replication capability in CMBS rating, and that the endogeneity arguments raise significant questions about LTV and the debt service coverage ratio (DSCR) as convenient short-cut measures of CMBS default risk.
However, ANNs do offer promising predictive results and can be used to facilitate the implementation of survey-based CMBS rating systems. This should help make the CMBS rating methodology more explicit, which is advantageous in that both CMBS investors and issuers are provided with greater information about, and faith in, the investment. ANN results show that 62.0% of the CMBS rating is attributable to LTV (38.2%) and DSCR (23.6%), supporting earlier studies which list the two as the most important variables in CMBS rating. The other variables' contributions are: CMBS issue size (10.1%), CMBS tenure (6.7%), geographical diversity (13.5%) and property diversity (7.9%). The methodology used to obtain these results is validated by applying it to predict LPT bond ratings: both OR and ANNs provide robust alternatives for rating LPT bonds, with no significant differences in results between the full models of the two methods. The qualitative analysis of the surveys of arrangers and issuers provides insights into the structuring issues they consider necessary to obtain a high credit rating and the pricing issues necessary for the success of an issue. The rating of an issue was found to be the main reason why investors invest in CMBSs, and the provision of funds at attractive rates the main motivation behind CMBS issuance.
Furthermore, asset quality was found to be the most important factor in obtaining a high credit rating, supporting the view of Henderson and ING Barings (1997) that the assets backing a securitisation are its fundamental credit strength. In addition, analyses of the surveys reveal the following:
• The choice of debt funding option depends on market conditions.
• Credit tranching, over-collateralisation and cross-collateralisation are the main forms of credit enhancement in use.
• On average, the AAA note tranche needs to be above AU$100 million and have 60-85% subordination for the CMBS issue to be economically viable.
• Structuring costs range between 0.1% and 1% of issue size, and structuring takes from 4 to 9 months.
• Preferred refinancing options are further capital market issues and bank debt.
• Pricing of CMBSs is greatly influenced by factors in the broader capital markets; for instance, the market had literally shut down as a result of the "credit crunch" caused by the meltdown in the US sub-prime mortgage market.
These findings can serve issuers as a guide to the cost of going to the bond market to raise capital, useful for comparison with other sources of funds. The findings of this thesis address crucial research priorities of the property industry, as CMBSs are seen as a major commercial real estate debt instrument. By looking at how property risk can be assessed and reported in a more systematic way, and by investigating the quantitative and qualitative factors considered in structuring CMBSs, investor confidence can be increased through the increased body of knowledge. Several published refereed journal articles in Appendix C further validate the stature and significance of this thesis. The property research in this thesis can aid the revitalisation of the Australian CMBS market after the "shut down" caused by the meltdown in the US sub-prime mortgage market, and can also be used to set up property-backed CMBSs in emerging countries where the CMBS market is immature or non-existent.
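As a side note on the reported percentage contributions: one standard way of attributing shares of an ANN's prediction to its inputs is Garson's weight-based measure, sketched below. Whether the thesis used this exact measure is an assumption, and the weights here are random placeholders:

```python
import numpy as np

def garson_importance(W_in, W_out):
    """Relative input importance from |input->hidden| and |hidden->output| weights."""
    # share of each input in each hidden unit
    contrib = np.abs(W_in) / np.abs(W_in).sum(axis=0, keepdims=True)
    # weight each hidden unit by the magnitude of its output connection
    weighted = contrib * np.abs(W_out).ravel()
    imp = weighted.sum(axis=1)
    return imp / imp.sum()          # normalised to sum to 1.0

rng = np.random.default_rng(2)
W_in = rng.normal(size=(6, 10))     # 6 inputs (e.g. LTV, DSCR, ...), 10 hidden units
W_out = rng.normal(size=(10, 1))
print(np.round(garson_importance(W_in, W_out), 3))
```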
5

Immovable property taxation and the development of an artificial neural network valuation system for residential properties for tax purposes in Cyprus

Panayiotou, Panayiotis Andrea January 1999 (has links)
The last General Valuation in Cyprus, in 1980, took the Lands and Surveys Department about twelve years to complete. The comparison method was adopted and no computerised (mass appraisal) method or tool was used to assist the process. Although the issue of mass appraisal was raised by Sagric International, who had been invited to Cyprus as consultants, and more recently by DataCentralen A/S with the development of a mass appraisal system based on regression analysis, there has been little literature and no research directly undertaken on the problems and analysis of immovable property taxation in Cyprus, or on the development of an artificial neural network valuation system for houses and apartments. The research project approached the issue of property taxation and mass appraisal through an investigation into Cyprus's need for an updated tax base, to ensure equitableness, and for an assessment system capable of performing an effective revaluation at a certain date with a minimum acceptable mean error, minimum data and minimum cost. Investigation within Cyprus and worldwide indicated that this research project is a unique study of Cyprus's property taxation and of the development of a computer-assisted mass appraisal system based on modular artificial neural networks. An empirical study was carried out, including prototyping and testing. The system's results satisfy IAAO criteria for mass appraisal techniques, compare favourably with other studies, and establish a framework upon which future research into computer-assisted mass appraisal for taxation purposes can be developed. In conclusion, the project has contributed significantly to the available literature on immovable property taxation in Cyprus and the development of a computer-assisted mass appraisal system for houses and apartments based on the modular artificial neural network method. The proposed approach is novel not only in the context of Cyprus but also worldwide.
6

Neural network modelling and control of coal fired boiler plant

Thai, Shee Meng January 2005 (has links)
This thesis presents the development of a Neural Network Based Controller (NNBC) for chain grate stoker fired boilers. The objective of the controller was to increase combustion efficiency and maintain pollutant emissions below stringent future medium-term legislative limits. Artificial neural networks (ANNs) were used to estimate future emissions from, and control, the combustion process. Initial tests at Casella CRE Ltd demonstrated the ability of ANNs to characterise the complex functional relationships present in the data set, and to use previously gained knowledge to deliver predictions up to three minutes into the future. This technique was then built into a carefully designed control strategy that fundamentally mimicked the actions of an expert boiler operator, to control an industrial chain grate stoker at HM Prison Garth, Lancashire. Test results demonstrated that the novel NNBC was able to control the industrial stoker boiler plant to deliver the load demand whilst keeping the excess air level to a minimum. As a result the NNBC also maintained pollutant emissions within probable future limits for this size of boiler. This prototype controller thus offers the industrial coal user a means of improving combustion efficiency on chain grate stokers as well as meeting medium-term legislative limits on pollutant emissions that could be imposed by the European Commission.
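The prediction component described above — estimating an emission value minutes ahead from recent combustion measurements — can be illustrated as a lagged-window regression. Everything in this sketch (the synthetic signal, the 10-sample window, the sampling rate implied by `horizon`) is an assumption for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
t = np.arange(2000)
emissions = np.sin(t / 50) + 0.1 * rng.normal(size=t.size)  # stand-in emission signal

window, horizon = 10, 18   # e.g. 18 samples at 10 s/sample ~ 3 minutes ahead
X = np.array([emissions[i:i + window] for i in range(t.size - window - horizon)])
y = emissions[window + horizon:]                             # value `horizon` steps later

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=1000,
                     random_state=0).fit(X[:1500], y[:1500])
print("test MSE:", np.mean((model.predict(X[1500:]) - y[1500:]) ** 2))
```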
7

Machine Learning-Based Dynamic Response Prediction of High-Speed Railway Bridges

Xu, Jin January 2020 (has links)
Carrying heavier freight and transporting passengers at higher speeds have become the strategic aims of railway development during the past decades, significantly increasing interest in railway networks. Among the different components of a railway network, bridges constitute a major portion, imposing considerable construction and maintenance costs. On the other hand, heavier axle loads and higher train speeds may cause resonance in bridges, which consequently limits operational train speeds and lines. Therefore, satisfying the new expectations requires conducting a large number of dynamic assessments/analyses of bridges, especially existing ones. Evidently, such assessments need detailed information and expert engineers, and consume considerable computational resources. In order to save computational effort and decrease the amount of expertise required in preliminary evaluation of dynamic responses, predictive models using artificial neural networks (ANNs) are proposed in this study. A previously developed closed-form solution method (based on solving a series of moving forces) was adopted to calculate the dynamic responses (maximum deck deflection and maximum vertical deck acceleration) of randomly generated bridges. The basic variables for generating random bridges were extracted both from the literature and from the geometrical properties of existing bridges in Sweden. Different ANN architectures, varying the number of inputs and neurons, were considered in order to train the most accurate and computationally cost-effective model. The most efficient model was then selected by comparing performance using the absolute error (Err), the root mean square error (RMSE) and the coefficient of determination (R²). The obtained results revealed that the ANN model can acceptably predict the dynamic responses. The proposed model presents an Err of about 11.1% and 9.9% for the prediction of maximum acceleration and maximum deflection, respectively; its R² for maximum acceleration and maximum deflection predictions equals 0.982 and 0.998, respectively; and its RMSE is 0.309 and 1.51E-04 for predicting maximum acceleration and maximum deflection, respectively. Finally, sensitivity analyses were conducted to evaluate the importance of each input variable on the outcomes. It was noted that the span length of the bridge and the speed of the train are the most influential parameters.
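For reference, the three reported performance measures can be computed as follows. The exact definition of Err used in the thesis is not stated here, so a mean relative error expressed as a percentage is assumed; the sample values are illustrative only:

```python
import numpy as np

def metrics(y_true, y_pred):
    """Return (Err %, RMSE, R^2) for predictions against reference values."""
    err = np.mean(np.abs(y_pred - y_true) / np.abs(y_true)) * 100  # % abs. error (assumed definition)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return err, rmse, r2

y_true = np.array([1.2, 0.8, 1.5, 0.9])   # e.g. reference max deck accelerations
y_pred = np.array([1.1, 0.9, 1.4, 1.0])   # ANN predictions
print(metrics(y_true, y_pred))
```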
8

Early Stopping of a Neural Network via the Receiver Operating Curve.

Yu, Daoping 13 August 2010 (has links) (PDF)
This thesis presents the area under the ROC (receiver operating characteristic) curve, abbreviated AUC, as an alternative measure for evaluating the predictive performance of ANN (artificial neural network) classifiers. Conventionally, neural networks are trained until total error converges to zero, which may give rise to over-fitting. To ensure that they do not over-fit the training data and then fail to generalise to new data, it appears effective to stop training as early as possible once the AUC is sufficiently large, by integrating ROC/AUC analysis into the training process. In order to reduce the learning costs incurred by imbalanced data sets with uneven class distributions, random sampling and k-means clustering are implemented to draw a smaller subset of representatives from the original training data set. Finally, a confidence interval for the AUC is estimated using a non-parametric approach.
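A minimal sketch of this stopping rule: train a small network incrementally and halt once the validation AUC plateaus, rather than driving training error to zero. The synthetic data, network size and patience threshold are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_tr, X_val, y_tr, y_val = X[:800], X[800:], y[:800], y[800:]

clf = MLPClassifier(hidden_layer_sizes=(10,), random_state=0)
best_auc, stale, patience = 0.0, 0, 5
for epoch in range(200):
    clf.partial_fit(X_tr, y_tr, classes=[0, 1])    # one incremental training pass
    auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
    if auc > best_auc + 1e-4:                      # AUC still improving
        best_auc, stale = auc, 0
    else:
        stale += 1
    if stale >= patience:                          # AUC has plateaued: stop early
        break
print(f"stopped after epoch {epoch + 1}, validation AUC = {best_auc:.3f}")
```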
9

Assessment of the quality of raw surface water from the Bita and Utinga dams of Suape applying statistics and intelligent systems

SILVA, Ana Maria Ribeiro Bastos da 30 January 2015 (has links)
The application of techniques such as Principal Component Analysis (PCA), Artificial Neural Networks (ANNs), Fuzzy Logic and Neuro-Fuzzy Systems to investigate changes in the characteristics of the water in the Utinga and Bita dams, which supply raw water to the Suape Water Treatment Plant (WTP), is of great importance given the large number of variables used to define water quality. In this work, 10 sampling campaigns were carried out in each area between November 2007 and August 2012, totalling 120 samples. Although the experimental dataset obtained was limited, multiple efforts were made to gather water quality information from the official environmental monitoring agencies. The results showed a tendency towards degradation of the water properties in the dams owing to the presence of the microorganisms, salts and nutrients responsible for the eutrophication process; this was evidenced by higher concentrations of total phosphorus and thermotolerant coliforms and decreases in pH and dissolved oxygen (DO), probably resulting from the discharge of effluents from the sugarcane agroindustry and from industrial and domestic sources. PCA characterised more than 76% of the samples, revealing seasonal changes and a small spatial variation of the water in the dams. The water quality conditions in both dams were satisfactorily modelled, with reasonable precision and reliability, by the statistical and computational models using a quantity of parameters and environmental data that, though limited, was sufficient for this study. The efficiency and success of the Neuro-Fuzzy System (regression coefficients of 0.608 to 0.925), which combines the advantages of Neural Networks and Fuzzy Logic, in modelling the water quality dataset of the Utinga and Bita dams is nonetheless evident.
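The PCA step — reducing the standardised water-quality matrix and reading off how much variance the leading components explain — can be sketched as follows on synthetic stand-in data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
samples = rng.normal(size=(120, 8))        # 120 samples x 8 quality parameters (synthetic)
Z = StandardScaler().fit_transform(samples)

pca = PCA(n_components=3).fit(Z)
print(pca.explained_variance_ratio_.cumsum())  # cumulative variance explained
```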
10

Calibration of Two Dimensional Saccadic Electro-Oculograms Using Artificial Neural Networks

Coughlin, Michael J. January 2003 (has links)
The electro-oculogram (EOG) is the most widely used technique for recording eye movements in clinical settings. It is inexpensive, practical, and non-invasive. Use of EOG is usually restricted to horizontal recordings, as vertical EOG contains eyelid artefact (Oster & Stern, 1980) and blinks. The ability to analyse two-dimensional (2D) eye movements may provide additional diagnostic information on pathologies, and further insights into the nature of brain functioning. Simultaneous recording of both horizontal and vertical EOG also introduces other difficulties into calibration of the eye movements, such as different gains in the two signals and misalignment of electrodes producing crosstalk. These transformations of the signals create problems in relating the two-dimensional EOG to actual rotations of the eyes. An artificial neural network (ANN) that could map 2D recordings into 2D eye positions would overcome this problem and improve the utility of EOG. To determine whether ANNs are capable of correctly calibrating saccadic eye movement data from 2D EOG (i.e. performing the necessary inverse transformation), the ANNs were first tested on data generated from mathematical models of saccadic eye movements. Multi-layer perceptrons (MLPs) with non-linear activation functions, trained with backpropagation, proved capable of calibrating simulated EOG data to a mean accuracy of 0.33° of visual angle (SE = 0.01); linear perceptrons (LPs) were only about half as accurate. For five subjects performing a saccadic eye movement task in the upper right quadrant of the visual field, the mean accuracy provided by the MLPs was 1.07° of visual angle (SE = 0.01) for EOG data, and 0.95° of visual angle (SE = 0.03) for infrared limbus reflection (IRIS®) data. MLPs thus enabled calibration of 2D saccadic EOG to an accuracy not significantly different from that obtained with the infrared limbus tracking data.
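The calibration problem lends itself to a compact illustration: simulate a linear gain-plus-crosstalk distortion of true 2D eye positions (a toy stand-in for EOG, not the thesis's saccade model) and train an MLP to invert it back to degrees of visual angle. The mixing matrix and noise level are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
eye_pos = rng.uniform(0, 20, size=(500, 2))           # true (x, y) in degrees
mix = np.array([[1.3, 0.2],                           # unequal channel gains plus
                [0.15, 0.8]])                         # electrode-misalignment crosstalk
eog = eye_pos @ mix.T + rng.normal(scale=0.2, size=(500, 2))

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(eog[:400], eye_pos[:400])
err = np.abs(mlp.predict(eog[400:]) - eye_pos[400:]).mean()
print(f"mean calibration error: {err:.2f} degrees")
```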
