431 |
Protein secondary structure prediction using neural networks and support vector machines / Tsilo, Lipontseng Cecilia. January 2008 (has links)
Thesis (M.Sc. (Statistics)) - Rhodes University, 2009. / A thesis submitted to Rhodes University in partial fulfillment of the requirements for the degree of Master of Science in Mathematical Statistics.
|
432 |
Estudo da aplicação de redes neurais artificiais para predição de séries temporais financeiras [Study of the application of artificial neural networks for financial time series prediction] / Dametto, Ronaldo César. January 2018 (has links)
Advisor: Antonio Fernando Crepaldi / Committee: Rogerio Andrade Flauzino / Committee: Kelton Augusto Pontara da Costa / Abstract: Machine learning has been used in different segments of the financial area, such as stock price forecasting, the foreign exchange market, market indices and investment portfolio composition. This work compares and combines three types of machine learning algorithms, specifically the Ensemble method of Artificial Neural Networks with Multilayer Perceptron (MLP), nonlinear autoregressive with exogenous inputs (NARX) and Long Short-Term Memory (LSTM) networks, for prediction of the Bovespa Index. The daily Ibovespa series was obtained from Yahoo! Finance for the period from January 4th, 2010 to December 28th, 2017. The Dollar exchange rate, Google Trends data and numerical indicators from Technical Analysis were used as independent variables to compose the prediction. The algorithms were developed in Python using the Keras framework. The MSE, RMSE and MAPE performance metrics were used to evaluate the algorithms, together with comparison between the predictions obtained and the actual values. The metric results indicate good predictive performance by the proposed Ensemble model, which achieved 70% accuracy in the direction of the index movement, but it did not outperform the MLP and NARX networks, both with 80% accuracy. / Master
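As a rough illustration of the evaluation described in this abstract (not the author's code; the series below are made-up examples, not Ibovespa data), the MSE, RMSE and MAPE metrics and a directional-accuracy figure can be computed from a forecast as follows:

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """MSE, RMSE, MAPE and directional accuracy for a forecast series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / actual)) * 100.0  # assumes no zero values
    # Directional accuracy: fraction of steps where the predicted
    # day-over-day movement has the same sign as the actual movement.
    direction = np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(predicted)))
    return mse, rmse, mape, direction

# Made-up illustrative series.
actual = [100.0, 102.0, 101.0, 105.0, 104.0]
predicted = [101.0, 101.5, 102.0, 104.0, 104.5]
mse, rmse, mape, direction = forecast_metrics(actual, predicted)
```

The directional-accuracy figure corresponds to the "accuracy in the index movement" percentages quoted in the abstract.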
|
433 |
Using artificial neural networks to forecast changes in national and regional price indices for the UK residential property market / Paris, Stuart David. January 2008 (has links)
The residential property market accounts for a substantial proportion of UK economic activity. However, there is no reliable forecasting service to predict the periodic housing market crises or to produce estimates of long-term sustainable value. This research examined the use of artificial neural networks, trained using national economic, social and residential property transaction time-series data, to forecast trends within the housing market.

Artificial neural networks have previously been applied successfully to produce estimates of the open market value of a property over a limited time period within sub-markets. They have also been applied to the prediction of time-series data in a number of fields, including finance. This research sought to extend their application to time-series of house prices in order to forecast changes in the residential property market at national and regional levels.

Neural networks were demonstrated to be successful in producing time-series forecasts of changes in the housing market, particularly when combined in simple committees of networks. They successfully modelled the direction, timing and scale of annual changes in house prices, both for an extremely volatile and difficult period (1987 to 1991) and for the period 1999 to 2001. Poor initial forecasting results for the period 2002 onwards were linked to new conditions in the credit and housing markets, including changes in the loan-to-income ratio. Self-organising maps were used to identify the onset of new market conditions. Neural networks trained with a subset of post-1998 data added to the training set improved their forecasting performance, suggesting that they were able to incorporate the new conditions into the models.

Sensitivity analysis was used to identify and rank the network input variables under different market conditions. The measure of changes in the house price index itself was found to have the greatest effect on future changes in prices. Prediction surfaces were used to investigate the relationship between pairs of input variables. The results show that artificial neural networks, trained using national economic, social and residential property transaction time-series data, can be used to forecast trends within the housing market under various market conditions.
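The sensitivity analysis mentioned in this abstract can be sketched in generic terms: perturb one input at a time around a baseline and rank inputs by how much the output moves. This is a hedged illustration with a made-up toy model, not the thesis's trained network:

```python
import numpy as np

def sensitivity_ranking(model, baseline, delta=0.01):
    """Rank input variables by the output change a small one-at-a-time
    perturbation causes.

    model    -- callable mapping an input vector to a scalar output
    baseline -- representative input vector at which to perturb
    delta    -- relative size of the perturbation
    """
    baseline = np.asarray(baseline, dtype=float)
    base_out = model(baseline)
    effects = []
    for i in range(baseline.size):
        perturbed = baseline.copy()
        # Scale the step by the input's magnitude (fall back to 1 at zero).
        perturbed[i] += delta * (abs(baseline[i]) or 1.0)
        effects.append(abs(model(perturbed) - base_out))
    # Indices of inputs, most influential first.
    return sorted(range(baseline.size), key=lambda i: effects[i], reverse=True)

# Toy model: output depends strongly on x[0], weakly on x[1], not at all on x[2].
toy = lambda x: 10.0 * x[0] + 0.1 * x[1]
ranking = sensitivity_ranking(toy, [1.0, 1.0, 1.0])
```

Applied to a trained network under different market regimes, the same procedure would surface the dominant inputs, as the abstract reports for the house price index change itself.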
|
434 |
Investigation of artificial neural networks for modeling, identification and control of nonlinear plant / Muga, Julius N'gon'ga. January 2009 (has links)
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2009 / In real-world systems such as wastewater treatment plants, nonlinearities, uncertainty and complexity play a major role in daily operations. Effective control of such systems' variables requires robust control methods that accommodate the uncertainties and harsh environments. It has been shown that intelligent control systems have the ability to accommodate uncertain system parameters. Techniques such as fuzzy logic, neural networks and genetic algorithms have had many successes in the field of control because they contain the essential characteristics needed for the design of identifiers and controllers for complex systems where nonlinearities, complexity and uncertainties exist.

Approaches based on neural networks have proven to be powerful tools for solving nonlinear control and optimisation problems, because neural networks have the ability to learn and approximate nonlinear functions arbitrarily well. The approximation capabilities of such networks can be used for the design of both identifiers and controllers. Basically, an artificial neural network is a computing architecture consisting of massively parallel interconnections of simple computing elements, which provides insight into the kind of highly parallel computation carried out by the biological nervous system. A large number of networks, with various topological structures, functionality and training algorithms, have been proposed and investigated for the purposes of identification and control of practical systems.

In this research thesis, an investigation of the use of neural networks in the identification, modelling and control of non-linear systems has been carried out. In particular, neural network identifiers and controllers have been designed for the control of the dissolved oxygen (DO) concentration of the activated sludge process in wastewater treatment plants. These plants, being complex processes with several variables (states) and also affected by disturbances, require some form of control in order to maintain effluent standards. DO concentration in the aeration tank is the most widely used controlled variable. Nonlinearity is a feature that describes the dynamics of the dissolved oxygen process, and therefore DO estimation and control may not be sufficiently achieved with a conventional linear controller.

Neural network structures are proposed, trained and utilised for the purposes of identification, modelling and the design of NN controllers for nonlinear DO control. Algorithms and programs are developed in the Matlab environment and deployed on a hardware PLC platform. The research is limited to the feedforward multilayer perceptron and recurrent neural networks for identification and control. The control models considered are direct inverse model control, internal model control and feedback linearizing control. Real-time implementation is limited to the lab-scale wastewater treatment plant.
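The identification step described in this abstract can be sketched generically: fit a small feedforward network to plant input/output data by gradient descent. This is a hedged illustration only (a made-up static nonlinearity stands in for the DO process, and plain NumPy stands in for the thesis's Matlab/PLC implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up static nonlinear "plant" standing in for the DO process dynamics.
plant = lambda u: np.tanh(2.0 * u)

# Training data: input/output pairs sampled from the plant.
U = rng.uniform(-1.0, 1.0, size=(200, 1))
Y = plant(U)

# One-hidden-layer MLP identifier, trained by batch gradient descent.
H = 8
W1 = rng.normal(0, 0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, size=(H, 1)); b2 = np.zeros(1)

def forward(u):
    hidden = np.tanh(u @ W1 + b1)
    return hidden, hidden @ W2 + b2

lr = 0.1
for _ in range(5000):
    hidden, pred = forward(U)
    err = pred - Y
    # Backpropagation through the two layers (gradient of 0.5 * MSE).
    grad_W2 = hidden.T @ err / len(U)
    grad_b2 = err.mean(axis=0)
    d_hidden = (err @ W2.T) * (1.0 - hidden ** 2)
    grad_W1 = U.T @ d_hidden / len(U)
    grad_b1 = d_hidden.mean(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

_, fit = forward(U)
mse = float(np.mean((fit - Y) ** 2))
```

Once such an identifier fits the plant, the same machinery supports the control schemes the abstract lists, e.g. training a second network on the reversed data to serve as a direct inverse model controller.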
|
435 |
Using linear regression and ANN techniques in determining variable importance / Mbandi, Aderiana Mutheu. January 2009 (has links)
Thesis (MTech (Chemical Engineering))--Cape Peninsula University of Technology, 2009.
Includes bibliographical references (leaves 95-100). / The use of Neural Networks in chemical engineering is well documented. There has also been an increase in research concerned with the explanatory capacity of Neural Networks, although this has been hindered by the regard of Artificial Neural Networks (ANNs) as a black-box technology.

Determining variable importance in complex systems that have many variables, as found in the fields of ecology, water treatment, petrochemical production and metallurgy, would reduce the number of variables to be used in optimisation exercises, easing the complexity of the model and ultimately saving money. In the process engineering field, the use of data to optimise processes is limited if some degree of process understanding is not present.

The project objective is to develop a methodology that uses Artificial Neural Network (ANN) technology and Multiple Linear Regression (MLR) to identify explanatory variables in a dataset and their importance on process outputs. The methodology is tested using data that exhibits defined and well-known numeric relationships, represented by four equations.

The research project assesses the relative importance of the independent variables by using the "dropping method" on a regression model and on ANNs. Regression, traditionally used to determine variable contribution, can be unsuccessful if a highly nonlinear relationship exists. ANNs could be the answer to this shortcoming. For differentiation, the explanatory variables that do not contribute significantly towards the output are named "suspect variables". Ultimately, the suspect variables identified by the regression model and the ANN should be the same, assuming a good regression model and network.

The dummy variables introduced into the four equations are successfully identified as suspect variables. Furthermore, the degree of variable importance was determined using linear regression and ANN models. As the equations' complexity increased, the linear regression models' accuracy decreased, and suspect variables were no longer correctly identified. The complexity of the equations did not affect the accuracy of the ANN model, and the suspect variables were correctly identified.

The use of R² and average error in establishing a criterion for identifying suspect variables is explored. It is established that the cumulative variable importance percentage (additive percentage) has to be below 5% for an explanatory variable to be considered a suspect variable. Combining linear regression and ANNs provides insight into the importance of explanatory variables, and indeed suspect variables and their contribution can be determined. Once identified, suspect variables can be eliminated, simplifying the model and increasing its accuracy.
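The "dropping method" described in this abstract can be illustrated in its linear-regression form: fit with all variables, then refit with each variable removed and record how much R² falls. A hedged sketch with synthetic data (not the thesis's dataset or code):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def dropping_method(X, y):
    """Importance of each column = loss in R^2 when that column is dropped."""
    full = r_squared(X, y)
    return [full - r_squared(np.delete(X, j, axis=1), y)
            for j in range(X.shape[1])]

# Synthetic data: y depends on columns 0 and 1; column 2 is a dummy variable.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=300)
importance = dropping_method(X, y)
```

The dummy column produces a near-zero importance score, which is exactly the "suspect variable" signature the abstract describes; the ANN version of the method replaces the regression fit with a trained network and re-evaluates its error.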
|
436 |
Simulation of ion exchange processes using neuro-fuzzy reasoning / Van Den Bosch, Magali Marie. January 2009 (has links)
Thesis (MTech (Chemical Engineering))--Cape Peninsula University of Technology, 2009. / Neuro-fuzzy computing techniques have been applied and evaluated in areas of process control; researchers have recently begun to evaluate their potential in pattern recognition.

Multi-component ion exchange is a non-linear process, which is difficult to model and simulate, as there are many factors influencing the chemical process that are not well understood. In the past, empirical isotherm equations were used, but they had definite shortcomings that resulted in unreliable simulations. In this work, the use of artificial intelligence has therefore been researched to test its effectiveness in simulating ion exchange processes. The branch of artificial intelligence used was the adaptive neuro-fuzzy inference system.

The objective of this research was to develop a neuro-fuzzy software package to simulate ion exchange processes. The first step towards building this system was to collect data from laboratory-scale ion exchange experiments. Different combinations of inputs (e.g. solution concentration, resin loading, impeller speed) were tested to determine whether it was necessary to monitor all available parameters. The software was developed in MS Excel, where tools like SOLVER could be utilised, whilst the code was written in Visual Basic. In order to compare the neuro-fuzzy simulations to previously used empirical methods, the Fritz and Schluender isotherm was used to model and simulate the same data.

The results showed that both methods were adequate, but the neuro-fuzzy approach was the more appropriate method. After completion of this study, it could be concluded that a neuro-fuzzy system does not always have the ability to describe ion exchange processes adequately.
|
437 |
The use of neural networks to predict share prices / De Villiers, J. 16 August 2012 (has links)
M.Comm. / The availability of large amounts of information and increases in computing power have facilitated the use of more sophisticated and effective technologies to analyse financial markets. The use of neural networks for financial time series forecasting has recently received increased attention. Neural networks are good at pattern recognition, generalisation and trend prediction. They can learn to predict next week's Dow Jones or flaws in concrete. Traditional methods used to analyse financial markets include technical and fundamental analysis. These methods have inherent shortcomings, including badly timed trading signals and analysis based on non-continuous data.

The purpose of the study was to create a tool with which to forecast financial time series on the Johannesburg Stock Exchange (JSE). The forecast time series information was used to generate trading signals. A study of the building blocks of neural networks was done before the neural network was designed. The design of the neural network included data choice, data collection, calculations, data pre-processing and the determination of neural network parameters. The neural network was trained and tested with information from the financial sector of the JSE. The neural network was trained to predict share prices 4 days in advance with a Multiple Layer Feedforward Network (MLFN). The mean square error on the test set was 0.000930, with all test data values scaled between 0.1 and 0.9 and a sample size of 160.

The prediction results were tested with a trading system, which generated a trade yielding a 20% return in 22 days. The neural network generated excellent results by predicting prices in advance, enabling better timing of trades and efficient use of capital. However, it was found that the price movement on the test set within the 4-day prediction period seldom exceeded the cost of trades, resulting in only one trade over a 5-month period for one security. This should not be a problem if all securities on the JSE are analysed for profitable trades. An additional neural network could also be designed to predict price movements further ahead, say 8 days, to assist the 4-day prediction.
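The 0.1–0.9 scaling mentioned in this abstract is a standard min–max transform; a minimal sketch (the prices below are illustrative, not the thesis's JSE data):

```python
def scale(values, lo=0.1, hi=0.9):
    """Linearly map values to [lo, hi] based on their min and max."""
    vmin, vmax = min(values), max(values)
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

def unscale(scaled, vmin, vmax, lo=0.1, hi=0.9):
    """Invert the transform to recover a price from a network output."""
    return [vmin + (v - lo) * (vmax - vmin) / (hi - lo) for v in scaled]

prices = [120.0, 150.0, 135.0, 180.0]
s = scale(prices)  # min maps to 0.1, max to 0.9
```

Keeping targets inside [0.1, 0.9] rather than [0, 1] leaves headroom away from the saturated ends of a sigmoid output unit, which is the usual motivation for this range.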
|
438 |
Incorporation of the first derivative of the objective function into the linear training of a radial basis function neural network for approximation via strict interpolation / Prentice, Justin Steven Calder. 23 July 2014 (has links)
D.Phil. (Applied mathematics) / Please refer to full text to view abstract
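The title's baseline technique, linear "training" of an RBF network by strict interpolation, amounts to solving one linear system: with a Gaussian basis centred at every data point, the interpolation matrix Φ is square, and the output weights come from Φw = y. A minimal sketch (the centres, width and target function are illustrative, and the thesis's refinement of incorporating the objective's first derivative is not reproduced here):

```python
import numpy as np

def rbf_interpolate(x, y, width=1.0):
    """Strict interpolation: one Gaussian centre per data point, with the
    output weights obtained by solving the square system Phi @ w = y."""
    x = np.asarray(x, dtype=float)
    phi = np.exp(-((x[:, None] - x[None, :]) / width) ** 2)
    w = np.linalg.solve(phi, np.asarray(y, dtype=float))

    def model(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        return np.exp(-((t[:, None] - x[None, :]) / width) ** 2) @ w

    return model

# Illustrative target: the network reproduces y exactly at the centres.
x = np.linspace(0.0, 2.0, 6)
y = np.sin(x)
f = rbf_interpolate(x, y, width=0.7)
```

Because the weights come from a single linear solve rather than iterative training, the fit is exact at the data points; behaviour between them depends on the basis width.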
|
439 |
NeGPAIM : a model for the proactive detection of information security intrusions, utilizing fuzzy logic and neural network techniques / Botha, Martin. January 2003 (has links)
“Information is the lifeblood of any organisation and everything an organisation does involves using information in some way” (Peppard, 1993, p.5). Therefore, it can be argued that information is an organisation’s most precious asset and as with all other assets, like equipment, money, personnel, and so on, this asset needs to be protected properly at all times (Whitman & Mattord, 2003, pp.1-14). The introduction of modern technologies, such as e-commerce, will not only increase the value of information, but will also increase security requirements of those organizations that are intending to utilize such technologies. Evidence of these requirements can be observed in the 2001 CSI/FBI Computer Crime and Security Survey (Power, 2001). According to this source, the annual financial losses caused through security breaches in 2001 have increased by 277% when compared to the results from 1997. The 2002 and 2003 Computer Crime and Security Survey confirms this by stating that the threat of computer crime and other related information security breaches continues unabated and that the financial toll is mounting (Richardson, 2003). Information is normally protected by means of a process of identifying, implementing, managing and maintaining a set of information security controls, countermeasures or safeguards (GMITS, 1998). In the rest of this thesis, the term security controls will be utilized when referring to information protection mechanisms or procedures. These security controls can be of a physical (for example, door locks), a technical (for example, passwords) and/or a procedural nature (for example, to make back-up copies of critical files)(Pfleeger, 2003, pp.22-23; Stallings, 1995, p.1). 
The effective identification, implementation, management and maintenance of this set of security controls are usually integrated into an Information Security Management Program, the objective of which is to ensure an acceptable level of information confidentiality, integrity and availability within the organisation at all times (Pfleeger, 2003, pp.10-12; Whitman & Mattord, 2003, pp.1-14; Von Solms, 1993). Once the most effective security controls have been identified and implemented, it is important that this level of security be maintained through a process of continued control. For this reason, it is important that proper change management, measurement, audit, monitoring and detection be implemented (Bruce & Dempsey, 1997). Monitoring and detection are important functions and refer to the ability to identify and detect situations where information security policies have been compromised and/or breached or security violations have taken place (BS 7799, 1999; GMITS, 1998; Von Solms, 1993). The Information Security Officer is usually the person responsible for most of the operational tasks in the control process within an Information Security Management Program (Von Solms, 1993). In practice, these tasks could also be performed by a system administrator, network administrator, etc. In the rest of the thesis the person responsible for these tasks will be referred to as the system administrator. These tasks have proved to be very challenging and demanding. The main reason for this is the rapid advancement of technology in the discipline of Information Technology, for example, the modern distributed computing environment, the Internet, the “freedom” of end-users, the introduction of e-commerce, and so on (Whitman & Mattord, 2003, p.9; Sundaram, 2000, p.1; Moses, 2001, p.6; Allen, 2001, p.1).
As a result of the importance of this control process, and especially the monitoring and detection tasks, it is vital that the system administrator has proper tools at his/her disposal to perform this task effectively. Many of the tools that are currently available to the system administrator utilize technical controls, such as audit logs and user profiles. Audit logs are normally used to record all events executed on a system. These logs are simply files that record security and non-security related events that take place on a computer system within an organisation. For this reason, these logs can be used by these tools to gain valuable information on security violations, such as intrusions, and therefore to monitor the current actions of each user (Microsoft, 2002; Smith, 1989, pp. 116-117). User profiles are files that contain information about users' desktop operating environments and are used by the operating system to structure each user environment so that it is the same each time a user logs onto the system (Microsoft, 2002; Block, 1994, p.54). Thus, a user profile is used to indicate which actions the user is allowed to perform on the system. Both technical controls (audit logs and user profiles) are frequently available in most computer environments (such as UNIX, firewalls, Windows, etc.) (Cooper et al, 1995, p.129). Therefore, seeing that the audit logs record most events taking place on an information system and the user profile indicates the authorized actions of each user, the system administrator could most probably utilise these controls in a more proactive manner.
|
440 |
A neural network based ionospheric model for the bottomside electron density profile over Grahamstown, South Africa / McKinnell, L A. January 2003 (has links)
This thesis describes the development and application of a neural network based ionospheric model for the bottomside electron density profile over Grahamstown, South Africa. All available ionospheric data from the archives of the Grahamstown (33.32ºS, 26.50ºE) ionospheric station were used for training neural networks (NNs) to predict the parameters required to produce the final profile. Inputs to the model, called the LAM model, are day number, hour, and measures of solar and magnetic activity. The output is a mathematical description of the bottomside electron density profile for that particular input set. The two main ionospheric layers, the E and F layers, are predicted separately and then combined at the final stage. For each layer, NNs have been trained to predict the individual ionospheric characteristics and coefficients required to describe the layer profile. NNs were also applied to the task of determining the hours between which an E layer is measurable by a ground-based ionosonde and the probability of the existence of an F1 layer. The F1 probability NN is innovative in that it provides information on the existence of the F1 layer as well as the probability of that layer being in an L-condition state: the state where an F1 layer is present on an ionogram but it is not possible to record any F1 parameters. In the event of an L-condition state being predicted as probable, an L algorithm has been designed to alter the shape of the profile to reflect this state. A smoothing algorithm has been implemented to remove discontinuities at the F1-F2 boundary and to ensure that the profile represents realistic ionospheric behaviour in the F1 region. Tests show that the LAM model is more successful at predicting Grahamstown electron density profiles for a particular set of inputs than the International Reference Ionosphere (IRI). It is anticipated that the LAM model will be used as a tool in the pinpointing of hostile HF transmitters, a technique known as single-site location.
|