  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Heterogeneous networking for beyond 3G system in a high-speed train environment : investigation of handover procedures in a high-speed train environment and adoption of a pattern classification neural-networks approach for handover management

Ong, Felicia Li Chin January 2016 (has links)
Based on the targets outlined by the EU Horizon 2020 (H2020) framework, heterogeneous networking is expected to play a crucial role in delivering seamless end-to-end ubiquitous Internet access for users. In due course, the current GSM-Railway (GSM-R) system will become unsustainable as the demand for packet-oriented services continues to increase, so the effort undertaken in this research study to identify a plausible replacement system is timely and appropriate. This study investigates a hybrid satellite and terrestrial network for enabling ubiquitous Internet access in a high-speed train environment, focusing on the mobility management aspect of the system, primarily handover management. A handover strategy employing the RACE II MONET and ITU-T Q.65 design methodology is proposed. This includes identifying the functional model (FM), which is then mapped to the functional architecture (FUA) based on the Q.1711 IMT-2000 FM. In addition, the signalling protocols, information flows and message formats based on the adopted design methodology are specified. The approach is simulated in OPNET and the findings are presented and discussed. The prospect of employing neural networks (NNs) for handover is also explored, focusing specifically on the use of pattern classification neural networks to aid the handover process, which is simulated in MATLAB. The simulation outcomes demonstrate the effectiveness and appropriateness of the NN algorithm and its competence in facilitating the handover process.
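The pattern-classification step described above can be caricatured with a single trainable neuron deciding between the terrestrial and satellite links. The features, training data and thresholds below are hypothetical illustrations, not the thesis's model:

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=2000):
    """Train a logistic-regression 'single neuron' classifier by gradient descent."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted handover probability
            g = p - y                        # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0   # 1 = hand over to satellite, 0 = stay terrestrial

# Hypothetical training patterns: (terrestrial signal, satellite signal), normalized.
X = [(0.9, 0.2), (0.8, 0.3), (0.7, 0.4), (0.3, 0.8), (0.2, 0.9), (0.4, 0.7)]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
```

In the thesis the classifier is a MATLAB pattern-classification network; this sketch only shows the shape of the decision problem.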
72

Classificação de imagens digitais por textura usando redes neurais / Classification of digital images through texture with the aid of neural networks

Liberman, Felipe January 1997 (has links)
This work presents a study on the classification of digital images by texture with the aid of neural networks, drawing on techniques and concepts from two areas of computer science: digital image processing and artificial intelligence. The main topics of image processing are presented, including its principal applications in industrial tasks, pattern recognition and image manipulation, image types, and storage formats. Emphasis is given to image attributes, texture, and its quantification through the gray-level co-occurrence matrix, and some available computing systems for image processing are also presented. In artificial intelligence, the focus is on intelligent computational techniques, specifically neural networks. A brief overview of the field is given, including its history and main applications. Neural networks are classified according to the type of training, the learning rule, the network topology, and the interconnection of neurons. The backpropagation network (BPN) model is examined in greater detail, since it is employed in the implementation of the IMASEG image-classification system developed as part of this work; its operation, learning method, and the corresponding equations are described. IMASEG was developed to implement the proposed techniques for classifying images using texture and neural networks; its operation and algorithms are detailed, and the results obtained are presented together with their analysis. Image classification is one of the principal steps in digital image processing: given a set of classes and a pattern presented as input, the problem is to decide to which class the pattern belongs, with the option of rejecting the pattern. Spectral, spatial, and contextual attributes can be extracted from an image. Because they are more easily quantified, most traditional systems use only spectral attributes to characterize an image, an approach widely used with multispectral images. However, spectral attributes alone do not provide complete information about an image, since the spatial relations among its pixels and the shape of objects are not taken into account. Texture, a spatial attribute, is still little used, since it originates in the visual sensation caused by tonal variations within a region of the image, which makes it difficult to quantify. This work studies the use of spatial image attributes in image processing. The behavior of five attributes is analyzed: mean, standard deviation, uniformity, entropy, and contrast, all extracted from windows belonging to a single class; uniformity, entropy, and contrast are derived from the gray-level co-occurrence matrix. By computing these attributes over several images, some important relationships among them are observed. Based on the analysis of different neural network models and of the various ways of quantifying image texture, a computational system for classifying images is proposed. The system processes images through a moving window whose size the user chooses: 3x3, 5x5, or 7x7 pixels, depending on the type and granularity of the texture the image contains. Using the window, the user then selects representative samples of each texture (class) present in the image to be classified, and the system trains the neural network on these samples. Training requires a way of mapping the data onto the neural network, a task that is not always trivial; two approaches are proposed. In the first, the mapping is done by computing the mean, standard deviation, and uniformity features, the last obtained from the co-occurrence matrix; after being scaled to the same value range, these features become the inputs to the neural network. In the second approach, the mapping is direct: the value of each pixel, after scaling, corresponds to one input of the neural network, and the network is expected to extract the texture attributes itself, without an explicit feature calculation. After training, the entire image is processed by scanning it with the window, producing as output a thematic image in which each theme represents one of the textures in the original image. To test IMASEG, several synthetic images with 256 gray levels were generated, of which six were selected for presentation in this work, representative of the various situations that can occur with respect to the mean, standard deviation, and uniformity values. Each original image is processed by both approaches, generating two output images, and a quantitative and qualitative analysis of the results is carried out, pointing out the probable causes of the successes and problems found. Given the good results obtained, it is concluded that classification by texture attains the proposed objective and is very useful in image processing.
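The co-occurrence-matrix features at the heart of this abstract (uniformity/energy, entropy and contrast) can be sketched in a few lines. This toy version uses only the horizontal neighbor offset and is not the IMASEG implementation:

```python
import math

def glcm_features(window):
    """Gray-level co-occurrence matrix over horizontally adjacent pixel pairs,
    reduced to the three features discussed above: uniformity (energy),
    entropy and contrast."""
    counts = {}
    total = 0
    for row in window:
        for a, b in zip(row, row[1:]):   # horizontally adjacent pixel pairs
            counts[(a, b)] = counts.get((a, b), 0) + 1
            total += 1
    feats = {"uniformity": 0.0, "entropy": 0.0, "contrast": 0.0}
    for (a, b), c in counts.items():
        p = c / total                    # normalized co-occurrence probability
        feats["uniformity"] += p * p
        feats["entropy"] -= p * math.log2(p)
        feats["contrast"] += (a - b) ** 2 * p
    return feats

# A perfectly flat 3x3 window: maximal uniformity, zero entropy and contrast.
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
f = glcm_features(flat)
```

A textured window would spread probability mass over many (a, b) pairs, lowering uniformity and raising entropy and contrast, which is what makes these features discriminative.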
73

Optimization in an Error Backpropagation Neural Network Environment with a Performance Test on a Pattern Classification Problem

Fischer, Manfred M., Staufer-Steinnocher, Petra 03 1900 (has links) (PDF)
Various techniques for optimizing the multiple-class cross-entropy error function to train single-hidden-layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of gradient-descent, PR-conjugate-gradient and BFGS quasi-Newton error backpropagation. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or generalization performance. It was found that, comparatively considered, gradient-descent error backpropagation provided the best and most stable out-of-sample performance across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in generalization is acceptable, then PR-conjugate-gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of local optimizers should be chosen, utilizing a larger rather than a smaller epoch size to avoid unacceptable instabilities in the generalization results. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
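As a minimal sketch of the training objective discussed above: for a softmax output layer trained on the multiple-class cross-entropy error, the error signal at each output reduces to (predicted probability - one-hot target). The hidden layer is omitted here to keep the example short, so this is not the paper's full classifier:

```python
import math

def softmax(z):
    m = max(z)                           # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_batch(X, Y, classes, lr=0.5, epochs=500):
    """Batch gradient descent on the multiple-class cross-entropy error of a
    softmax output layer. The output-layer error signal is p_k - y_k."""
    n = len(X[0])
    W = [[0.0] * n for _ in range(classes)]
    b = [0.0] * classes
    for _ in range(epochs):
        gW = [[0.0] * n for _ in range(classes)]
        gb = [0.0] * classes
        for x, y in zip(X, Y):
            p = softmax([sum(W[k][j] * x[j] for j in range(n)) + b[k]
                         for k in range(classes)])
            for k in range(classes):
                delta = p[k] - (1.0 if k == y else 0.0)
                gb[k] += delta
                for j in range(n):
                    gW[k][j] += delta * x[j]
        for k in range(classes):         # averaged full-batch update
            b[k] -= lr * gb[k] / len(X)
            for j in range(n):
                W[k][j] -= lr * gW[k][j] / len(X)
    return W, b

def classify(W, b, x):
    scores = [sum(wk[j] * x[j] for j in range(len(x))) + bk
              for wk, bk in zip(W, b)]
    return scores.index(max(scores))

# Toy "pixels": two well-separated classes in a 2-D feature space.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
Y = [0, 0, 1, 1]
W, b = train_batch(X, Y, classes=2)
```

The epoch-based variants in the paper update on subsets of the data rather than the full batch; only the accumulation loop changes.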
74

Real-time Process Modelling Based on Big Data Stream Learning

He, Fan January 2017 (has links)
Most control systems are assumed to be fixed, but this is an idealization: in real applications they are subject to many changes, some arising from the environment and some from changing system requirements. The goal of this thesis is therefore to model a dynamic, adaptive real-time control-system process from a big data stream. In this way, the control-system model can adjust itself using measurements acquired during operation and give suggestions for the next arriving input; this also means that the accuracy of the states under control depends heavily on the quality of the process model. In this thesis, a recurrent neural network is chosen to model the process because it is a comparatively cheap and fast form of artificial intelligence. Most existing artificial-intelligence approaches require a database, and the bigger the database, the more accurate the result can be; in case-based reasoning, for example, a test case must be compared with every case in the database, and the closest case's result is taken as the reference. A neural network, by contrast, needs no large database to support and search, only simple calculation, because the information is stored in the connections themselves. Each small unit, called a neuron, computes a linear combination, yet a network of neurons can perform complex, non-linear functions. For training, backpropagation and the Kalman filter are used together. Backpropagation is a widely used and stable optimization algorithm; the Kalman filter is newer to gradient-based optimization but has been shown to converge faster than traditional first-order gradient-based algorithms. Several experiments were prepared to compare the new and existing algorithms under various circumstances. The first set of experiments uses static systems and investigates only the convergence rate and accuracy of the different algorithms. The second set uses time-varying systems, bringing one more attribute, adaptivity, into consideration.
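A hedged illustration of the Kalman-filter-style online updating mentioned above: recursive least squares applied to a linear model on a data stream. The thesis trains a recurrent neural network; this linear sketch only shows how each new measurement refines the model without any database of past cases:

```python
def rls_stream(stream, dim, lam=1.0, delta=100.0):
    """Recursive least squares: a Kalman-filter-style online update that refines
    the model with each new measurement, never revisiting earlier samples."""
    w = [0.0] * dim
    # P is the (dim x dim) inverse-correlation matrix, initialized large.
    P = [[delta if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for x, y in stream:
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(x[i] * Px[i] for i in range(dim))
        k = [v / denom for v in Px]                      # gain vector
        err = y - sum(w[i] * x[i] for i in range(dim))   # innovation
        w = [w[i] + k[i] * err for i in range(dim)]
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return w

# Stream drawn from y = 2*u + 1; feature vector x = (u, 1) gives slope and bias.
data = [((u / 10.0, 1.0), 2 * (u / 10.0) + 1) for u in range(50)]
w = rls_stream(data, dim=2)
```

For a time-varying system, a forgetting factor lam < 1 down-weights old measurements, which is the adaptivity attribute the second set of experiments examines.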
75

Odhady časových řad pomocí modelů neuronových sítí / Time series estimation using neural network models

Jiráň, Robin January 2017 (has links)
This thesis deals with using neural network models as an alternative to time-series models based on the Box-Jenkins methodology. The work is divided into two parts according to the model-construction method. Each part contains theory explaining the individual processes and the course of model construction, followed by two experiments demonstrating the difference in approach to designing the given model and creating a forecast of estimated values for the following year. The last part evaluates the quality of the predictions and considers the use of neural network models as an alternative to models based on the Box-Jenkins methodology.
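The Box-Jenkins side of the comparison can be sketched with the simplest case: an AR(1) model fitted by least squares and iterated to forecast. The series and coefficients below are illustrative only:

```python
def fit_ar1(series):
    """Least-squares fit of y[t] = c + phi * y[t-1], the simplest Box-Jenkins
    style model a neural network forecast would be benchmarked against."""
    xs = series[:-1]
    ys = series[1:]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, c, phi, steps):
    """Iterate the fitted recurrence to produce multi-step-ahead forecasts."""
    out = []
    last = series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic series generated exactly by y[t] = 1 + 0.5*y[t-1], y[0] = 0.
series = [0.0]
for _ in range(20):
    series.append(1.0 + 0.5 * series[-1])
c, phi = fit_ar1(series)
```

A neural-network alternative replaces the linear map c + phi*y[t-1] with a learned nonlinear function of the lagged values, which is the comparison the thesis carries out.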
76

Training Spiking Neural Networks for Energy-Efficient Neuromorphic Computing

Gopalakrishnan Srinivasan (8088431) 06 December 2019 (has links)
<p>Spiking Neural Networks (SNNs), widely known as the third generation of artificial neural networks, offer a promising path toward the brain's processing capability for cognitive tasks. With a more biologically realistic perspective on input processing, an SNN performs neural computations using spikes in an event-driven manner. This asynchronous spike-based computing capability can be exploited to achieve improved energy efficiency in neuromorphic hardware. Furthermore, an SNN, on account of spike-based processing, can be trained in an unsupervised manner using Spike Timing Dependent Plasticity (STDP). STDP-based learning rules modulate the strength of a multi-bit synapse based on the correlation between the spike times of the input and output neurons. In order to achieve plasticity with compressed synaptic memory, a stochastic binary synapse is proposed in which spike-timing information is embedded in the synaptic switching probability. A bio-plausible probabilistic-STDP learning rule consistent with Hebbian learning theory is proposed to train a network of binary as well as quaternary synapses. In addition, a hybrid probabilistic-STDP learning rule incorporating Hebbian and anti-Hebbian mechanisms is proposed to enhance the learnt representations of the stochastic SNN. The efficacy of the presented learning rules is demonstrated for feed-forward fully-connected and residual convolutional SNNs on the MNIST and the CIFAR-10 datasets.<br></p><p>STDP-based learning is limited to shallow SNNs (<5 layers), yielding lower than acceptable accuracy on complex datasets. This thesis proposes a block-wise complexity-aware training algorithm, referred to as BlocTrain, for incrementally training deep SNNs with reduced memory requirements using spike-based backpropagation through time. The deep network is divided into blocks, where each block consists of a few convolutional layers followed by an auxiliary classifier. 
The blocks are trained sequentially using local errors from the respective auxiliary classifiers. Also, the deeper blocks are trained only on the hard classes determined using the class-wise accuracy obtained from the classifier of previously trained blocks. Thus, BlocTrain improves the training time and computational efficiency with increasing block depth. In addition, higher computational efficiency is obtained during inference by exiting early for easy class instances and activating the deeper blocks only for hard class instances. The ability of BlocTrain to provide improved accuracy as well as higher training and inference efficiency compared to end-to-end approaches is demonstrated for deep SNNs (up to 11 layers) on the CIFAR-10 and the CIFAR-100 datasets.<br></p><p>Feed-forward SNNs are typically used for static image recognition while recurrent Liquid State Machines (LSMs) have been shown to encode time-varying speech data. Liquid-SNN, consisting of input neurons sparsely connected by plastic synapses to randomly interlinked reservoir of spiking neurons (or liquid), is proposed for unsupervised speech and image recognition. The strength of the synapses interconnecting the input and liquid are trained using STDP, which makes it possible to infer the class of a test pattern without a readout layer typical in standard LSMs. The Liquid-SNN suffers from scalability challenges due to the need to primarily increase the number of neurons to enhance the accuracy. SpiLinC, composed of an ensemble of multiple liquids, where each liquid is trained on a unique input segment, is proposed as a scalable model to achieve improved accuracy. SpiLinC recognizes a test pattern by combining the spiking activity of the individual liquids, each of which identifies unique input features. 
As a result, SpiLinC offers comparable accuracy to Liquid-SNN with added synaptic sparsity and faster training convergence, which is validated on the digit subset of TI46 speech corpus and the MNIST dataset.</p>
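A hedged sketch of the two learning rules discussed above: classical pair-based STDP produces a signed weight change from the pre/post spike-time gap, while for a stochastic binary synapse the same timing information can instead set a switching probability. Parameter values are illustrative, not those of the thesis:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes the
    post-synaptic one, depress otherwise, exponentially in the time gap."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)    # causal pair -> potentiation
    return -a_minus * math.exp(dt / tau)       # anti-causal pair -> depression

def stochastic_switch_prob(t_pre, t_post, p_max=0.5, tau=20.0):
    """For a binary synapse, the spike-timing information is instead embedded
    in a switching probability (a simplified reading of probabilistic STDP)."""
    dt = abs(t_post - t_pre)
    return p_max * math.exp(-dt / tau)
```

In the probabilistic rule, the sign of the timing gap would still choose between switching the binary weight up or down; only the update magnitude becomes a coin flip.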
77

An Evaluation of Backpropagation Neural Network Modeling as an Alternative Methodology for Criterion Validation of Employee Selection Testing

Scarborough, David J. (David James) 08 1900 (has links)
Employee selection research identifies and makes use of associations between individual differences, such as those measured by psychological testing, and individual differences in job performance. Artificial neural networks are computer simulations of biological nerve systems that can be used to model unspecified relationships between sets of numbers. Thirty-five neural networks were trained to estimate normalized annual revenue produced by telephone sales agents based on personality and biographic predictors using concurrent validation data (N=1085). Accuracy of the neural estimates was compared to OLS regression and a proprietary nonlinear model used by the participating company to select agents.
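Criterion validation of the kind described above ultimately compares models by the correlation between their performance estimates and the observed criterion. A minimal sketch with hypothetical numbers, not the study's data:

```python
import math

def pearson(xs, ys):
    """Criterion validity is typically reported as the correlation between a
    model's predicted performance and the observed criterion (here, revenue)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

# Hypothetical normalized annual revenue vs. two sets of model estimates.
actual  = [0.1, 0.4, 0.35, 0.8, 0.9]
model_a = [0.15, 0.35, 0.4, 0.75, 0.85]   # tracks the criterion closely
model_b = [0.5, 0.1, 0.9, 0.2, 0.4]       # weak relation to the criterion
```

The study's comparison of neural networks against OLS regression and the proprietary nonlinear model amounts to comparing such validity coefficients on held-out agents.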
78

Klasifikace příspěvků ve webových diskusích / Classification of Web Forum Entries

Margold, Tomáš January 2008 (has links)
This thesis deals with the classification of text on the web. Available methods for classifying and segmenting text entries are described. Part of the thesis is an implementation of the naive Bayes algorithm and of a classifier using neural networks. The selected methods are compared with respect to their error rate and other classification characteristics.
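The first of the two implemented classifiers, naive Bayes, can be sketched in a few lines; the toy documents below are illustrative and unrelated to the thesis's forum data:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial naive Bayes with Laplace smoothing — a toy sketch of the
    first classifier mentioned above, not the thesis implementation."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        best, best_score = None, float("-inf")
        total_docs = sum(self.class_counts.values())
        for label, cdocs in self.class_counts.items():
            score = math.log(cdocs / total_docs)       # class prior
            total_words = sum(self.word_counts[label].values())
            for w in doc.lower().split():
                # Laplace smoothing keeps unseen words from zeroing the product.
                score += math.log((self.word_counts[label][w] + 1)
                                  / (total_words + len(self.vocab)))
            if best is None or score > best_score:
                best, best_score = label, score
        return best

# Toy forum entries labelled spam vs. ok.
docs = ["buy cheap pills now", "limited offer buy now",
        "great discussion about compilers", "question about neural networks"]
labels = ["spam", "spam", "ok", "ok"]
nb = NaiveBayes().fit(docs, labels)
```

The neural-network classifier in the thesis would instead learn weights over the same bag-of-words features, which is what the error-rate comparison measures.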
79

Design and Optimization of DSP Techniques for the Mitigation of Linear and Nonlinear Impairments in Fiber-Optic Communication Systems / DESIGN AND OPTIMIZATION OF DIGITAL SIGNAL PROCESSING TECHNIQUES FOR THE MITIGATION OF LINEAR AND NONLINEAR IMPAIRMENTS IN FIBER-OPTIC COMMUNICATION SYSTEMS

Maghrabi, Mahmoud MT January 2021 (has links)
Optical fibers play a vital role in modern telecommunication systems and networks. An optical fiber link imposes linear and nonlinear distortions on the propagating light-wave signal due to the inherently dispersive nature and nonlinear behavior of the fiber. These distortions impede the increasing demand for higher data rates over longer transmission distances. Developing efficient and computationally inexpensive digital signal processing (DSP) techniques to effectively compensate for the fiber impairments is therefore essential and of preeminent importance. This thesis proposes two DSP-based approaches for mitigating the induced distortions in short-reach and long-haul fiber-optic communication systems. The first approach introduces a powerful digital nonlinear feed-forward equalizer (NFFE) exploiting a multilayer artificial neural network (ANN). The proposed ANN-NFFE mitigates nonlinear impairments of short-haul optical fiber communication systems arising from the nonlinearity introduced by direct photo-detection. In a direct detection system, the detection process is nonlinear because the photo-current is proportional to the absolute square of the electric field intensity. The proposed equalizer combines low computational cost with high equalization performance, comparable to the benchmark compensation performance achieved by a maximum-likelihood sequence estimator. The equalizer trains an ANN to act as a nonlinear filter whose impulse response removes the intersymbol interference (ISI) distortions of the optical channel. Owing to the proposed extensive training, the equalizer achieves the ultimate performance limit of any feed-forward equalizer. The performance and efficiency of the equalizer are investigated by applying it to various practical short-reach fiber-optic transmission system scenarios, extracted from practical metro/media access networks and data center applications. 
The obtained results show that the ANN-NFFE compensates for the received BER degradation and significantly increases the tolerance to chromatic dispersion distortion. The second approach is devoted to blindly combating impairments of long-haul fiber-optic systems and networks. A novel adjoint sensitivity analysis (ASA) approach for the nonlinear Schrödinger equation (NLSE), which describes light-wave propagation in optical fiber communication systems, is proposed. The proposed ASA approach significantly accelerates the sensitivity calculations in any fiber-optic design problem: using only one extra adjoint system simulation, all the sensitivities of a general objective function with respect to all fiber design parameters are estimated. We provide a full description of the solution to the derived adjoint problem. The accuracy and efficiency of our proposed algorithm are investigated through a comparison with the accurate but computationally expensive central finite-differences (CFD) approach. Numerical simulation results show that the proposed ASA algorithm has the same accuracy as the CFD approach but with a much lower computational cost. Moreover, we propose an efficient, robust, and accelerated adaptive digital back propagation (A-DBP) method based on an adjoint optimization technique. Provided that the total transmission distance is known, the proposed A-DBP algorithm blindly compensates for the linear and nonlinear distortions of point-to-point long-reach optical fiber transmission systems or multi-point optical fiber transmission networks, without knowing the launch power and channel parameters. The NLSE-based ASA approach is extended for the sensitivity analysis of the general multi-span DBP model. A modified split-step Fourier method is introduced to solve the adjoint problem, and a complete analysis of its computational complexity is provided. 
An adjoint-based optimization (ABO) technique is introduced to significantly accelerate the parameter extraction of the A-DBP. The ABO algorithm utilizes a sequential quadratic programming (SQP) technique coupled with the extended ASA algorithm to rapidly solve the A-DBP training problem and optimize the design parameters with a minimum overhead of extra system simulations. Regardless of the number of A-DBP design parameters, the derivatives of the training objective function with respect to all parameters are estimated using only one extra adjoint system simulation per optimization iterate. This contrasts with traditional finite-difference (FD)-based optimization methods, whose per-iterate sensitivity analysis cost scales linearly with the number of parameters. The robustness, performance, and efficiency of the proposed A-DBP algorithm are demonstrated by applying it to mitigate the distortions of a 4-span optical fiber communication system scenario. Our results show that the proposed A-DBP achieves the optimal compensation performance obtained using an ideal fine-mesh DBP scheme utilizing the correct channel parameters. Compared to A-DBPs trained using SQP algorithms based on forward, backward, and central FD approaches, the proposed ABO algorithm trains the A-DBP 2.02 times faster than the backward/forward FD-based optimizers, and 3.63 times faster than the more accurate CFD-based optimizer. The achieved gain further increases as the number of design parameters increases. A coarse-mesh A-DBP with fewer spans is also adopted to significantly reduce the computational complexity, achieving compensation performance higher than that obtained using a coarse-mesh DBP with the full number of spans. 
/ Thesis / Doctor of Philosophy (PhD) / This thesis proposes two powerful and computationally efficient digital signal processing (DSP)-based techniques, namely, artificial neural network nonlinear feed forward equalizer (ANN-NFFE) and adaptive digital back propagation (A-DBP) equalizer, for mitigating the induced distortions in short-reach and long-haul fiber-optic communication systems, respectively. The ANN-NFFE combats nonlinear impairments of direct-detected short-haul optical fiber communication systems, achieving compensation performance comparable to the benchmark performance obtained using maximum-likelihood sequence estimator with much lower computational cost. A novel adjoint sensitivity analysis (ASA) approach is proposed to significantly accelerate sensitivity analyses of fiber-optic design problems. The A-DBP exploits a gradient-based optimization method coupled with the ASA algorithm to blindly compensate for the distortions of coherent-detected fiber-optic communication systems and networks, utilizing the minimum possible overhead of performed system simulations. The robustness and efficiency of the proposed equalizers are demonstrated using numerical simulations of varied examples extracted from practical optical fiber communication systems scenarios.
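As a simplified, hedged stand-in for the feed-forward equalization idea: a linear equalizer adapted by LMS to undo a two-tap ISI channel. The thesis's ANN-NFFE is nonlinear and targets direct-detection distortion; this sketch only shows how adaptive taps learn to invert ISI (channel, step size and symbols are illustrative):

```python
import random

def lms_equalizer(symbols, channel, taps=5, mu=0.02, delay=1):
    """Adaptive linear feed-forward equalizer trained by LMS: the taps are
    adjusted so the filtered channel output matches the (delayed) sent symbol."""
    # The channel introduces ISI: each received sample mixes neighbouring symbols.
    received = [sum(channel[k] * symbols[n - k]
                    for k in range(len(channel)) if n - k >= 0)
                for n in range(len(symbols))]
    w = [0.0] * taps
    errs = []
    for n in range(taps - 1, len(received)):
        x = received[n - taps + 1:n + 1][::-1]      # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, x))    # equalizer output
        d = symbols[n - delay]                      # training (desired) symbol
        e = d - y
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        errs.append(e * e)
    return w, errs

random.seed(1)
symbols = [random.choice((-1.0, 1.0)) for _ in range(3000)]
w, errs = lms_equalizer(symbols, channel=[1.0, 0.5])
```

Replacing the linear combiner with a small neural network, as the thesis does, lets the same training loop cancel nonlinear (squared-field) distortion that no linear tap set can remove.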
80

High Performance Data Mining Techniques For Intrusion Detection

Siddiqui, Muazzam Ahmed 01 January 2004 (has links)
The rapid growth of computers transformed the way in which information and data are stored. With this new paradigm of data access comes the threat of this information being exposed to unauthorized and unintended users. Many systems have been developed that scrutinize data for deviations from the normal behavior of a user or system, or that search for a known signature within the data. These systems are termed Intrusion Detection Systems (IDS). They employ techniques varying from statistical methods to machine learning algorithms. Intrusion detection systems use audit data generated by operating systems, application software or network devices. These sources produce huge datasets with tens of millions of records. To analyze this data, data mining is used: a process for extracting useful patterns from a large bulk of information. A major obstacle is that traditional data mining and learning algorithms are overwhelmed by the volume and complexity of the available data, making them impractical for time-critical tasks like intrusion detection because of their long execution times. Our approach to this issue uses high performance data mining techniques to expedite the process by exploiting the parallelism in existing data mining algorithms and the underlying hardware. We show how high performance and parallel computing can be used to scale data mining algorithms to handle large datasets, allowing the data mining component to search a much larger set of patterns and models than traditional computational platforms and algorithms would allow. We develop parallel data mining algorithms by parallelizing existing machine learning techniques using cluster computing. These algorithms include parallel backpropagation and parallel fuzzy ARTMAP neural networks. 
We evaluate the performance of the developed models in terms of speedup over traditional algorithms, prediction rate and false alarm rate. Our results showed that the traditional backpropagation and fuzzy ARTMAP algorithms can benefit from high performance computing techniques, which makes them well suited for time-critical tasks like intrusion detection.
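The data-parallel decomposition underlying parallel backpropagation can be illustrated with a linear model: each worker computes the gradient over its own data partition, and the partial gradients sum to exactly the full-batch gradient. This thread-based sketch shows only the decomposition (Python threads give no CPU speedup; the thesis uses cluster computing):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_gradient(chunk, w):
    """Gradient of squared error for a linear model over one data partition.
    Because the loss is a sum over records, partition gradients add up to the
    full-batch gradient — the property that makes the training data-parallel."""
    g = [0.0] * len(w)
    for x, y in chunk:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        for j, xj in enumerate(x):
            g[j] += 2.0 * err * xj
    return g

def parallel_gradient(data, w, workers=4):
    """Split the dataset across workers, compute partial gradients, reduce."""
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = list(ex.map(lambda c: partial_gradient(c, w), chunks))
    return [sum(col) for col in zip(*partials)]

# Toy audit records: y = 3*x + 2, starting weights at zero.
data = [((float(i), 1.0), 3.0 * i + 2.0) for i in range(100)]
w0 = [0.0, 0.0]
g_par = parallel_gradient(data, w0)
g_ser = partial_gradient(data, w0)
```

On a cluster, the same pattern distributes the backpropagation passes of a neural network across nodes and reduces the weight-gradient sums once per batch.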
