1

A local network neighbourhood artificial immune system

Graaff, A.J. (Alexander Jakobus), 17 October 2011
As information becomes increasingly available online and an ever more integral part of any business, the true value of large amounts of stored data lies in the discovery of hidden and unknown relations, connections, or traits in the data. Discovering these hidden relations can inform strategic decisions that affect the success of a business. Data clustering is one of many methods for partitioning data into groups such that patterns within the same group share some common trait not shared by patterns in other groups. This thesis proposes a new artificial immune model for the problem of data clustering. The new model is inspired by the network theory of immunology and differs from its network-based predecessor models in its formation of artificial lymphocyte networks. The proposed model is first applied to data clustering problems in stationary environments. Two different techniques are then proposed that enhance the model to dynamically determine the number of clusters in a data set with minimal to no user interference. A technique for generating synthetic data sets for data clustering in non-stationary environments is then proposed. Lastly, the original model and the enhanced version that dynamically determines the number of clusters are applied to the generated synthetic non-stationary clustering problems. The influence of the parameters on clustering performance is investigated for all versions of the proposed artificial immune model and supported by empirical results and statistical hypothesis tests. / Thesis (PhD)--University of Pretoria, 2011. / Computer Science / unrestricted
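The abstract describes the model only at a high level. As a rough, hypothetical sketch of the general idea behind network-based immune clustering, and not the thesis's actual algorithm, the following assumes Euclidean distance as the affinity measure, a fixed pool of artificial lymphocytes (ALCs), and a hand-picked network threshold; every name and parameter here is invented for illustration.

```python
import numpy as np

def immune_network_clustering(patterns, n_alcs=10, epochs=20,
                              learning_rate=0.3, network_threshold=1.5):
    """Toy sketch of network-based immune clustering (illustrative only).

    Artificial lymphocytes (ALCs) adapt toward presented patterns; ALCs
    with high mutual affinity (small Euclidean distance) are linked, and
    each connected ALC network is treated as one cluster.
    """
    rng = np.random.default_rng(0)
    lo, hi = patterns.min(axis=0), patterns.max(axis=0)
    alcs = rng.uniform(lo, hi, size=(n_alcs, patterns.shape[1]))

    # Present each pattern; the most stimulated ALC moves toward it.
    for _ in range(epochs):
        for p in patterns:
            best = np.argmin(np.linalg.norm(alcs - p, axis=1))
            alcs[best] += learning_rate * (p - alcs[best])

    # Link ALCs whose mutual distance falls below the network threshold.
    n = len(alcs)
    neighbours = [[j for j in range(n) if j != i
                   and np.linalg.norm(alcs[i] - alcs[j]) < network_threshold]
                  for i in range(n)]

    # Each connected component of the ALC graph is one cluster.
    labels, cluster = [-1] * n, 0
    for i in range(n):
        if labels[i] == -1:
            stack = [i]
            while stack:
                k = stack.pop()
                if labels[k] == -1:
                    labels[k] = cluster
                    stack.extend(neighbours[k])
            cluster += 1

    # Assign each pattern to the cluster of its nearest ALC.
    dists = np.linalg.norm(patterns[:, None, :] - alcs[None, :, :], axis=2)
    return np.array([labels[k] for k in dists.argmin(axis=1)])
```

Note that the number of connected ALC networks, and hence the number of clusters, emerges from the network threshold rather than being fixed in advance; determining it dynamically with minimal user interference is what the thesis's enhanced techniques address.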
2

Die ideaal van kunsmatige intelligensie : 'n hersenskim? [The ideal of artificial intelligence: a phantasm?] / J.A. Louw

Louw, Jacobus Adriaan, January 2010
The ideal of artificial intelligence can be stated, firstly, as the ability of a mechanical (or electronic) agent to observe, reason, learn, communicate and act in complex environments as a human does, and secondly, as the ambition to explain this type of behaviour in humans, animals or any other type of agent. The aim of this study is firstly to determine whether this ideal is feasible and secondly to examine its physicalist premise, viz. that everything is physical, from the standpoint of Dooyeweerd's creation, fall and redemption motive. First we determine the essence of artificial intelligence through the Church–Turing thesis. We then place the essence of artificial intelligence alongside the essence of life, firstly to see whether the construction of an artificial intelligence agent is possible, and secondly to see whether the subject of artificial intelligence has something to say about intelligent behaviour in humans, animals and similar agents. Lastly we examine the physicalist premise of artificial intelligence, viz. that everything is physical, from the reformative creation, fall and redemption motive. The Church–Turing thesis forms the boundary between what is feasible in artificial intelligence and what is not. Every component of the thesis is limited to the arithmetic law sphere of Being, i.e. the succession of discrete elements in a set. Any effort to reduce the spatial aspect of Being to its arithmetic aspect, such as the enumeration of the irrational numbers, ends in an antinomy. Any artificial intelligence agent is by its nature limited to the arithmetic law sphere of Being. The structural intertwinement that such an agent has with its underlying physical components is, in contrast with that of living organisms, an irreversibly grounded enkapsis. Life and mind have, in contrast to the arithmetic seclusion of an artificial intelligence agent, a fullness and totality: an ability to unlock Being in its fullness, which comes to the fore in the way that any living organism unlocks the plastic horizon of Being in its internal and phenomenological horizons. The unlocking of the spatial aspect, with its kernel of totality, simultaneity and continuity, plays a key role here. In both these horizons the organism stands in a living enkapsis with both its underlying physical substrate and the physical things in its external surroundings. The ideal of artificial intelligence is thus a phantasm; the only comment it can make on biology concerns the succession of discrete elements in a system. Hempel's dilemma and the halting problem expose the physicalist point of departure, that everything is physical, as a religious premise which is not empirically verifiable. Instead of a better view of Being, the contours of the meaning of life, as well as all the supra-physical aspects of Being, fade away or are denied, concealing Being. The only way in which we can gain the broadest possible insight into Being is in the light of the Word of God. / Thesis (M.Phil.)--North-West University, Potchefstroom Campus, 2011.
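The halting problem invoked here is standardly demonstrated with a diagonalization argument. The sketch below is textbook material rather than anything from the thesis, and the function names are invented; it shows why no total halting decider can exist, which is the limit on computation the abstract leans on.

```python
def halts(program, data):
    """Suppose, for contradiction, this were a total, always-correct
    decider returning True iff program(data) halts."""
    raise NotImplementedError("no such total decider can exist")

def diagonal(program):
    # Do the opposite of whatever the supposed decider predicts.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# diagonal(diagonal) would halt if and only if halts(diagonal, diagonal)
# reports that it does not, contradicting the assumed correctness of
# halts. Hence no total halting decider exists.
```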
3

Comparing generalized additive neural networks with multilayer perceptrons / Johannes Christiaan Goosen

Goosen, Johannes Christiaan, January 2011
In this dissertation, generalized additive neural networks (GANNs) and multilayer perceptrons (MLPs) are studied and compared as prediction techniques. MLPs are the most widely used type of artificial neural network (ANN), but are considered black boxes with regard to interpretability. There is currently no simple a priori method to determine the number of hidden neurons in each of the hidden layers of an ANN. Guidelines exist that are either heuristic or based on simulations derived from limited experiments. A modified version of the neural network construction with cross-validation samples (N2C2S) algorithm is therefore implemented and used to construct good MLP models, enabling comparison with GANN models. GANNs are a relatively new type of ANN, based on the generalized additive model. The architecture of a GANN is less complex than that of an MLP, and its results can be interpreted with a graphical method called the partial residual plot. A GANN consists of an input layer where each input node has its own MLP with one hidden layer. Originally, GANNs were constructed by interpreting partial residual plots. This method is time-consuming and subjective, which may lead to suboptimal models. Consequently, an automated construction algorithm for GANNs, called AutoGANN, was created and implemented in the SAS® statistical language; it is used to create good GANN models. A number of experiments are conducted on five publicly available data sets to gain insight into the similarities and differences between GANN and MLP models. The data sets include regression and classification tasks. In-sample model selection with the SBC model selection criterion and out-of-sample model selection with the average validation error as model selection criterion are performed. The models created are compared in terms of predictive accuracy, model complexity, comprehensibility, ease of construction, and utility. The results show that the choice of model is highly dependent on the problem, as no single model always outperforms the other in terms of predictive accuracy. GANNs may be preferred for problems where interpretability of the results is important. The time taken to construct good MLP models with the modified N2C2S algorithm may be shorter than the time needed to build good GANN models with the automated construction algorithm. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
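The GANN architecture described above, where each input node feeds its own one-hidden-layer MLP and the univariate outputs are summed as in a generalized additive model, can be sketched in a few lines. The following is a minimal, hypothetical forward pass with invented weights and a made-up hidden-layer size; it is illustrative only and is not the AutoGANN implementation.

```python
import numpy as np

def gann_forward(x, subnets, bias=0.0):
    """Forward pass of a generalized additive neural network (sketch).

    Each input x[j] passes through its own one-hidden-layer MLP f_j;
    the model output is bias + sum_j f_j(x_j), as in a generalized
    additive model.
    """
    eta = bias
    for j, (w1, b1, w2, b2) in enumerate(subnets):
        hidden = np.tanh(w1 * x[j] + b1)  # hidden layer of input j's MLP
        eta += float(w2 @ hidden + b2)    # univariate contribution f_j(x_j)
    return eta

# Hypothetical example: three inputs, each with a two-neuron hidden layer.
rng = np.random.default_rng(1)
subnets = [(rng.normal(size=2), rng.normal(size=2),
            rng.normal(size=2), rng.normal()) for _ in range(3)]
print(gann_forward(np.array([0.5, -1.2, 2.0]), subnets))
```

Because each contribution f_j depends on a single input x_j, plotting f_j(x_j) against x_j gives exactly the kind of univariate, interpretable view that the partial residual plot provides.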