About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Its metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

Accelerating classifier training using AdaBoost within cascades of boosted ensembles : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Sciences at Massey University, Auckland, New Zealand

Susnjak, Teo. January 2009.
This thesis seeks to address current problems encountered when training classifiers within the framework of cascades of boosted ensembles (CoBE). At present, a significant challenge facing this framework is inordinate classifier training runtimes. In some cases it can take days or weeks (Viola and Jones, 2004; Verschae et al., 2008) to train a classifier. The protracted training runtimes are an obstacle to the wider use of this framework (Brubaker et al., 2006). They also hinder the process of producing effective object detection applications and make the testing of new theories and algorithms, as well as the verification of others' research, a considerable challenge (McCane and Novins, 2003). An additional shortcoming of the CoBE framework is its limited ability to train classifiers incrementally. Presently, the most reliable method of integrating new dataset information into an existing classifier is to re-train the classifier from the beginning using the combined new and old datasets. This process is inefficient: it lacks scalability and discards valuable information learned in previous training. To deal with these challenges, this thesis extends the research of Barczak et al. (2008) and presents alternative CoBE frameworks for training classifiers. The alternative frameworks reduce training runtimes by an order of magnitude over common CoBE frameworks and introduce additional tractability to the process. They achieve this while preserving the generalization ability of their classifiers. This research also introduces a new framework for incrementally training CoBE classifiers and shows how this can be done without re-training classifiers from the beginning. However, the incremental framework for CoBEs has some limitations. Although it is able to improve the positive detection rates of existing classifiers, it is currently unable to lower their false detection rates.
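The cascade training strategy this abstract refers to can be illustrated with a short sketch. The code below is an editorial illustration only, not the author's implementation or the alternative frameworks proposed in the thesis: each cascade stage is a small AdaBoost ensemble (here scikit-learn's AdaBoostClassifier on synthetic data), and only the negatives that earlier stages fail to reject are carried forward to train the later stages.

```python
# Illustrative cascade of AdaBoost stages (hypothetical data and settings).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-in for object / non-object feature vectors.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
pos, neg = X[y == 1], X[y == 0]

cascade = []
for stage in range(3):                 # real detectors use many more stages
    X_stage = np.vstack([pos, neg])
    y_stage = np.hstack([np.ones(len(pos)), np.zeros(len(neg))])
    clf = AdaBoostClassifier(n_estimators=25, random_state=0)
    clf.fit(X_stage, y_stage)
    cascade.append(clf)
    # Keep only the negatives this stage wrongly accepts: later stages are
    # trained on progressively harder false positives.
    neg = neg[clf.predict(neg) == 1]
    if len(neg) == 0:
        break

def cascade_predict(sample):
    """Accept a sample only if every stage accepts it."""
    return all(s.predict(sample.reshape(1, -1))[0] == 1 for s in cascade)

print("stages trained:", len(cascade), " accepted:", cascade_predict(pos[0]))
```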

A design framework and genetic algorithm for digital design optimisation on FPGAs

Savage, Matthew James Waipurukuma. January 2009.
Design tools of ever-increasing power are required to keep pace with technological improvements in chip production. Chip capacities continually increase, meaning that previously infeasible designs become feasible. These designs are typically larger and more complex than their predecessors. Usually, the time available to a designer does not increase at the same rate, so a designer is tasked with a greater workload in a very limited amount of time. Design tools and automation are therefore necessary to compensate for this situation. The limiting characteristics of a design tool are its ease of use, the range of systems it can be applied to, and the quality of the results obtained. Should a design tool lack in any of these three areas it will be of limited benefit. This work addresses only the quality of the results obtained; while the other two are essential, they are unlikely to be relevant if the tool is not adopted because its results were of insufficient quality. A design framework is proposed for the digital design of systems on FPGAs. This framework sets out the processes for producing a system specification of the design problem encountered, and then gives a procedure for processing that specification to produce a set of Pareto-optimal designs in VHDL that implement the specification. The actual mapping of a specification to a VHDL design is held in a mapping string, which allows optimisation to be separated from the other stages in the design framework. A new genetic algorithm, the Adaptive Speciation Genetic Algorithm (ASGA), is proposed, featuring customised selection, crossover, and mutation operators. This algorithm is assessed against other genetic algorithms from the literature on a knapsack problem and three digital design case studies: the design of a parameter estimation circuit for a Self-Tuning Regulator (STR), the design of a Sum-of-Absolute-Difference (SAD) function for video motion detection problems, and the design of a five-state Extended Kalman Filter (EKF). Results indicated that ASGA performed well on all of these problems. Through tests against other genetic algorithms, it was found that ASGA's selection operator was inferior in some cases to that of the Pareto Envelope Selection Algorithm (PESA) by Corne et al. By incorporating the selection operator of PESA, performance improvements could be gained in the EKF problem.
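As an editorial illustration of the kind of multi-objective search described above (and not the ASGA itself), the sketch below evolves bit-string "mapping strings" with crossover and mutation and carries the Pareto-non-dominated individuals between generations; the two objective functions are hypothetical stand-ins for, say, area and delay.

```python
# Generic multi-objective GA skeleton over a "mapping string" chromosome
# (illustrative only; objectives and parameters are made up).
import random

random.seed(0)
GENES, POP, GENERATIONS = 16, 40, 50

def objectives(chrom):
    """Two conflicting costs to minimise (hypothetical surrogates)."""
    area = sum(chrom)                          # more '1' genes -> more hardware
    delay = GENES - sum(chrom) + chrom[0] * 2  # fewer resources -> slower
    return area, delay

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    scored = [(c, objectives(c)) for c in pop]
    return [c for c, s in scored if not any(dominates(t, s) for _, t in scored)]

def crossover(p1, p2):
    cut = random.randrange(1, GENES)           # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.05):
    return [g ^ 1 if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    front = pareto_front(pop)                  # elitist: keep non-dominated designs
    children = [mutate(crossover(random.choice(front), random.choice(front)))
                for _ in range(POP - len(front))]
    pop = front + children

print(len(pareto_front(pop)), "non-dominated mapping strings found")
```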

Application of artificial neural networks in early detection of Mastitis from improved data collected on-line by robotic milking stations

Sun, Zhibin. January 2008.
Two types of artificial neural networks, the Multilayer Perceptron (MLP) and the Self-Organizing Feature Map (SOM), were employed to detect mastitis at robotic milking stations using preprocessed data relating to electrical conductivity and milk yield. The SOM was developed to classify health status into three categories: healthy, moderately ill and severely ill. The clustering results were successfully evaluated and validated using statistical techniques such as K-means clustering, ANOVA and Least Significant Difference. The results show that the SOM could be used in robotic milking stations as a detection model for mastitis. For developing the MLP models, a new mastitis definition based on higher electrical conductivity (EC) and lower quarter yield was created, and Principal Components Analysis (PCA) was adopted to address the multicollinearity present in the data. Four MLPs with four combined datasets were developed, and the results showed that the PCA-based MLP model is superior to the non-PCA-based models in many respects, such as lower complexity and higher predictive accuracy. The overall correct classification rate (CCR), sensitivity and specificity of the model were 90.74%, 86.90% and 91.36%, respectively. We conclude that the PCA-based model developed here can improve the accuracy of mastitis prediction by robotic milking stations.
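A minimal sketch of a PCA-then-MLP pipeline of the kind described above is given below; it uses synthetic data in place of the milking-station measurements and scikit-learn models in place of the author's, so the figures it prints are not the thesis's results.

```python
# Sketch of a PCA-then-MLP classification pipeline (synthetic stand-in data).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Redundant features mimic the multicollinearity noted in the abstract.
X, y = make_classification(n_samples=1500, n_features=8, n_informative=3,
                           n_redundant=4, weights=[0.85, 0.15], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),          # keep components explaining 95% of variance
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1),
)
model.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print(f"CCR={100 * (tp + tn) / len(y_te):.1f}%  "
      f"sensitivity={100 * tp / (tp + fn):.1f}%  "
      f"specificity={100 * tn / (tn + fp):.1f}%")
```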

Application of artificial neural networks for understanding and diagnosing the state of mastitis in dairy cattle

Hassan, K. J. January 2007.
Bovine mastitis adversely affects the dairy industry around the world. This disease is caused by a diverse range of bacteria, broadly categorised as minor and major pathogens. In-line tools that help identify these bacterial groupings in the early stages of the disease are advantageous, as timely decisions can be made before the cow develops any clinical symptoms. The first objective of this research was to identify the most informative milk parameters for the detection of minor and major bacterial pathogens. The second objective was to evaluate the potential of supervised and unsupervised neural network learning paradigms for the detection of minor-infected and major-infected quarters in the early stages of the disease. The third objective was to evaluate the effects of different proportions of infected to non-infected cases in the training data set on the correct classification rate of the supervised neural network models, as there are proportionately more non-infected cases in a herd than infected cases. A database developed at Lincoln University was used to achieve the research objectives. Starting at calving, quarter milk samples were collected weekly from 112 cows for a period of fourteen weeks, resulting in 4852 samples with complete records for somatic cell count (SCC), electrical resistance, protein percentage, fat percentage, and bacteriological status. To account for the effects of the stage of lactation on milk parameters, the data was divided into three days-in-milk ranges. In addition, cow variation was accounted for by the sire family from which the cow originated and the lactation number of each cow. The data was pre-processed before the application of advanced analytical techniques. Somatic cell score (SCS) and electrical resistance index were derived from somatic cell count and electrical resistance, respectively. After pre-processing, the data was divided into training and validation sets for the unsupervised neural network modelling experiment, and into training, calibration and validation sets for the supervised neural network modelling experiments. Prior to any modelling experiments, the data was analysed using statistical and multivariate visualisation techniques. Correlations (p<0.05) were found between the infection status of a quarter and its somatic cell score (SCS, 0.86), electrical resistance index (ERI, -0.59) and protein percentage (PP, 0.33). The multivariate parallel visualisation analysis validated the correlation analysis. Due to significant multicollinearity [correlations: SCS and ERI (-0.65: p<0.05); SCS and PP (0.32: p<0.05); ERI and PP (-0.35: p<0.05)], the original variables were decorrelated using principal component analysis. SCS and ERI were found to be the most informative variables for discriminating between non-infected, minor-infected and major-infected cases. The unsupervised neural network (USNN) model was trained using a training data set extracted from the database, containing approximately equal numbers of randomly selected records for each bacteriological status [not infected (NI), infected with a major pathogen (MJI) and infected with a minor pathogen (MNI)]. The USNN model was validated with the remaining data using the four principal components, days in milk (DIM), lactation number (LN), sire number, and bacteriological status (BS). The specificity of the USNN model in correctly identifying non-infected cases was 97%. Sensitivities for correctly detecting minor and major infections were 89% and 80%, respectively. The supervised neural network (SNN) models were trained, calibrated and validated with several sets of training, calibration and validation data, which were randomly extracted from the database in such a way that each set had a different proportion of infected to non-infected cases, ranging from 1:1 to 1:10. The overall accuracy of these models on the validation data sets gradually increased with the number of non-infected cases in the data sets (80% for 1:1, 84% for 1:2, 86% for 1:4 and 93% for 1:10). Specificities of the best models for correctly recognising non-infected cases for the four data sets were 82% for 1:1, 91% for 1:2, 94% for 1:4 and 98% for 1:10. Sensitivities for correctly recognising minor-infected cases were 86% for 1:1, 76% for 1:2, 71% for 1:4 and 44% for 1:10. Sensitivities for correctly recognising major-infected cases were 20% for 1:1, 20% for 1:2, 30% for 1:4 and 40% for 1:10. Overall, sensitivity for minor-infected cases decreased while that for major-infected cases increased with the number of non-infected cases in the training data set. Due to the very low prevalence of the MJI category in this particular herd, results for this category may be inconclusive. This research suggests that the somatic cell score and electrical resistance index of milk were the most effective variables for detecting the infection status of a quarter, followed by milk protein and fat percentages. The neural network models were able to differentiate milk containing minor and major bacterial pathogens based on milk parameters associated with mastitis. It is concluded that neural network models can be developed and incorporated into milking machines to provide an efficient and effective method for the diagnosis of mastitis.
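The class-ratio experiment described above can be sketched as follows; the data, classifier and sampling here are illustrative stand-ins, not the thesis's dataset or models, but the loop shows how training sets with different infected:non-infected proportions can be drawn from a pool and compared on a fixed test set.

```python
# Illustrative class-ratio experiment (synthetic data, hypothetical settings).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=10000, n_features=6, weights=[0.92, 0.08],
                           random_state=2)      # 1 = infected, 0 = non-infected
X_test, y_test = X[:2000], y[:2000]             # fixed evaluation set
X_pool, y_pool = X[2000:], y[2000:]
pos = np.where(y_pool == 1)[0]
neg = np.where(y_pool == 0)[0]

for ratio in (1, 2, 4, 10):                     # infected : non-infected = 1 : ratio
    idx = np.concatenate([pos, neg[:len(pos) * ratio]])
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=2)
    clf.fit(X_pool[idx], y_pool[idx])
    pred = clf.predict(X_test)
    sens = recall_score(y_test, pred, pos_label=1)   # infected correctly flagged
    spec = recall_score(y_test, pred, pos_label=0)   # healthy correctly cleared
    print(f"1:{ratio:<2d}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```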

Implementation of stochastic neural networks for approximating random processes

Ling, Hong. January 2007.
Artificial Neural Networks (ANNs) can be viewed as mathematical models that simulate natural and biological systems by mimicking the information-processing methods of the human brain. Current ANNs focus only on approximating arbitrary deterministic input-output mappings. However, they do not adequately represent the variability observed in systems' natural settings, nor do they capture the complexity of whole-system behaviour. This thesis addresses the development of a new class of neural networks, called Stochastic Neural Networks (SNNs), in order to simulate the internal stochastic properties of systems. A suitable mathematical model for SNNs is developed from the canonical representation of stochastic processes or systems given by the Karhunen-Loève theorem. Successful real examples, such as the analysis of the full displacement field of wood in compression, confirm the validity of the proposed neural networks. Furthermore, analysis of the internal workings of SNNs provides an in-depth view of their operation that helps to gain a better understanding of how SNNs simulate stochastic processes.
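The canonical (Karhunen-Loève) representation that SNNs build on can be illustrated numerically: the sketch below estimates the covariance of a set of sample paths, takes its leading eigenmodes, and resamples the uncorrelated coefficients to generate new realisations. It is a hypothetical NumPy illustration, not the thesis's network.

```python
# Numerical Karhunen-Loève expansion of a random process (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_points = 500, 100

# Sample paths of a zero-mean process (cumulative sums approximate Brownian motion).
increments = rng.normal(scale=np.sqrt(1.0 / n_points), size=(n_paths, n_points))
paths = np.cumsum(increments, axis=1)

# Empirical covariance and its eigen-decomposition give the KL basis functions.
cov = np.cov(paths, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 10                                    # truncate to the k dominant modes
coeffs = paths @ eigvecs[:, :k]           # projections = KL expansion coefficients

# Sanity check: variances of the projected coefficients match the eigenvalues.
print("coefficient variances match eigenvalues:",
      np.allclose(coeffs.var(axis=0, ddof=1), eigvals[:k]))

# New realisations of the process: resample coefficients with the same variances.
new_coeffs = rng.normal(scale=np.sqrt(eigvals[:k]), size=(5, k))
new_paths = new_coeffs @ eigvecs[:, :k].T
print(f"variance captured by {k} modes: {eigvals[:k].sum() / eigvals.sum():.3f}")
print("shape of regenerated sample paths:", new_paths.shape)
```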
