41

Machine Learning Models to Predict Cracking on Steel Slabs During Continuous Casting

Sibanda, Jacob January 2024 (has links)
Surface defects in steel slabs during continuous casting pose significant challenges for quality control and product integrity in the steel industry. Predicting and classifying these defects accurately is crucial for ensuring product quality and minimizing production losses. This thesis investigates the effectiveness of machine learning models in predicting surface defects of varying severity levels (ordinal classes) during the primary cooling stage of continuous casting. The study evaluates four machine learning algorithms, namely XGBoost (main and baseline models), Decision Tree, and One-vs.-Rest Support Vector Machine (O-SVM), all trained with imbalanced defect class data. Model evaluation is conducted using a set of performance metrics, including precision, recall, F1-score, accuracy, macro-averaged Mean Absolute Error (MAE), Receiver Operating Characteristic (ROC) curves, Weighted Kappa, and the Ordinal Classification Index (OCI). Results indicate that the XGBoost main model demonstrates robust performance across most evaluation metrics, with high accuracy, precision, recall, and F1-score. Furthermore, incorporating temperature data from the primary cooling process inside the mold significantly enhances the predictive capabilities of machine learning models for defect prediction in continuous casting. Key process parameters associated with defect formation, such as tundish temperature, casting speed, stopper rod argon pressure, and submerged entry nozzle (SEN) argon flow, are identified as significant contributors to defect severity. Feature importance and SHAP (SHapley Additive exPlanations) analysis reveal insights into the relationship between process variables and defect formation. Challenges and trade-offs, including model complexity, interpretability, and computational efficiency, are discussed. Future research directions include further optimization and refinement of machine learning models and collaboration with industry stakeholders to develop tailored solutions for defect prediction and quality control in continuous casting processes.
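The macro-averaged MAE mentioned among the metrics suits ordinal, imbalanced severity labels because it averages the per-class error rather than the per-sample error, so rare severity levels are not drowned out by the majority class. A minimal sketch (toy labels, not the thesis data):

```python
def macro_averaged_mae(y_true, y_pred):
    # Group absolute ordinal errors by true class, then average the
    # per-class MAEs so that rare severity levels weigh equally.
    per_class = {}
    for t, p in zip(y_true, y_pred):
        per_class.setdefault(t, []).append(abs(t - p))
    return sum(sum(v) / len(v) for v in per_class.values()) / len(per_class)

# Toy severity labels (0 = no defect, 2 = severe), heavily imbalanced:
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 0, 0, 0, 1, 1, 2, 0]
print(round(macro_averaged_mae(y_true, y_pred), 3))  # → 0.889
```

Note how the single badly missed severe slab (true 2, predicted 0) dominates the score, whereas a plain sample-averaged MAE would report only about 0.44.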
42

PATTERN RECOGNITION IN CLASS IMBALANCED DATASETS

Siddique, Nahian A 01 January 2016 (has links)
Class imbalanced datasets constitute a significant portion of the machine learning problems of interest, where recognizing the 'rare class' is the primary objective for most applications. Traditional linear machine learning algorithms are often not effective in recognizing the rare class. In this research work, a specifically optimized feed-forward artificial neural network (ANN) is proposed and developed to train from moderately to highly imbalanced datasets. The proposed methodology deals with the difficulty of the classification task in multiple stages: by optimizing the training dataset, modifying the kernel function used to generate the Gram matrix, and optimizing the NN structure. First, the training dataset is extracted from the available sample set through an iterative process of selective under-sampling. Then, the proposed artificial NN comprises a kernel function optimizer that specifically enhances class boundaries for imbalanced datasets by conformally transforming the kernel functions. Finally, a single hidden layer weighted neural network structure is proposed to train models from the imbalanced dataset. The proposed NN architecture is derived to effectively classify any binary dataset, even with a very high imbalance ratio, given appropriate parameter tuning and a sufficient number of processing elements. The effectiveness of the proposed method is evaluated on accuracy-based performance metrics, achieving scores close to and above 90% on several generic imbalanced datasets, and compared with state-of-the-art methods. The proposed model is also used for classification of a 25GB computed tomographic colonography database to test its applicability to big data. The effectiveness of under-sampling and of kernel optimization for training the NN model from the modified kernel Gram matrix representing the imbalanced data distribution is also analyzed experimentally. Computation time analysis shows the feasibility of the system for practical purposes. The report concludes with a discussion of the prospects of the developed model and suggestions for further work in this direction.
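The first stage above, extracting a training set by under-sampling the majority class, can be illustrated with a simplified random variant. The thesis describes an iterative, selective process; this sketch only shows the balancing idea, and all names are illustrative:

```python
import random

def undersample(X, y, ratio=1.0, seed=0):
    # Keep every rare-class sample; draw majority samples until the
    # majority:rare ratio reaches `ratio` (1.0 = fully balanced).
    rng = random.Random(seed)
    rare = [i for i, label in enumerate(y) if label == 1]
    majority = [i for i, label in enumerate(y) if label == 0]
    keep = rng.sample(majority, min(len(majority), int(len(rare) * ratio)))
    idx = sorted(rare + keep)
    return [X[i] for i in idx], [y[i] for i in idx]

X = [[i] for i in range(20)]
y = [1 if i < 4 else 0 for i in range(20)]  # 4 rare, 16 majority samples
Xb, yb = undersample(X, y)
print(sum(yb), len(yb) - sum(yb))  # → 4 4
```

A selective scheme would replace the random draw with a criterion (e.g. keeping majority samples near the class boundary), but the shape of the procedure is the same.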
43

Comparison of Machine Learning Techniques when Estimating Probability of Impairment : Estimating Probability of Impairment through Identification of Defaulting Customers one year Ahead of Time

Eriksson, Alexander, Långström, Jacob January 2019 (has links)
Probability of Impairment, or Probability of Default, is the ratio of customers within a segment who are expected not to fulfil their debt obligations and instead go into Default. This is a key metric within banking for estimating the level of credit risk, where the current standard is to estimate Probability of Impairment using Linear Regression. In this paper we show how this metric can instead be estimated through a classification approach with machine learning. By using models trained to find which specific customers will go into Default within the upcoming year, based on Neural Networks and Gradient Boosting, the Probability of Impairment is shown to be more accurately estimated than when using Linear Regression. Additionally, these models provide numerous real-life applications internally within the banking sector. The new features of importance we found can be used to strengthen the models currently in use, and the ability to identify customers about to go into Default lets banks take necessary actions ahead of time to cover otherwise unexpected risks.
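The link between the classification approach and the segment-level metric can be sketched very simply: once a classifier scores each customer's default risk, a segment's Probability of Impairment is the expected fraction of defaulters, i.e. the mean score. The numbers and names below are illustrative, not from the paper:

```python
def segment_pd(probabilities):
    # Probability of Impairment for a segment: the expected fraction of
    # customers defaulting, i.e. the mean of per-customer default scores.
    return sum(probabilities) / len(probabilities)

# Per-customer default probabilities from some trained classifier:
scores = [0.02, 0.05, 0.10, 0.90, 0.03]
print(round(segment_pd(scores), 3))  # → 0.22
```

The per-customer scores are what a regression on aggregate segment data cannot provide: they identify which specific customers drive the segment's risk.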
44

Diversified Ensemble Classifiers for Highly Imbalanced Data Learning and their Application in Bioinformatics

DING, ZEJIN 07 May 2011 (has links)
In this dissertation, the problem of learning from highly imbalanced data is studied. Imbalanced data learning is of great importance and challenge in many real applications. Dealing with a minority class normally needs new concepts, observations and solutions in order to fully understand the underlying complicated models. We try to systematically review and solve this special learning task in this dissertation. We propose a new ensemble learning framework, Diversified Ensemble Classifiers for Imbalanced Data Learning (DECIDL), based on the advantages of existing ensemble imbalanced learning strategies. Our framework combines three learning techniques: a) ensemble learning, b) artificial example generation, and c) diversity construction by reversely re-labeling data. As a meta-learner, DECIDL utilizes general supervised learning algorithms as base learners to build an ensemble committee. We create a standard benchmark data pool, which contains 30 highly skewed sets with diverse characteristics from different domains, in order to facilitate future research on imbalanced data learning. We use this benchmark pool to evaluate and compare our DECIDL framework with several ensemble learning methods, namely under-bagging, over-bagging, SMOTE-bagging, and AdaBoost. Extensive experiments suggest that our DECIDL framework is comparable with other methods. The data sets, experiments and results provide a valuable knowledge base for future research on imbalanced learning. We develop a simple but effective artificial example generation method for data balancing. Two new methods, DBEG-ensemble and DECIDL-DBEG, are then designed to improve the power of imbalanced learning. Experiments show that these two methods are comparable to the state-of-the-art methods, e.g., GSVM-RU and SMOTE-bagging. Furthermore, we investigate learning on imbalanced data from a new angle: active learning. By combining active learning with the DECIDL framework, we show that the newly designed Active-DECIDL method is very effective for imbalanced learning, suggesting that the DECIDL framework is very robust and flexible. Lastly, we apply the proposed learning methods to a real-world bioinformatics problem, protein methylation prediction. Extensive computational results show that the DECIDL method performs very well on this imbalanced data mining task. Importantly, the experimental results have confirmed our new contributions on this particular data learning problem.
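The abstract does not specify the dissertation's artificial example generation method; a common SMOTE-style interpolation sketch (illustrative only, not the author's exact DBEG method) conveys the general idea of synthesizing minority examples:

```python
import random

def synthesize(minority, n_new, seed=0):
    # SMOTE-style sketch: create artificial minority-class examples by
    # interpolating between random pairs of real minority samples.
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        t = rng.random()
        out.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return out

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new = synthesize(minority, 5)
print(len(new), all(0.0 <= v <= 1.0 for p in new for v in p))  # → 5 True
```

Interpolated points always lie inside the convex hull of the pair they were drawn from, which is why such methods densify the minority region rather than inventing outliers.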
45

Interacting Fermi gases

Whitehead, Thomas Michael January 2018 (has links)
Interacting Fermi gases are one of the chief paradigms of condensed matter physics. They have been studied since the beginning of the development of quantum mechanics, but continue to produce surprises today. Recent experimental developments in the field of ultracold atomic gases, as well as conventional solid state materials, have produced new and exotic forms of Fermi gases, the theoretical understanding of which is still in its infancy. This Thesis aims to provide updated tools and additional insights into some of these systems, through the application of both numerical and analytical techniques. The first Part of this Thesis is concerned with the development of improved numerical tools for the study of interacting Fermi gases. These tools take the form of accurate model potentials for the dipolar and contact interactions, as found in various ultracold atomic gas experiments, and a new form of Jastrow correlation factor that interpolates between the radial symmetry of the inter-electron Coulomb potential at short inter-particle distances, and the symmetry of the numerical simulation cell at large separation. These methods are designed primarily for use in quantum Monte Carlo numerical calculations, and provide high accuracy along with considerable acceleration of simulations. The second Part shifts focus to an analytical analysis of spin-imbalanced Fermi gases with an attractive contact interaction. The spin-imbalanced Fermi gas is shown to be unstable to the formation of multi-particle instabilities, generalisations of a Cooper pair containing more than two fermions, and then a theory of superconductivity is built from these instabilities. This multi-particle superconductivity is shown to be energetically favourable over conventional superconducting phases in spin-imbalanced Fermi gases, and its unusual experimental consequences are discussed.
46

A Model Fusion Based Framework For Imbalanced Classification Problem with Noisy Dataset

January 2014 (has links)
abstract: Data imbalance and data noise often coexist in real world datasets. Data imbalance affects the learning classifier by degrading its recognition power on the minority class, while data noise affects the learning classifier by providing inaccurate information and thus misleading the classifier. Because of these differences, data imbalance and data noise have been treated separately in the data mining field. Yet, such an approach ignores their mutual effects and as a result may lead to new problems. A desirable solution is to tackle these two issues jointly. Noting the complementary nature of generative and discriminative models, this research proposes a unified model fusion based framework to handle imbalanced classification with noisy datasets. The phase I study focuses on the imbalanced classification problem. A generative classifier, the Gaussian Mixture Model (GMM), is studied, which can learn the distribution of the imbalanced data to improve the discrimination power on imbalanced classes. By fusing this knowledge into cost SVM (cSVM), a CSG method is proposed. Experimental results show the effectiveness of CSG in dealing with imbalanced classification problems. The phase II study expands the research scope to include noisy datasets in the imbalanced classification problem. A model fusion based framework, K Nearest Gaussian (KNG), is proposed. KNG employs a generative modeling method, GMM, to model the training data as Gaussian mixtures and form adjustable confidence regions which are less sensitive to data imbalance and noise. Motivated by the K-nearest neighbor algorithm, the neighboring Gaussians are used to classify the testing instances. Experimental results show that the KNG method greatly outperforms traditional classification methods in dealing with imbalanced classification problems on noisy datasets. The phase III study addresses the issues of feature selection and parameter tuning for the KNG algorithm. To further improve the performance of the KNG algorithm, a Particle Swarm Optimization based method (PSO-KNG) is proposed. PSO-KNG formulates model parameters and data features into the same particle vector and thus can search for the best feature and parameter combination jointly. The experimental results show that PSO can greatly improve the performance of KNG with better accuracy and much lower computational cost. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2014
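The KNG idea of classifying by neighbouring Gaussians can be hinted at with a one-dimensional sketch: fit a Gaussian per class and assign a point to whichever component gives it the highest density. This is a drastic simplification of the GMM-based confidence regions described above, and all names and numbers are illustrative:

```python
import math

def fit_gaussian(xs):
    # Maximum-likelihood mean and variance of a 1-D sample.
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def classify(x, gaussians):
    # Assign x to the component with the highest Gaussian log-density;
    # each class's confidence region adapts to its own spread, so a tight
    # minority cluster is not swamped by a broad majority one.
    def logpdf(x, mu, var):
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
    return max(gaussians, key=lambda c: logpdf(x, *gaussians[c]))

gaussians = {"majority": fit_gaussian([0, 1, 2, 3, 4, 5, 6]),
             "minority": fit_gaussian([9.8, 10.0, 10.2])}
print(classify(9.5, gaussians))  # → minority
```

A distance-based nearest-neighbour rule with the same data could easily label 9.5 as majority; modelling each class's spread is what makes the minority region competitive.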
47

Improving adaptive methods of supervised learning for real data

Bahri, Emna 08 December 2010 (has links)
Machine learning faces various difficulties when confronted with real data. Such data are generally complex, voluminous and heterogeneous, come from varied sources, and are often acquired automatically. Among the best-known problems are the sensitivity of algorithms to noisy data and the handling of data whose class variable is unbalanced. Overcoming these problems is a real challenge for improving the effectiveness of the learning process on real data. In this thesis we chose to work on adaptive procedures of the boosting type that remain effective in the presence of noise or unbalanced data. First, we are interested in controlling noise when boosting is used. Boosting procedures have contributed greatly to improving the predictive power of classifiers in data mining, except in the presence of noisy data. In that case a twofold problem arises: over-fitting of the noisy examples and deterioration of boosting's convergence rate. Against this twofold problem, we propose AdaBoost-Hybride, an adaptation of the AdaBoost algorithm based on smoothing the results of boosting's earlier hypotheses, which gave very satisfactory experimental results. Then, we are interested in another difficult problem, prediction when the class distribution is unbalanced. We propose an adaptive boosting-type method based on associative classification, whose interest is that it allows the focus on small groups of cases, which is well suited to unbalanced data. This method relies on three contributions: (1) FCP-Growth-P, a supervised algorithm for generating frequent class itemsets, derived from FP-Growth by introducing a pruning condition based on counter-examples for the specification of rules; (2) W-CARP, an associative classification method which aims to give results at least equivalent to those of existing approaches in a much shorter execution time; and (3) CARBoost, an adaptive associative classification method that uses W-CARP as a weak classifier. Finally, in a chapter devoted to the specific application of intrusion detection, we compared the results of AdaBoost-Hybride and CARBoost to those of reference methods (KDD Cup 99 data).
48

Imbalanced Learning and Feature Extraction in Fraud Detection with Applications

Jacobson, Martin January 2021 (has links)
This thesis deals with fraud detection in a real-world environment, with datasets coming from Svenska Handelsbanken. The goal was to investigate how well machine learning can classify fraudulent transactions and how new additional features affect classification. The models used were EFSVM, RUTSVM, CS-SVM, ELM, MLP, Decision Tree, Extra Trees, and Random Forests. To determine the best results, the Matthews Correlation Coefficient was used as the performance metric, which has been shown to have a medium bias for imbalanced datasets. Each model could deal with highly imbalanced datasets, which is common in fraud detection. The best results were achieved with Random Forest and Extra Trees. The best scores were around 0.4 for the real-world datasets, though the score itself says nothing, as it is more a testimony to the dataset's separability. These scores were obtained when using aggregated features and not the standard raw dataset. The performance measure recall's scores were around 0.88-0.93, with an increase in precision of 34.4%-67%, resulting in a large decrease in False Positives. Evaluation results showed a great difference compared to test runs, either a substantial increase or decrease. Two theories as to why are discussed: a great distribution change in the evaluation set, and the sample size increase (100%) for evaluation, which could have led to the tests not being well representative of the performance. Feature aggregation was a central topic of this thesis, with the main focus on behaviour features which can describe patterns and habits of customers. For these there were five categories: sender's fraud history, sender's transaction history, sender's time transaction history, sender's history to receiver, and receiver's history. Out of these, the best performance increase came from the first, which gave the top score; the other datasets did not show as much potential, with most not increasing the results. Further studies need to be done before discarding these features, to be certain they don't improve performance. Together with the data aggregation, a tool (t-SNE) to visualize high-dimensional data was used to great success. With it, an early understanding could be formed of what newly added features would bring to classification. For the best dataset it could be seen that a new sub-cluster of transactions had been created, leading to the belief that classification scores could improve, which they did. Feature selection and PCA reduction techniques were also studied; PCA showed good results and increased performance, while feature selection had no conclusive improvements. Over- and under-sampling were used and neither improved the scores, though undersampling could maintain the results, which is interesting when increasing the dataset.
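The Matthews Correlation Coefficient used as the performance metric above is computed directly from the four confusion-matrix cells, which is why it is less biased on imbalanced data than accuracy. A sketch with illustrative fraud counts (not the thesis data):

```python
import math

def mcc(tp, tn, fp, fn):
    # Matthews Correlation Coefficient: balances all four confusion-matrix
    # cells; +1 is perfect, 0 is random, -1 is total disagreement.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Heavily imbalanced toy case: 990 legitimate, 10 fraudulent transactions.
print(round(mcc(tp=6, tn=970, fp=20, fn=4), 3))
# Accuracy would be (6 + 970) / 1000 = 97.6% despite missing 4 of 10 frauds.
```

The MCC for this toy case lands around 0.36, in the same region as the thesis' real-world scores of about 0.4, illustrating why a score near 0.4 can still correspond to usefully high recall.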
49

Early diagnosis and personalised treatment focusing on synthetic data modelling: Novel visual learning approach in healthcare

Mahmoud, Ahsanullah Y., Neagu, Daniel, Scrimieri, Daniele, Abdullatif, Amr R.A. 09 August 2023 (has links)
The early diagnosis and personalised treatment of diseases are facilitated by machine learning. The quality of data has an impact on diagnosis because medical data are usually sparse, imbalanced, and contain irrelevant attributes, resulting in suboptimal diagnosis. To address these data challenges, improve resource allocation, and achieve better health outcomes, a novel visual learning approach is proposed. This study contributes to the visual learning approach by determining whether less or more synthetic data is required to improve the quality of a dataset, such as the number of observations and features, according to the intended personalised treatment and early diagnosis. In addition, numerous visualisation experiments are conducted using statistical characteristics, cumulative sums, histograms, correlation matrices, root mean square error, and principal component analysis in order to visualise both original and synthetic data and address the data challenges. Real medical datasets for cancer, heart disease, diabetes, cryotherapy and immunotherapy are selected as case studies. As a benchmark and point of comparison for classification in terms of accuracy, sensitivity, and specificity, several models are implemented, such as k-Nearest Neighbours and Random Forest. A Generative Adversarial Network is used to create and manipulate synthetic data, whilst Random Forest is implemented to classify the data. An amendable and adaptable system is constructed by combining the Generative Adversarial Network and Random Forest models. The system model is presented with working steps, an overview and a flowchart. Experiments reveal that the majority of data-enhancement scenarios allow for the application of visual learning in the first stage of data analysis as a novel approach. To achieve a meaningful, adaptable synergy between appropriate-quality data and optimal classification performance while maintaining statistical characteristics, visual learning provides researchers and practitioners with practical human-in-the-loop machine learning visualisation tools. Prior to implementing algorithms, the visual learning approach can be used to actualise early and personalised diagnosis. For the immunotherapy data, Random Forest performed best with precision, recall, f-measure, accuracy, sensitivity, and specificity of 81%, 82%, 81%, 88%, 95%, and 60%, as opposed to 91%, 96%, 93%, 93%, 96%, and 73% for synthetic data, respectively. Future studies might examine optimal strategies to balance the quantity and quality of medical data.
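One of the checks listed above, root mean square error between original and synthetic data, is straightforward to sketch; a low RMSE between matched feature columns is one signal that the generated data preserves the statistics of the real data (toy values, illustrative names):

```python
import math

def rmse(a, b):
    # Root-mean-square error between an original feature column and its
    # synthetic counterpart: one of the checks used to judge whether
    # generated data preserves the statistics of the real data.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

original  = [5.1, 4.9, 6.3, 5.8, 7.0]
synthetic = [5.0, 5.1, 6.0, 6.0, 6.8]
print(round(rmse(original, synthetic), 3))  # → 0.21
```

In practice such a pointwise comparison only makes sense alongside the distributional checks the study lists (histograms, cumulative sums, correlation matrices), since synthetic rows need not align one-to-one with real rows.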
50

Development of Artificial Intelligence-based In-Silico Toxicity Models. Data Quality Analysis and Model Performance Enhancement through Data Generation.

Malazizi, Ladan January 2008 (has links)
Toxic compounds, such as pesticides, are routinely tested against a range of aquatic, avian and mammalian species as part of the registration process. The need to reduce dependence on animal testing has led to increasing interest in alternative methods such as in silico modelling. QSAR (Quantitative Structure-Activity Relationship)-based models are already in use for predicting physicochemical properties, environmental fate, eco-toxicological effects, and specific biological endpoints for a wide range of chemicals. Data plays an important role in modelling QSARs and also in result analysis for toxicity testing processes. This research addresses a number of issues in predictive toxicology. One issue is the problem of data quality. Although a large amount of toxicity data is available from online sources, this data may contain unreliable samples and may be of low quality. Its presentation may also be inconsistent across different sources, which makes accessing, interpreting and comparing the information difficult. To address this issue we started with a detailed investigation and experimental work on DEMETRA data. The DEMETRA datasets were produced by the EC-funded project DEMETRA. Based on the investigation, experiments and the results obtained, the author identified a number of data quality criteria in order to provide a solution for data evaluation in the toxicology domain. An algorithm has also been proposed to assess data quality before modelling. Another issue considered in the thesis was missing values in datasets in the toxicology domain. The Least Squares Method for a paired dataset and Serial Correlation for a single-version dataset provided solutions for the problem in two different situations. A procedural algorithm using these two methods has been proposed in order to overcome the problem of missing values. Another issue addressed in this thesis was the modelling of multi-class datasets in which severely imbalanced class sample distributions exist. Imbalanced data affects the performance of classifiers during the classification process. We have shown that as long as we understand how class members are constructed in dimensional space in each cluster, we can reform the distribution and provide more domain knowledge for the classifier.
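The least-squares approach to missing values in a paired dataset can be sketched as follows: fit a line to the complete (x, y) pairs and predict the missing y-values from it. The helper names and toy data are illustrative, not taken from the thesis:

```python
def fit_line(pairs):
    # Ordinary least squares fit of y = a + b*x to the complete pairs.
    n = len(pairs)
    sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def impute(xs, ys):
    # Fill missing y-values (None) with the least-squares prediction
    # learned from the pairs where both values are present.
    complete = [(x, y) for x, y in zip(xs, ys) if y is not None]
    a, b = fit_line(complete)
    return [y if y is not None else a + b * x for x, y in zip(xs, ys)]

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, None, 8.0]
print(impute(xs, ys))  # → [2.0, 4.0, 6.0, 8.0]
```

The Serial Correlation method for single-version datasets would instead exploit the ordering of samples within one column, but follows the same fit-then-predict pattern.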
