1

On the Application of Multi-Class Classification in Physical Therapy Recommendation

Zhang, Jing Unknown Date
No description available.
2

Applying Discriminant Functions with One-Class SVMs for Multi-Class Classification

Lee, Zhi-Ying 09 August 2007 (has links)
AdaBoost.M1 has been successfully applied to improve the accuracy of a learning algorithm for multi-class classification problems. However, it assumes that the performance of each base classifier is better than 1/2, which may be hard to achieve in practice for a multi-class problem. A new algorithm called AdaBoost.MK, which only requires base classifiers better than random guessing (1/k), is therefore designed. Early SVM-based multi-class classification algorithms work by splitting the original problem into a set of two-class sub-problems, and the time and space required by these algorithms are very demanding. In order to achieve low time and space complexities, we develop a base classifier that integrates one-class SVMs with discriminant functions. In this study, a hybrid method that integrates AdaBoost.MK and one-class SVMs with improved discriminant functions as the base classifiers is proposed to solve multi-class classification problems. Experimental results on data sets from UCI and Statlog show that the proposed approach outperforms many popular multi-class algorithms, including support vector clustering and AdaBoost.M1 with one-class SVMs as the base classifiers.
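As a rough illustration of the idea described above, the sketch below combines one one-class SVM per class (with its decision function used as a discriminant score) with a SAMME-style boosting loop whose only requirement on each base classifier is an error rate below 1 − 1/k. This is a hedged sketch on the Iris data, not the AdaBoost.MK implementation from the thesis; all names and parameter choices are illustrative assumptions.

```python
# Sketch: boosting one-class-SVM discriminant base classifiers with a
# "better than 1/k" stopping condition (SAMME-style weights, not AdaBoost.MK).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.datasets import load_iris

class OneClassSVMDiscriminant:
    """Fit one OneClassSVM per class; predict the class with the largest score."""
    def fit(self, X, y, sample_weight=None):
        self.classes_ = np.unique(y)
        self.models_ = {}
        for c in self.classes_:
            mask = (y == c)
            sw = None
            if sample_weight is not None:
                sw = sample_weight[mask]
                sw = sw * len(sw) / sw.sum()   # rescale so weights average 1 within the class
            self.models_[c] = OneClassSVM(gamma="scale", nu=0.1).fit(X[mask], sample_weight=sw)
        return self

    def predict(self, X):
        scores = np.column_stack([self.models_[c].decision_function(X) for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

def boost(X, y, n_rounds=10):
    k = len(np.unique(y))
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        clf = OneClassSVMDiscriminant().fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = np.sum(w[pred != y]) / np.sum(w)
        if err >= 1.0 - 1.0 / k:                # weaker requirement than AdaBoost.M1's 1/2
            break
        alpha = np.log((1 - err) / max(err, 1e-10)) + np.log(k - 1)   # SAMME classifier weight
        w *= np.exp(alpha * (pred != y))
        w /= w.sum()
        ensemble.append((alpha, clf))
    return ensemble

def predict_ensemble(ensemble, X, classes):
    # Fuse the boosted base classifiers by weighted voting.
    votes = np.zeros((len(X), len(classes)))
    for alpha, clf in ensemble:
        pred = clf.predict(X)
        for j, c in enumerate(classes):
            votes[:, j] += alpha * (pred == c)
    return classes[np.argmax(votes, axis=1)]

X, y = load_iris(return_X_y=True)
ensemble = boost(X, y)
print(predict_ensemble(ensemble, X[:5], np.unique(y)))
```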
3

Advanced Text Analytics and Machine Learning Approach for Document Classification

Anne, Chaitanya 19 May 2017 (has links)
Text classification is used in information extraction and retrieval from a given text, and it has become an important step in managing the vast and expanding number of records available in digital form. This thesis addresses the problem of classifying patent documents into fifteen different categories or classes, where some classes overlap with others for practical reasons. For the development of the classification model using machine learning techniques, useful features have been extracted from the given documents. The features are used to classify patent documents as well as to generate useful tag-words. The overall objective of this work is to systematize NASA’s patent management by developing a set of automated tools that can assist NASA in managing and marketing its portfolio of intellectual properties (IP), and to enable easier discovery of relevant IP by users. We have identified an array of methods that can be applied, such as k-Nearest Neighbors (kNN), two variations of the Support Vector Machine (SVM) algorithm, and two tree-based classification algorithms: Random Forest and J48. The major research steps in this work consist of filtering techniques for variable selection, information gain and feature correlation analysis, and training and testing potential models using effective classifiers. Further, the obstacles associated with the imbalanced data were mitigated by adding synthetic data wherever appropriate, which resulted in a superior SVM-based classifier model.
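A minimal sketch of the kind of pipeline the abstract describes — TF-IDF features, information-gain-style feature selection, synthetic oversampling of the minority classes, and an SVM classifier — might look like the following. The documents, labels, and parameter values are placeholders rather than NASA data, and the imbalanced-learn package is assumed to be available.

```python
# Sketch: text classification with feature selection and SMOTE oversampling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import LinearSVC
from imblearn.over_sampling import SMOTE     # assumes imbalanced-learn is installed
from imblearn.pipeline import Pipeline       # imblearn Pipeline allows sampler steps

# Placeholder corpus with deliberately imbalanced classes.
docs = (["heat shield coating for re-entry vehicles"] * 40
        + ["machine vision system for robotic arm inspection"] * 25
        + ["antenna array for deep space communication"] * 15
        + ["cryogenic fuel tank insulation method"] * 8)
labels = (["materials"] * 40 + ["robotics"] * 25
          + ["communications"] * 15 + ["propulsion"] * 8)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)),
    ("select", SelectKBest(mutual_info_classif, k=20)),   # information-gain-style selection
    ("smote", SMOTE(k_neighbors=3)),                      # synthetic minority oversampling
    ("svm", LinearSVC(C=1.0)),
])
pipeline.fit(docs, labels)
print(pipeline.predict(["re-entry heat shield coating system"]))
```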
4

Machine Learning for Beam Based Mobility Optimization in NR

Ekman, Björn January 2017 (has links)
One option for enabling mobility between 5G nodes is to use a set of area-fixed reference beams in the downlink direction from each node. To save power, these reference beams should be turned on only on demand, i.e. only if a mobile device needs them. A User Equipment (UE) moving out of a beam's coverage will require a switch from one beam to another, preferably without having to turn on all possible beams to find out which one is the best. This thesis investigates how to transform the beam selection problem into a format suitable for machine learning and how well such solutions perform compared to baseline models. The baseline models considered were beam overlap and average Reference Signal Received Power (RSRP), both building beam-to-beam maps. The emphasis of the thesis was on handovers between nodes and on finding the beam with the highest RSRP. Beam-hit-rate and RSRP difference (selected minus best) were the key performance indicators and were compared for different numbers of activated beams. The problem was modeled both as a Multiple Output Regression (MOR) problem and as a Multi-Class Classification (MCC) problem. Both formulations can be solved with the random forest model, which was the learning model of choice during this work. An Ericsson simulator was used to simulate and collect data from a seven-site scenario with 40 UEs. The primary features available were the current serving beam index and its RSRP. Additional features, like position and distance, were suggested, though many ended up being limited either by the simulated scenario or by the cost of acquiring the feature in a real-world scenario. Using the primary features only, the learned models' performance was equal to or worse than that of the baseline models. Adding distance improved the performance considerably, beating the baseline models, but still leaving room for further improvement.
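The two formulations mentioned above can be illustrated on synthetic data: a random forest regressor predicting the RSRP of every candidate beam (MOR) versus a random forest classifier predicting the index of the best beam (MCC), both evaluated with beam-hit-rate. The features, beam model, and numbers below are invented for illustration and do not reflect the Ericsson simulator data.

```python
# Sketch: beam selection as multiple-output regression vs. multi-class classification.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_beams = 2000, 8
# Invented features: serving beam index, serving RSRP [dBm], UE-to-node distance [m].
X = np.column_stack([
    rng.integers(0, n_beams, n_samples),
    rng.normal(-90, 10, n_samples),
    rng.uniform(10, 500, n_samples),
])
# Invented target: RSRP of each candidate beam after a handover.
beam_rsrp = rng.normal(-95, 8, (n_samples, n_beams)) + 0.02 * (250 - X[:, [2]])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, beam_rsrp, random_state=0)

# MOR: predict every beam's RSRP, then pick the beam with the highest prediction.
mor = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, Y_tr)
mor_choice = np.argmax(mor.predict(X_te), axis=1)

# MCC: predict the index of the best beam directly.
mcc = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, np.argmax(Y_tr, axis=1))
mcc_choice = mcc.predict(X_te)

best = np.argmax(Y_te, axis=1)
print("MOR beam-hit-rate:", np.mean(mor_choice == best))
print("MCC beam-hit-rate:", np.mean(mcc_choice == best))
```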
5

Evolutionary Learning of Boosted Features for Visual Inspection Automation

Zhang, Meng 01 March 2018 (has links)
Feature extraction is one of the major challenges in object recognition. Features that are extracted from one type of object cannot always be used directly for a different type of object, which limits the performance of feature extraction. Having an automatic feature learning algorithm could therefore be a big advantage for an object recognition algorithm. This research first introduces several improvements to a fully automatic feature construction method called Evolution COnstructed Feature (ECO-Feature). These improvements are developed to construct more robust features and make the training process more efficient than the original version. The main weakness of the original ECO-Feature algorithm is that it is designed only for binary classification and cannot be directly applied to multi-class cases. We also observe that the recognition performance depends heavily on the size of the feature pool from which features can be selected and on the ability to select the best features. For these reasons, we have developed an enhanced evolutionary learning method for multi-class object classification to address these challenges. Our method, called Evolutionary Learning of Boosted Features (ECO-Boost), is an efficient evolutionary learning algorithm developed to automatically construct highly discriminative image features from the training images for multi-class image classification. This unique method constructs image features that are often overlooked by humans and is robust to minor image distortions and geometric transformations. We evaluate this algorithm on several visual inspection datasets including specialty crops, fruits and road surface conditions. Results from extensive experiments confirm that ECO-Boost performs comparably to other methods and achieves a good balance between accuracy and simplicity for real-time multi-class object classification applications. It is a hardware-friendly algorithm that can be optimized for hardware implementation in an FPGA for real-time embedded visual inspection applications.
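The sketch below is a heavily simplified, hypothetical illustration of evolutionary feature construction in the spirit of ECO-Feature/ECO-Boost: a candidate feature is a short pipeline of elementary image operators, candidates are scored by how well a weak learner separates the classes on the transformed images, and the best candidate found so far is mutated across generations. It is a random-search toy on the scikit-learn digits data, not the thesis algorithm; the operator pool and scoring choices are assumptions.

```python
# Sketch: evolving a pipeline of image transforms as an automatically constructed feature.
import numpy as np
from scipy import ndimage
from sklearn.datasets import load_digits
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_img, y = load_digits(return_X_y=True)
X_img = X_img.reshape(-1, 8, 8)

# Pool of elementary image operators a candidate feature can be built from.
OPS = {
    "sobel":   lambda im: ndimage.sobel(im),
    "gauss":   lambda im: ndimage.gaussian_filter(im, sigma=1.0),
    "laplace": lambda im: ndimage.laplace(im),
    "median":  lambda im: ndimage.median_filter(im, size=2),
}

def apply_pipeline(ops, images):
    out = images.astype(float)
    for name in ops:
        out = np.stack([OPS[name](im) for im in out])
    return out.reshape(len(images), -1)

def fitness(ops):
    # Score a candidate by the cross-validated accuracy of a weak learner.
    feats = apply_pipeline(ops, X_img)
    return cross_val_score(Perceptron(max_iter=20), feats, y, cv=3).mean()

# Random-search "evolution": mutate or restart the best pipeline found so far.
best_ops, best_fit = [rng.choice(list(OPS))], 0.0
for generation in range(10):
    if rng.random() < 0.5:
        candidate = best_ops + [rng.choice(list(OPS))]   # grow the best pipeline
    else:
        candidate = [rng.choice(list(OPS))]              # restart with a new operator
    f = fitness(candidate)
    if f > best_fit:
        best_ops, best_fit = candidate, f
print("best pipeline:", best_ops, "fitness:", round(best_fit, 3))
```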
6

Multi-Class Imbalanced Learning for Time Series Problem : An Industrial Case Study

Andersson, Melanie January 2020 (has links)
Classification problems with multiple classes and imbalanced sample sizes present different challenges than binary classification problems. Methods have been proposed to handle imbalanced learning; however, most of them are specifically designed for binary classification problems. Multi-class imbalance imposes additional challenges when combined with time series classification problems, such as weather classification. In this thesis, we introduce, apply and evaluate a new algorithm for handling multi-class imbalanced problems involving time series data. Our proposed algorithm is designed to handle both multi-class imbalance and time series classification problems and is inspired by the Imbalanced Fuzzy-Rough Ordered Weighted Average Nearest Neighbor Classification algorithm. The feasibility of our proposed algorithm is studied through an empirical evaluation performed on a telecom use-case at Ericsson, Sweden, where data from commercial microwave links is used for weather classification. Our proposed algorithm is compared to the model currently used at Ericsson, a one-dimensional convolutional neural network, as well as to three other deep learning models. The empirical evaluation indicates that the performance of our proposed algorithm for weather classification is comparable to that of the current solution. Our proposed algorithm and the current solution are the two best-performing models of the study.
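As a hedged illustration of the nearest-neighbor idea the algorithm builds on, the sketch below scores each class by an ordered weighted average (OWA) of the similarities to that class's nearest training series, so the score does not grow with class size. The weighting scheme and the synthetic data are assumptions for illustration, not the thesis method.

```python
# Sketch: OWA-aggregated nearest-neighbour scoring for imbalanced multi-class time series.
import numpy as np

def owa_weights(k):
    # Decreasing "additive" OWA weights: w_i proportional to (k - i + 1).
    w = np.arange(k, 0, -1, dtype=float)
    return w / w.sum()

def owa_knn_predict(X_train, y_train, X_test, k=5):
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            # Similarity = 1 / (1 + Euclidean distance) between whole series.
            sim = 1.0 / (1.0 + np.linalg.norm(Xc - x, axis=1))
            top = np.sort(sim)[::-1][:k]
            scores.append(np.dot(owa_weights(len(top)), top))   # OWA over the k best similarities
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Tiny synthetic "time series" example with imbalanced classes.
rng = np.random.default_rng(1)
length = 50
X_train = np.vstack([rng.normal(0, 1, (80, length)),    # majority class 0
                     rng.normal(2, 1, (15, length)),    # minority class 1
                     rng.normal(-2, 1, (5, length))])   # minority class 2
y_train = np.array([0] * 80 + [1] * 15 + [2] * 5)
X_test = np.vstack([rng.normal(-2, 1, (3, length)), rng.normal(2, 1, (3, length))])
print(owa_knn_predict(X_train, y_train, X_test))
```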
7

Multi-class recognition using pair-wise classifiers / Daugelio klasių atpažinimas naudojant klasifikatorius poroms

Kybartas, Rimantas 01 October 2010 (has links)
There are plenty of solutions for the task of multi-class recognition. Unfortunately, these solutions are not always unanimous. Most of them are based on empirical experiments, while the statistical properties of the data are often not considered. This raises questions about which method should be used and when, and how reliable any chosen method is for solving a multi-class recognition task. In this dissertation, two-stage multi-class decision methods are analyzed. Pair-wise classifiers, which are able to better exploit the statistical properties of the data, are used in the first stage of such methods. In the second stage, a fusion rule combines the first-stage results to produce the final classification decision. The research focuses on the complexity of pair-wise classifiers, the training data size, and the precision of method quality estimation. This precision depends heavily on the data and on the number of experiments performed (data permutations and divisions into training and testing sets). It is shown that the declared superiority of some known algorithms is not reliable due to the low precision of estimation. A detailed comparison of well-known multi-class classification methods is performed, and a new pair-wise classifier fusion method, based on a similar method used in multi-class classifier fusion, is presented. Recommendations for designers of multi-class classification tasks are provided. Methods which allow reducing classification... [to full text]
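A minimal sketch of the two-stage scheme analyzed in the dissertation — pair-wise classifiers in the first stage, a fusion rule in the second — is shown below, using one linear SVM per class pair and simple majority voting as the fusion rule. The voting rule is deliberately the simplest possible choice, not the new fusion method proposed in the work, and the data set is an illustrative stand-in.

```python
# Sketch: pair-wise (one-vs-one) classifiers fused by majority voting.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
classes = np.unique(y_tr)

# Stage 1: one pair-wise classifier per pair of classes.
pairwise = {}
for a, b in combinations(classes, 2):
    mask = np.isin(y_tr, [a, b])
    pairwise[(a, b)] = SVC(kernel="linear").fit(X_tr[mask], y_tr[mask])

# Stage 2: fuse the pair-wise decisions by majority voting.
votes = np.zeros((len(X_te), len(classes)))
for (a, b), clf in pairwise.items():
    pred = clf.predict(X_te)
    votes[:, a] += (pred == a)
    votes[:, b] += (pred == b)
y_hat = classes[np.argmax(votes, axis=1)]
print("accuracy:", np.mean(y_hat == y_te))
```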
8

Review of Large-Scale Coordinate Descent Algorithms for Multi-class Classification with Memory Constraints

Jovanovich, Aleksandar 03 June 2013 (has links)
No description available.
9

Multi-class Classification Methods Utilizing Mahalanobis Taguchi System And A Re-sampling Approach For Imbalanced Data Sets

Ayhan, Dilber 01 April 2009 (has links) (PDF)
Classification approaches are used in many areas to identify or estimate the classes to which different observations belong. Within the scope of this thesis, the classification approach Mahalanobis Taguchi System (MTS) is analyzed and further improved for multi-class classification problems. MTS aims to identify significant variables and classifies a new observation based on its Mahalanobis distance (MD). In this study, first, sample size problems, which are encountered mostly in small data sets, and multicollinearity problems, which constitute some limitations of MTS, are analyzed, and a re-sampling approach is explored as a solution. Our re-sampling approach, which only works for data sets with two classes, is a combination of over-sampling and under-sampling. Over-sampling is based on SMOTE, which generates synthetic observations between the nearest neighbors of observations in the minority class. In addition, MTS models are used to test the performance of several re-sampling parameters, for which the most appropriate values are sought specific to each case. In the second part, multi-class classification methods with MTS are developed. An algorithm, namely Feature Weighted Multi-class MTS-I (FWMMTS-I), is inspired by a feature-weighted version of the MD. It relaxes the requirement that the MDs of all variables be added up with equal weights, assigning noisy variables weights close to zero so that they do not mask the other variables. As a second multi-class classification algorithm, the original MTS method is extended to multi-class problems; this extension is called Multi-class MTS (MMTS). In addition, an approach comparable to that of Su and Hsiao (2009), which also considers variable weights, is studied with a modification in the MD calculation. It is named Feature Weighted Multi-class MTS-II (FWMMTS-II). The methods are compared on eight different multi-class data sets using 5-fold stratified cross-validation. Results show that FWMMTS-I is as accurate as MMTS, and both are better than FWMMTS-II. Interestingly, the Mahalanobis Distance Classifier (MDC), which uses all the variables directly in the classification model, performs equally well on the studied data sets.
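For reference, a minimal Mahalanobis distance classifier of the kind the abstract calls MDC can be sketched as follows: estimate a mean vector and covariance matrix per class and assign each observation to the class with the smallest MD. The MTS variable-screening step, the re-sampling, and the feature weighting are omitted; the data set and numeric choices are illustrative.

```python
# Sketch: multi-class Mahalanobis distance classifier (MDC).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Per-class mean and inverse covariance.
params = {}
for c in np.unique(y_tr):
    Xc = X_tr[y_tr == c]
    mean = Xc.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))   # pseudo-inverse guards against multicollinearity
    params[c] = (mean, cov_inv)

def mahalanobis_predict(X):
    dists = []
    for c, (mean, cov_inv) in params.items():
        diff = X - mean
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared MD per row
        dists.append(d2)
    return np.array(list(params))[np.argmin(np.vstack(dists), axis=0)]

print("accuracy:", np.mean(mahalanobis_predict(X_te) == y_te))
```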
