  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Supervised Learning for Prediction of Tumour Mutational Burden / Användning av statistisk inlärning för estimering av mutationsbörda

Hargell, Joanna January 2021 (has links)
Tumour Mutational Burden (TMB) is a promising biomarker for predicting response to immunotherapy. In this thesis, supervised statistical learning methods were used to predict TMB: GLMs, decision trees and SVMs. Predictions were based on data from targeted DNA sequencing, using variants found in the exonic, intronic, UTR and intergenic regions of human DNA. The project was exploratory in nature and performed in a pan-cancer setting; both regression and classification were considered. The purpose was to investigate whether variants found in these regions of the DNA sequence are useful for predicting TMB. Poisson regression and negative binomial regression were used within the GLM framework. The results indicated deficiencies in the model assumptions, making the use of GLMs for this application questionable. A single regression tree did not yield satisfactory prediction accuracy, but performance improved with variance-reducing methods such as bagging and random forests. Boosted regression trees did not yield any significant further improvement. In the classification setting, both binary and multiple classes were considered, with class boundaries based on thresholds commonly used in clinical care to qualify for immunotherapy. SVMs and classification trees yielded high prediction accuracy in the binary case: misclassification rates of 0.0242 and 0, respectively, on the independent test set. In the multi-class setting, bagging and random forests were implemented but did not improve on the single classification tree; the SVM produced a misclassification rate of 0.103, versus 0.109 for the single classification tree. It was concluded that SVMs and decision trees are suitable methods for predicting TMB from targeted gene panels. However, to obtain reliable predictions, there is a need to move from a pan-cancer setting to a diagnosis-based setting. Furthermore, parameters affecting TMB, such as pre-analytical factors, need to be included in the statistical analysis.
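The binary classification task described in this abstract can be illustrated with a toy, pure-Python "decision stump" (a tree with a single split). All counts, labels and the split found below are synthetic illustrations; the thesis itself uses full decision trees and SVMs on real sequencing data.

```python
# Toy sketch: a single-split "decision stump" classifying samples as
# TMB-high (1) or TMB-low (0) from one panel feature.

def misclassification_rate(y_true, y_pred):
    """Fraction of samples whose predicted class differs from the true class."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def best_stump(x, y):
    """Exhaustively try one threshold on a single feature and keep the split
    with the lowest misclassification rate."""
    best = (float("inf"), None)
    for split in sorted(set(x)):
        pred = [1 if xi >= split else 0 for xi in x]
        err = misclassification_rate(y, pred)
        if err < best[0]:
            best = (err, split)
    return best  # (error, threshold)

variant_counts = [3, 5, 8, 12, 15, 20, 2, 9, 14, 18]   # synthetic panel counts
tmb_high = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1]              # synthetic class labels
print(best_stump(variant_counts, tmb_high))  # (0.0, 12): a perfect split here
```

A full classification tree repeats this search recursively on each resulting partition, which is why single trees overfit and benefit from bagging and random forests, as the abstract reports.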
222

A Comparison of Classification Methods in Predicting the Presence of DNA Profiles in Sexual Assault Kits

Heckman, Derek J. 11 January 2018 (has links)
No description available.
223

Using Data Mining to Model Student Success

Geltz, Rebecca L. January 2009 (has links)
No description available.
224

Centralized Control of Power System Stabilizers

Sanchez Ayala, Gerardo 09 October 2014 (has links)
This study takes advantage of wide-area measurements to propose a centralized nonlinear controller that acts on power system stabilizers, cooperatively increasing the damping of problematic small-signal oscillations across the system. The structure, based on decision trees, results in a simple, efficient and dependable methodology that imposes a much smaller computational burden than other nonlinear design approaches, making it a promising candidate for actual implementation by utilities and system operators. Details are given on utilizing existing stabilizers while causing minimal changes to the equipment and guaranteeing improvement, or at least no degradation, of current system behavior. This enables power system stabilizers to overcome their inherent limitation of acting only on local measurements to damp a single target frequency. The study demonstrates the implications of this new input for mathematical models, and the control functionality made available by its incorporation into conventional stabilizers. In preparation for the case study, a heuristic dynamic reduction methodology is introduced that preserves a physical equivalent model and can be interpreted by any commercial software package. The steps of this method are general, versatile and easy to adapt to any particular power system model, with the added value of producing a physical model as the final result, which makes the approach appealing to industry. The accuracy of the resulting reduced network has been demonstrated on a model of the Central American System. / Ph. D.
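As a rough illustration of tree-structured control logic of the kind this abstract describes, the sketch below maps wide-area measurements of an oscillation mode to a stabilizer set-point as a shallow decision tree in nested-if form. Every threshold and action name here is hypothetical, invented for illustration, and not taken from the study.

```python
# Hypothetical nested-if decision tree: observed mode -> stabilizer action.

def stabilizer_action(damping_ratio, frequency_hz):
    """Pick a gain adjustment for a power system stabilizer from the observed
    mode's damping ratio and frequency (all cut-offs are invented)."""
    if damping_ratio >= 0.05:
        return "hold"                # mode already adequately damped
    if frequency_hz < 0.7:
        return "raise_gain_high"     # poorly damped inter-area mode
    return "raise_gain_low"          # poorly damped local mode, milder action

print(stabilizer_action(0.02, 0.4))  # raise_gain_high
```

The appeal noted in the abstract is visible even in this toy: evaluating such a tree is a handful of comparisons, far cheaper than solving a nonlinear optimization online.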
225

Stock picking via nonsymmetrically pruned binary decision trees with reject option

Andriyashin, Anton 06 July 2010 (has links)
Stock picking is a field of financial analysis of particular interest to many professional investors and researchers. There is considerable research evidence that stock returns can be forecasted effectively. While various modeling techniques could be employed for stock price prediction, this work provides a critical analysis of popular methods, including general equilibrium and asset pricing models; parametric, non- and semiparametric regression models; and popular black-box classification approaches. Owing to advantageous properties of binary classification trees, including the excellent interpretability of their decision rules, the core of the trading algorithm is built using this modern nonparametric method. Optimal tree size is believed to be the crucial factor in the forecasting performance of classification trees. While there is a set of widely adopted alternative tree induction and pruning techniques, which are critically examined in the study, one of the main contributions of this work is a novel methodology of nonsymmetrical tree pruning with reject option, called Best Node Selection (BNS). 
An important inverse propagation property of BNS is proven, which provides an easy way to implement the search for the optimal tree size in practice. Traditional cost-complexity pruning shows similar performance in terms of tree accuracy when assessed against popular alternative techniques, and it is the default pruning method for many applications. BNS is compared with cost-complexity pruning empirically by composing two recursive portfolios out of DAX30 stocks. Performance forecasts for each of the stocks are provided by constructed decision trees that are updated when new market information becomes available. It is shown that BNS clearly outperforms the traditional approach according to both the backtesting results and the Diebold-Mariano test for statistical significance of the performance difference between the two forecasting methods. Another novel feature of this work is the use of individual decision rules for each stock, as opposed to the traditional pooling of learning samples. Empirical data in the form of individual decision rules for a randomly selected time point in the backtesting set justify this approach.
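Since traditional cost-complexity pruning is the baseline BNS is measured against, a minimal sketch of its "weakest link" step may help. The toy tree and its error figures below are invented for illustration.

```python
# "Weakest link" step of cost-complexity pruning: collapse the internal node
# whose subtree buys the least accuracy per extra leaf.

def weakest_link(nodes):
    """nodes: {id: (error_if_collapsed_to_leaf, subtree_error, subtree_leaves)}.
    Return the internal node with the smallest
    g(t) = (R(t) - R(T_t)) / (|T_t| - 1)."""
    def g(t):
        r_leaf, r_subtree, n_leaves = nodes[t]
        return (r_leaf - r_subtree) / (n_leaves - 1)
    return min(nodes, key=g)

toy_tree = {
    "A": (0.30, 0.10, 4),  # +0.20 error for 3 extra leaves -> g = 0.067
    "B": (0.15, 0.12, 2),  # +0.03 error for 1 extra leaf  -> g = 0.030
    "C": (0.20, 0.08, 3),  # +0.12 error for 2 extra leaves -> g = 0.060
}
print(weakest_link(toy_tree))  # B is collapsed first
```

Repeating this step yields the nested sequence of subtrees from which cost-complexity pruning selects a final size; BNS, by contrast, prunes nonsymmetrically and may reject predictions entirely at some nodes.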
226

'n Masjienleerbenadering tot woordafbreking in Afrikaans

Fick, Machteld 06 1900 (has links)
Text in Afrikaans / The aim of this study was to determine the level of success achievable with a purely pattern-based approach to hyphenation in Afrikaans. The machine learning techniques artificial neural networks, decision trees and the TeX algorithm were investigated, since they can be trained with letter patterns from word lists for syllabification and decompounding. A lexicon of Afrikaans words was extracted from a corpus of electronic text. To obtain lists for syllabification and decompounding, words in the lexicon were syllabified and compound words were decomposed into their constituent parts. From each list of ±183 000 words, ±10 000 words were reserved as testing data and the rest was used as training data. A recursive algorithm for decompounding was developed. In this algorithm, all words matching a reference list (the lexicon) are extracted by string matching from the beginning and end of words; splitting points are then determined from the lengths of the reassembled words. The algorithm was extended by addressing the shortcomings of this basic procedure. Artificial neural networks and decision trees were trained, and variations of both were examined to find the optimal syllabification and decompounding models. Patterns for the TeX algorithm were generated with the OPatGen program. Testing showed that the TeX algorithm performed best on both the syllabification and decompounding tasks, with 99.56% and 99.12% accuracy, respectively. It can therefore be used for hyphenation in Afrikaans with little risk of hyphenation errors in printed text. The performance of the artificial neural network was lower but still acceptable, with 98.82% and 98.42% accuracy for syllabification and decompounding, respectively. The decision tree, with an accuracy of 97.91% on syllabification and 90.71% on decompounding, was found to be too risky for either task. A combined algorithm was developed in which words are first decompounded with the TeX algorithm before being syllabified with both the TeX algorithm and the neural network, combining the results. This algorithm reduced the number of errors made by the TeX algorithm by 1.3%, but missed more hyphens. Testing the algorithm on Afrikaans publications showed the risk of hyphenation errors to be ±0.02% for text with an average of ten words per line. / Decision Sciences / D. Phil. (Operational Research)
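The Liang-style pattern mechanism behind the TeX algorithm used in this thesis can be sketched in a few lines. The representation below is simplified (letters plus positioned values instead of TeX's interleaved digits), and the pattern is an invented toy, not real OPatGen output for Afrikaans.

```python
# Sketch of TeX-style pattern hyphenation: slide every pattern over the
# dotted word; at each inter-letter position the highest pattern value wins,
# and odd winning values allow a break.

def hyphenate(word, patterns):
    """patterns: {letter string: [(offset_within_pattern, value), ...]}."""
    w = "." + word + "."                  # dots mark word boundaries
    values = [0] * (len(w) + 1)
    for i in range(len(w)):
        for letters, positioned in patterns.items():
            if w.startswith(letters, i):
                for offset, v in positioned:
                    values[i + offset] = max(values[i + offset], v)
    out = []
    for j, ch in enumerate(word):
        # keep at least two letters on each side of any break
        if 2 <= j <= len(word) - 2 and values[j + 1] % 2 == 1:
            out.append("-")
        out.append(ch)
    return "".join(out)

toy_patterns = {"lt": [(1, 1)]}   # value 1 between "l" and "t": break allowed
print(hyphenate("silte", toy_patterns))  # sil-te
```

Real pattern sets also contain even-valued (inhibiting) patterns, which is how the max-then-parity rule lets specific exceptions override general break rules.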
228

Textual data mining applications for industrial knowledge management solutions

Ur-Rahman, Nadeem January 2010 (has links)
In recent years, knowledge has become an important resource for enhancing business, and many activities are required to manage knowledge resources well and help companies remain competitive in industrial environments. The data available in most industrial setups are complex in nature, and multiple data formats may be generated to track the progress of projects, whether for developing new products or providing better services to customers. Knowledge discovery from databases requires considerable effort, and data mining techniques serve this purpose by handling structured data formats. If, however, the data are semi-structured or unstructured, the combined efforts of data and text mining technologies may be needed to obtain fruitful results. This thesis focuses on discovering knowledge from semi-structured or unstructured data through the application of textual data mining techniques, automating the classification of textual information into two categories or classes that can then be used to help manage the knowledge available in multiple data formats. Applications of different data mining techniques to discover valuable information and knowledge in the manufacturing and construction industries are explored as part of a literature review. The application of text mining techniques to semi-structured or unstructured data is discussed in detail. A novel integration of different data and text mining tools is proposed in the form of a framework in which knowledge discovery and its refinement are performed through the application of clustering and Apriori association rule mining algorithms. Finally, the hypothesis of acquiring better classification accuracy is examined by applying the methodology to case-study data available in the form of Post Project Review (PPR) reports. 
The process of discovering useful knowledge, and its interpretation and utilisation, has been automated to classify the textual data into two classes.
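The Apriori step of the proposed framework can be sketched in pure Python: level-wise mining of frequent keyword sets from documents. The "review keyword" transactions below are invented examples, not actual PPR data.

```python
# Level-wise Apriori: grow candidate itemsets one item at a time, keeping only
# those whose support (fraction of documents containing every item) reaches
# the threshold; supersets of infrequent sets are never generated.

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support} for all sufficiently frequent itemsets."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    freq, k = {}, 1
    current = [frozenset([i]) for i in items]
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        kept = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        freq.update(kept)
        k += 1
        # Apriori candidate generation: unions of surviving sets, one item larger
        current = list({a | b for a in kept for b in kept if len(a | b) == k})
    return freq

docs = [{"delay", "supplier"}, {"delay", "cost"}, {"delay", "supplier", "cost"}]
support = frequent_itemsets([frozenset(d) for d in docs], min_support=2 / 3)
for itemset, s in sorted(support.items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(sorted(itemset), round(s, 2))
```

Association rules are then read off the frequent sets (e.g. how often "delay" co-occurs with "supplier"), which is the refinement step the framework pairs with clustering.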
229

Recherche de Supersymétrie à l’aide de leptons de même charge électrique dans l’expérience ATLAS

Trépanier, Hubert 08 1900 (has links)
Since the Standard Model only explains about 5% of our universe and leaves many open questions in fundamental particle physics, a new theory called Supersymmetry is studied as a complementary model to the Standard Model. This master's thesis presents a search for Supersymmetry with the ATLAS detector using final states with same-sign leptons or three leptons. The data used for this analysis were produced in 2015 by the Large Hadron Collider (LHC) in proton-proton collisions at 13 TeV of centre-of-mass energy. No excess was found above the Standard Model expectations, but new limits were set on the masses of some supersymmetric particles. The thesis also describes in detail the electron charge-flip background, which arises when the electric charge of an electron is mis-measured by the ATLAS detector; this is an important background to take into account when searching for Supersymmetry with same-sign leptons. The extraction of charge-flip probabilities, needed to determine the number of charge-flip events in the same-sign selection, was performed; the probability was found to vary from less than a percent to 8-9% depending on the transverse momentum and pseudorapidity of the electron. The last part of the thesis is a study of the potential for rejecting charge-flip electrons, identifying and discriminating them with a multivariate analysis using a boosted decision tree built on their distinctive properties. It was found that the wide majority of mis-measured electrons can be rejected (90-93%) while keeping a very high efficiency for well-measured ones (95%).
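The quoted working point (95% efficiency for well-measured electrons against 90-93% charge-flip rejection) corresponds to choosing a cut on the classifier score. A minimal sketch of that choice follows; the score lists are invented stand-ins for real BDT outputs.

```python
# Pick the tightest score cut that still keeps the required fraction of
# well-measured ("signal") electrons, and report the charge-flip
# ("background") rejection achieved there.

def working_point(sig_scores, bkg_scores, min_sig_eff=0.95):
    """Scan candidate cuts in ascending order; return (cut, signal efficiency,
    background rejection) for the highest cut still meeting min_sig_eff."""
    best = None
    for cut in sorted(set(sig_scores + bkg_scores)):
        eff = sum(s >= cut for s in sig_scores) / len(sig_scores)
        rej = sum(b < cut for b in bkg_scores) / len(bkg_scores)
        if eff >= min_sig_eff:
            best = (cut, eff, rej)  # efficiency only drops as the cut rises
    return best

well_measured = [0.6, 0.7, 0.8, 0.9] * 5   # scores for correctly charged electrons
charge_flip = [0.1, 0.2, 0.3, 0.7] * 5     # scores for charge-flip electrons
print(working_point(well_measured, charge_flip))  # (0.6, 1.0, 0.75)
```

The better the BDT separates the two score distributions, the higher the rejection available at a fixed efficiency, which is what the 90-93% at 95% figures summarize.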
230

Реконфигурабилне архитектуре за хардверску акцелерацију предиктивних модела машинског учења / Rekonfigurabilne arhitekture za hardversku akceleraciju prediktivnih modela mašinskog učenja / Reconfigurable Architectures for Hardware Acceleration of Machine Learning Classifiers

Vranjković Vuk 02 July 2015 (has links)
This thesis proposes universal coarse-grained reconfigurable computing architectures for the hardware implementation of decision trees (DTs), artificial neural networks (ANNs), support vector machines (SVMs), and homogeneous and heterogeneous ensemble classifiers (HHESs). Using these universal architectures, two versions of DTs, two versions of SVMs, two versions of ANNs, and seven versions of HHES machine learning classifiers have been implemented on a field-programmable gate array (FPGA) chip. Experimental results, based on datasets from the standard UCI machine learning repository, show that the FPGA implementation provides a significant improvement (1–6 orders of magnitude) in average instance classification time compared with software implementations.
