61

Νέες μέθοδοι εκμάθησης για ασαφή γνωστικά δίκτυα και εφαρμογές στην ιατρική και βιομηχανία / New learning techniques to train fuzzy cognitive maps and applications in medicine and industry

Παπαγεωργίου, Ελπινίκη 25 June 2007 (has links)
Αντικείµενο της διατριβής είναι η ανάπτυξη νέων µεθοδολογιών εκµάθησης και σύγκλισης των Ασαφών Γνωστικών ∆ικτύων που προτείνονται για τη βελτίωση και προσαρµογή της συµπεριφοράς τους, καθώς και για την αύξηση της απόδοσής τους, αναδεικνύοντάς τα σε αποτελεσµατικά δυναµικά συστήµατα µοντελοποίησης. Τα νέα βελτιωµένα Ασαφή Γνωστικά ∆ίκτυα, µέσω της εκµάθησης και προσαρµογής των βαρών τους, έχουν χρησιµοποιηθεί στην ιατρική σε θέµατα διάγνωσης και υποστήριξης στη λήψη απόφασης, καθώς και σε µοντέλα βιοµηχανικών συστηµάτων που αφορούν τον έλεγχο διαδικασιών, µε πολύ ικανοποιητικά αποτελέσµατα. Στη διατριβή αυτή παρουσιάζονται, αξιολογούνται και εφαρµόζονται δύο νέοι αλγόριθµοι εκµάθησης χωρίς επίβλεψη των Ασαφών Γνωστικών ∆ικτύων, οι αλγόριθµοι Active Hebbian Learning (AHL) και Nonlinear Hebbian Learning (NHL), βασισµένοι στον κλασσικό αλγόριθµό εκµάθησης χωρίς επίβλεψη τύπου Hebb των νευρωνικών δικτύων, καθώς και µια νέα προσέγγιση εκµάθησης των Ασαφών Γνωστικών ∆ικτύων βασισµένη στους εξελικτικούς αλγορίθµους και πιο συγκεκριµένα στον αλγόριθµο Βελτιστοποίησης µε Σµήνος Σωµατιδίων και στον ∆ιαφοροεξελικτικό αλγόριθµο. Οι προτεινόµενοι αλγόριθµοι AHL και NHL στηρίζουν νέες µεθοδολογίες εκµάθησης για τα ΑΓ∆ που βελτιώνουν τη λειτουργία, και την αξιοπιστία τους, και που παρέχουν στους εµπειρογνώµονες του εκάστοτε προβλήµατος που αναπτύσσουν το ΑΓ∆, την εκµάθηση των παραµέτρων για τη ρύθµιση των αιτιατών διασυνδέσεων µεταξύ των κόµβων. Αυτοί οι τύποι εκµάθησης που συνοδεύονται από την σωστή γνώση του εκάστοτε προβλήµατος-συστήµατος, συµβάλλουν στην αύξηση της απόδοσης των ΑΓ∆ και διευρύνουν τη χρήση τους. Επιπρόσθετα µε τους αλγορίθµους εκµάθησης χωρίς επίβλεψη τύπου Hebb για τα ΑΓ∆, αναπτύσσονται και προτείνονται νέες τεχνικές εκµάθησης των ΑΓ∆ βασισµένες στους εξελικτικούς αλγορίθµους. Πιο συγκεκριµένα, προτείνεται µια νέα µεθοδολογία για την εφαρµογή του εξελικτικού αλγορίθµου Βελτιστοποίησης µε Σµήνος Σωµατιδίων στην εκµάθηση των Ασαφών Γνωστικών ∆ικτύων και πιο συγκεκριµένα στον καθορισµό των βέλτιστων περιοχών τιµών των βαρών των Ασαφών Γνωστικών ∆ικτύων. Με τη µεθοδο αυτή λαµβάνεται υπόψη η γνώση των εµπειρογνωµόνων για τον σχεδιασµό του µοντέλου µε τη µορφή περιορισµών στους κόµβους που µας ενδιαφέρουν οι τιµές των καταστάσεών τους, που έχουν οριστοί ως κόµβοι έξοδοι του συστήµατος, και για τα βάρη λαµβάνονται υπόψη οι περιοχές των ασαφών συνόλων που έχουν συµφωνήσει όλοι οι εµπειρογνώµονες. Έτσι θέτoντας περιορισµούς σε όλα τα βάρη και στους κόµβους εξόδου και καθορίζοντας µια κατάλληλη αντικειµενική συνάρτηση για το εκάστοτε πρόβληµα, προκύπτουν κατάλληλοι πίνακες βαρών (appropriate weight matrices) που µπορούν να οδηγήσουν το σύστηµα σε επιθυµητές περιοχές λειτουργίας και ταυτόχρονα να ικανοποιούν τις ειδικές συνθήκες- περιορισµούς του προβλήµατος. Οι δύο νέες µέθοδοι εκµάθησης χωρίς επίβλεψη που έχουν προταθεί για τα ΑΓ∆ χρησιµοποιούνται και εφαρµόζονται µε επιτυχία σε δυο πολύπλοκα προβλήµατα από το χώρο της ιατρικής, στο πρόβληµα λήψης απόφασης στην ακτινοθεραπεία και στο πρόβληµα κατηγοριοποίησης των καρκινικών όγκων της ουροδόχου κύστης σε πραγµατικές κλινικές περιπτώσεις. Επίσης όλοι οι προτεινόµενοι αλγόριθµοι εφαρµόζονται σε µοντέλα βιοµηχανικών συστηµάτων που αφορούν τον έλεγχο διαδικασιών µε πολύ ικανοποιητικά αποτελέσµατα. Οι αλγόριθµοι αυτοί, όπως προκύπτει από την εφαρµογή τους σε συγκεκριµένα προβλήµατα, βελτιώνουν το µοντέλο του ΑΓ∆, συµβάλλουν σε ευφυέστερα συστήµατα και διευρύνουν τη δυνατότητα εφαρµογής τους σε πραγµατικά και πολύπλοκα προβλήµατα. 
Η κύρια συνεισφορά αυτής της διατριβής είναι η ανάπτυξη νέων µεθοδολογιών εκµάθησης και σύγκλισης των Ασαφών Γνωστικών ∆ικτύων προτείνοντας δυο νέους αλγορίθµους µη επιβλεπόµενης µάθησης τύπου Hebb, τον αλγόριθµο Active Hebbian Learning και τον αλγόριθµο Nonlinear Hebbian Learning για την προσαρµογή των βαρών των διασυνδέσεων µεταξύ των κόµβων των Ασαφών Γνωστικών ∆ικτύων, καθώς και εξελικτικούς αλγορίθµους βελτιστοποιώντας συγκεκριµένες αντικειµενικές συναρτήσεις για κάθε εξεταζόµενο πρόβληµα. Τα νέα βελτιωµένα Ασαφή Γνωστικά ∆ίκτυα µέσω των αλγορίθµων προσαρµογής των βαρών τους έχουν χρησιµοποιηθεί για την ανάπτυξη ενός ∆ιεπίπεδου Ιεραρχικού Συστήµατος για την υποστήριξη λήψης απόφασης στην ακτινοθεραπεία, για την ανάπτυξη ενός διαγνωστικού εργαλείου για την κατηγοριοποίηση του βαθµού κακοήθειας των καρκινικών όγκων της ουροδόχου κύστης, καθώς και για την επίλυση βιοµηχανικών προβληµάτων για τον έλεγχο διαδικασιών. / The main contribution of this Dissertation is the development of new learning and convergence methodologies for Fuzzy Cognitive Maps that are proposed for the improvement and adaptation of their behaviour, as well as for the increase of their performance, electing them in effective dynamic systems of modelling. The new improved Fuzzy Cognitive Maps, via the learning and adaptation of their weights, have been used in medicine for diagnosis and decision-making, as well as to alleviate the problem of the potential uncontrollable convergence to undesired states in models of industrial process control systems, with very satisfactory results. In this Dissertation are presented, validated and implemented two new learning algorithms without supervision for Fuzzy Cognitive Maps, the algorithms Active Hebbian Learning (AHL) and Nonlinear Hebbian Learning (NHL), based on the classic unsupervised Hebb-type learning algorithm of neural networks, as well as a new approach of learning for Fuzzy Cognitive Maps based on the evolutionary algorithms and more specifically on the algorithm of Particles Swarm Optimization and on the Differential Evolution algorithm. The proposed algorithms AHL and NHL support new learning methodologies for FCMs that improve their operation, efficiency and reliability, and that provide in the experts of each problem that develop the FCM, the learning of parameters for the regulation (fine-tuning) of cause-effect relationships (weights) between the concepts. These types of learning that are accompanied with the right knowledge of each problem-system, contribute in the increase of performance of FCMs and extend their use. Additionally to the unsupervised learning algorithms of Hebb-type for the FCMs, are developed and proposed new learning techniques of FCMs based on the evolutionary algorithms. More specifically, it is proposed a new learning methodology for the application of evolutionary algorithm of Particle Swarm Optimisation in the adaptation of FCMs and more concretely in the determination of the optimal regions of weight values of FCMs. With this method it is taken into consideration the experts’ knowledge for the modelling with the form of restrictions in the concepts that interest us their values, and are defined as output concepts, and for weights are received the arithmetic values of the fuzzy regions that have agreed all the experts. 
Thus, by imposing constraints on all weights and on the output concepts and defining a suitable objective function for each problem, appropriate weight matrices are obtained that can drive the system to desirable operating regions while simultaneously satisfying the problem-specific constraints. The two proposed unsupervised learning methods for FCMs are applied successfully, on real clinical cases, to two complex medical problems: decision support in the radiotherapy process and characterization of urinary bladder tumors. All of the proposed algorithms are also applied to industrial process-control models with very satisfactory results. As their application to concrete problems shows, these algorithms improve the FCM model, contribute to more intelligent systems and extend their applicability to real, complex problems. The main contribution of the present Dissertation is the development of new learning and convergence methodologies for Fuzzy Cognitive Maps: two new unsupervised Hebb-type learning algorithms, Active Hebbian Learning and Nonlinear Hebbian Learning, for adapting the weights of the interconnections between FCM concepts, as well as evolutionary algorithms that optimize problem-specific objective functions. The improved Fuzzy Cognitive Maps obtained through these weight-adaptation algorithms have been used to develop an integrated two-level hierarchical system for decision support in radiotherapy, a new diagnostic tool for characterizing the malignancy grade of urinary bladder tumors, and solutions to industrial process-control problems.
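To make the weight-adaptation idea concrete, the sketch below runs a small fuzzy cognitive map and adjusts its expert-defined weights with an Oja-normalised Hebbian rule. It is only an illustration in the spirit of the NHL/AHL schemes described above, not the dissertation's exact update equations; the toy map, learning rate and squashing function are arbitrary assumptions. The evolutionary variants would instead search the admissible weight ranges directly, scoring candidate weight matrices by an objective defined on the output concepts.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(A, W, lam=1.0):
    # One FCM inference step: concept i aggregates the weighted influence
    # of the other concepts (W[j, i] is the effect of concept j on concept i).
    return sigmoid(A + W.T @ A, lam)

def hebbian_update(A_prev, A_next, W, eta=0.02):
    # Oja-normalised Hebbian adjustment (illustrative, not the thesis's NHL rule),
    # applied only to the causal links the experts declared (nonzero entries of W).
    mask = (W != 0).astype(float)
    dW = eta * (np.outer(A_prev, A_next) - (A_next ** 2) * W)
    return np.clip(W + mask * dW, -1.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    W = rng.uniform(-1.0, 1.0, size=(n, n)) * (rng.random((n, n)) < 0.4)
    np.fill_diagonal(W, 0.0)
    A = rng.random(n)                     # initial concept activations
    for _ in range(50):                   # run the map while adapting the weights
        A_next = fcm_step(A, W)
        W = hebbian_update(A, A_next, W)
        A = A_next
    print("final concept values:", np.round(A, 3))
    print("adapted weights:\n", np.round(W, 3))
```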
62

Sparse representations over learned dictionary for document analysis / Présentations parcimonieuses sur dictionnaire d'apprentissage pour l'analyse de documents

Do, Thanh Ha 04 April 2014 (has links)
Dans cette thèse, nous nous concentrons sur comment les représentations parcimonieuses peuvent aider à augmenter les performances pour réduire le bruit, extraire des régions de texte, reconnaissance des formes et localiser des symboles dans des documents graphiques. Pour ce faire, tout d'abord, nous donnons une synthèse des représentations parcimonieuses et ses applications en traitement d'images. Ensuite, nous présentons notre motivation pour l'utilisation de dictionnaires d'apprentissage avec des algorithmes efficaces pour les construire. Après avoir décrit l'idée générale des représentations parcimonieuses et du dictionnaire d'apprentissage, nous présentons nos contributions dans le domaine de la reconnaissance de symboles et du traitement des documents en les comparants aux travaux de l'état de l'art. Ces contributions s'emploient à répondre aux questions suivantes: La première question est comment nous pouvons supprimer le bruit des images où il n'existe aucune hypothèse sur le modèle de bruit sous-jacent à ces images ? La deuxième question est comment les représentations parcimonieuses sur le dictionnaire d'apprentissage peuvent être adaptées pour séparer le texte du graphique dans des documents? La troisième question est comment nous pouvons appliquer la représentation parcimonieuse à reconnaissance de symboles? Nous complétons cette thèse en proposant une approche de localisation de symboles dans les documents graphiques qui utilise les représentations parcimonieuses pour coder un vocabulaire visuel / In this thesis, we focus on how sparse representations can help to increase the performance of noise removal, text region extraction, pattern recognition and spotting symbols in graphical documents. To do that, first of all, we give a survey of sparse representations and its applications in image processing. Then, we present the motivation of building learning dictionary and efficient algorithms for constructing a learning dictionary. After describing the general idea of sparse representations and learned dictionary, we bring some contributions in the field of symbol recognition and document processing that achieve better performances compared to the state-of-the-art. These contributions begin by finding the answers to the following questions. The first question is how we can remove the noise of a document when we have no assumptions about the model of noise found in these images? The second question is how sparse representations over learned dictionary can separate the text/graphic parts in the graphical document? The third question is how we can apply the sparse representation for symbol recognition? We complete this thesis by proposing an approach of spotting symbols that use sparse representations for the coding of a visual vocabulary
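As a concrete illustration of the first question (denoising without a noise model), the sketch below learns a dictionary from noisy patches and reconstructs the image from sparse codes. It is a generic scikit-learn pipeline, not the method developed in the thesis; the toy image, patch size and sparsity penalty are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

# Toy "document" image: a dark stroke on a white background, plus noise.
rng = np.random.default_rng(0)
clean = np.ones((64, 64))
clean[20:44, 30:34] = 0.0
noisy = np.clip(clean + 0.3 * rng.standard_normal(clean.shape), 0.0, 1.0)

# Learn an overcomplete dictionary from 8x8 patches of the noisy image itself.
patches = extract_patches_2d(noisy, (8, 8))
X = patches.reshape(len(patches), -1)
means = X.mean(axis=1, keepdims=True)
X = X - means

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)          # sparse codes (OMP by default)

# Rebuild every patch from its sparse code and average the overlapping patches.
recon = (codes @ dico.components_ + means).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(recon, noisy.shape)
print("mean abs error, noisy vs clean:   ", np.abs(noisy - clean).mean())
print("mean abs error, denoised vs clean:", np.abs(denoised - clean).mean())
```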
63

Aspects of Online Learning

Harrington, Edward, edwardharrington@homemail.com.au January 2004 (has links)
Online learning algorithms have several key advantages compared to their batch learning algorithm counterparts: they are generally more memory efficient and computationally more efficient; they are simpler to implement; and they are able to adapt to changes where the learning model is time varying. Because of their simplicity, online algorithms are very appealing to practitioners. This thesis investigates several online learning algorithms and their application. The thesis has an underlying theme of combining several simple algorithms to give better performance. In this thesis we investigate: combining weights, combining hypotheses, and (sort of) hierarchical combining.

Firstly, we propose a new online variant of the Bayes point machine (BPM), called the online Bayes point machine (OBPM). We study the theoretical and empirical performance of the OBPM algorithm. We show that the empirical performance of the OBPM algorithm is comparable with other large margin classifier methods such as the approximately large margin algorithm (ALMA) and methods which maximise the margin explicitly, like the support vector machine (SVM). The OBPM algorithm, when used with a parallel architecture, offers potential computational savings compared to ALMA. We compare the test error performance of the OBPM algorithm with other online algorithms: the Perceptron, the voted-Perceptron, and Bagging. We demonstrate that the combination of the voted-Perceptron algorithm and the OBPM algorithm, called the voted-OBPM algorithm, has better test error performance than the voted-Perceptron and Bagging algorithms. We investigate the use of various online voting methods on the problems of ranking and collaborative filtering of instances. We look at the application of the online Bagging and OBPM algorithms to the telecommunications problem of channel equalization. We show that both online methods were successful at reducing the effect of label flipping and additive noise on the test error.

Secondly, we introduce a new mixture of experts algorithm, the fixed-share hierarchy (FSH) algorithm. The FSH algorithm is able to track the mixture of experts when the switching rate between the best experts may not be constant. We study the theoretical aspects of the FSH algorithm and its practical application to adaptive equalization. Using simulations we show that the FSH algorithm is able to track the best expert, or mixture of experts, both when the switching rate is constant and when it is time varying.
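The sketch below illustrates the "combining weights" theme on which OBPM builds: several Perceptrons are trained online on differently ordered streams and their weight vectors are averaged, one cheap way to approximate a centre of mass of consistent hypotheses (the Bayes-point idea). It is not the OBPM algorithm itself; the data, the number of ensemble members and the single-pass training are illustrative assumptions.

```python
import numpy as np

def perceptron_pass(X, y, order, w=None):
    # One online pass of the classic Perceptron over a given ordering of examples.
    w = np.zeros(X.shape[1]) if w is None else w
    for i in order:
        if y[i] * (w @ X[i]) <= 0:
            w = w + y[i] * X[i]
    return w

def averaged_ensemble(X, y, n_members=10, seed=0):
    # Train several Perceptrons on independently shuffled streams and average
    # their weight vectors: a crude approximation of the Bayes point.
    rng = np.random.default_rng(seed)
    ws = [perceptron_pass(X, y, rng.permutation(len(y))) for _ in range(n_members)]
    return np.mean(ws, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 20))
    y = np.sign(X @ rng.standard_normal(20))          # linearly separable toy labels
    w_single = perceptron_pass(X, y, np.arange(len(y)))
    w_avg = averaged_ensemble(X, y)
    for name, w in [("single perceptron", w_single), ("averaged ensemble", w_avg)]:
        print(f"{name}: training error {np.mean(np.sign(X @ w) != y):.3f}")
```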
64

Learning Algorithms Using Chance-Constrained Programs

Jagarlapudi, Saketha Nath 07 1900 (has links)
This thesis explores Chance-Constrained Programming (CCP) in the context of learning. It is shown that chance-constraint approaches lead to improved algorithms for three important learning problems — classification with specified error rates, large dataset classification and Ordinal Regression (OR). Using moments of training data, the CCPs are posed as Second Order Cone Programs (SOCPs). Novel iterative algorithms for solving the resulting SOCPs are also derived. Borrowing ideas from robust optimization theory, the proposed formulations are made robust to moment estimation errors. A maximum margin classifier with specified false positive and false negative rates is derived. The key idea is to employ chance-constraints for each class which imply that the actual misclassification rates do not exceed the specified. The formulation is applied to the case of biased classification. The problems of large dataset classification and ordinal regression are addressed by deriving formulations which employ chance-constraints for clusters in training data rather than constraints for each data point. Since the number of clusters can be substantially smaller than the number of data points, the resulting formulation size and number of inequalities are very small. Hence the formulations scale well to large datasets. The scalable classification and OR formulations are extended to feature spaces and the kernelized duals turn out to be instances of SOCPs with a single cone constraint. Exploiting this speciality, fast iterative solvers which outperform generic SOCP solvers, are proposed. Compared to state-of-the-art learners, the proposed algorithms achieve a speed up as high as 10000 times, when the specialized SOCP solvers are employed. The proposed formulations involve second order moments of data and hence are susceptible to moment estimation errors. A generic way of making the formulations robust to such estimation errors is illustrated. Two novel confidence sets for moments are derived and it is shown that when either of the confidence sets are employed, the robust formulations also yield SOCPs.
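One standard way to turn a per-class chance constraint into a cone constraint is via a multivariate Chebyshev bound: requiring Prob(w·x + b >= 1) >= eta for a class with mean mu and covariance Sigma becomes w·mu + b >= 1 + kappa(eta)·||Sigma^{1/2} w|| with kappa(eta) = sqrt(eta / (1 - eta)). The cvxpy sketch below solves a maximum-margin problem with one such constraint per class. It only illustrates the moment-based SOCP idea, not the thesis's formulations; the toy data, the choice of eta and the regularisation are assumptions, and the problem can become infeasible if eta is too high for the class overlap.

```python
import numpy as np
import cvxpy as cp

# Toy two-class data; the constraints use estimated class moments.
rng = np.random.default_rng(0)
Xp = rng.multivariate_normal([3, 3], [[1, 0.3], [0.3, 1]], size=200)
Xn = rng.multivariate_normal([-3, -2], [[1, -0.2], [-0.2, 1]], size=200)

def moments(X):
    mu = X.mean(axis=0)
    S = np.linalg.cholesky(np.cov(X.T) + 1e-6 * np.eye(X.shape[1]))  # Sigma^{1/2}
    return mu, S

(mu_p, S_p), (mu_n, S_n) = moments(Xp), moments(Xn)

eta = 0.85                           # required probability of correct classification per class
kappa = np.sqrt(eta / (1 - eta))     # multivariate Chebyshev factor

w, b = cp.Variable(2), cp.Variable()
constraints = [
    w @ mu_p + b >= 1 + kappa * cp.norm(S_p.T @ w),      # chance constraint, positive class
    -(w @ mu_n + b) >= 1 + kappa * cp.norm(S_n.T @ w),   # chance constraint, negative class
]
prob = cp.Problem(cp.Minimize(cp.norm(w)), constraints)  # maximum margin
prob.solve()
print(prob.status, "w =", w.value, "b =", b.value)
```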
65

運用記憶體內運算於智慧型健保院所異常查核之研究 / A Research into In-Memory Computing Techniques for Intelligent Check of Health-Insurance Fraud

湯家哲, Tang, Jia Jhe Unknown Date (has links)
我國全民健保近年財務不佳,民國98年收支短絀達582億元。根據中央健康保險署資料,截至目前為止,特約醫事服務機構違規次數累積達13722次。在所有重大違規事件中,大部分是詐欺行為。 健保審查機制主要以電腦隨機抽樣,再由人工進行調查。然而,這樣的審查方式無法有效抽取到違規醫事機構之樣本,造成審查效果不彰。 Benford’s Law又稱第一位數法則,其概念為第一位數的值越小則該數字出現的頻率越大,反之相反。該方法被應用於會計、金融、審計及經濟領域中。楊喻翔(2012)將Benford’s Law相關指標應用於我國全民健保上,並結合機器學習演算法來進行健保異常偵測。 Zaharia et al. (2012)提出了一種具容錯的群集記憶內運算模式 Apache Spark,在相同的運算節點及資源下,其資料運算效率及速度可勝出Hadoop MapReduce 20倍以上。 為解決健保異常查核效果不彰問題,本研究將採用Benford’s Law,使用國家衛生研究院發行之健保資料計算成為Benford’s Law指標和實務指標,接著並使用支援向量機和邏輯斯迴歸來建構出異常查核模型。然而健保資料量龐大,為加快運算時間,本研究使用Apache Spark做為運算環境,並以Hadoop MapReduce作為標竿,比較運算效率。 研究結果顯示,本研究撰寫的Spark程式運算時間能較MapReduce快2倍;在分類模型上,支援向量機和邏輯斯迴歸所進行的住院資料測試,敏感度皆有80%以上;而所進行的門診資料測試,兩個模型的準確率沒有住院資料高,但邏輯斯迴歸測試結果仍保有一定的準確性,在敏感度仍有75%,整體正確率有73%。 本研究使用Apache Spark節省處理大量健保資料的運算時間。其次本研究建立的智慧型異常查核模型,確實能查核出違約的醫事機構,而模型所查核出可能有詐欺及濫用健保之醫事機構,可進行下階段人工調查,最終得改善健保查核效力。 / Financial condition of National Health Insurance (NHI) has been wretched in recent years. The income statement in 2009 indicated that National Health Insurance Administration (NHIA) was in debt for NTD $58.2 billion. According to NHIA data, certain medical institutions in Taiwan violated the NHI laws for 13722 times. Among all illegal cases, fraud is the most serious. In order to find illegal medical institutions, NHIA conducted random sampling by computer. Once the data was collected, NHIA investigators got involved in the review process. However, the way to get the samples mentioned above cannot reveal the reality. Benford's law is called the First-Digit Law. The concept of Benford’s Law is that the smaller digits would appear more frequently, while larger digits would occur less frequently. Benford’s Law is applied to accounting, finance, auditing and economics. Yang(2012) used Benford’s Law in NHI data and he also used machine learning algorithms to do fraud detection. Zaharia et al. (2012) proposed a fault-tolerant in-memory cluster computing -Apache Spark. Under the same computing nodes and resources, Apache Spark’s computing is faster than Hadoop MapReduce 20 times. In order to solve the problem of medical claims review, Benford’s Law was applied to this study. This study used NHI data which was published by National Health Research Institutes. Then, we computed NHI data to generate Benford’s Law variables and technical variables. Finally, we used support vector machine and logistics regression to construct the illegal check model. During system development, we found that the data size was big. With the purpose of reducing the computing time, we used Apache Spark to build computing environment. Furthermore, we adopted Hadoop MapReduce as benchmark to compare the performance of computing time. This study indicated that Apache Spark is faster twice than Hadoop MapReduce. In illegal check model, with support vector machine and logistics regression, we had 80% sensitivity in inpatient data. In outpatient data, the accuracy of support vector machine and logistics regression were lower than inpatient data. In this case, logistics regression still had 75% sensitivity and 73% accuracy. This study used Apache Spark to compute NHI data with lower computing time. Second, we constructed the intelligent illegal check model which can find the illegal medical institutions for manual check. With the use of illegal check model, the procedure of medical claims review will be improved.
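The Benford's Law side of the pipeline can be sketched as follows: compute the first-digit distribution of a set of claim amounts and measure its deviation from the Benford frequencies, then use such quantities as features for a classifier. This is a plain NumPy illustration with synthetic amounts, not the dissertation's feature set; in the thesis the computation is distributed with Apache Spark and the resulting features are passed to a support vector machine or logistic regression.

```python
import numpy as np

BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))   # P(first digit = d), d = 1..9

def first_digits(amounts):
    a = np.asarray(amounts, dtype=float)
    a = a[a > 0]
    return (a / 10 ** np.floor(np.log10(a))).astype(int)   # leading digit, 1..9

def benford_features(amounts):
    d = first_digits(amounts)
    freq = np.bincount(d, minlength=10)[1:10] / max(len(d), 1)
    mad = np.abs(freq - BENFORD).mean()                     # mean absolute deviation
    chi2 = (len(d) * (freq - BENFORD) ** 2 / BENFORD).sum()
    return np.concatenate([freq, [mad, chi2]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wide = rng.lognormal(mean=8.0, sigma=1.2, size=5000)    # spans several orders of magnitude
    narrow = rng.uniform(4000.0, 9000.0, size=5000)         # narrow, fabricated-looking amounts
    for name, claims in [("wide-range claims", wide), ("narrow-range claims", narrow)]:
        print(f"{name}: MAD from Benford = {benford_features(claims)[9]:.4f}")
```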
66

Σύγκριση μεθόδων δημιουργίας έμπειρων συστημάτων με κανόνες για προβλήματα κατηγοριοποίησης από σύνολα δεδομένων

Τζετζούμης, Ευάγγελος 31 January 2013 (has links)
Σκοπός της παρούσας εργασίας είναι η σύγκριση διαφόρων μεθόδων κατηγοριοποίησης που στηρίζονται σε αναπαράσταση γνώσης με κανόνες μέσω της δημιουργίας έμπειρων συστημάτων από γνωστά σύνολα δεδομένων. Για την εφαρμογή των μεθόδων και τη δημιουργία και υλοποίηση των αντίστοιχων έμπειρων συστημάτων χρησιμοποιούμε διάφορα εργαλεία όπως: (α) Το ACRES, το οποίο είναι ένα εργαλείο αυτόματης παραγωγής έμπειρων συστημάτων με συντελεστές βεβαιότητας. Οι συντελεστές βεβαιότητος μπορούν να υπολογίζονται κατά δύο τρόπους και επίσης παράγονται δύο τύποι έμπειρων συστημάτων που στηρίζονται σε δύο διαφορετικές μεθόδους συνδυασμού των συντελεστών βεβαιότητας (κατά MYCIN και μιας γενίκευσης αυτής του MYCIN με χρήση βαρών που υπολογίζονται μέσω ενός γενετικού αλγορίθμου). (β) Το WEKA, το οποίο είναι ένα εργαλείο που περιέχει αλγόριθμους μηχανικής μάθησης. Συγκεκριμένα, στην εργασία χρησιμοποιούμε τον αλγόριθμο J48, μια υλοποίηση του γνωστού αλγορίθμου C4.5, που παράγει δένδρα απόφασης, δηλ. κανόνες. (γ) Το CLIPS, το οποίο είναι ένα κέλυφος για προγραμματισμό με κανόνες. Εδώ, εξάγονται οι κανόνες από το δέντρο απόφασης του WEKA και υλοποιούνται στο CLIPS με ενδεχόμενες μετατροπές. (δ) Το FuzzyCLIPS, το οποίο επίσης είναι ένα κέλυφος για την δημιουργία ασαφών ΕΣ. Είναι μια επέκταση του CLIPS που χρησιμοποιεί ασαφείς κανόνες και συντελεστές βεβαιότητος. Εδώ, το έμπειρο σύστημα που παράγεται μέσω του CLIPS μετατρέπεται σε ασαφές έμπειρο σύστημα με ασαφοποίηση κάποιων μεταβλητών. (ε) Το GUI Ant-Miner, το οποίο είναι ένα εργαλείο για την εξαγωγή κανόνων κατηγοριοποίησης από ένα δοσμένο σύνολο δεδομένων. με τη χρήση ενός μοντέλου ακολουθιακής κάλυψης, όπως ο αλγόριθμος AntMiner. Με βάση τις παραπάνω μεθόδους-εργαλεία δημιουργήθηκαν έμπειρα συστήματα από πέντε σύνολα δεδομένων κατηγοριοποίησης από τη βάση δεδομένων UCI Machine Learning Repository. Τα συστήματα αυτά αξιολογήθηκαν ως προς την ταξινόμηση με βάση γνωστές μετρικές (ορθότητα, ευαισθησία, εξειδίκευση και ακρίβεια). Από τη σύγκριση των μεθόδων και στα πέντε σύνολα δεδομένων, εξάγουμε τα παρακάτω συμπεράσματα: (α) Αν επιθυμούμε αποτελέσματα με μεγαλύτερη ακρίβεια και μεγάλη ταχύτητα, θα πρέπει μάλλον να στραφούμε στην εφαρμογή WEKA. (β) Αν θέλουμε να κάνουμε και παράλληλους υπολογισμούς, η μόνη εφαρμογή που μας παρέχει αυτή τη δυνατότητα είναι το FuzzyCLIPS, θυσιάζοντας όμως λίγη ταχύτητα και ακρίβεια. (γ) Όσον αφορά το GUI Ant-Miner, λειτουργεί τόσο καλά όσο και το WEKA όσον αφορά την ακρίβεια αλλά είναι πιο αργή μέθοδος. (δ) Σχετικά με το ACRES, λειτουργεί καλά όταν δουλεύουμε με υποσύνολα μεταβλητών, έτσι ώστε να παράγεται σχετικά μικρός αριθμός κανόνων και να καλύπτονται σχεδόν όλα τα στιγμιότυπα στο σύνολο έλεγχου. Στα σύνολα δεδομένων μας το ACRES δεν θεωρείται πολύ αξιόπιστο υπό την έννοια ότι αναγκαζόμαστε να δουλεύουμε με υποσύνολο μεταβλητών και όχι όλες τις μεταβλητές του συνόλου δεδομένων. Όσο πιο πολλές μεταβλητές πάρουμε ως υποσύνολο στο ACRES, τόσο πιο αργό γίνεται. / The aim of this thesis is the comparison of several classification methods that are based on knowledge representation with rules via the creation of expert systems from known data sets. For the application of those methods and the creation and implementation of the corresponding expert systems, we use various tools such as: (a) ACRES, which is a tool for automatic production of expert systems with certainty factors. 
The certainty factors can be calculated via two different methods, and two different types of expert systems can be produced based on different methods of certainty propagation (that of MYCIN and a generalized version of the MYCIN one that uses weights calculated via a genetic algorithm). (b) WEKA, which is a tool that contains machine learning algorithms. Specifically, we use J48, an implementation of the well-known algorithm C4.5, which produces decision trees, i.e., coded rules. (c) CLIPS, which is a shell for rule-based programming. Here, the rules encoded in the decision tree produced by WEKA are extracted and codified in CLIPS with possible changes. (d) FuzzyCLIPS, which is a shell for creating fuzzy expert systems. It is an extension of CLIPS that uses fuzzy rules and certainty factors. Here, the expert system created via CLIPS is converted into a fuzzy expert system by making some variables fuzzy. (e) GUI Ant-Miner, which is a tool for extracting classification rules from a given data set using a sequential covering model, such as the AntMiner algorithm. Based on the above methods and tools, expert systems were created from five (5) classification data sets from the UCI Machine Learning Repository. Those systems have been evaluated on their classification capabilities using known metrics (accuracy, sensitivity, specificity and precision). From the comparison of the methods on the five data sets, we conclude the following: (a) if we want results with greater accuracy and high speed, we should probably turn to WEKA. (b) if we also want to perform parallel calculations, the only tool that provides this capability is FuzzyCLIPS, at the cost of a little speed and accuracy. (c) With regard to GUI Ant-Miner, it works as well as WEKA in terms of accuracy, but it is slower. (d) As for ACRES, it works well when we work with subsets of the variables, so that it produces a relatively small number of rules and covers almost all the instances of the test set. For our data sets, ACRES is not considered very reliable in the sense that we are forced to work with subsets of variables rather than all the variables of the data set. The more variables we include in the subset in ACRES, the slower it becomes.
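A minimal stand-in for the WEKA step is sketched below: scikit-learn's CART tree (used here in place of J48/C4.5, which scikit-learn does not ship) is trained on a small UCI-style data set, its rules are printed in readable form, and the usual metrics are reported. The printed rules are what would then be hand-translated into CLIPS or FuzzyCLIPS rules; the data set, depth limit and split criterion are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import classification_report

# Stand-in pipeline: an entropy-based CART tree approximates J48/C4.5.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0).fit(X_tr, y_tr)

# Human-readable rules; these are the statements one would recode as CLIPS rules.
print(export_text(tree, feature_names=load_iris().feature_names))
print(classification_report(y_te, tree.predict(X_te), digits=3))
```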
67

Design and Analysis of Consistent Algorithms for Multiclass Learning Problems

Harish, Guruprasad Ramaswami January 2015 (has links) (PDF)
We consider the broad framework of supervised learning, where one gets examples of objects together with some labels (such as tissue samples labeled as cancerous or non-cancerous, or images of handwritten digits labeled with the correct digit in 0-9), and the goal is to learn a prediction model which given a new object, makes an accurate prediction. The notion of accuracy depends on the learning problem under study and is measured by a performance measure of interest. A supervised learning algorithm is said to be 'statistically consistent' if it returns an `optimal' prediction model with respect to the desired performance measure in the limit of infinite data. Statistical consistency is a fundamental notion in supervised machine learning, and therefore the design of consistent algorithms for various learning problems is an important question. While this has been well studied for simple binary classification problems and some other specific learning problems, the question of consistent algorithms for general multiclass learning problems remains open. We investigate several aspects of this question as detailed below. First, we develop an understanding of consistency for multiclass performance measures defined by a general loss matrix, for which convex surrogate risk minimization algorithms are widely used. Consistency of such algorithms hinges on the notion of 'calibration' of the surrogate loss with respect to target loss matrix; we start by developing a general understanding of this notion, and give both necessary conditions and sufficient conditions for a surrogate loss to be calibrated with respect to a target loss matrix. We then define a fundamental quantity associated with any loss matrix, which we term the `convex calibration dimension' of the loss matrix; this gives one measure of the intrinsic difficulty of designing convex calibrated surrogates for a given loss matrix. We derive lower bounds on the convex calibration dimension which leads to several new results on non-existence of convex calibrated surrogates for various losses. For example, our results improve on recent results on the non-existence of low dimensional convex calibrated surrogates for various subset ranking losses like the pairwise disagreement (PD) and mean average precision (MAP) losses. We also upper bound the convex calibration dimension of a loss matrix by its rank, by constructing an explicit, generic, least squares type convex calibrated surrogate, such that the dimension of the surrogate is at most the (linear algebraic) rank of the loss matrix. This yields low-dimensional convex calibrated surrogates - and therefore consistent learning algorithms - for a variety of structured prediction problems for which the associated loss is of low rank, including for example the precision @ k and expected rank utility (ERU) losses used in subset ranking problems. For settings where achieving exact consistency is computationally difficult, as is the case with the PD and MAP losses in subset ranking, we also show how to extend these surrogates to give algorithms satisfying weaker notions of consistency, including both consistency over restricted sets of probability distributions, and an approximate form of consistency over the full probability space. Second, we consider the practically important problem of hierarchical classification, where the labels to be predicted are organized in a tree hierarchy. 
We design a new family of convex calibrated surrogate losses for the associated tree-distance loss; these surrogates are better than the generic least squares surrogate in terms of easier optimization and representation of the solution, and some surrogates in the family also operate on a significantly lower dimensional space than the rank of the tree-distance loss matrix. These surrogates, which we term the `cascade' family of surrogates, rely crucially on a new understanding we develop for the problem of multiclass classification with an abstain option, for which we construct new convex calibrated surrogates that are of independent interest by themselves. The resulting hierarchical classification algorithms outperform the current state-of-the-art in terms of both accuracy and running time. Finally, we go beyond loss-based multiclass performance measures, and consider multiclass learning problems with more complex performance measures that are nonlinear functions of the confusion matrix and that cannot be expressed using loss matrices; these include for example the multiclass G-mean measure used in class imbalance settings and the micro F1 measure used often in information retrieval applications. We take an optimization viewpoint for such settings, and give a Frank-Wolfe type algorithm that is provably consistent for any complex performance measure that is a convex function of the entries of the confusion matrix (this includes the G-mean, but not the micro F1). The resulting algorithms outperform the state-of-the-art SVMPerf algorithm in terms of both accuracy and running time. In conclusion, in this thesis, we have developed a deep understanding and fundamental results in the theory of supervised multiclass learning. These insights have allowed us to develop computationally efficient and statistically consistent algorithms for a variety of multiclass learning problems of practical interest, in many cases significantly outperforming the state-of-the-art algorithms for these problems.
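The decoding step that a calibrated surrogate ultimately targets can be illustrated with a simple plug-in scheme: estimate class probabilities, then predict the label (or the abstain option) that minimises the expected loss under a given loss matrix. The snippet below is only that illustration; it is not the least-squares surrogate construction of the thesis, and the data set, probability model and abstain cost of 0.3 are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_classes = 10
L = 1.0 - np.eye(n_classes)                        # 0-1 loss: rows = true class, columns = prediction
L = np.hstack([L, np.full((n_classes, 1), 0.3)])   # extra prediction: abstain, with constant cost 0.3

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
P = clf.predict_proba(X_te)              # estimated class probabilities
pred = np.argmin(P @ L, axis=1)          # argmin_t  sum_y p(y|x) L[y, t]

abstained = pred == n_classes
print("abstention rate:", abstained.mean())
print("error on non-abstained examples:", (pred[~abstained] != y_te[~abstained]).mean())
```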
68

Traffic monitoring in home networks : from theory to practice / Supervision du trafic dans les réseaux domestiques : de la théorie à la pratique

Aouini, Zied 15 December 2017 (has links)
Les réseaux domestiques sont confrontés à une évolution continue et deviennent de plus en plus complexes. Leur complexité a évolué selon deux dimensions interdépendantes. D'une part, la topologie du réseau domestique devient plus complexe avec la multiplication des équipements et des technologies de connectivité. D'autre part, l'ensemble des services accessibles via le réseau domestique ne cesse de s’élargir. Un tel contexte a rendu la gestion du réseau domestique plus difficile pour les Fournisseurs d’Accès Internet (FAI) et les utilisateurs finaux. Dans ce manuscrit, nous nous concentrons sur la deuxième dimension de la complexité décrite ci-dessus liée au trafic circulant depuis/vers le réseau domestique. Notre première contribution consiste à proposer une architecture pour la supervision du trafic dans les réseaux domestiques. Nous fournissons une étude comparative de certains outils open source existants. Ensuite, nous effectuons une évaluation de performances expérimentale d’un sous ensemble des processus impliqués dans notre architecture. Sur la base des résultats obtenus, nous discutons les limites et les possibilités de déploiement de ce type de solution. Dans notre deuxième contribution, nous présentons notre analyse à large échelle des usages et du trafic résidentiel basée sur une trace de trafic réelle impliquant plus de 34 000 clients. Premièrement, nous présentons notre méthode de collecte et de traitement des données. Deuxièmement, nous présentons nos observations statistiques vis-à-vis des différentes couches de l’architecture Internet. Ensuite, nous effectuons une analyse subjective auprès de 645 clients résidentiels. Enfin, nos résultats fournissent une synthèse complète des usages et des caractéristiques des applications résidentielles. Dans notre troisième contribution, nous proposons une nouvelle méthode pour la classification en temps réel du trafic résidentiel. Notre méthode, laquelle est basée sur l’utilisation d’un algorithme d’apprentissage statistique de type C5.0, vise à combler les carences identifiées dans la littérature. Ensuite, nous détaillons notre implémentation d’une sonde légère sur un prototype de passerelle résidentielle capable de capturer, de suivre et d'identifier d’une manière fine les applications actives dans le réseau domestique. Cette implémentation nous permet, en outre, de valider nos principes de conception via un banc d'essai réaliste mis en place à cet effet. Les résultats obtenus indiquent que notre solution est efficace et faisable. / Home networks are facing a continuous evolution and are becoming more and more complex. Their complexity has evolved according to two interrelated dimensions. On the one hand, the home network topology (devices and connectivity technologies) tends to produce more complex configurations. On the other hand, the set of services accessed through the home network is growing in a tremendous fashion. Such context has made the home network management more challenging for both Internet Service Provider (ISP) and end-users. In this dissertation, we focus on the traffic dimension of the above described complexity. Our first contribution consists on proposing an architecture for traffic monitoring in home networks. We provide a comparative study of some existing open source tools. Then, we perform a testbed evaluation of the main software components implied in our architecture. Based on the experiments results, we discuss several deployment limits and possibilities. 
In our second contribution, we conduct an analysis of residential traffic and usage based on a real trace involving more than 34 000 customers. First, we present our data collection and processing methodology. Second, we present our findings with respect to the characteristics of the different layers of the TCP/IP protocol stack. Then, we perform a subjective analysis across 645 residential customers. The results of both evaluations provide a complete synthesis of residential usage patterns and application characteristics. In our third contribution, we propose a novel scheme for real-time residential traffic classification. Our scheme, based on the C5.0 machine learning algorithm, aims to fill the gaps identified in the literature. To this end, our algorithm is evaluated using several traffic inputs. Then, we detail how we implemented a lightweight probe able to capture, track and finely identify the applications running in the home network. This implementation allowed us to validate our design principles under realistic test conditions. The obtained results clearly show the efficiency and feasibility of our solution.
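A hypothetical version of the early-classification step can be sketched as follows: describe each flow by the signed sizes of its first few packets (positive for upstream, negative for downstream) and train a classifier on those features, so that a gateway probe could label a flow while it is still active. The flows below are synthetic, the feature set is invented for the example, and a random forest stands in for C5.0, which has no scikit-learn implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier   # stand-in for C5.0
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
k = 5                                                  # number of initial packets per flow

def synth_flows(n, mean_up, mean_down):
    # Signed packet sizes: positive = upstream, negative = downstream.
    return np.where(rng.random((n, k)) < 0.5,
                    rng.normal(mean_up, 50, (n, k)),
                    -rng.normal(mean_down, 200, (n, k)))

X = np.vstack([synth_flows(1000, 120, 1200),    # bulk-download-like flows
               synth_flows(1000, 300, 400)])    # interactive / VoIP-like flows
y = np.repeat([0, 1], 1000)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```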
69

Analises de series temporais e modelagem baseada em regras nebulosas / Time series analysis and modeling based on fuzzy rules

Luna Huamaní, Ivette Raymunda, 1978 10 May 2007 (has links)
Orientadores: Secundino Soares Filho, Rosangela Ballini / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Resumo: Este trabalho propõe uma metodologia baseada em regras nebulosas para a modelagem e previsão de séries temporais. Inicialmente, os dados são pré-processados para, a seguir, ocorrer a seleção de variáveis que serão utilizadas pelos modelos de série temporal. Para essa finalidade, nesta tese propõe-se um conjunto de aproximações necessárias para o cálculo do critério de informação mútua parcial, o qual é a base para o algoritmo de seleção de entradas utilizado. A próxima etapa corresponde à determinação da estrutura do modelo e ajuste dos parâmetros. Com o intuito de definir de forma automática a estrutura do modelo, de forma simultânea ao ajuste dos parâmetros, dois algoritmos de aprendizado construtivo - offline e online - são propostos. Ambos os algoritmos utilizam como base para o seu desenvolvimento o algoritmo da maximização da verossimilhança, assim como critérios de geração e punição (ou poda) de regras nebulosas. Finalmente, o modelo obtido é validado e aplicado na previsão de um e vários passos à frente. Análises comparativas são apresentadas utilizando séries temporais sintéticas e de problemas reais. Os resultados mostram que as propostas deste trabalho são uma alternativa eficiente para a modelagem e previsão de séries temporais / Abstract: This work presents a methodology for time series modeling and forecasting. First, the methodology considers the data pre-processing and the system identification, which involves the selection of a suitable set of input variables for modeling the time series. In order to achieve this task, this work proposes an algorithm for input selection and a set of approximations that are necessary for estimating the partial mutual information criterion, which is the base of the algorithm used at this stage. Then, the model is built and adjusted. With the aim of performing an automatic structure selection and parameters adjustment simultaneously, this thesis proposes two constructive learning algorithms, namely offline and online. These algorithms are based on the Expectation Maximization optimization technique, as well as on adding and pruning operators of fuzzy rules that are also proposed in this work. Finally, models are validated and applied to one-step ahead and multi-step ahead forecasting. Comparative analyses using synthetic and real time series are detailed. The results show the adequate performance of the proposed approach and present it as a promising alternative for time series modeling and forecasting / Doutorado / Energia Eletrica / Doutor em Engenharia Elétrica
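The input-selection idea can be illustrated with a simplified stand-in for the partial mutual information criterion: candidate lags are scored by the mutual information between residuals after the already-selected lags have been regressed out. The thesis develops its own approximations and stopping rule; the linear residuals, the k-NN mutual information estimator, the synthetic AR series and the fixed number of selected lags below are all assumptions made for the example.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import LinearRegression

def make_lagged(series, max_lag):
    # Column j holds lag j+1 of the series; y is the value to predict.
    X = np.column_stack([series[max_lag - l:-l] for l in range(1, max_lag + 1)])
    return X, series[max_lag:]

def residual(v, Z):
    # Remove the (linear) influence of the already-selected inputs Z.
    if Z.shape[1] == 0:
        return v
    return v - LinearRegression().fit(Z, v).predict(Z)

def select_lags(series, max_lag=12, n_select=3):
    X, y = make_lagged(series, max_lag)
    selected = []
    for _ in range(n_select):
        Z = X[:, selected]
        ry = residual(y, Z)
        scores = [mutual_info_regression(residual(X[:, [j]], Z), ry, random_state=0)[0]
                  if j not in selected else -np.inf
                  for j in range(max_lag)]
        selected.append(int(np.argmax(scores)))
    return [j + 1 for j in selected]          # column index -> lag

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    e = rng.standard_normal(2000)
    x = np.zeros(2000)
    for t in range(12, 2000):                 # AR process driven by lags 1 and 9
        x[t] = 0.6 * x[t - 1] - 0.5 * x[t - 9] + e[t]
    print("selected lags:", select_lags(x))
```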
70

Efficient Algorithms for Structured Output Learning

Balamurugan, P January 2014 (has links) (PDF)
Structured output learning is the machine learning task of building a classifier to predict structured outputs. Structured outputs arise in several contexts in diverse applications like natural language processing, computer vision, bioinformatics and social networks. Unlike simple two-class (or multiclass) outputs, which belong to a set of distinct or univariate categories, structured outputs are composed of multiple components with complex interdependencies amongst them. As an illustrative example, consider the natural language processing task of tagging a sentence with its corresponding part-of-speech tags. The part-of-speech tag sequence is an example of a structured output as it is made up of multiple components, the interactions among them being governed by the underlying properties of the language. This thesis provides efficient solutions for different problems pertaining to structured output learning. The classifier for structured outputs is generally built by learning a suitable model from a set of training examples labeled with their associated structured outputs. Discriminative techniques like Structural Support Vector Machines (Structural SVMs) and Conditional Random Fields (CRFs) are popular alternatives developed for structured output learning. The thesis contributes towards developing efficient training strategies for structural SVMs. In particular, an efficient sequential optimization method is proposed for structural SVMs, which is faster than several competing methods. An extension of the sequential method to CRFs is also developed. The sequential method is adapted to a variant of structural SVM with linear cumulative loss. The thesis also presents a systematic empirical evaluation of various training methods available for structured output learning, which will be useful to the practitioner. To train structural SVMs in the presence of a vast number of training examples without labels, the thesis develops a simple semi-supervised technique based on switching the labels of the components of the structured output. The proposed technique is general and its efficacy is demonstrated using experiments on different benchmark applications. Another contribution of the thesis is the design of fast algorithms for sparse structured output learning. Efficient alternating optimization algorithms are developed for sparse classifier design. These algorithms are shown to achieve sparse models faster than existing methods.
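For readers unfamiliar with the setting, the sketch below shows structured output learning in its simplest form: a structured Perceptron for sequence tagging (the part-of-speech example above), with joint emission/transition weights, Viterbi decoding and additive updates. It is not the sequential dual optimization method developed in the thesis; the toy corpus and feature encoding are illustrative assumptions.

```python
import numpy as np

def viterbi(emission, transition):
    # emission: (T, K) scores, transition: (K, K) scores; returns the best tag path.
    T, K = emission.shape
    delta, back = emission[0].copy(), np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + transition + emission[t]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

class StructuredPerceptron:
    def __init__(self, n_features, n_tags):
        self.E = np.zeros((n_features, n_tags))   # emission weights
        self.A = np.zeros((n_tags, n_tags))       # transition weights

    def decode(self, x):                          # x: sequence of feature (word) ids
        return viterbi(self.E[x], self.A)

    def fit(self, data, epochs=5):
        for _ in range(epochs):
            for x, y in data:
                y_hat = self.decode(x)
                for t, (yt, yh) in enumerate(zip(y, y_hat)):
                    if yt != yh:                          # reward gold, penalise prediction
                        self.E[x[t], yt] += 1.0
                        self.E[x[t], yh] -= 1.0
                    if t > 0:
                        self.A[y[t - 1], yt] += 1.0
                        self.A[y_hat[t - 1], yh] -= 1.0
        return self

if __name__ == "__main__":
    # Toy corpus: word ids 0-3, tag ids 0-1 (say, "noun"/"verb").
    data = [([0, 1, 2], [0, 1, 0]), ([0, 3, 2], [0, 1, 0]), ([2, 1, 0], [0, 1, 0])]
    model = StructuredPerceptron(n_features=4, n_tags=2).fit(data)
    print("prediction for [0, 1, 2]:", model.decode([0, 1, 2]))
```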
