181

Design and development of a clustering algorithm for large-scale data

Γούλας, Χαράλαμπος January 2015 (has links)
In the spectrum of the new, emerging information society, the convergence of computers and telecommunications has led to a continuously increasing production and storage of huge amounts of data in almost every field of human activity. If data are the recorded facts of human activity, then information consists of the rules that govern them. Society depends on, and earnestly seeks, new information; all that remains is its discovery. The field of computer science that deals with the systematic analysis of data in order to extract useful knowledge is called machine learning. In this light, this thesis discusses machine learning as a hope of scientists to elucidate the structures that govern data and to discover and understand the rules that "move" the natural world.
Firstly, a general description of machine learning as one of the main components of artificial intelligence is given, together with a variety of problems to which machine learning can offer solutions and a brief historical overview of its progress. Secondly, a more detailed description of machine learning is presented, drawing on extensive literature, diagrams and working examples of its major research areas: supervised learning (decision trees, neural networks), unsupervised learning (clustering algorithms) and more specialized forms such as semi-supervised machine learning and genetic algorithms. In addition, a new probabilistic clustering algorithm is designed and implemented, which is a hybrid of a hierarchical clustering algorithm and a partitioning algorithm. The algorithm was tested on a number of different datasets, achieving encouraging results compared to well-known algorithms such as k-means and single-linkage. More specifically, the algorithm constructs clusters that are, in the majority of cases, more homogeneous than those produced by the algorithms above, while its most important advantage is that it needs no parameter k to operate. Finally, suggestions are made both for further improving the algorithm and for developing new techniques and methods in keeping with current market trends and oriented to the demanding needs of the new, emerging information society.
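The abstract leaves the hybrid algorithm's internals unspecified. As a rough illustration of how a clustering method can operate without a user-supplied k, the sketch below cuts a hierarchical single-linkage dendrogram at a distance threshold, so the number of clusters emerges from the data. The threshold value and synthetic data are invented for the example; this is not the thesis algorithm itself.

```python
# Hedged sketch: clustering without a user-supplied k, by cutting a
# hierarchical (single-linkage) dendrogram at a distance threshold.
# Not the thesis's hybrid algorithm; it only illustrates the k-free idea.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),        # two synthetic blobs
               rng.normal(3, 0.3, (50, 2))])

Z = linkage(pdist(X), method="single")              # hierarchical stage
labels = fcluster(Z, t=1.0, criterion="distance")   # cut by distance, not by k
print("clusters found:", len(set(labels)))          # expected: 2
```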
182

Enhancements of pre-processing, analysis and presentation techniques in web log mining

Pabarškaitė, Židrina 13 July 2009 (has links)
As the Internet is becoming an important part of our life, more attention is paid to information quality and to how it is displayed to the user. The research area of this work is web data analysis and methods for processing such data. This knowledge can be extracted from web servers' data (log files), in which all users' navigational patterns are recorded. The research object of the dissertation is the web log data mining process. General topics related to this object are: web log data preparation methods, data mining algorithms for prediction and classification tasks, and web text mining. The key target of the thesis is to develop methods that improve the knowledge discovery steps in mining web log data and reveal new opportunities to the data analyst. While performing web log analysis, it was discovered that insufficient attention has been paid to the web log data cleaning process. By reducing the number of redundant records, the data mining process becomes much more effective and faster. Therefore a new, original cleaning framework was introduced which retains only the records that correspond to real user clicks. People tend to understand technical information better if it resembles human language; it is therefore advantageous to use decision trees for mining web log data, as they generate web usage patterns in the form of rules which are understandable to humans. However, it was discovered that users' browsing history lengths differ, therefore specific data... [to full text]
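As a loose sketch of the cleaning step this abstract motivates, the following heuristic keeps only log records that plausibly correspond to real user clicks, discarding requests for embedded resources and known robots. The file name, filter patterns and the combined/Common Log Format assumption are all illustrative; the dissertation's actual framework is more elaborate.

```python
# Hedged sketch of a web-log cleaning pass: keep only records that look
# like real user clicks. Assumes the combined log format; the file name
# and filter lists are illustrative, not the thesis's framework.
import re

EMBEDDED = re.compile(r"\.(gif|jpe?g|png|css|js|ico)(\?|$)", re.I)
BOTS = ("bot", "crawler", "spider")

def is_user_click(line: str) -> bool:
    """Heuristic filter for one combined-log-format record."""
    m = re.search(r'"(?:GET|POST) (\S+)', line)
    if not m or EMBEDDED.search(m.group(1)):
        return False                       # embedded resource, not a click
    agent = line.rsplit('"', 2)[-2].lower()  # user-agent field
    return not any(b in agent for b in BOTS)

with open("access.log") as f:              # hypothetical log file
    clicks = [ln for ln in f if is_user_click(ln)]
```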
183

Combining machine learning and rule-based approaches in Spanish syntactic generation

Melero Nogués, Maria Teresa 02 June 2006 (has links)
This thesis describes a Spanish Generation grammar which combines hand-written rules and machine learning techniques. The grammar belongs to a full-scale, commercial-quality Machine Translation system developed at Microsoft Research. The first part presents the grammar and the linguistic strategies it embodies. The need for robustness in the everyday, real-world use of the MT system requires an extra effort from the Generator, which is resolved by adding a Pre-Generation layer able to fix the input to Generation without contaminating the grammar rules. In the second part we explore the use of decision tree (DT) classifiers for automatically learning one of the operations that take place in the Pre-Generation component, namely lexical selection of the Spanish copula (ser or estar). We show that it is possible to infer from examples the contexts for this non-trivial linguistic phenomenon with high accuracy.
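A minimal sketch of the decision-tree approach to ser/estar selection described above, with invented context features; the real system derives its features from the MT system's analysis structures, which the abstract does not detail.

```python
# Hedged sketch: copula selection (ser vs. estar) as decision-tree
# classification. Features and examples are invented for illustration.
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_extraction import DictVectorizer

# Toy context features for "subject + copula + adjective" examples.
examples = [
    ({"adj": "alto",    "subj_animate": True,  "transient": False}, "ser"),
    ({"adj": "cansado", "subj_animate": True,  "transient": True},  "estar"),
    ({"adj": "abierto", "subj_animate": False, "transient": True},  "estar"),
    ({"adj": "grande",  "subj_animate": False, "transient": False}, "ser"),
]
X_dicts, y = zip(*examples)
vec = DictVectorizer()                     # one-hot encodes string features
X = vec.fit_transform(X_dicts)

clf = DecisionTreeClassifier().fit(X, y)
test = vec.transform({"adj": "cansado", "subj_animate": True, "transient": True})
print(clf.predict(test))                   # expected: ['estar']
```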
184

Graph matching: filtering databases of graphs using machine learning techniques

Irniger, Christophe-André January 2005 (has links)
Also published as: doctoral dissertation, University of Bern, 2005
185

Applications of knowledge discovery in quality registries: predicting recurrence of breast cancer and analyzing non-compliance with a clinical guideline

Razavi, Amir Reza, January 2007 (has links)
Dissertation (summary), Linköping: Linköpings universitet, 2007. With 5 papers appended.
186

Combining Partial Least Squares and the Gradient-Boosting Method for Soil Property Retrieval Using Visible Near-Infrared Shortwave Infrared Spectra

Liu, Lanfa, Ji, Min, Buchroithner, Manfred F. 06 June 2018 (has links)
Soil spectroscopy has seen tremendous growth as a tool for soil property characterisation, and can be used not only in the laboratory but also from space (imaging spectroscopy). Partial least squares (PLS) regression is one of the most common approaches for calibrating soil properties from soil spectra. Besides functioning as a calibration method, PLS can also be used as a dimension-reduction tool, which has scarcely been studied in soil spectroscopy. PLS components retained from high-dimensional spectral data can further be explored with the gradient-boosted decision tree (GBDT) method. Three soil sample categories were extracted from the Land Use/Land Cover Area Frame Survey (LUCAS) soil library according to the type of land cover (woodland, grassland, and cropland). First, PLS regression and GBDT were separately applied to build spectroscopic models for soil organic carbon (OC), total nitrogen content (N), and clay for each soil category. Then, PLS-derived components were used as input variables for the GBDT model. The results demonstrate that the combined PLS-GBDT approach performs better than PLS or GBDT alone. The variable importances revealed by the proposed method demonstrate that PLS is a useful dimension-reduction tool for soil spectra that retains target-related information.
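A minimal sketch of the combined PLS-GBDT pipeline described above, assuming synthetic spectra in place of the LUCAS data and illustrative hyperparameters rather than those of the paper.

```python
# Hedged sketch of the PLS-GBDT idea: use PLS components as a
# low-dimensional input to a gradient-boosted model. Synthetic "spectra"
# stand in for the LUCAS data; hyperparameters are illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 500))            # 200 spectra, 500 bands
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)

pls = PLSRegression(n_components=10).fit(X, y)
scores = pls.transform(X)                  # PLS as dimension reduction

gbdt = GradientBoostingRegressor().fit(scores, y)
print("R^2 on training data:", gbdt.score(scores, y))
```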
187

Lexicographic refinements in possibilistic sequential decision-making models

El Khalfi, Zeineb 31 October 2017 (has links)
This work contributes to possibilistic decision theory and more specifically to sequential decision-making under possibilistic uncertainty, at both the theoretical and practical levels.
Even though appealing for its ability to handle qualitative decision problems, possibilistic decision theory suffers from an important drawback: qualitative possibilistic utility criteria compare acts through min and max operators, which leads to a drowning effect. To overcome this lack of decision power, several refinements have been proposed in the literature. Lexicographic refinements are particularly appealing since they allow one to benefit from the expected-utility background while remaining "qualitative". However, these refinements are defined for non-sequential decision problems only. In this thesis, we present results on the extension of lexicographic preference relations to sequential decision problems, in particular to possibilistic decision trees and Markov decision processes. This leads to new planning algorithms that are more "decisive" than their original possibilistic counterparts. We first present optimistic and pessimistic lexicographic preference relations between policies, with and without intermediate utilities, that refine the optimistic and pessimistic qualitative utilities respectively. We prove that these new criteria satisfy the principle of Pareto efficiency as well as the property of strict monotonicity. The latter guarantees that a dynamic programming algorithm can be used to calculate lexicographically optimal policies. Considering the problem of policy optimization in possibilistic decision trees and finite-horizon Markov decision processes, we provide adaptations of the dynamic programming algorithm that calculate a lexicographically optimal policy in polynomial time. These algorithms are based on the lexicographic comparison of the matrices of trajectories associated with the sub-policies. This algorithmic work is completed by an experimental study that shows the feasibility and the interest of the proposed approach. We then prove that the lexicographic criteria still benefit from an expected-utility grounding and can be represented by infinitesimal expected utilities. The last part of our work is devoted to policy optimization in (possibly infinite) stationary Markov decision processes. We propose a value iteration algorithm for the computation of lexicographically optimal policies and extend these results to the infinite-horizon case. Since the size of the matrices increases exponentially (which is especially problematic in the infinite-horizon case), we propose an approximation algorithm which keeps only the most interesting part of each matrix of trajectories, namely the first rows and columns. Finally, we report experimental results that show the effectiveness of the algorithms based on the truncation of the matrices.
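The drowning effect and its lexicographic remedy can be shown in miniature. The sketch below compares one-dimensional utility vectors by leximax (comparison of sorted values), which refines the plain max criterion; the thesis itself works with matrices of trajectories, so this illustrates only the underlying principle, not the thesis's algorithms.

```python
# Hedged sketch of the lexicographic idea: leximax compares utility
# vectors by their sorted values, refining the "max" criterion that
# causes the drowning effect. One-dimensional toy version only.
def leximax_key(utilities):
    """Sort descending: compare best outcomes first, then second-best, ..."""
    return sorted(utilities, reverse=True)

a = [0.9, 0.1, 0.1]   # same maximal utility (0.9) as b...
b = [0.9, 0.8, 0.1]   # ...but b dominates on the second-best outcome

print(max(a) == max(b))                  # True: max cannot discriminate
print(leximax_key(b) > leximax_key(a))   # True: leximax prefers b
```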
188

Towards a hybrid model between decision trees and Galois lattices for image indexing and classification

Girard, Nathalie 05 July 2013 (has links)
Image classification is generally based on two steps, namely the extraction of the image signature followed by analysis of the extracted data; image signatures are generally numerical. Many classification models have been proposed in the literature, and the most suitable choice is often guided by classification performance and model readability. Decision trees and Galois lattices are two symbolic models known for their readability. In her thesis [Guillas 2007], Guillas used Galois lattices efficiently for image classification, and strong structural links between decision trees and Galois lattices were highlighted. The work presented in this manuscript follows on from these results and aims to define a hybrid of the two models that combines their advantages (their respective readability, the robustness of the lattice and the low memory requirements of the tree). To this end, a study of the links between the two models highlights their differences. The first difference is the type of discretization: decision trees generally use a local discretization, while Galois lattices, originally defined for binary data, use a global one.
From a study of the properties of dichotomic lattices (lattices defined after discretization), we propose a local discretization for the lattice that improves its classification performance and reduces its structural complexity. Second, the post-pruning process implemented in most decision trees aims to reduce their complexity, but also to improve their generalization performance; simplifications of the lattice structure (exponential in the size of the data in the worst case), by contrast, are motivated solely by a decrease in structural complexity. By combining these two processes, we propose a simplification of the lattice structure obtained after our local discretization, leading to a hybrid classification model that benefits from the readability of both models while being less complex than the lattice and just as efficient.
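As a hedged illustration of what "local" discretization means here: decision trees choose a cut point from the data reaching the current node, for instance by information gain, rather than fixing thresholds globally. The toy function below shows only that threshold-selection principle, not the thesis's adaptation of it to dichotomic lattices.

```python
# Hedged sketch of local discretization: pick the cut on a numeric
# attribute that maximises information gain on the labels at hand.
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_threshold(values, y):
    """Return the cut on `values` that maximises information gain on `y`."""
    best_t, best_gain = None, -1.0
    for t in np.unique(values)[:-1]:
        left, right = y[values <= t], y[values > t]
        gain = entropy(y) - (len(left) * entropy(left)
                             + len(right) * entropy(right)) / len(y)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t

values = np.array([1.0, 2.0, 3.0, 8.0, 9.0])
labels = np.array([0, 0, 0, 1, 1])
print(best_threshold(values, labels))   # expected: 3.0, the natural cut
```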
189

Comparison of statistical methods for scoring model development

Mrázková, Adéla January 2018 (has links)
The aim of this thesis is to introduce and summarize the process of scoring model development in general, and then the basic statistical approaches used for this problem, in particular logistic regression, neural networks and decision trees (random forests). An application of the described methods to a real dataset provided by PROFI CREDIT Czech, a.s. follows, including a discussion of some implementation issues and their resolution. The obtained results are discussed and compared.
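A minimal sketch of the kind of comparison the thesis performs, assuming synthetic data and default hyperparameters in place of the PROFI CREDIT dataset and the tuned models; AUC is a common scoring metric but the thesis's exact evaluation protocol is not given in the abstract.

```python
# Hedged sketch: fit the three model families on the same data and
# compare a scoring metric (AUC). Data and settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```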
190

Improving armed conflict prediction using machine learning : ViEWS+

Helle, Valeria, Negus, Andra-Stefania, Nyberg, Jakob January 2018 (has links)
Our project, ViEWS+, expands the software functionality of the Violence Early-Warning System (ViEWS). ViEWS aims to predict the probabilities of armed conflicts in the next 36 months using machine learning. Governments and policy-makers may use conflict predictions to decide where to deliver aid and resources, potentially saving lives. The predictions use conflict data gathered by ViEWS, which includes variables like past conflicts, child mortality and urban density. The large number of variables raises the need for a selection tool to remove those that are irrelevant for conflict prediction. Before our work, the stakeholders used their experience and some guesswork to pick the variables and the predictive function with its parameters. Our goals were to improve the efficiency, in terms of speed, and the correctness of the ViEWS predictions. Three steps were taken. Firstly, we made an automatic variable selection tool, which helps researchers use fewer, more relevant variables and so save time and resources. Secondly, we compared prediction functions and identified the best for the purpose of predicting conflict. Lastly, we tested how parameter values affect the performance of the chosen functions, so as to produce good predictions while also reducing execution time. The new tools improved both the execution time and the predictive correctness of the system compared to the results obtained prior to our project: it is now nine times faster than before, and its correctness has improved by a factor of three. We believe our work leads to more accurate conflict predictions, and as ViEWS has strong connections to the European Union, we hope that decision-makers can benefit from it when trying to prevent conflicts.
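A hedged sketch of what such an automatic variable-selection step might look like, ranking variables by model-based importance and keeping only the informative ones; the data, the "median" threshold and the choice of a random forest are illustrative assumptions, not the ViEWS+ implementation.

```python
# Hedged sketch of automatic variable selection: keep only features whose
# model-based importance clears a threshold. Illustrative, not ViEWS+ code.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=1000, n_features=50,
                           n_informative=8, random_state=1)

selector = SelectFromModel(RandomForestClassifier(random_state=1),
                           threshold="median").fit(X, y)
X_reduced = selector.transform(X)          # drop low-importance variables
print(f"kept {X_reduced.shape[1]} of {X.shape[1]} variables")
```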
