41

[en] E-AUTOMFIS: INTERPRETABLE MODEL FOR TIME SERIES FORECASTING USING ENSEMBLE LEARNING OF FUZZY INFERENCE SYSTEM / [pt] E-AUTOMFIS: MODELO INTERPRETÁVEL PARA PREVISÃO DE SÉRIES MULTIVARIADAS USANDO COMITÊS DE SISTEMAS DE INFERÊNCIA FUZZY

THIAGO MEDEIROS CARVALHO 17 June 2021 (has links)
[pt] Por definição, a série temporal representa o comportamento de uma variável em função do tempo. Para o processo de previsão de séries, o modelo deve ser capaz de aprender a dinâmica temporal das variáveis para obter valores futuros. Contudo, prever séries temporais com exatidão é uma tarefa que vai além de escolher o modelo mais complexo, e portanto a etapa de análise é um processo fundamental para orientar o ajuste do modelo. Especificamente em problemas multivariados, o AutoMFIS é um modelo baseado na lógica fuzzy, desenvolvido para introduzir uma explicabilidade dos resultados através de regras semanticamente compreensíveis. Mesmo com características promissoras e positivas, este sistema possui limitações que tornam sua utilização impraticável em problemas com bases de dados com alta dimensionalidade. E com a presença cada vez maior de bases de dados mais volumosas, é necessário que a síntese automática de sistemas fuzzy seja adaptada para abranger essa nova classe de problemas de previsão. Por conta desta necessidade, a presente dissertação propõe a extensão do modelo AutoMFIS para a previsão de séries temporais com alta dimensionalidade, chamado de e-AutoMFIS. Apresenta-se uma nova metodologia, baseada em comitê de previsores, para o aprendizado distribuído de geração de regras fuzzy. Neste trabalho, são descritas as características importantes do modelo proposto, salientando as modificações realizadas para aprimorar tanto a previsão quanto a interpretabilidade do sistema. Além disso, também é avaliado o seu desempenho em problemas reais, comparando-se a acurácia dos resultados com as de outras técnicas descritas na literatura. Por fim, em cada problema selecionado também é considerado o aspecto da interpretabilidade, discutindo-se os critérios utilizados para a análise de explicabilidade. / [en] By definition, the time series represents the behavior of a variable as a time function. For the series forecasting process, the model must be able to learn the temporal dynamics of the variables in order to obtain consistent future values. However, an accurate time series prediction is a task that goes beyond choosing the most complex (or promising) model that is applicable to the type of problem, and therefore the analysis step is a fundamental procedure to guide the adaptation of a model. Specifically, in multivariate problems, AutoMFIS is a model based on fuzzy logic, developed not only to give accurate forecasts but also to introduce the explainability of results through semantically understandable rules. Even with such promising characteristics, this system has shown practical limitations in problems that involve datasets of high dimensionality. With the increasing demand for methods that deal with large datasets, approaches for the automatic synthesis of fuzzy systems must be adapted to cover this new class of forecasting problems. This dissertation proposes an extension of the AutoMFIS model for time series forecasting with high-dimensional data, named e-AutoMFIS. Based on ensemble learning theory, this new methodology applies distributed learning to generate fuzzy rules. The main characteristics of the proposed model are described, highlighting the changes made to improve both the accuracy and the interpretability of the system. The proposed model is also evaluated in different case studies, in which the results are compared in terms of accuracy against the results produced by other methods in the literature.
In addition, in each selected problem, the aspect of interpretability is also assessed, which is essential for explainability evaluation.
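
A minimal Python sketch of the committee-of-fuzzy-forecasters idea summarized above is given below, assuming a drastically simplified one-pass (Wang-Mendel-style) rule learner and random lag subsets per committee member. Every class name and parameter here is hypothetical, and this is not the AutoMFIS/e-AutoMFIS implementation from the dissertation.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

class TinyFuzzyForecaster:
    """Toy one-pass fuzzy rule learner for one-step-ahead forecasting (hypothetical)."""
    def __init__(self, n_sets=5):
        self.n_sets = n_sets

    def _partition(self, lo, hi):
        centers = np.linspace(lo, hi, self.n_sets)
        step = centers[1] - centers[0]
        return [(c - step, c, c + step) for c in centers]

    def fit(self, X, y):
        self.sets = self._partition(min(X.min(), y.min()), max(X.max(), y.max()))
        rules = {}  # antecedent (tuple of set indices) -> (consequent set index, degree)
        for xi, yi in zip(X, y):
            ante = tuple(int(np.argmax([tri(v, *s) for s in self.sets])) for v in xi)
            cons = int(np.argmax([tri(yi, *s) for s in self.sets]))
            deg = np.prod([tri(v, *self.sets[a]) for v, a in zip(xi, ante)])
            if ante not in rules or deg > rules[ante][1]:
                rules[ante] = (cons, deg)   # keep the strongest rule per antecedent
        self.rules = rules
        return self

    def predict(self, X):
        out = []
        for xi in X:
            num = den = 0.0
            for ante, (cons, _) in self.rules.items():
                w = np.prod([tri(v, *self.sets[a]) for v, a in zip(xi, ante)])
                num += w * self.sets[cons][1]   # weighted consequent set center
                den += w
            out.append(num / den if den > 0 else np.mean([s[1] for s in self.sets]))
        return np.array(out)

# Committee ("ensemble") of fuzzy forecasters, each seeing a random subset of the lags,
# mimicking the distributed rule learning idea at a very small scale.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)
lags = 6
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]

members, subsets = [], []
for _ in range(5):
    cols = rng.choice(lags, size=3, replace=False)
    members.append(TinyFuzzyForecaster().fit(X[:-50][:, cols], y[:-50]))
    subsets.append(cols)

pred = np.mean([m.predict(X[-50:][:, c]) for m, c in zip(members, subsets)], axis=0)
print("test RMSE:", np.sqrt(np.mean((pred - y[-50:]) ** 2)))
```

Each member's rule base remains a list of readable IF-THEN rules over triangular fuzzy sets, which is what gives this family of models its interpretability even when the members' forecasts are aggregated.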
42

Explaining Automated Decisions in Practice : Insights from the Swedish Credit Scoring Industry / Att förklara utfall av AI-system för konsumenter : Insikter från den svenska kreditupplysningsindustrin

Matz, Filip, Luo, Yuxiang January 2021 (has links)
The field of explainable artificial intelligence (XAI) has gained momentum in recent years following the increased use of AI systems across industries leading to bias, discrimination, and data security concerns. Several conceptual frameworks for how to reach AI systems that are fair, transparent, and understandable have been proposed, as well as a number of technical solutions improving some of these aspects in a research context. However, there is still a lack of studies examining the implementation of these concepts and techniques in practice. This research aims to bridge the gap between prominent theory within the area and practical implementation, exploring the implementation and evaluation of XAI models in the Swedish credit scoring industry, and proposes a three-step framework for the implementation of local explanations in practice. The research methods used consisted of a case study with the model development at UC AB as a subject and an experiment evaluating the consumers' levels of trust and system understanding as well as the usefulness, persuasive power, and usability of the explanation for three different explanation prototypes developed. The framework proposed was validated by the case study and highlighted a number of key challenges and trade-offs present when implementing XAI in practice. Moreover, the evaluation of the XAI prototypes showed that the majority of consumers prefer rule-based explanations, but that preferences for explanations are still dependent on the individual consumer. Recommended future research endeavors include studying a long-term XAI project in which the models can be evaluated by the open market, and the combination of different XAI methods in reaching a more personalized explanation for the consumer. / Under senare år har antalet AI-implementationer stadigt ökat i flera industrier. Dessa implementationer har visat flera utmaningar kring nuvarande AI-system, specifikt gällande diskriminering, otydlighet och datasäkerhet, vilket lett till ett intresse för förklarbar artificiell intelligens (XAI). XAI syftar till att utveckla AI-system som är rättvisa, transparenta och begripliga. Flera konceptuella ramverk har introducerats för XAI som presenterar etiska såväl som politiska perspektiv och målbilder. Dessutom har tekniska metoder utvecklats som gjort framsteg mot förklarbarhet i forskningskontext. Däremot saknas det fortfarande studier som undersöker implementationer av dessa koncept och tekniker i praktiken. Denna studie syftar till att överbrygga klyftan mellan den senaste teorin inom området och praktiken genom en fallstudie av ett företag i den svenska kreditupplysningsindustrin. Detta genom att föreslå ett ramverk för implementation av lokala förklaringar i praktiken och genom att utveckla tre förklaringsprototyper. Rapporten utvärderar även prototyperna med konsumenter på följande dimensioner: tillit, systemförståelse, användbarhet och övertalningsstyrka. Det föreslagna ramverket validerades genom fallstudien och belyste ett antal utmaningar och avvägningar som förekommer när XAI-system utvecklas för användning i praktiken. Utöver detta visar utvärderingen av prototyperna att majoriteten av konsumenter föredrar regelbaserade förklaringar men indikerar även att preferenser mellan konsumenter varierar. Rekommendationer för framtida forskning är dels en längre studie, vari en XAI-modell introduceras på och utvärderas av den fria marknaden, dels forskning som kombinerar olika XAI-metoder för att generera mer personliga förklaringar för konsumenter.
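
As a rough, hypothetical illustration of what a rule-based, consumer-facing explanation of a credit decision can look like (the thesis prototypes and the UC AB models themselves are not reproduced here), the sketch below turns the local contributions of a simple scorecard-style model into plain "reason" statements. All feature names, thresholds and data are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: three features a credit model might use.
rng = np.random.default_rng(1)
feature_names = ["payment_remarks", "debt_to_income", "years_of_credit_history"]
X = np.column_stack([
    rng.poisson(0.3, 1000),          # number of payment remarks
    rng.uniform(0.0, 1.5, 1000),     # debt-to-income ratio
    rng.uniform(0, 30, 1000),        # credit history length (years)
])
y = ((X[:, 0] > 0) | (X[:, 1] > 1.0)).astype(int)   # 1 = rejected (toy rule)

model = LogisticRegression().fit(X, y)

def reason_codes(x, top_k=2):
    """Rule-style local explanation: which features pushed this applicant's
    score most strongly towards rejection, relative to the average applicant."""
    contrib = model.coef_[0] * (x - X.mean(axis=0))
    order = np.argsort(contrib)[::-1]          # most rejection-driving first
    reasons = []
    for i in order[:top_k]:
        if contrib[i] > 0:
            reasons.append(f"IF {feature_names[i]} = {x[i]:.2f} "
                           f"(average is {X.mean(axis=0)[i]:.2f}) THEN risk increases")
    return reasons

applicant = np.array([2.0, 1.2, 3.0])
print("P(rejection):", model.predict_proba([applicant])[0, 1].round(2))
for r in reason_codes(applicant):
    print(r)
```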
43

Methodology Development for Topology Optimization of Power Transfer Unit Housing Structures / Metodutveckling för topologioptimering av växellådshusstrukturer i kraftöverföringsenheter

Palanisamy, Povendhan January 2020 (has links)
Simulation driven design is a method and process that has been developed over many years, and with today’s advanced software, the possibility to embed simulation into the design process has become a reality. The advantages of using simulation driven design in the product development process are well known: compared to a more traditional design process, the simulation driven design process can give the user the possibility to explore, optimize and design products with reduced lead time. One of the methods applied in simulation driven design is topology optimization (structural optimization). Topology optimization is something that GKN uses in the design process. Due to the complexity of the products GKN designs and manufactures, the output from the topology optimization lacks good design interpretability, and the design process requires a lot of time and effort. The purpose of the thesis is to explore different simulation tools used for topology optimization and to improve the methodology and process towards higher design interpretability for a static topology optimization. This requires a good understanding of the component and the product development process. It is imperative that the topology result has high design interpretability and that the visualization of the result shows the formation of clear rib structures. The software packages used for performing topology optimization in this thesis are Inspire, SimLab, HyperMesh, and OptiStruct (HyperWorks suite). Static topology optimization is conducted, and manufacturing constraints for the casting process are considered. The methodology developed is robust for similar gearbox housing structures, and the process is set up to be efficient. The proposed method is verified by implementing it on a housing structure. The resulting concept from the topology optimization is deemed to have higher design interpretability, which improves knowledge transfer in the design process when compared to the current topology results. The weight of the product is reduced, and a more optimum design is reached with fewer iterations. / Simuleringsdriven design är en metod och process som har utvecklats i många år, och med dagens avancerade programvaror har möjligheten att få in simulering direkt i designprocessen blivit verklighet. Fördelarna med att använda simuleringsdriven design i produktutvecklingsprocessen är välkända och jämfört med en mer traditionell designprocess kan den simuleringsdrivna designprocessen ge användaren möjlighet att utforska, optimera och designa produkter med reducerade ledtider som följd. En av de metoder som tillämpas i simuleringsdriven design är användning av topologioptimering (strukturoptimering). Topologioptimering är något som GKN använder i designprocessen. På grund av komplexiteten hos produkterna GKN designar och tillverkar kräver designprocessen mycket ingenjörsarbete och tid. Produktionen har också problem med att tolka topologioptimeringsresultaten. Syftet med avhandlingen är att utforska olika simuleringsverktyg som används för topologioptimering och förbättra metodiken och processen för att öka designtolkningen av en statisk topologioptimering. Detta kräver en god förståelse för komponenten och produktutvecklingsprocessen. För att förbättra osäkerheterna i resultaten från optimeringen är det nödvändigt att dessa resultat är lätta att tolka, och visualiseringen av resultaten ska vara tydlig och visa hur lastvägarna går och därmed var ribbor ska läggas.
Programvarorna som användes för att utföra topologioptimering i denna avhandling är Inspire, SimLab, HyperMesh och OptiStruct (HyperWorks suite). Statisk topologioptimering är utförd och tillverkningsbegränsningar för gjutningsprocesser har inkluderats.  Den metod som utvecklats är robust för liknande växellådshusstrukturer, och processen som föreslås är mera effektiv. Den föreslagna metoden har verifierats genom att den tillämpats för ett växellådshus.  Det resulterande topologikonceptet antas ha en bättre designtolkningsbarhet, vilket möjliggör en förbättrad kommunikation och kunskapsöverföring i konstruktionsprocessen, jämfört med den nuvarande processen. Produktens vikt minskas, och en mer optimal design nås med färre iterationer.
44

Hybrid Ensemble Methods: Interpretable Machine Learning for High-Risk Areas / Hybrida ensemblemetoder: Tolkningsbar maskininlärning för högriskområden

Ulvklo, Maria January 2021 (has links)
Despite the access to enormous amounts of data, there is a holdback in the usage of machine learning in the cyber security field due to the lack of interpretability of "black-box" models and due to heterogeneous data. This project presents a method that provides insights into the decision-making process in cyber security classification. Hybrid Ensemble Methods (HEMs) use several weak learners trained on single data features and combine their outputs in a neural network. In this thesis, HEM performs phishing-website classification with high accuracy, along with interpretability. The ensemble of predictions boosts the accuracy by 8%, giving a final prediction accuracy of 93%, which indicates that HEM is able to reconstruct correlations between the features after the interpretability stage. HEM provides information about which weak learners, trained on specific information, are valuable for the classification. No samples were disregarded despite missing features. Cross-validation was performed across 3 random seeds and the results proved stable, with a variance of 0.22%. An important finding was that the method's performance did not significantly change when disregarding the worst of the weak learners, meaning that adding models trained on bad data won't sabotage the prediction. The findings of these investigations indicate that Hybrid Ensemble Methods are robust and flexible. This thesis represents an attempt to construct a smarter way of making predictions, where the usage of several forms of information can be combined in an artificially intelligent way. / Trots tillgången till enorma mängder data finns det ett bakslag i användningen av maskininlärning inom cybersäkerhetsområdet på grund av bristen på tolkning av ”Blackbox”-modeller och på grund av heterogen data. Detta projekt presenterar en metod som ger insikt i beslutsprocessen i klassificering inom cybersäkerhet. Hybrid Ensemble Methods (HEMs) använder flera svaga maskininlärningsmodeller som är tränade på enstaka datafunktioner och kombinerar resultatet av dessa i ett neuralt nätverk. I denna rapport utför HEM klassificering av nätfiskewebbplatser med hög noggrannhet, men med vinsten av tolkningsbarhet. Sammansättandet av förutsägelser ökar noggrannheten med 8 %, vilket ger en slutgiltig prediktionsnoggrannhet på 93 %, vilket indikerar att HEM kan rekonstruera korrelationer mellan funktionerna efter tolkbarhetsstadiet. HEM ger information om vilka svaga maskininlärningsmodeller, som tränats på specifik information, är värdefulla för klassificeringen. Inga datapunkter ignorerades trots saknade attribut. Korsvalidering gjordes över 3 slumpmässiga dragningar och resultaten visade sig vara stabila med en varians på 0.22 %. Ett viktigt resultat var att metodens prestanda inte förändrades nämnvärt när man bortsåg från de sämsta av de svaga modellerna, vilket innebär att modeller tränade på dålig data inte kommer att sabotera förutsägelsen. Resultaten av dessa undersökningar indikerar att Hybrid Ensemble-metoder är robusta och flexibla. Detta projekt representerar ett försök att konstruera ett smartare sätt att göra klassificeringar, där användningen av flera former av information kan kombineras på ett artificiellt intelligent sätt.
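
The thesis code is not part of this record; the following scikit-learn sketch is one plausible reading of the architecture described above: one shallow, interpretable learner per single feature, whose per-feature probabilities are then combined by a small neural network. The dataset and model choices are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Placeholder data standing in for phishing-website features.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One weak learner per single feature (shallow trees keep each one interpretable).
weak = [DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr[:, [i]], y_tr)
        for i in range(X.shape[1])]

def stack(X_):
    """Per-feature probability of the positive class, one column per weak learner."""
    return np.column_stack([w.predict_proba(X_[:, [i]])[:, 1] for i, w in enumerate(weak)])

# The combiner network learns how to weigh and recombine the weak learners' votes.
combiner = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
combiner.fit(stack(X_tr), y_tr)

weak_acc = np.mean([accuracy_score(y_te, w.predict(X_te[:, [i]])) for i, w in enumerate(weak)])
print("mean single-feature accuracy:", round(weak_acc, 3))
print("ensemble accuracy:          ", round(accuracy_score(y_te, combiner.predict(stack(X_te))), 3))
```

Dropping individual columns from `stack` makes it easy to check, as the abstract does, whether removing the weakest single-feature learners changes the combined accuracy.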
45

Geração genética multiobjetivo de sistemas fuzzy usando a abordagem iterativa [Multi-objective genetic generation of fuzzy systems using the iterative approach]

Cárdenas, Edward Hinojosa 28 June 2011 (has links)
The goal of this work is to study, expand and evaluate the use of multi-objective genetic algorithms and the iterative rule learning approach in fuzzy system generation, especially in fuzzy rule-based systems, both in automatic fuzzy rule generation from datasets and in fuzzy set optimization. This work investigates the use of multi-objective genetic algorithms with a focus on the trade-off between accuracy and interpretability, considered contradictory objectives in the representation of fuzzy systems. With this purpose, we propose and implement an evolutionary multi-objective genetic model composed of three stages. In the first stage, uniformly distributed fuzzy sets are created. In the second stage, the rule base is generated by using an iterative rule learning approach and a multi-objective genetic algorithm. Finally, the fuzzy sets created in the first stage are optimized through a multi-objective genetic algorithm. The proposed model was evaluated with a number of benchmark datasets and the results were compared to three other methods found in the literature. The results obtained with the optimization of the fuzzy sets were compared to the results of another fuzzy set optimizer found in the literature. Statistical comparison methods usually applied in similar contexts show that the proposed method has an improved classification rate and interpretability in comparison with the other methods. / O objetivo deste trabalho é estudar, expandir e avaliar o uso dos algoritmos genéticos multiobjetivo e a abordagem iterativa na geração de sistemas fuzzy, mais especificamente para sistemas fuzzy baseados em regras, tanto na geração automática da base de regras fuzzy a partir de conjuntos de dados, como a otimização dos conjuntos fuzzy. Esse trabalho investiga o uso dos algoritmos genéticos multiobjetivo com enfoque na questão de balanceamento entre precisão e interpretabilidade, ambos considerados contraditórios entre si na representação de sistemas fuzzy. Com este intuito, é proposto e implementado um modelo evolutivo multiobjetivo genético composto por três etapas. Na primeira etapa são criados os conjuntos fuzzy uniformemente distribuídos. Na segunda etapa é tratada a geração da base de regras usando a abordagem iterativa e um algoritmo genético multiobjetivo. Por fim, na terceira etapa os conjuntos fuzzy criados na primeira etapa são otimizados mediante um algoritmo genético multiobjetivo. O modelo desenvolvido foi avaliado em diversos conjuntos de dados benchmark e os resultados obtidos foram comparados com outros três métodos, que geram regras de classificação, encontrados na literatura. Os resultados obtidos após a otimização dos conjuntos fuzzy foram comparados com resultados de outro otimizador de conjuntos fuzzy encontrado na literatura. Métodos estatísticos de comparação usualmente aplicados em contextos semelhantes mostram uma melhor taxa de classificação e interpretabilidade do método proposto com relação a outros métodos.
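
The dissertation's three-stage algorithm is only described in prose here; the Python sketch below illustrates stage one (uniformly distributed triangular fuzzy sets) and the two conflicting objectives (classification accuracy versus number of rule conditions) that a multi-objective genetic algorithm would trade off. The rule encoding and the hand-written rule base are simplified placeholders, not the actual method.

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

def tri(x, a, b, c):
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def uniform_partitions(values, n_sets=3):
    """Stage 1: n_sets uniformly distributed triangular fuzzy sets over one attribute."""
    centers = np.linspace(values.min(), values.max(), n_sets)
    step = centers[1] - centers[0]
    return [(c - step, c, c + step) for c in centers]

partitions = [uniform_partitions(X[:, j]) for j in range(X.shape[1])]

# A rule is (antecedent, class): one fuzzy-set index per attribute, None = "don't care".
rule_base = [
    ((None, None, 0, 0), 0),   # IF petal length is LOW AND petal width is LOW THEN setosa
    ((None, None, 1, 1), 1),   # medium/medium -> versicolor
    ((None, None, 2, 2), 2),   # high/high -> virginica
]

def firing(rule, x):
    ante, _ = rule
    degs = [tri(x[j], *partitions[j][s]) for j, s in enumerate(ante) if s is not None]
    return min(degs) if degs else 0.0

def objectives(rule_base):
    """Two conflicting objectives a multi-objective GA would optimize:
    maximize accuracy, minimize the number of antecedent conditions."""
    preds = [max(rule_base, key=lambda r: firing(r, x))[1] for x in X]
    accuracy = np.mean(np.array(preds) == y)
    conditions = sum(sum(s is not None for s in r[0]) for r in rule_base)
    return accuracy, conditions

print("accuracy=%.3f  conditions=%d" % objectives(rule_base))
```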
46

Learning discrete word embeddings to achieve better interpretability and processing efficiency

Beland-Leblanc, Samuel 12 1900 (has links)
L’omniprésente utilisation des plongements de mot dans le traitement des langues naturelles est la preuve de leur utilité et de leur capacité d’adaptation à une multitude de tâches. Cependant, leur nature continue est une importante limite en terme de calculs, de stockage en mémoire et d’interprétation. Dans ce travail de recherche, nous proposons une méthode pour apprendre directement des plongements de mot discrets. Notre modèle est une adaptation d’une nouvelle méthode de recherche pour base de données avec des techniques dernier cri en traitement des langues naturelles comme les Transformers et les LSTM. En plus d’obtenir des plongements nécessitant une fraction des ressources informatiques nécessaires à leur stockage et leur traitement, nos expérimentations suggèrent fortement que nos représentations apprennent des unités de base pour le sens dans l’espace latent qui sont analogues à des morphèmes. Nous appelons ces unités des sememes, qui, de l’anglais semantic morphemes, veut dire morphèmes sémantiques. Nous montrons que notre modèle a un grand potentiel de généralisation et qu’il produit des représentations latentes montrant de fortes relations sémantiques et conceptuelles entre les mots apparentés. / The ubiquitous use of word embeddings in Natural Language Processing is proof of their usefulness and adaptivity to a multitude of tasks. However, their continuous nature is prohibitive in terms of computation, storage and interpretation. In this work, we propose a method of learning discrete word embeddings directly. The model is an adaptation of a novel database searching method using state of the art natural language processing techniques like Transformers and LSTM. On top of obtaining embeddings requiring a fraction of the resources to store and process, our experiments strongly suggest that our representations learn basic units of meaning in latent space akin to lexical morphemes. We call these units sememes, i.e., semantic morphemes. We demonstrate that our model has a great generalization potential and outputs representations showing strong semantic and conceptual relations between related words.
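
The thesis model itself (Transformer/LSTM-based and inspired by a database-search method) is not reproduced here; the PyTorch sketch below only makes the notion of a discrete word embedding concrete, using a common codebook-plus-Gumbel-softmax construction so that each word is ultimately stored as a handful of small integers. All sizes and names are illustrative, not taken from the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteEmbedding(nn.Module):
    """Each word picks one code from each of `n_books` codebooks; its embedding is the
    sum of the selected code vectors. After training, a word can be stored as
    `n_books` small integers (its code indices) instead of a dense float vector."""
    def __init__(self, vocab_size, n_books=8, codes_per_book=32, dim=64):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(vocab_size, n_books, codes_per_book))
        self.codebooks = nn.Parameter(torch.randn(n_books, codes_per_book, dim))

    def forward(self, word_ids, tau=1.0):
        logits = self.logits[word_ids]                       # (batch, books, codes)
        one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)
        # (batch, books, codes) x (books, codes, dim) -> (batch, books, dim) -> sum over books
        return torch.einsum("bkc,kcd->bkd", one_hot, self.codebooks).sum(dim=1)

    def codes(self, word_ids):
        """The discrete representation actually kept after training."""
        return self.logits[word_ids].argmax(dim=-1)          # (batch, n_books) integers

# Tiny usage sketch: the embeddings stay differentiable (straight-through estimator),
# so they can be trained end to end under any downstream loss.
emb = DiscreteEmbedding(vocab_size=1000)
ids = torch.randint(0, 1000, (16,))
print(emb(ids).shape, emb.codes(ids).shape)   # torch.Size([16, 64]) torch.Size([16, 8])
```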
47

Utvärdering av den upplevda användbarheten hos CySeMoL och EAAT med hjälp av ramverk för ändamålet och ISO/IEC 25010:2011

Frost, Per January 2013 (has links)
This report describes a study aimed at uncovering flaws and finding potential improvements when the modelling tool EAAT is used in conjunction with the modelling language CySeMoL. The study was performed by developing a framework and applying it to CySeMoL and EAAT in real-life network contexts. The framework was developed in order to increase the number of flaws uncovered as well as to gather potential improvements to both EAAT and CySeMoL. The basis of the framework is a modified version of the Quality in use model from the ISO/IEC 25010:2011 standard. Onto the characteristics and sub-characteristics of this modified model, different values for measuring usability were attached. The purpose of these values is to measure usability from the perspectives of both creating and interpreting models. Furthermore, these values are based on several different sources on how to measure usability. The complete contents of the framework and the underlying ideas upon which it is based are presented in this report. The framework in this study was designed so that it can be used universally with any modelling language in conjunction with a modelling tool. Its design is also not limited to the field of computer security and computer networks, although that is the intended context of CySeMoL as well as the context described in this report. However, utilization outside the intended area of usage will most likely require some modifications in order to work in a fully satisfying way. Several flaws were uncovered regarding the usability of CySeMoL and EAAT, accompanied by several recommendations on how to improve both CySeMoL and EAAT. Because of the outline of the framework, the most severe flaws have been identified and recommendations on how to rectify these shortcomings have been suggested.
48

Wavebender GAN : Deep architecture for high-quality and controllable speech synthesis through interpretable features and exchangeable neural synthesizers / Wavebender GAN : Djup arkitektur för kontrollerbar talsyntes genom tolkningsbara attribut och utbytbara neurala syntessystem

Döhler Beck, Gustavo Teodoro January 2021 (has links)
Modeling human speech is a challenging task that originally required a coalition between phoneticians and speech engineers. Yet the latter, disengaged from phoneticians, have strived for ever more natural speech synthesis without an awareness of speech modelling, relying instead on data-driven and ever-growing deep learning models. In response to decades of detachment between phoneticians and speech engineers, this thesis presents a deep learning architecture, dubbed Wavebender GAN, that predicts mel-spectrograms which are then processed by a vocoder, HiFi-GAN, to synthesize speech. Wavebender GAN pushes for progress in both speech science and technology, allowing phoneticians to manipulate stimuli and test phonological models supported by high-quality synthesized speech generated through interpretable low-level signal properties. This work sets a new stage of cooperation between phoneticians and speech engineers. / Att modellera mänskligt tal är en utmanande uppgift som ursprungligen krävde en samverkan mellan fonetiker och taltekniker. De senare har dock, utan att vara kopplade till fonetikerna, strävat efter en allt mer naturlig talsyntes i avsaknad av en djup medvetenhet om talmodellering, på grund av datadrivna och ständigt växande modeller för djupinlärning. Med anledning av decennier av distansering mellan fonetiker och taltekniker presenteras i denna avhandling en arkitektur för djupinlärning, kallad Wavebender GAN, som förutsäger mel-spektrogram som tas emot av en vocoder, HiFi-GAN, för att syntetisera tal. Wavebender GAN driver på för framsteg inom både talvetenskap och teknik, vilket gör det möjligt för fonetiker att manipulera stimuli och testa fonologiska modeller som stöds av högkvalitativt syntetiserat tal som genereras genom tolkningsbara signalegenskaper på låg nivå. Detta arbete inleder en ny era av samarbete för fonetiker och taltekniker.
49

Automatic Classification of Full- and Reduced-Lead Electrocardiograms Using Morphological Feature Extraction

Hammer, Alexander, Scherpf, Matthieu, Ernst, Hannes, Weiß, Jonas, Schwensow, Daniel, Schmidt, Martin 26 August 2022 (has links)
Cardiovascular diseases are the global leading cause of death. Automated electrocardiogram (ECG) analysis can support clinicians to identify abnormal excitation of the heart and prevent premature cardiovascular death. An explainable classification is particularly important for support systems. Our contribution to the PhysioNet/CinC Challenge 2021 (team name: ibmtPeakyFinders) therefore pursues an approach that is based on interpretable features to be as explainable as possible. To meet the challenge goal of developing an algorithm that works for both 12-lead and reduced lead ECGs, we processed each lead separately. We focused on signal processing techniques based on template delineation that yield the template's fiducial points to take the ECG waveform morphology into account. In addition to beat intervals and amplitudes obtained from the template, various heart rate variability and QT interval variability features were extracted and supplemented by signal quality indices. Our classification approach utilized a decision tree ensemble in a one-vs-rest approach. The model parameters were determined using an extensive grid search. Our approach achieved challenge scores of 0.47, 0.47, 0.34, 0.40, and 0.41 on hidden 12-, 6-, 4-, 3-, and 2-lead test sets, respectively, which corresponds to the ranks 12, 10, 23, 18, and 16 out of 39 teams.
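
The challenge entry's code is not included in this record; as a small illustration of the classification stage it describes (a decision-tree ensemble in a one-vs-rest scheme with grid-searched parameters), one could write something like the following with scikit-learn. The feature matrix is random placeholder data standing in for the per-lead interval, amplitude, HRV/QTV and signal-quality features, and macro F1 is only a stand-in for the actual challenge score.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder features: rows = recordings, columns = per-lead morphological/HRV/quality features.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 40))
y = rng.integers(0, 5, size=600)           # stand-in for cardiac diagnosis labels

# One binary tree ensemble per diagnosis (one-vs-rest), as described in the abstract.
base = OneVsRestClassifier(GradientBoostingClassifier(random_state=0))
grid = {
    "estimator__n_estimators": [50, 100],
    "estimator__max_depth": [2, 3],
    "estimator__learning_rate": [0.05, 0.1],
}
search = GridSearchCV(base, grid, cv=3, scoring="f1_macro", n_jobs=-1)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("cross-validated macro F1:", round(search.best_score_, 3))
```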
50

Interpreting Multivariate Time Series for an Organization Health Platform

Saluja, Rohit January 2020 (has links)
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning to ensure that machine learning models are reliable, fair, and can be held accountable for their decision-making process. Moreover, in most real-world problems just making predictions using machine learning algorithms only solves the problem partially. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability in time series is still relatively unexplored as compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to be able to quantify the quality of explanations produced after interpreting machine learning models. For this reason, evaluation of interpretability is extremely important. The evaluation of interpretability for models built on time series seems completely unexplored in research circles. This thesis work focused on achieving and evaluating model-agnostic interpretability in a time series forecasting problem. The use case discussed in this thesis work focused on finding a solution to a problem faced by a digital consultancy company. The digital consultancy wants to take a data-driven approach to understand the effect of various sales-related activities in the company on the sales deals closed by the company. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. The interpretability was achieved using two novel model-agnostic interpretability techniques, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). The explanations produced after achieving interpretability were evaluated using human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay humans in understanding the predictions made by the machine learning model. The human evaluation study results also indicated that LIME and SHAP explanations were almost equally understandable, with LIME performing better but with a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, this work can offer a very good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem. / Maskininlärningsbaserade system blir snabbt populära eftersom man har insett att maskiner är effektivare än människor när det gäller att utföra vissa uppgifter. Även om maskininlärningsalgoritmer är extremt populära, är de också mycket bokstavliga. Detta har lett till en enorm forskningsökning inom området tolkbarhet i maskininlärning för att säkerställa att maskininlärningsmodeller är tillförlitliga, rättvisa och kan hållas ansvariga för deras beslutsprocess. Dessutom löser enbart förutsägelser med maskininlärningsalgoritmer i de flesta verkliga problem bara problemet delvis.
Tidsserier är en av de mest populära och viktiga datatyperna på grund av dess dominerande närvaro inom affärsverksamhet, ekonomi och teknik. Trots detta är tolkningsförmågan i tidsserier fortfarande relativt outforskad jämfört med tabell-, text- och bilddata. Med den växande forskningen inom området tolkbarhet inom maskininlärning finns det också ett stort behov av att kunna kvantifiera kvaliteten på förklaringar som produceras efter tolkning av maskininlärningsmodeller. Av denna anledning är utvärdering av tolkbarhet extremt viktig. Utvärderingen av tolkbarhet för modeller som bygger på tidsserier verkar helt outforskad i forskarkretsar. Detta uppsatsarbete fokuserar på att uppnå och utvärdera agnostisk modelltolkbarhet i ett tidsserieprognosproblem.  Fokus ligger i att hitta lösningen på ett problem som ett digitalt konsultföretag står inför som användningsfall. Det digitala konsultföretaget vill använda en datadriven metod för att förstå effekten av olika försäljningsrelaterade aktiviteter i företaget på de försäljningsavtal som företaget stänger. Lösningen innebar att inrama problemet som ett tidsserieprognosproblem för att förutsäga försäljningsavtalen och tolka den underliggande prognosmodellen. Tolkningsförmågan uppnåddes med hjälp av två nya tekniker för agnostisk tolkbarhet, lokala tolkbara modellagnostiska förklaringar (LIME) och Shapley additiva förklaringar (SHAP). Förklaringarna som producerats efter att ha uppnått tolkbarhet utvärderades med hjälp av mänsklig utvärdering av tolkbarhet. Resultaten av de mänskliga utvärderingsstudierna visar tydligt att de förklaringar som produceras av LIME och SHAP starkt hjälpte människor att förstå förutsägelserna från maskininlärningsmodellen. De mänskliga utvärderingsstudieresultaten visade också att LIME- och SHAP-förklaringar var nästan lika förståeliga med LIME som presterade bättre men med en mycket liten marginal. Arbetet som utförts under detta projekt kan enkelt utvidgas till alla tidsserieprognoser eller klassificeringsscenarier för att uppnå och utvärdera tolkbarhet. Dessutom kan detta arbete erbjuda en mycket bra ram för att uppnå och utvärdera tolkbarhet i alla maskininlärningsbaserade regressions- eller klassificeringsproblem.
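
The thesis code is not given here; the sketch below shows the general recipe the abstract describes: frame the forecast as supervised learning on lag features, fit a model, and explain a single prediction with the SHAP and LIME libraries. The data, the lag construction and the random-forest model are placeholders, and the `shap` and `lime` packages are assumed to be installed.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

# Frame a univariate series as a supervised problem on lag features (placeholder data).
rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(500))
lags = 8
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]
feature_names = [f"lag_{lags - i}" for i in range(lags)]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:-50], y[:-50])

# SHAP: additive per-lag contributions for one forecast.
shap_values = shap.TreeExplainer(model).shap_values(X[-1:])
print("SHAP contributions:", dict(zip(feature_names, np.round(shap_values[0], 3))))

# LIME: a local surrogate model fitted around the same instance.
lime_explainer = LimeTabularExplainer(X[:-50], feature_names=feature_names, mode="regression")
explanation = lime_explainer.explain_instance(X[-1], model.predict, num_features=4)
print("LIME explanation:", explanation.as_list())
```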
