51

Learning discrete word embeddings to achieve better interpretability and processing efficiency

Beland-Leblanc, Samuel 12 1900
The ubiquitous use of word embeddings in Natural Language Processing is proof of their usefulness and adaptivity to a multitude of tasks. However, their continuous nature is prohibitive in terms of computation, storage and interpretation. In this work, we propose a method of learning discrete word embeddings directly. The model is an adaptation of a novel database searching method using state-of-the-art natural language processing techniques such as Transformers and LSTMs. On top of obtaining embeddings requiring a fraction of the resources to store and process, our experiments strongly suggest that our representations learn basic units of meaning in latent space akin to lexical morphemes. We call these units sememes, i.e., semantic morphemes. We demonstrate that our model has great generalization potential and outputs representations showing strong semantic and conceptual relations between related words.
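The abstract does not detail the discretization mechanism, but the core idea of replacing continuous vectors with discrete codes can be sketched as a toy nearest-neighbour quantization step; the codebook and values below are hypothetical, not the thesis' actual model:

```python
import math

def quantize(vec, codebook):
    """Return the index of the nearest code vector: a discrete code id."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(vec, codebook[i]))

# Toy codebook: each code vector stands in for a reusable unit of meaning
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
```

A word represented by a handful of such integer indices needs only a few bytes instead of hundreds of floats, which is where the storage and processing savings come from.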
52

Evaluation of the perceived usability of CySeMoL and EAAT using a purpose-built framework and ISO/IEC 25010:2011

Frost, Per January 2013
This report describes a study aimed at uncovering flaws and finding potential improvements when the modelling tool EAAT is used in conjunction with the modelling language CySeMoL. The study was performed by developing a framework and applying it to CySeMoL and EAAT in real-life network contexts. The framework was developed in order to increase the number of flaws uncovered as well as to gather potential improvements to both EAAT and CySeMoL. The basis of the framework is a modified version of the quality-in-use model from the ISO/IEC 25010:2011 standard. To the characteristics and sub-characteristics of this modified model, different values for measuring usability were attached. The purpose of these values is to measure usability from the perspectives of both creating and interpreting models. Furthermore, these values are based on several different sources on how to measure usability. The complete contents of the framework, and the underlying ideas upon which it is based, are presented in this report. The framework was designed so that it can be used universally with any modelling language in conjunction with a modelling tool. Its design is also not limited to the field of computer security and computer networks, although that is the intended context of CySeMoL as well as the context described in this report. However, utilization outside the intended area of usage will most likely require some modifications in order to work in a fully satisfying manner. Several flaws were uncovered regarding the usability of CySeMoL and EAAT, accompanied by several recommendations on how to improve both CySeMoL and EAAT. Because of the outline of the framework, the most severe flaws have been identified, and recommendations on how to rectify these shortcomings have been suggested.
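The value-based measurement described above can be illustrated, purely schematically, as a weighted aggregation over (sub-)characteristic scores; the characteristic names, weights, and scale below are hypothetical stand-ins, not the framework's actual values:

```python
def quality_in_use_score(scores, weights):
    """Weighted average of usability scores per (sub-)characteristic."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical scores on a 1-5 scale for two quality-in-use characteristics
scores = {"effectiveness": 4.0, "efficiency": 2.0}
weights = {"effectiveness": 1.0, "efficiency": 1.0}
```

Separate weight sets could capture the two perspectives the framework distinguishes, creating models versus interpreting them.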
53

Wavebender GAN: Deep architecture for high-quality and controllable speech synthesis through interpretable features and exchangeable neural synthesizers

Döhler Beck, Gustavo Teodoro January 2021
Modeling human speech is a challenging task that originally required a coalition between phoneticians and speech engineers. Yet the latter, disengaged from the phoneticians, have striven for ever more natural speech synthesis without an awareness of speech modelling, relying on data-driven and ever-growing deep learning models. In response to decades of detachment between phoneticians and speech engineers, this thesis presents a deep learning architecture, dubbed Wavebender GAN, that predicts mel-spectrograms, which are then processed by a vocoder, HiFi-GAN, to synthesize speech. Wavebender GAN pushes for progress in both speech science and technology, allowing phoneticians to manipulate stimuli and test phonological models supported by high-quality synthesized speech generated through interpretable low-level signal properties. This work sets a new standard of cooperation between phoneticians and speech engineers.
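The mel-spectrograms that Wavebender GAN predicts are built on the standard mel scale; a minimal sketch of the conversion (the common 2595/700 variant, not necessarily the exact filterbank parameters used in the thesis):

```python
import math

def hz_to_mel(f_hz):
    """Convert frequency in Hz to mels (O'Shaughnessy formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse conversion, mels back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```

Equally spaced points on the mel axis define the triangular filters whose log energies form each frame of a mel-spectrogram, which a neural vocoder such as HiFi-GAN then inverts to a waveform.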
54

Automatic Classification of Full- and Reduced-Lead Electrocardiograms Using Morphological Feature Extraction

Hammer, Alexander, Scherpf, Matthieu, Ernst, Hannes, Weiß, Jonas, Schwensow, Daniel, Schmidt, Martin 26 August 2022
Cardiovascular diseases are the global leading cause of death. Automated electrocardiogram (ECG) analysis can support clinicians to identify abnormal excitation of the heart and prevent premature cardiovascular death. An explainable classification is particularly important for support systems. Our contribution to the PhysioNet/CinC Challenge 2021 (team name: ibmtPeakyFinders) therefore pursues an approach that is based on interpretable features to be as explainable as possible. To meet the challenge goal of developing an algorithm that works for both 12-lead and reduced lead ECGs, we processed each lead separately. We focused on signal processing techniques based on template delineation that yield the template's fiducial points to take the ECG waveform morphology into account. In addition to beat intervals and amplitudes obtained from the template, various heart rate variability and QT interval variability features were extracted and supplemented by signal quality indices. Our classification approach utilized a decision tree ensemble in a one-vs-rest approach. The model parameters were determined using an extensive grid search. Our approach achieved challenge scores of 0.47, 0.47, 0.34, 0.40, and 0.41 on hidden 12-, 6-, 4-, 3-, and 2-lead test sets, respectively, which corresponds to the ranks 12, 10, 23, 18, and 16 out of 39 teams.
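The one-vs-rest setup described above can be sketched as one binary classifier per diagnosis label, each trained on a "this label vs everything else" relabeling. The stump learner and data below are toy stand-ins for the paper's decision tree ensemble and ECG features:

```python
def train_one_vs_rest(X, label_sets, labels, train_binary):
    """Fit one binary classifier per diagnosis label (one-vs-rest)."""
    models = {}
    for lab in labels:
        y = [1 if lab in s else 0 for s in label_sets]
        models[lab] = train_binary(X, y)
    return models

def predict(models, x):
    """A sample receives every label whose binary classifier fires."""
    return {lab for lab, clf in models.items() if clf(x)}

def train_binary(X, y):
    """Toy learner: threshold the first feature at the class means' midpoint."""
    pos = [x[0] for x, t in zip(X, y) if t == 1]
    neg = [x[0] for x, t in zip(X, y) if t == 0]
    thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0
    flip = sum(pos) / len(pos) > sum(neg) / len(neg)
    return lambda x: (x[0] > thr) == flip
```

Because each label gets its own model, the same code path works whether the per-lead features come from 12-lead or reduced-lead recordings.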
55

Interpreting Multivariate Time Series for an Organization Health Platform

Saluja, Rohit January 2020
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning, to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems, making predictions with machine learning algorithms only solves the problem partially. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability in time series is still relatively unexplored compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to be able to quantify the quality of the explanations produced after interpreting machine learning models. For this reason, evaluation of interpretability is extremely important, yet the evaluation of interpretability for models built on time series appears completely unexplored in research circles. This thesis work focused on achieving and evaluating model-agnostic interpretability in a time series forecasting problem. The use case discussed in this thesis concerns a digital consultancy company that wants to take a data-driven approach to understanding the effect of its various sales-related activities on the sales deals it closes. The solution involved framing the problem as a time series forecasting problem to predict the sales deals, and interpreting the underlying forecasting model. The interpretability was achieved using two model-agnostic interpretability techniques, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). The explanations produced were evaluated using human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay humans in understanding the predictions made by the machine learning model. The results also indicated that LIME and SHAP explanations were almost equally understandable, with LIME performing better, but by a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, it can offer a good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem.
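SHAP approximates Shapley values; for a tiny model they can be computed exactly by brute force over feature coalitions, which makes explicit what such attributions mean. The baseline-replacement scheme below is one common choice, not necessarily the thesis' exact setup:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attributions: features outside a coalition S are
    replaced by baseline values; phi_i averages i's marginal contribution
    over all coalitions, with the standard combinatorial weights."""
    n = len(x)

    def value(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return model(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = set(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi.append(total)
    return phi
```

For an additive model the attributions recover each coefficient times the feature's deviation from baseline, which is the sanity check SHAP implementations are built around.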
56

Towards gradient faithfulness and beyond

Buono, Vincenzo, Åkesson, Isak January 2023
The riveting interplay of industrialization, informatization, and exponential technological growth in recent years has shifted attention from classical machine learning techniques to more sophisticated deep learning approaches; yet deep learning's intrinsic black-box nature has impeded its widespread adoption in transparency-critical operations. In this rapidly evolving landscape, where the symbiotic relationship between research and practical applications has never been more interwoven, the contribution of this paper is twofold: advancing the gradient faithfulness of CAM methods and exploring new frontiers beyond it. In the first part, we theorize three novel gradient-based CAM formulations, aimed at replacing and superseding traditional Grad-CAM-based methods by tackling the persistent vanishing- and saturating-gradient problems. Our work introduces enhancements to Grad-CAM that reshape the conventional gradient computation by incorporating a customized technique inspired by the well-established Expected Gradients difference-from-reference approach. Our proposed techniques (Expected Grad-CAM, Expected Grad-CAM++ and Guided Expected Grad-CAM), as they operate directly on the gradient computation rather than on the recombination of the weighting factors, are designed as a direct and seamless replacement for Grad-CAM and any later work built upon it. In the second part, we build on our prior proposition and devise a novel CAM method that produces both high-resolution and class-discriminative explanations without fusing other methods, while addressing the issues of both gradient and CAM methods altogether. Our last and most advanced proposition, Hyper Expected Grad-CAM, challenges the current state and formulation of visual explanation and faithfulness, and produces a new type of hybrid saliency that satisfies the notions of natural encoding and perceived resolution. By rethinking faithfulness and resolution, it is possible to generate saliencies that are more detailed, localized, and less noisy, and, most importantly, composed only of concepts encoded by the model's layer-wise understanding. Both contributions were quantitatively and qualitatively compared and assessed, in an evaluation study on the ILSVRC2012 dataset five to ten times larger than usual, against nine of the most recent and best-performing CAM techniques across six metrics. Expected Grad-CAM outperformed not only the original formulation but also more advanced methods, resulting in the second-best explainer with an Ins-Del score of 0.56. Hyper Expected Grad-CAM provided remarkable results across every quantitative metric, yielding a 0.15 increase in insertion compared to the highest-scoring explainer, PolyCAM, totaling an Ins-Del score of 0.72.
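The Grad-CAM baseline that the paper modifies weights each activation map by its spatially averaged gradient; a minimal numpy sketch of that baseline (the Expected variants change how the gradients are obtained, which is not reproduced here):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Classic Grad-CAM. Both inputs are (C, H, W) arrays: channel weights
    are the spatial means of the gradients, and the saliency map is the
    ReLU of the weighted sum of activation maps."""
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    return np.maximum(cam, 0.0)
```

Per the abstract, Expected Grad-CAM replaces these raw gradients with averages taken along interpolations from a reference input, which is what mitigates the vanishing- and saturating-gradient problems.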
57

A FUZZY INFERENCE SYSTEM WITH AUTOMATIC RULE EXTRACTION FOR GAS PATH DIAGNOSIS OF AVIATION GAS TURBINES

TAIRO DOS PRAZERES TEIXEIRA 14 December 2016
A gas turbine is a complex and expensive piece of equipment. In case of a failure, indirect losses are typically much larger than direct ones, since such equipment plays a critical role in the operation of industrial installations, aircraft, and heavy vehicles. Therefore, it is vital that gas turbines be provided with an efficient monitoring and diagnostic system. This is especially relevant in Brazil, where the turbine fleet has grown substantially in recent years, mainly due to the increasing number of thermal power plants and the growth of civil aviation. This work proposes a Fuzzy Inference System (FIS) with automatic rule extraction for gas path diagnosis. The proposed system makes use of a residual approach (gas path measurements are compared to a healthy-engine reference) for preprocessing the raw input data that are forwarded to the detection and isolation modules. These operate in a hierarchical manner and are responsible for fault detection and isolation in components, sensors and actuators. Since gas turbine failure data are difficult to access and expensive to obtain, the methodology is validated using a database of faults simulated by specialist software. The results show that the FIS is able to correctly detect and isolate failures and to provide linguistic interpretability, an important feature in the decision-making process regarding maintenance.
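The residual preprocessing and the fuzzification it feeds can be sketched as follows; the measurement name and membership thresholds are hypothetical, not the thesis' tuned values:

```python
def residuals(measured, healthy_ref):
    """Deviation of each real-engine measurement from the healthy reference."""
    return {k: measured[k] - healthy_ref[k] for k in measured}

def tri(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def degraded_degree(res, a=0.02, b=0.10, c=0.50):
    """Fuzzy degree to which a single residual indicates degradation."""
    return tri(abs(res), a, b, c)
```

Rules extracted automatically would then combine such membership degrees across several residuals to detect a fault and isolate it to a component, sensor, or actuator.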
58

ON MACHINE LEARNING TECHNIQUES TOWARD PATH LOSS MODELING IN 5G AND BEYOND WIRELESS SYSTEMS

YOIZ ELEDUVITH NUNEZ RUIZ 09 November 2023
Path loss (PL) is an essential parameter in propagation models and critical in determining mobile systems' coverage area. Machine learning (ML) methods have become promising tools for radio propagation prediction. However, there are still some challenges for their full deployment, concerning the selection of the most significant model inputs, the understanding of their contributions to the model's predictions, and the evaluation of the generalization capacity for unknown samples. This thesis aims to design optimized ML-based PL models for different applications of 5G and beyond technologies. These applications encompass millimeter-wave (mmWave) links for indoor and outdoor environments in the frequency band from 26.5 to 40 GHz, macrocell coverage in the sub-6 GHz spectrum, and vehicular communications, using measurement campaigns carried out by the Laboratory of Radio-propagation, CETUC, in Rio de Janeiro, Brazil. Several ML algorithms are exploited, such as artificial neural networks (ANN), support vector regression (SVR), random forest (RF), and gradient tree boosting (GTB). Furthermore, we have extended two empirical models for mmWave with improved PL prediction. We propose a methodology for robust ML model selection and a methodology to select the most suitable predictors for the machines considered, based on performance improvement and the model's interpretability. In addition, for the vehicle-to-vehicle (V2V) channel, a convolutional neural network (CNN) technique is also proposed, using a transfer learning approach to deal with small datasets. The proposed generalization tests show the ability of the ML models to learn the pattern between the model's inputs and PL, even in more challenging environments and scenarios of unknown samples.
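As context for the ML models, the classic empirical log-distance model is the kind of baseline such path loss work extends; a sketch of the model and a least-squares fit of its exponent (the reference loss value below is illustrative):

```python
import math

def log_distance_pl(d_m, pl0_db=61.4, n=2.0, d0_m=1.0):
    """Log-distance path loss in dB: PL = PL0 + 10 * n * log10(d / d0)."""
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m)

def fit_exponent(dists, losses_db, pl0_db=61.4, d0_m=1.0):
    """Least-squares estimate of the path loss exponent n from measurements."""
    xs = [10.0 * math.log10(d / d0_m) for d in dists]
    ys = [pl - pl0_db for pl in losses_db]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

ML regressors such as RF or GTB replace this single-exponent form with a learned function of many predictors (frequency, environment, antenna geometry), which is where the predictor-selection methodology comes in.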
59

Interpretability and Accuracy in Electricity Price Forecasting: Analysing DNN and LEAR Models in the Nord Pool and EPEX-BE Markets

Margarida de Mendoça de Atayde P. de Mascarenhas, Maria January 2023
Market prices in the liberalized European electricity system play a crucial role in promoting competition, ensuring grid stability, and maximizing profits for market participants. Accurate electricity price forecasting algorithms have therefore become increasingly important in this competitive market. However, existing evaluations of forecasting models primarily focus on overall accuracy, overlooking the underlying causality of the predictions. This thesis explores two state-of-the-art forecasters, the deep neural network (DNN) and the Lasso Estimated AutoRegressive (LEAR) models, in the EPEX-BE and Nord Pool markets. The aim is to understand whether their predictions can be trusted in more general settings than the limited context they are trained in. If the models produce poor predictions in extreme conditions, or if their predictions are inconsistent with reality, they cannot be relied upon in the real world, where these forecasts feed downstream decision-making activities. The results show that for the EPEX-BE market, the DNN model outperforms the LEAR model in terms of overall accuracy. However, the LEAR model performs better at predicting negative prices, while the DNN model performs better at predicting price spikes. For the Nord Pool market, a simpler DNN model is more accurate for price forecasting. In both markets, the models exhibit behaviours inconsistent with reality, making it challenging to trust their predictions. Overall, the study highlights the importance of understanding the underlying causality of forecasting models and the limitations of relying solely on overall accuracy metrics.
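Two building blocks of a LEAR-style model can be sketched: the autoregressive lag matrix the Lasso is fit on, and the soft-thresholding operator behind Lasso's coefficient sparsity. The lag choice here is arbitrary; the actual LEAR design matrix is much richer, with calendar and exogenous features:

```python
def lagged_matrix(series, n_lags):
    """AR design matrix: row for time t holds [y_{t-1}, ..., y_{t-n_lags}]."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append([series[t - k] for k in range(1, n_lags + 1)])
        y.append(series[t])
    return X, y

def soft_threshold(rho, lam):
    """Shrinkage operator used in Lasso coordinate descent: coefficients
    with correlation below lam in magnitude are zeroed out."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0
```

The zeroed coefficients are exactly what makes LEAR more interpretable than a DNN: the surviving lags and features can be read off directly.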
60

Investigating the Attribution Quality of LSTM with Attention and SHAP: Going Beyond Predictive Performance

Kindbom, Hannes January 2021
Estimating each marketing channel's impact on conversion can help advertisers develop strategies and spend their marketing budgets optimally. This problem is often referred to as attribution modelling, and it is gaining increasing attention in both industry and academia as access to online tracking data improves. Focusing on achieving higher predictive performance, the Long Short-Term Memory (LSTM) architecture is currently trending as a data-driven solution to attribution modelling. However, such deep neural networks have been criticised for being difficult to interpret. Interpretability is critical, since channel attributions are generally obtained by studying how a model makes a binary conversion prediction given a sequence of clicks or views of ads in different channels. This degree project therefore studies and compares the quality of LSTM attributions, calculated with SHapley Additive exPlanations (SHAP), attention and fractional scores, against three baseline models. The fractional score is the mean difference in a model's predicted conversion probability with and without a channel. Furthermore, a synthetic data generator based on a Poisson process is developed and validated against real data, to measure attribution quality as the Mean Absolute Error (MAE) between calculated attributions and the true causal relationships between channel clicks and conversions. The experimental results demonstrate that the quality of attributions is not unambiguously reflected by the predictive performance of LSTMs. In general, it is not possible to assume high attribution quality solely based on high predictive performance. For example, all models achieve ~82% accuracy on real data, whereas LSTM Fractional and SHAP produce the lowest attribution quality, with 0.0566 and 0.0311 MAE respectively. This can be compared to an improved MAE of 0.0058, obtained with a Last-Touch Attribution (LTA) model. The attribution quality also varies significantly depending on which attribution calculation method is used for the LSTM. This suggests that the ongoing quest for improved accuracy may be questioned and that it is not always justified to use an LSTM when aiming for high-quality attributions.
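The fractional score defined above is simple enough to state directly in code; the toy model below (conversion probability growing with journey length) is purely illustrative and not the thesis' LSTM:

```python
def fractional_score(model, journeys, channel):
    """Mean difference in predicted conversion probability with and
    without the given channel's touchpoints in each customer journey."""
    diffs = [model(j) - model([c for c in j if c != channel]) for j in journeys]
    return sum(diffs) / len(diffs)

def toy_model(journey):
    """Toy stand-in for the LSTM: probability grows with journey length."""
    return min(1.0, 0.2 * len(journey))
```

Swapping `toy_model` for a trained sequence model gives the per-channel attributions that the thesis compares against SHAP and attention.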
