1

Assessment of Predictive Models for Improving Default Settings in Streaming Services / Bedömning av prediktiva modeller för att förbättra standardinställningar i streamingtjänster

Lattouf, Mouzeina January 2020 (has links)
Streaming services provide different settings where customers can choose a sound and video quality based on personal preference. The majority of users never make an active choice; instead, they get a default quality setting that is chosen automatically for them based on parameters such as internet connection quality. This thesis explores personalising the default audio setting, with the aim of improving the user experience. It does so by leveraging machine learning trained on the fraction of users who have actively changed the quality setting. The idea behind the work is that similarity among users who make an active choice can be leveraged to improve the experience of the rest. The thesis studies which type of data, from the categories demographic, product and consumption, is most predictive of a user's taste in sound quality.
A case study was conducted to achieve these goals. Five predictive model prototypes were trained, evaluated, compared and analysed using two different algorithms, XGBoost and Logistic Regression, targeting two regions: Sweden and Brazil. Feature importance analysis was conducted using SHapley Additive exPlanations (SHAP), a unified, game-theoretic framework for interpreting predictions, and by measuring coefficient weights, to determine the most predictive features. Besides exploring feature impact, the thesis also answers how reasonable it is to generalise these models to non-selecting users, by performing hypothesis testing. The project also covered bias analysis between users with and without active quality settings and how that affects the models. The models with XGBoost had higher performance. The results showed that demographic and product data had a higher impact on model predictions in both regions; however, the most predictive features differed between regions, and differences in feature importance were also observed between platforms. The hypothesis testing did not give a valid reason to assume that the models work for non-selecting users, although the method is negatively affected by other factors, such as small changes in large datasets that affect statistical significance. Data bias was found in some features, indicating correlation but not the causation behind the patterns. The results additionally show how machine learning can improve the user experience with regard to default sound quality settings, by leveraging user similarity with the users who have already changed the sound quality to the setting most suitable for them.
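The core workflow described above (a tree-based and a linear classifier trained on the active-choice users, with SHAP values and coefficient weights as two complementary importance measures) can be sketched as follows. This is a minimal illustration on synthetic data with invented feature names, not the thesis's actual pipeline or dataset.

```python
# Sketch only: synthetic stand-in for the quality-setting prediction task.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

feature_names = ["age_bucket", "platform", "subscription_tier",
                 "avg_daily_streams", "network_quality"]          # invented names
X, y = make_classification(n_samples=4000, n_features=5, n_informative=3,
                           random_state=0)                        # y = chose high quality
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

xgb = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
xgb.fit(X_tr, y_tr)

# Linear baseline on standardized features so coefficient weights are comparable
logreg = LogisticRegression(max_iter=1000).fit(StandardScaler().fit_transform(X_tr), y_tr)

# SHAP feature importance for the tree model: mean absolute SHAP value per feature
sv = shap.TreeExplainer(xgb).shap_values(X_te)
for name, imp in sorted(zip(feature_names, np.abs(sv).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"SHAP  {name:20s} {imp:.3f}")

# Coefficient weights for the linear model
for name, w in sorted(zip(feature_names, logreg.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"coef  {name:20s} {w:+.3f}")
```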
2

Predicting and Interpreting Students Performance using Supervised Learning and Shapley Additive Explanations

January 2019 (has links)
Due to the large data resources generated by online educational applications, Educational Data Mining (EDM) has improved learning in several ways: student visualization, recommendations for students, student modeling, grouping of students, and more. Many programming-assignment platforms offer features such as automated submission and test-case checking to verify correctness, but few studies have compared different statistical techniques with the latest frameworks or interpreted the resulting models in a unified approach. In this thesis, several data mining algorithms are applied to students' code-assignment submission data from a real classroom study. The goal of this work is to explore and predict students' performance. Multiple machine learning models were evaluated for accuracy, and their predictions were interpreted using Shapley Additive Explanations. Cross-validation shows that the Gradient Boosting Decision Tree achieves the best precision, 85.93%, with an average of 82.90%. Features such as component grade, due date, and submission times have a higher impact than others. The baseline model achieved lower precision due to its lack of non-linear fitting. / Dissertation/Thesis / Masters Thesis Computer Science 2019
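A small sketch of the kind of comparison described above: cross-validated precision for a gradient-boosted tree model against a linear baseline, on synthetic stand-in data. The assignment features named in the abstract (component grade, due date, submission times) are not reproduced here.

```python
# Sketch only: compare cross-validated precision of GBDT vs. a linear baseline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           random_state=0)            # y = assignment passed / failed

models = [("GBDT", GradientBoostingClassifier(random_state=0)),
          ("logistic baseline", LogisticRegression(max_iter=1000))]

for name, model in models:
    scores = cross_val_score(model, X, y, cv=5, scoring="precision")
    print(f"{name:18s} precision: best {scores.max():.3f}, mean {scores.mean():.3f}")
```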
3

Explainable AI methods for credit card fraud detection : Evaluation of LIME and SHAP through a User Study

Ji, Yingchao January 2021 (has links)
In the past few years, Artificial Intelligence (AI) has evolved into a powerful tool applied in multi-disciplinary fields to resolve sophisticated problems. As AI becomes more powerful and ubiquitous, oftentimes the AI methods also become opaque, which might lead to trust issues for the users of the AI systems as well as fail to meet the legal requirements of AI transparency. In this report, the possibility of making a credit-card fraud detection support system explainable to users is investigated through a quantitative survey. A publicly available credit card dataset was used. Deep Learning and Random Forest were the two Machine Learning (ML) methodsimplemented and applied on the credit card fraud dataset, and the performance of their results was evaluated in terms of their accuracy, recall, sufficiency, and F1 score. After that, two explainable AI (XAI) methods - SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) were implemented and applied to the results obtained from these two ML methods. Finally, the XAI results were evaluated through a quantitative survey. The results from the survey revealed that the XAI explanations can slightly increase the users' impression of the system's ability to reason and LIME had a slight advantage over SHAP in terms of explainability. Further investigation of visualizing data pre-processing and the training process is suggested to offer deep explanations for users.
4

Explainable AI techniques for sepsis diagnosis : Evaluating LIME and SHAP through a user study

Norrie, Christian January 2021 (has links)
Articial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the eld of medical diagnostics. A major issue with applying these techniques to some domains is an inability for AI models to provide an explanation or justication for their predictions. This creates a problem wherein a user may not trust an AI prediction, or there are legal requirements for justifying decisions that are not met. This thesis overviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of trust for the user in the medical diagnostics eld. These techniques are evaluated through a user study. User study results suggest that supplementing classications or predictions with a post-hoc visualization increases interpretability by a small margin. Further investigation and research utilizing a user study surveyor interview is suggested to increase interpretability and explainability of machine learning results.
5

Counterfactual and Causal Analysis for AI-based Modulation and Coding Scheme Selection / Kontrafaktisk och orsaksanalys för AI-baserad modulerings- och kodningsval

Hao, Kun January 2023 (has links)
Artificial Intelligence (AI) has emerged as a transformative force in wireless communications, driving innovation to address the complex challenges faced by communication systems. In this context, the optimization of limited radio resources plays a crucial role, and one important aspect is the Modulation and Coding Scheme (MCS) selection. AI solutions for MCS selection have been predominantly characterized as black-box models, which suffer from limited explainability and consequently hinder trust in these algorithms. Moreover, the majority of existing research primarily emphasizes enhancing explainability without concurrently improving the model’s performance which makes performance and explainability a trade-off. This work aims to address these issues by employing eXplainable AI (XAI), particularly counterfactual and causal analysis, to increase the explainability and trustworthiness of black-box models. We propose CounterFactual Retrain (CF-Retrain), the first method that utilizes counterfactual explanations to improve model performance and make the process of performance enhancement more explainable. Additionally, we conduct a causal analysis and compare the results with those obtained from an analysis based on the SHapley Additive exPlanations (SHAP) value feature importance. This comparison leads to the proposal of novel hypotheses and insights for model optimization in future research. Our results show that employing CF-Retrain can reduce the Mean Absolute Error (MAE) of the black-box model by 4% while utilizing only 14% of the training data. Moreover, increasing the amount of training data yields even more pronounced improvements in MAE, providing a certain level of explainability. This performance enhancement is comparable to or even superior to using a more complex model. Furthermore, by introducing causal analysis to the mainstream SHAP value feature importance, we provide a novel hypothesis and explanation of feature importance based on causal analysis. This approach can serve as an evaluation criterion for assessing the model’s performance. / Artificiell intelligens (AI) har dykt upp som en transformativ kraft inom trådlös kommunikation, vilket driver innovation för att möta de komplexa utmaningar som kommunikationssystem står inför. I detta sammanhang spelar optimeringen av begränsade radioresurser en avgörande roll, och en viktig aspekt är valet av Modulation and Coding Scheme (MCS). AI-lösningar för val av modulering och kodningsschema har övervägande karaktäriserats som black-box-modeller, som lider av begränsad tolkningsbarhet och följaktligen hindrar förtroendet för dessa algoritmer. Dessutom betonar majoriteten av befintlig forskning i första hand att förbättra förklaringsbarheten utan att samtidigt förbättra modellens prestanda, vilket gör prestanda och tolkningsbarhet till en kompromiss. Detta arbete syftar till att ta itu med dessa problem genom att använda XAI, särskilt kontrafaktisk och kausal analys, för att öka tolkningsbarheten och pålitligheten hos svarta-box-modeller. Vi föreslår CF-Retrain, den första metoden som använder kontrafaktiska förklaringar för att förbättra modellens prestanda och göra processen med prestandaförbättring mer tolkningsbar. Dessutom gör vi en orsaksanalys och jämför resultaten med de som erhålls från en analys baserad på värdeegenskapens betydelse. Denna jämförelse leder till förslaget av nya hypoteser och insikter för modelloptimering i framtida forskning. 
Våra resultat visar att användning av CF-Retrain kan minska det genomsnittliga absoluta felet för black-box-modellen med 4% samtidigt som man använder endast 14% av träningsdata. Dessutom ger en ökning av mängden träningsdata ännu mer uttalade förbättringar av Mean Absolute Error (MAE), vilket ger en viss grad av tolkningsbarhet. Denna prestandaförbättring är jämförbar med eller till och med överlägsen att använda en mer komplex modell. Dessutom, genom att introducera kausal analys till de vanliga Shapley-tillsatsförklaringarna värdesätter egenskapens betydelse, ger vi en ny hypotes och tolkning av egenskapens betydelse baserad på kausalanalys. Detta tillvägagångssätt kan fungera som ett utvärderingskriterium för att bedöma modellens prestanda.
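CF-Retrain itself is not reproduced here, but the underlying idea of a counterfactual (the smallest input change that pushes the model's output past a target) can be illustrated with a naive single-feature grid search over a synthetic regression model. Treat this purely as a conceptual sketch, not the thesis's method.

```python
# Sketch only: brute-force single-feature counterfactual search on a synthetic regressor.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=5.0, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

x0 = X[0].copy()
target = model.predict(x0.reshape(1, -1))[0] + 20.0    # ask for a higher prediction

best = None
for j in range(X.shape[1]):                  # perturb one feature at a time
    for step in np.linspace(-3, 3, 61):      # small grid of candidate offsets
        x = x0.copy()
        x[j] += step
        pred = model.predict(x.reshape(1, -1))[0]
        if pred >= target:
            cost = abs(step)                 # prefer the smallest change
            if best is None or cost < best[0]:
                best = (cost, j, step, pred)

if best:
    print(f"counterfactual: change feature {best[1]} by {best[2]:+.2f} "
          f"-> prediction {best[3]:.1f} (target {target:.1f})")
else:
    print("no single-feature counterfactual found in the search range")
```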
6

Interpreting Multivariate Time Series for an Organization Health Platform

Saluja, Rohit January 2020 (has links)
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning, to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems, simply making predictions with machine learning algorithms solves the problem only partially. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability in time series is still relatively unexplored compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to be able to quantify the quality of explanations produced after interpreting machine learning models. For this reason, evaluation of interpretability is extremely important. The evaluation of interpretability for models built on time series appears completely unexplored in research circles. This thesis work focused on achieving and evaluating model-agnostic interpretability in a time series forecasting problem.
The use case discussed in this thesis focused on solving a problem faced by a digital consultancy company. The digital consultancy wants to take a data-driven approach to understand the effect of various sales-related activities in the company on the sales deals the company closes. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. Interpretability was achieved using two model-agnostic interpretability techniques, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). The explanations produced were evaluated using human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay users understand the predictions made by the machine learning model. The results also indicated that LIME and SHAP explanations were almost equally understandable, with LIME performing better but by a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, it can offer a good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem.
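A compact sketch of the forecasting-plus-SHAP setup described above: lag features built from a synthetic "deals" series with one invented activity covariate, a random forest forecaster, and mean absolute SHAP values as feature importance. Names and data are placeholders, not the consultancy's pipeline.

```python
# Sketch only: interpret a simple lag-feature forecaster with SHAP.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300
activity = rng.poisson(5, n).astype(float)                        # e.g. weekly sales meetings
deals = 2.0 + 0.6 * np.roll(activity, 2) + rng.normal(0, 0.5, n)  # activity acts with a lag

df = pd.DataFrame({
    "deals_lag1": np.roll(deals, 1),
    "deals_lag2": np.roll(deals, 2),
    "activity_lag2": np.roll(activity, 2),
    "y": deals,
}).iloc[3:]                                   # drop rows whose lags wrapped around

X, y = df.drop(columns="y"), df["y"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

sv = shap.TreeExplainer(model).shap_values(X)
importance = pd.Series(np.abs(sv).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))   # activity_lag2 should rank high here
```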
7

Explaining Generative Adversarial Network Time Series Anomaly Detection using Shapley Additive Explanations

Cher Simon (18324174) 10 July 2024 (has links)
Anomaly detection is an active research field that widely applies to commercial applications to detect unusual patterns or outliers. Time series anomaly detection provides valuable insights into mission and safety-critical applications using ever-growing temporal data, including continuous streaming time series data from the Internet of Things (IoT), sensor networks, healthcare, stock prices, computer metrics, and application monitoring. While Generative Adversarial Networks (GANs) demonstrate promising results in time series anomaly detection, the opaque nature of generative deep learning models lacks explainability and hinders broader adoption. Understanding the rationale behind model predictions and providing human-interpretable explanations are vital for increasing confidence and trust in machine learning (ML) frameworks such as GANs. This study conducted a structured and comprehensive assessment of post-hoc local explainability in GAN-based time series anomaly detection using SHapley Additive exPlanations (SHAP). Using publicly available benchmarking datasets approved by Purdue’s Institutional Review Board (IRB), this study evaluated state-of-the-art GAN frameworks, identifying their advantages and limitations for time series anomaly detection. This study demonstrated a systematic approach to quantifying the extent of GAN-based time series anomaly explainability, providing insights for businesses when considering adopting generative deep learning models. The presented results show that GANs capture complex time series temporal distributions and are applicable for anomaly detection. The analysis from this study shows SHAP can identify the significance of contributing features within time series data and derive post-hoc explanations to quantify GAN-detected time series anomalies.
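A full GAN detector is beyond a short example, so the sketch below substitutes an Isolation Forest over sliding windows as the anomaly scorer and applies the same model-agnostic SHAP workflow (KernelExplainer over the score function) to the most anomalous window. Only the explanation workflow, not the GAN, mirrors the study.

```python
# Sketch only: post-hoc SHAP explanation of a time-series anomaly score.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)
series[1500] += 4.0                                    # inject one anomaly

win = 16
windows = np.stack([series[i:i + win] for i in range(len(series) - win)])

scorer = IsolationForest(random_state=0).fit(windows)  # stand-in for a GAN detector
scores = scorer.score_samples(windows)                 # lower = more anomalous
worst = int(np.argmin(scores))

explainer = shap.KernelExplainer(scorer.score_samples, windows[:100])
sv = explainer.shap_values(windows[worst:worst + 1], nsamples=200)
print("most anomalous window starts at t =", worst)
print("time steps pushing the score down most:", np.argsort(sv[0])[:3])
```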
8

Multi-fidelity Machine Learning for Perovskite Band Gap Predictions

Panayotis Thalis Manganaris (16384500) 16 June 2023 (has links)
A wide range of optoelectronic applications demand semiconductors optimized for purpose. My research focused on data-driven identification of ABX3 Halide perovskite compositions for optimum photovoltaic absorption in solar cells. I trained machine learning models on previously reported datasets of halide perovskite band gaps based on first principles computations performed at different fidelities. Using these, I identified mixtures of candidate constituents at the A, B or X sites of the perovskite supercell which leveraged how mixed perovskite band gaps deviate from the linear interpolations predicted by Vegard's law of mixing to obtain a selection of stable perovskites with band gaps in the ideal range of 1 to 2 eV for visible light spectrum absorption. These models predict the perovskite band gap using the composition and inherent elemental properties as descriptors. This enables accurate, high fidelity prediction and screening of the much larger chemical space from which the data samples were drawn.
I utilized a recently published density functional theory (DFT) dataset of more than 1300 perovskite band gaps from four different levels of theory, added to an experimental perovskite band gap dataset of ~100 points, to train random forest regression (RFR), Gaussian process regression (GPR), and Sure Independence Screening and Sparsifying Operator (SISSO) regression models, with data fidelity added as one-hot encoded features. I found that RFR yields the best model with a band gap root mean square error of 0.12 eV on the total dataset and 0.15 eV on the experimental points. SISSO provided compound features and functions for direct prediction of band gap, but errors were larger than from RFR and GPR. Additional insights gained from Pearson correlation and Shapley additive explanation (SHAP) analysis of learned descriptors suggest the RFR models performed best because of (a) their focus on identifying and capturing relevant feature interactions and (b) their flexibility to represent nonlinear relationships between such interactions and the band gap. The best model was deployed for predicting experimental band gap of 37785 hypothetical compounds. Based on this, we identified 1251 stable compounds with band gap predicted to be between 1 and 2 eV at experimental accuracy, successfully narrowing the candidates to about 3% of the screened compositions.
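The multi-fidelity setup (composition-style descriptors plus one-hot fidelity flags feeding a random forest regressor, evaluated by RMSE and interpreted with SHAP) can be sketched as below. The descriptors, the synthetic targets and the column names are invented placeholders rather than the published DFT and experimental dataset.

```python
# Sketch only: multi-fidelity band-gap regression with one-hot fidelity flags + SHAP.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1400
X = pd.DataFrame({
    "mean_electronegativity": rng.normal(2.0, 0.3, n),   # placeholder descriptors
    "mean_ionic_radius": rng.normal(1.5, 0.2, n),
    "x_site_mix_fraction": rng.uniform(0, 1, n),
})
fidelity = rng.integers(0, 3, n)                          # 0=PBE, 1=HSE, 2=experiment
for i, name in enumerate(["fid_pbe", "fid_hse", "fid_exp"]):
    X[name] = (fidelity == i).astype(float)               # one-hot encoded fidelity

y = 1.5 + 0.8 * X["x_site_mix_fraction"] - 0.3 * X["fid_pbe"] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rfr = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, rfr.predict(X_te)))
print(f"band-gap RMSE: {rmse:.3f} eV")

sv = shap.TreeExplainer(rfr).shap_values(X_te)
print(pd.Series(np.abs(sv).mean(axis=0), index=X.columns).sort_values(ascending=False))
```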
9

Explainable Reinforcement Learning for Gameplay

Costa Sánchez, Àlex January 2022 (has links)
State-of-the-art Machine Learning (ML) algorithms show impressive results for a myriad of applications. However, they operate as a sort of black box: the decisions taken are not human-understandable. For ML predictions to be more widely accepted in society, especially in specific fields such as medicine or finance, there is a need for transparency and interpretability. Most of the efforts so far have focused on explaining supervised learning. This project aims to take some of these successful explainability algorithms and apply them to Reinforcement Learning (RL). To do so, we explain the actions of an RL agent playing Atari’s Breakout game, using two different explainability algorithms: Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). We successfully implement both algorithms, which yield credible and insightful explanations of the mechanics of the agent. However, we think the final presentation of the results is sub-optimal for the end user, as it is not intuitive at first sight.
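Explaining an Atari-playing agent requires a trained deep RL policy, so the sketch below uses a toy stand-in: a small classifier imitating a hand-written rule on a four-dimensional state, explained with KernelSHAP on the action probability. It shows the mechanics of applying SHAP to a policy, not the project's actual agent.

```python
# Sketch only: SHAP applied to a toy "policy" over a low-dimensional state space.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
states = rng.normal(size=(2000, 4))                           # hypothetical state vectors
actions = (states[:, 2] + 0.5 * states[:, 3] > 0).astype(int)  # scripted "expert" rule

policy = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
policy.fit(states, actions)                                   # imitate the scripted policy

prob_action_1 = lambda x: policy.predict_proba(x)[:, 1]       # single-output wrapper
explainer = shap.KernelExplainer(prob_action_1, states[:100]) # background set for KernelSHAP

s = states[:1]                                                # one state to explain
sv = explainer.shap_values(s, nsamples=200)
print("P(action=1):", prob_action_1(s)[0])
print("SHAP values per state dimension:", np.round(sv[0], 3))
```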
