1

What do you mean? : The consequences of different stakeholders’ logics in machine learning and how disciplinary differences should be managed within an organization

Eliasson, Nina January 2022
This research paper identifies the disciplinary differences among stakeholders and their effects on cross-functional work in the context of machine learning. The study specifically focused on 1) how stakeholders with different disciplinary backgrounds interpret a search system, and 2) how multiple disciplines should be managed within an organization. This was studied through 12 interviews with stakeholders from the design disciplines, product management, data science, and machine learning engineering, followed by a focus group with a participant from each discipline. The findings were analyzed through thematic analysis and institutional logics, and the study concluded that the different logics had a high impact on the stakeholders' understanding of the search system. The research also concluded that bridging the gap between multi-disciplinary stakeholders is of high importance in the context of machine learning.
2

Explainable AI by Training Introspection

Dastkarvelayati, Rozhin, Ghafourian, Soudabeh January 2023
Deep Neural Networks (DNNs) are known as black box algorithms that lack transparency and interpretability for humans. eXplainable Artificial Intelligence (XAI) was introduced to tackle this problem. Most XAI methods are applied post-training, providing explanations of the model to clarify its predictions and inner workings for human understanding. However, there is a shortage of methods that utilize XAI during training, not only to observe the model's behavior but also to exploit this information for the benefit of the model. In our approach, we propose a novel method that leverages XAI during the training process itself. Incorporating feedback from XAI can give us insights into the important features of the input data that impact model decisions. This work explores focusing more on specific features during training, which could potentially improve model performance introspectively throughout the training phase. We analyze the stability of feature explanations during training and find that the model's attention to specific features is consistent on the MNIST dataset, although unimportant features lack stability. The OCTMNIST dataset, on the other hand, has stable explanations for important features but less consistent explanations for less significant features. Based on this observation, two types of masks, fixed and dynamic, are applied to the model's structure using XAI feedback, with minimal human intervention. These masks separate the more important features from the less important ones and set the pixels associated with less significant features to zero. The fixed mask is generated from XAI feedback after the model is fully trained, and is then applied to the output of the first convolutional layer of a new model (with the same architecture), which is trained from scratch. The dynamic mask, on the other hand, is generated from XAI feedback during training and applied while the model is still training, so it changes across epochs. Examining these two methods on both deep and shallow models, we find that both masking methods, particularly the fixed one, reduce the models' focus on the least important parts of the input data. This improves accuracy and loss in all models. As a result, this approach enhances the model's interpretability and performance by incorporating XAI into the training process.
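To make the fixed-mask mechanism concrete, here is a minimal PyTorch sketch. It is not the authors' code: the architecture, the keep fraction, and the use of plain gradient saliency as a stand-in for their XAI feedback are all assumptions. Only the overall pattern follows the abstract: derive an importance map from a trained model, threshold it into a binary mask, and zero the masked positions at the first convolutional layer's output while a fresh copy trains from scratch.

```python
# Minimal sketch (not the authors' code) of the "fixed mask" idea.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, 3, padding=1)   # 28x28 stays 28x28
        self.head = nn.Sequential(
            nn.ReLU(), nn.Flatten(), nn.Linear(8 * 28 * 28, 10)
        )

    def forward(self, x):
        return self.head(self.conv1(x))

def attribution_map(model, x, y):
    """Plain gradient saliency as a stand-in for the thesis's XAI method."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.abs().mean(dim=(0, 1))             # HxW importance map

def fixed_mask(importance, keep_fraction=0.5):
    """Binary mask keeping the most important spatial positions."""
    k = int(keep_fraction * importance.numel())
    threshold = importance.flatten().topk(k).values.min()
    return (importance >= threshold).float()         # HxW in {0, 1}

trained = SmallCNN()                                 # assume already trained
x_batch = torch.randn(32, 1, 28, 28)                 # stand-in for MNIST
y_batch = torch.randint(0, 10, (32,))
mask = fixed_mask(attribution_map(trained, x_batch, y_batch))

# Retrain a fresh model of the same architecture; a forward hook zeroes
# the conv1 outputs at positions the XAI feedback marked as unimportant.
fresh = SmallCNN()
fresh.conv1.register_forward_hook(lambda m, inp, out: out * mask)
# ... train `fresh` from scratch as usual.
```

The dynamic variant described in the abstract would recompute `mask` from fresh attributions every few epochs instead of fixing it once up front.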
3

ARTIFICIAL INTELLIGENCE APPLICATIONS FOR IDENTIFYING KEY FEATURES TO REDUCE BUILDING ENERGY CONSUMPTION

Lakmini Rangana Senarathne (16642119) 07 August 2023
The International Energy Agency (IEA) estimates that residential and commercial buildings consume 40% of global energy and emit 24% of CO2. A building's design parameters and location significantly impact its energy usage. Adjusting building parameters and features in an optimal way helps reduce energy usage and produce energy-efficient buildings. Hence, analyzing the impact of influencing factors is critical to reducing building energy usage.

Toward this, artificial intelligence techniques, namely Explainable Artificial Intelligence (XAI) and machine learning (ML), were applied to identify the key building features for reducing building energy use, by analyzing how various building features affect energy consumption. First, the relative importance of input features impacting commercial building energy usage was investigated. A parametric analysis of the impact of input variables on residential building energy usage was also performed. Furthermore, the dependencies and relationships between the design variables of residential buildings were examined. Finally, the study analyzed the impact of location features on cooling energy usage in commercial buildings.

For the energy consumption analysis, three datasets were utilized: the Commercial Building Energy Consumption Survey (CBECS) datasets gathered in 2012 and 2018, the University of California Irvine (UCI) energy efficiency dataset, and Commercial Load Data (CLD). Python and WEKA were used for the analysis. Random Forest, Linear Regression, Bayesian Networks, and Logistic Regression were used to predict energy consumption from these datasets. Moreover, statistical tests such as the Wilcoxon rank-sum test were used to assess significant differences between specific datasets. Shapash, a Python library, was used to create the feature importance graphs.

The results indicated that cooling degree days are the most important feature in predicting cooling load, with contribution values of 34.29% (2018) and 19.68% (2012). Analyzing the impact of building parameters on energy usage indicated that a 50% reduction in overall height reduces the heating load by 64.56% and the cooling load by 57.47%. The Wilcoxon rank-sum test also indicated that the location of the building impacts energy consumption at the 0.05 significance level. The proposed analysis is beneficial for real-world applications and energy-efficient building construction.
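As an illustration of the kind of analysis described above, here is a minimal sketch using scikit-learn and SciPy. The file name, column names, and the choice of a random forest are hypothetical stand-ins; the thesis's actual preprocessing of the CBECS/UCI/CLD data is not reproduced.

```python
# Minimal sketch, assuming hypothetical column names: a random forest
# predicts cooling load, its feature importances rank the inputs, and a
# Wilcoxon rank-sum test compares energy use between two locations.
import pandas as pd
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("cbecs_2018.csv")                  # placeholder file name
features = ["cooling_degree_days", "floor_area",
            "overall_height", "glazing_area"]       # hypothetical columns
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["cooling_load"], random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2:", model.score(X_test, y_test))

# Rank features by importance, analogous to Shapash's contribution plots.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.1%}")

# Test whether energy use differs significantly between two location
# groups at the 0.05 level, as the rank-sum test in the study does.
loc_a = df.loc[df["climate_zone"] == "hot", "cooling_load"]
loc_b = df.loc[df["climate_zone"] == "cold", "cooling_load"]
stat, p = ranksums(loc_a, loc_b)
print(f"Wilcoxon rank-sum p-value: {p:.4f}")
```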
4

Exploring attribution methods explaining atrial fibrillation predictions from sinus ECGs : Attributions in Scale, Time and Frequency

Sörberg, Svante January 2021
Deep Learning models are ubiquitous in machine learning. They offer state-of-the-art performance on tasks ranging from natural language processing to image classification. The drawback of these complex models is their black box nature. It is difficult for the end-user to understand how a model arrives at its prediction from the input. This is especially pertinent in domains such as medicine, where being able to trust a model is paramount. In this thesis, ways of explaining a model predicting paroxysmal atrial fibrillation from sinus electrocardiogram (ECG) data are explored. Building on the concept of feature attributions, the problem is approached from three distinct perspectives: time, scale, and frequency. Specifically, one method based on the Integrated Gradients framework and one method based on Shapley values are used. By perturbing the data, retraining the model, and evaluating the retrained model on the perturbed data, the degree of correspondence between the attributions and the meaningful information in the data is evaluated. Results indicate that the attributions in scale and frequency are somewhat consistent with the meaningful information in the data, while the attributions in time are not. The conclusion drawn from the results is that the task of predicting atrial fibrillation for the model in question becomes easier as the level of scale is increased slightly, and that high-frequency information is either not meaningful for the task of predicting atrial fibrillation, or that if it is, the model is unable to learn from it.
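For reference, a minimal sketch of the Integrated Gradients computation for a 1-D ECG classifier follows. The zero baseline, the step count, and the model interface (the names ecg_model and the class-index target are hypothetical) are assumptions; the thesis's scale- and frequency-domain variants are not reproduced here.

```python
# Minimal sketch of Integrated Gradients: attributions are the
# input-minus-baseline times the gradient of the target score,
# averaged along the straight path from a zero baseline to the input.
import torch

def integrated_gradients(model, x, target, steps=50):
    """x: (1, channels, length) ECG tensor; target: class index."""
    baseline = torch.zeros_like(x)                 # common baseline choice
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0, 1, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[0, target]
        score.backward()
        total_grad += point.grad
    return (x - baseline) * total_grad / steps     # attribution per sample

# Usage (hypothetical names): attr = integrated_gradients(ecg_model, ecg, target=1)
# Large |attr| at a time step means that part of the ECG drove the
# atrial-fibrillation prediction; the thesis extends this idea to scale
# and frequency representations of the signal.
```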
5

Explainable Artificial Intelligence for Radio Resource Management Systems : A diverse feature importance approach

Marcu, Alexandru-Daniel January 2022
The field of wireless communications is arguably one of the most rapidly developing technological fields. With each new advancement, the complexity of wireless systems can grow significantly. This phenomenon is most visible in mobile communications, where the current 5G and 6G radio access networks (RANs) have reached unprecedented complexity levels to satisfy diverse and increasing demands. In such increasingly complex environments, managing resources becomes more and more challenging. Thus, experts have employed performant artificial intelligence (AI) techniques to aid radio resource management (RRM) decisions. However, these AI techniques are often difficult for humans to understand, and they may receive unimportant inputs that unnecessarily increase their complexity. In this work, we propose an explainability pipeline meant to increase humans' understanding of AI models for RRM, as well as to reduce the complexity of these models, without loss of performance. To achieve this, the pipeline generates diverse feature importance explanations of the models with the help of three explainable artificial intelligence (XAI) methods: Kernel SHAP, CERTIFAI, and Anchors, and performs an importance-based feature selection using one of three different strategies. In the case of Anchors, we formulate and utilize a new way of computing feature importance scores, since no current publication in the XAI literature suggests a way to do this. Finally, we applied the proposed pipeline to a reinforcement learning (RL)-based RRM system. Our results show that we could reduce the complexity of the RL model by between ∼27.5% and ∼62.5% according to different metrics, without loss of performance. Moreover, we showed that the explanations produced by our pipeline can be used to answer some of the most common XAI questions about our RL model, thus increasing its understandability. Lastly, we achieved an unprecedented result showing that our RL agent could be completely replaced with Anchors rules when making RRM decisions, without a significant loss of performance, but with a considerable gain in understandability.
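As a rough illustration of the importance-based feature selection such a pipeline performs, here is a minimal sketch built on the shap library. The gradient-boosting model, the synthetic data, the background sample size, and the "keep the top half" rule are stand-ins for the RRM model and the three selection strategies described above.

```python
# Minimal sketch: score features with Kernel SHAP, drop the least
# important ones, and retrain on the reduced inputs.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=8, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Kernel SHAP: model-agnostic Shapley value estimates against a small
# background sample (the background choice here is an assumption).
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:100])

# Global importance = mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
keep = np.argsort(importance)[-4:]                  # keep the top half
print("kept feature indices:", sorted(keep.tolist()))

# Retrain on the reduced inputs and check that performance is preserved,
# mirroring the "complexity reduction without loss" result above.
reduced = GradientBoostingRegressor(random_state=0).fit(X[:, keep], y)
print("full R^2:", model.score(X, y),
      "reduced R^2:", reduced.score(X[:, keep], y))
```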
