1

Research on a Heart Disease Prediction Model Based on the Stacking Principle

Li, Jianeng, January 2020
In this study, the prediction model based on the Stacking principle is called the Stacking fusion model. Little evidence demonstrates that the Stacking fusion model possesses better prediction performance in the field of heart disease diagnosis than other classification models, and since it belongs to the family of ensemble learning models, which have poor interpretability, it should be used with caution in medical diagnosis. The purpose of this study is to verify whether the Stacking fusion model has better prediction performance than stand-alone machine learning models and other ensemble classifiers in the field of heart disease diagnosis, and to find ways to explain this model. The study uses experiments and quantitative analysis to evaluate the prediction performance of eight models in terms of prediction ability, algorithmic stability, false negative rate and run-time. The results show that the Stacking fusion model with a Naive Bayes classifier, XGBoost and Random Forest as the first-level learners is superior to the other classifiers in prediction ability, and its false negative rate is also outstanding. Furthermore, the Stacking fusion model is explained both through its working principle and through the SHAP framework, which reveals how the model judges the important factors influencing heart disease and the relationship between the values of these factors and the probability of disease. Overall, the two research problems in this study help reveal the prediction performance and reliability of the heart disease prediction model based on the Stacking principle, providing practical and theoretical support for hospitals using the Stacking principle in the diagnosis of heart disease.
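As a concrete illustration of the setup this abstract describes, here is a minimal sketch of a stacking ensemble with a Naive Bayes classifier, XGBoost and Random Forest as first-level learners, built with scikit-learn and xgboost. The synthetic dataset, the hyperparameters and the logistic-regression meta-learner are illustrative assumptions, not the thesis's actual configuration.

```python
# A minimal sketch of the stacking setup named in the abstract, assuming
# scikit-learn and xgboost are available. Data, hyperparameters and the
# meta-learner are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier

# Stand-in for a heart disease dataset (binary target: disease / no disease).
X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# First-level learners named in the abstract.
first_level = [
    ("nb", GaussianNB()),
    ("xgb", XGBClassifier(n_estimators=100)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
]

# The meta-learner combines the first-level predictions; stacking uses
# cross-validated predictions internally to avoid leaking training labels.
stack = StackingClassifier(estimators=first_level,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))
```

Scoring a held-out set this way corresponds to the prediction-ability comparison the abstract describes; the thesis additionally evaluates stability, false negative rate and run-time.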
2

Evaluation of Explainable AI Techniques for Interpreting Machine Learning Models

Muhammad, Al Jaber Al Shwali, January 2024
This study evaluates the explainable artificial intelligence (XAI) methods Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) by applying them to machine learning models used in cybersecurity firewall systems, focusing on multi-class classification tasks within firewall management to improve their interpretability. As today's AI systems become more advanced, widespread, and involved in critical decision-making, transparency and interpretability have become essential. Through detailed analysis and systematic experimental evaluation, this study illustrates how SHAP and LIME clarify the impact of various features on model predictions, thereby increasing trust in AI-driven decisions. The results indicate that features such as Elapsed Time (sec), Network Address Translation (NAT) source, and Destination ports markedly affect model outcomes, as demonstrated by SHAP value analysis. Additionally, LIME offers detailed insights into the local decision-making process, enhancing our understanding of model behaviour at the individual level. The research underlines the importance of XAI in reducing the gap between AI's operational mechanisms and user understanding, which is critical for debugging and for ensuring fairness, accountability, and ethical integrity in AI implementations. This makes the implications of the study substantial, providing a basis for future research into the transparency of AI systems across different sectors.
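To show how SHAP and LIME can be applied to a multi-class classifier of the kind this thesis studies, here is a minimal sketch using the shap and lime libraries. The synthetic data, the Random Forest model, and the firewall-style feature and class names are assumptions for illustration; the thesis's actual dataset and models are not reproduced here.

```python
# A minimal sketch of applying SHAP and LIME to a multi-class classifier,
# loosely mirroring a firewall action-classification task. All names and
# data below are illustrative assumptions.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["Elapsed Time (sec)", "NAT Source Port",
                 "Destination Port", "Bytes"]
class_names = ["allow", "deny", "drop", "reset-both"]  # typical firewall actions

X, y = make_classification(n_samples=500, n_features=4, n_informative=4,
                           n_redundant=0, n_classes=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: per-feature attributions for every prediction and class, from which
# global importance views (e.g. shap.summary_plot) can be drawn.
shap_values = shap.TreeExplainer(model).shap_values(X)

# LIME: a local surrogate explanation for one individual prediction.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      class_names=class_names,
                                      mode="classification")
explanation = lime_explainer.explain_instance(X[0], model.predict_proba,
                                              num_features=4)
print(explanation.as_list())
```

The split of labour matches the abstract: SHAP values support the global feature-impact analysis, while LIME explains individual decisions.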
3

Evaluation of the Perceived Usability of CySeMoL and EAAT Using a Purpose-Built Framework and ISO/IEC 25010:2011

Frost, Per, January 2013
This report describes a study aimed at uncovering flaws and finding potential improvements when the modelling tool EAAT is used in conjunction with the modelling language CySeMoL. The study was performed by developing a framework and applying it to CySeMoL and EAAT on real-life networks. The framework was developed to increase the number of flaws uncovered as well as to gather potential improvements to both EAAT and CySeMoL. The basis of the framework is a modified version of the Quality in Use model from the ISO/IEC 25010:2011 standard. To the characteristics and sub-characteristics of this modified model, different values for measuring usability were attached. The purpose of these values is to measure usability from the perspectives of both creating and interpreting models, and they are based on several different sources on how to measure usability. The complete contents of the framework, and the underlying ideas upon which it is based, are presented in this report. The framework was designed so that it can be used universally with any modelling language in conjunction with a modelling tool. Its design is also not limited to the field of computer security and computer networks, although that is the intended context of CySeMoL as well as the context described in this report; utilisation outside the intended area of usage will most likely require some modifications in order to work in a fully satisfying manner. Several flaws were uncovered regarding the usability of CySeMoL and EAAT, accompanied by several recommendations on how to improve both CySeMoL and EAAT. Because of the outline of the framework, the most severe flaws have been identified and recommendations on how to rectify these shortcomings have been suggested.
4

Combined Actuarial Neural Networks in Actuarial Rate Making

Gustafsson, Axel; Hansén, Jacob, January 2021
Insurance is built on the principle that a group of people contribute to a common pool of money which is used to cover the costs of individuals who suffer from the insured event. In a competitive market, an insurance company will only be profitable if its pricing reflects the covered risks as well as possible. This thesis investigates the recently proposed Combined Actuarial Neural Network (CANN), a model nesting the traditional Generalised Linear Model (GLM) used in insurance pricing into a Neural Network (NN). The main idea of utilising NNs for insurance pricing is to model interactions between features that the GLM is unable to capture. The CANN model is analysed in a commercial insurance setting with respect to two research questions. The first research question, RQ 1, seeks to answer whether the CANN model can outperform the underlying GLM with respect to error metrics and actuarial model evaluation tools. The second research question, RQ 2, seeks to identify existing interpretability methods that can be applied to the CANN model and to showcase how they can be applied. The results for RQ 1 show that CANN models are able to consistently outperform the GLM with respect to the chosen model evaluation tools. A literature search is conducted to answer RQ 2, identifying interpretability methods that either are applicable or are possibly applicable to the CANN model. One interpretability method is also proposed in this thesis specifically for the CANN model, using model-fitted averages on two-dimensional segments of the data. Three interpretability methods from the literature search and the one proposed in this thesis are demonstrated, illustrating how they may be applied.
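To make the nesting idea concrete, here is a minimal PyTorch sketch of the CANN structure: the fitted GLM's prediction enters the network as a fixed offset on the log scale, and the network learns a correction on top of it. The layer sizes, the Poisson-frequency reading and the zero initialisation of the correction are illustrative assumptions, not the thesis's exact model.

```python
# A minimal sketch of the CANN skip-connection structure, under the
# assumptions stated above.
import torch
import torch.nn as nn

class CANN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.Tanh())
        self.out = nn.Linear(hidden, 1)
        # Start the correction at zero so the initial model coincides
        # with the GLM it nests.
        nn.init.zeros_(self.out.weight)
        nn.init.zeros_(self.out.bias)

    def forward(self, x: torch.Tensor, log_glm_pred: torch.Tensor) -> torch.Tensor:
        # lambda(x) = exp(log lambda_GLM(x) + NN(x)): the GLM enters as an
        # offset (skip connection) that the network adjusts.
        return torch.exp(log_glm_pred + self.out(self.body(x)))

# Usage: log_glm_pred comes from a GLM fitted beforehand and is kept fixed;
# the correction network is then trained with, e.g., the Poisson deviance.
x = torch.randn(8, 5)
log_glm = torch.zeros(8, 1)                  # placeholder GLM offset
print(CANN(n_features=5)(x, log_glm).shape)  # torch.Size([8, 1])
```

Because the correction starts at zero, the CANN begins exactly at the GLM and can only be pulled away from it where the data support an interaction the GLM misses, which is the motivation the abstract gives for the approach.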