  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Interpretable Machine Learning for Insurance Risk Pricing / Förståbar Maskinlärning för Riskprissättning Inom Försäkring

Darke, Felix January 2023 (has links)
This Master's thesis set out to propose a machine learning model for predicting insurance risk at the level of an individual coverage, and to compare it with the existing models used by the project provider, Gjensidige Försäkring. The problem can be translated into a standard tabular regression task with well-defined target distributions. However, it was identified early on that, due to interpretability constraints, the set of feasible models does not contain pure black-box models such as XGBoost, LightGBM, and CatBoost, which are typical choices for tabular regression. In the report, we formulate the interpretability constraints explicitly in sharp mathematical language. We conclude that interpretability can be ensured by enforcing a particular structure on the Hilbert space over which we search for the model. Using this formalism, we consider two approaches for fitting high-performing models that maintain interpretability, and conclude that Generalized Additive Models based on gradient-boosted regression trees in general, and the Explainable Boosting Machine in particular, are promising model candidates consisting of functions within the Hilbert space of interest. The other approach considered is the basis-expansion approach currently used at the project provider. We argue that the gradient-boosted regression tree approach used by the Explainable Boosting Machine is more suitable for an automated, data-driven modelling workflow that is likely to generalize well outside the training set. Finally, we perform an empirical study on three internal datasets, comparing the Explainable Boosting Machine with the current production models. We find that the Explainable Boosting Machine systematically outperforms the current models on unseen test data. There are many potential explanations, but the main hypothesis put forward in the report is that the sequential fitting procedure enabled by the regression tree approach lets us explore a larger portion of the Hilbert space of permitted models than the basis-expansion approach does.
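The core idea behind the Explainable Boosting Machine — a Generalized Additive Model fitted by cyclic gradient boosting of one-feature learners, so each feature's effect remains an inspectable 1-D shape function — can be sketched in a few lines of numpy. This is an illustrative toy, not the InterpretML implementation; the bin count and learning rate below are arbitrary choices.

```python
import numpy as np

def fit_gam_boosted(X, y, n_rounds=50, lr=0.1, n_bins=8):
    """Cyclic gradient boosting of a GAM: each round fits a one-feature,
    piecewise-constant learner to the residuals, so every feature's effect
    remains a 1-D shape function that can be plotted and inspected."""
    n, d = X.shape
    # Quantile-bin each feature; shape functions become per-bin lookups.
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
             for j in range(d)]
    bins = np.column_stack([np.digitize(X[:, j], edges[j]) for j in range(d)])
    intercept = y.mean()
    shapes = np.zeros((d, n_bins))   # additive contribution per (feature, bin)
    pred = np.full(n, intercept)
    for _ in range(n_rounds):
        for j in range(d):           # round-robin over features
            resid = y - pred
            upd = np.zeros(n_bins)
            for b in range(n_bins):
                mask = bins[:, j] == b
                if mask.any():
                    upd[b] = resid[mask].mean()  # least-squares 1-feature fit
            shapes[j] += lr * upd
            pred += lr * upd[bins[:, j]]
    return intercept, shapes, edges

def gam_predict(X, intercept, shapes, edges):
    """Sum the learned shape functions: f(x) = b + sum_j f_j(x_j)."""
    return intercept + sum(shapes[j][np.digitize(X[:, j], edges[j])]
                           for j in range(len(edges)))
```

Because the model is a sum of univariate functions, plotting each row of `shapes` against its bin edges gives exactly the kind of interpretability the thesis requires.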
22

Un wiki sémantique pour la gestion des connaissances décisionnelles : application à la cancérologie / A Semantic Wiki for Decision Knowledge Management : Application in Oncology

Meilender, Thomas 28 June 2013 (has links)
Decision knowledge is a particular type of knowledge whose purpose is to describe decision-making processes. In oncology, this knowledge is generally gathered in clinical practice guidelines, which are published by medical organizations as the result of a complex collaborative editing process. The computerization of guidelines has led to the desire to formalize the knowledge they contain so that it can feed decision-support systems; their editing can thus be seen as a knowledge acquisition problem. In this context, this thesis proposes methods and tools to factorize the editing of guidelines and their formalization. The first contribution is the integration of Social and Semantic Web technologies into the editing process. The creation of the semantic wiki OncoLogiK implements this proposal, and feedback and methods are presented for migrating from a static Web solution. The second contribution is a solution for exploiting the decision knowledge present in the guidelines: the KCATOS framework defines a simple decision-tree language for which a translation based on Semantic Web technologies is developed. KCATOS also provides a tree editor that supports collaborative online editing. The third contribution reconciles, in a single system, the two approaches to building computerized clinical practice guidelines: the knowledge-based approach embodied by KCATOS and the document-based approach of OncoLogiK. Their joint operation yields a solution that benefits from the advantages of both. Many perspectives are presented, most of which aim at improving services to users and the expressiveness of the knowledge base. Taking the completed work and these perspectives into account, a realistic model for turning the Kasimir project into a complete decision-support system is proposed.
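A simple decision-tree language for guidelines can be illustrated by encoding a tree as nested data plus a small interpreter. The node format and the oncology example below are hypothetical stand-ins, not actual KCATOS syntax:

```python
# Hypothetical node format: an inner node is {"ask": key, "branches": {...}},
# a leaf is {"recommend": text}. Illustrative only -- not KCATOS's language.
def evaluate(tree, patient):
    """Walk the guideline tree using the patient's recorded answers."""
    node = tree
    while "recommend" not in node:
        answer = patient[node["ask"]]
        node = node["branches"][answer]
    return node["recommend"]

guideline = {
    "ask": "tumor_stage",
    "branches": {
        "I": {"recommend": "surgery"},
        "II": {"ask": "node_positive",
               "branches": {True: {"recommend": "surgery + chemotherapy"},
                            False: {"recommend": "surgery"}}},
    },
}
```

Because the tree is plain data, it can be serialized, diffed, and edited collaboratively — the property that motivates a wiki-based editor.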
23

Gha rung pa Lha'i rgyal mtshan as a Scholar and Defender of the Jo nang Tradition: a Study of His Lamp That Illuminates The Expanse of Reality with an Annotated Translation and Critical Edition of the Text

Duoji, Nyingcha 06 June 2014 (has links)
During the fourteenth century, with the rise of Dol po pa Shes rab rgyal mtshan (1292-1361), the gzhan stong philosophical tradition became a source of great controversy in Tibet. Dol po pa taught this new philosophical tradition for the first time to the wider Tibetan intellectual community. As Dol po pa's Jo nang teachings attracted an audience, many other philosophical giants of the day, such as Bu ston Rin chen grub (1290-1364), Red mda' ba Gzhon nu blo gros (1349-1412/13), and their students, composed polemical works to refute the Jo nang tradition. The Lamp That Illuminates the Expanse of Reality was composed in the midst of this controversy to defend the Jo nang point of view. In it, its author, Gha rung pa Lha'i rgyal mtshan (1319-1402/03), attempts to prove that the Jo nang philosophical tradition is the definitive teaching and the quickest path to achieving Buddhahood.
24

Designing guideline-based workflow-integrated electronic health records

Barretto, Sistine January 2005 (has links)
The recent trend in health care has been on the development and implementation of clinical guidelines to support and comply with evidence-based care. Evidence-based care is established with a view to improve the overall quality of care for patients, reduce costs, and address medico-legal issues. One of the main questions addressed by this thesis is how to support guideline-based care. It is recognised that this is better achieved by taking into consideration the provider workflow. However, workflow support remains a challenging (and hence rarely seen) accomplishment in practice, particularly in the context of chronic disease management (CDM). Our view is that guidelines can be knowledge-engineered into four main artefacts: electronic health record (EHR) content, computer-interpretable guideline (CiG), workflow and hypermedia. The next question is then how to coordinate and make use of these artefacts in a health information system (HIS). We leverage the EHR since we view this as the core component to any HIS. / PhD Doctorate
25

Étude de quelques liens entre les groupes de rang de Morley fini et les groupes algébriques linéaires / On links between finite Morley and algebraic groups

Tindzogho Ntsiri, Jules 25 June 2013 (has links)
This thesis focuses on the links that may exist between groups of finite Morley rank and linear algebraic groups. We establish some algebraic properties of K-groups; in particular, a study of linearity for these groups yields a generalization of the Levi decomposition theorem for algebraic groups. Next, in a universe of finite Morley rank, we study a definable action of SL2(K) on an SL2(K)-minimal abelian group V, where K is a definable field of positive characteristic p > 0. We show that the Morley rank rk(V) of V is even and a multiple of rk(K). Finally, we analyze under which conditions, given an algebraic group G over an algebraically closed field of nonzero characteristic, the quotient G/Z(G) is definably linear. Furthermore, we show, under certain assumptions, that the group of definable automorphisms of a simple K*-group is interpretable.
26

Computerized protocols for the supervision of mechanically ventilated patients in critical care / Protocoles automatisés pour la surveillance de patients ventilés en soins intensifs

Saihi, Kaouther 16 December 2014 (has links)
In healthcare, and especially in critical care, various clinical situations are encountered, and a huge amount of data, including data provided by equipment such as monitors and ventilators, must be interpreted for appropriate decision-making. The mismatch between this vast amount of information and limited human capacity creates unnecessary variability in clinical decisions. To cope with this problem, medical experts have defined strategies to promote evidence-based practice. This method has become the standard of practice and has shown many benefits, leading to the definition of specific guidelines or precise protocols to follow in given situations. However, the use of guidelines and protocols, especially in critical care, requires the continuous involvement of professionals at the patient's bedside, which strongly limits their application in practice. The introduction of computerized assistants for implementing such guidelines and protocols is an interesting technological solution to explore. In mechanical ventilation, where various protocols are available, there is growing acceptance that such computerization might be useful beyond research, assisting clinicians in their daily decision-making by taking over routine tasks and providing suggestions. Moreover, this domain constitutes an ideal environment because modern mechanical ventilators are powerful electronic equipment into which computerized protocols can be efficiently embedded. The objective of this thesis was to explore several aspects of the development, deployment, and effectiveness of computerized protocols, or smart controllers, in mechanical ventilation in order to accelerate their creation and adoption. For this purpose, we focused on the use and extension of SmartCare®, a software framework for automating respiratory therapy, from the modelling of clinical knowledge to the real-time execution of specific routines embedded in medical products [1]. Through a reengineering approach, drawing on practical experience in smart-controller design and an investigation of existing controllers, the objective was to define a catalogue of building blocks to facilitate the creation of new controllers; modelling these blocks with a dedicated domain ontology ensures a sound formalization. To demonstrate the effectiveness of this generic approach, we built a smart controller for oxygenation and tested it at the patient's bedside, reporting its performance compared with standard therapy.
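To illustrate what one cycle of a rule-based oxygenation controller does, here is a minimal sketch. The thresholds and step size are invented for the example; they are not SmartCare's clinical rules.

```python
def adjust_fio2(spo2, fio2, target=(92, 96), step=0.05):
    """One control cycle: nudge FiO2 toward keeping SpO2 in the target band.
    Illustrative thresholds only -- not a clinical rule base."""
    lo, hi = target
    if spo2 < lo:
        fio2 = min(1.0, fio2 + step)    # under-oxygenated: raise FiO2
    elif spo2 > hi:
        fio2 = max(0.21, fio2 - step)   # over-oxygenated: wean toward room air
    return round(fio2, 2)
```

A real controller embeds many such rules, plus timing and safety constraints, in a loop driven by the ventilator's measurements — which is what a framework like SmartCare® is meant to organize.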
27

Explainable AI techniques for sepsis diagnosis : Evaluating LIME and SHAP through a user study

Norrie, Christian January 2021 (has links)
Artificial intelligence has had a large impact on many industries and has transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major obstacle to applying these techniques in some domains is the inability of AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or legal requirements for justifying decisions are not met. This thesis reviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of user trust in the medical diagnostics field. These techniques are evaluated through a user study. The results suggest that supplementing classifications or predictions with a post-hoc visualization increases interpretability by a small margin. Further investigation using a user study survey or interviews is suggested to increase the interpretability and explainability of machine learning results.
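Of the two techniques, LIME is the simpler to sketch: sample perturbations around the instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients approximate each feature's local influence. A minimal numpy version — the kernel width and sample count are arbitrary choices, and this is not the reference `lime` package:

```python
import numpy as np

def local_explanation(predict, x, n_samples=2000, scale=0.5, seed=0):
    """LIME-style sketch: perturb around x, weight samples by proximity,
    and fit a weighted linear model to the black box's outputs."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = predict(Z)
    # Proximity kernel: nearby perturbations count more.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    A = np.column_stack([np.ones(n_samples), Z])   # intercept + features
    # Weighted least squares via the normal equations.
    coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return coef[1:]                                # per-feature local weights
```

On a model that is already linear, the surrogate recovers the true coefficients exactly, which is a useful sanity check before trusting it on a genuine black box.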
28

Applying Machine Learning to Explore Nutrients Predictive of Cardiovascular Disease Using Canadian Linked Population-Based Data / Machine Learning to Predict Cardiovascular Disease with Nutrition

Morgenstern, Jason D. January 2020 (has links)
McMaster University MASTER OF PUBLIC HEALTH (2020) Hamilton, Ontario (Health Research Methods, Evidence, and Impact) TITLE: Applying Machine Learning to Determine Nutrients Predictive of Cardiovascular Disease Using Canadian Linked Population-Based Data AUTHOR: Jason D. Morgenstern, B.Sc. (University of Guelph), M.D. (Western University) SUPERVISOR: Professor L.N. Anderson NUMBER OF PAGES: xv, 121 / The use of big data and machine learning may help to address some challenges in nutritional epidemiology. The first objective of this thesis was to explore the use of machine learning prediction models in a hypothesis-generating approach to evaluate how detailed dietary features contribute to CVD risk prediction. The second objective was to assess the predictive performance of the models. A population-based retrospective cohort study was conducted using linked Canadian data from 2004–2018. Study participants were adults aged 20 and older (n = 12,130) who completed the 2004 Canadian Community Health Survey, Cycle 2.2, Nutrition (CCHS 2.2). Statistics Canada has linked the CCHS 2.2 data to the Discharge Abstracts Database and the Canadian Vital Statistics Death database, which were used to determine cardiovascular outcomes (stroke or ischemic heart disease events or deaths). Conditional inference forests were used to develop models. Then, permutation feature importance (PFI) and accumulated local effects (ALEs) were calculated to explore the contributions of nutrients to predicted disease. Supplement use (median PFI (M) = 4.09 × 10⁻⁴, IQR = 8.25 × 10⁻⁷ – 1.11 × 10⁻³) and caffeine (M = 2.79 × 10⁻⁴, IQR = -9.11 × 10⁻⁵ – 5.86 × 10⁻⁴) had the highest median PFIs among nutrition-related features. Supplement use was associated with decreased predicted risk of CVD (accumulated local effects range (ALER) = -3.02 × 10⁻⁴ – 2.76 × 10⁻⁴) and caffeine was associated with increased predicted risk (ALER = -9.96 × 10⁻⁴ – 0.035). The best-performing model had a logarithmic loss of 0.248.
Overall, many non-linear relationships were observed, including threshold, J-shaped, and U-shaped relationships. The results of this exploratory study suggest that applying machine learning to the nutritional epidemiology of CVD, particularly using big datasets, may help elucidate risks and improve predictive models. Given the limited application thus far, work such as this could lead to improvements in public health recommendations and policy related to dietary behaviours. / Thesis / Master of Public Health (MPH) / This work explores the potential for machine learning to improve the study of diet and disease. In chapter 2, opportunities are identified for big data to make diet easier to measure. We also highlight how machine learning could find new, complex relationships between diet and disease. In chapter 3, we apply a machine learning algorithm, called conditional inference forests, to a unique Canadian dataset to predict whether people developed strokes or heart attacks. This dataset included responses to a health survey conducted in 2004, where participants' responses have been linked to administrative databases that record when people go to hospital or die, up until 2017. Using these techniques, we identified aspects of nutrition that predicted disease, including caffeine, alcohol, and supplement use. This work suggests that machine learning may be helpful in our attempts to understand the relationships between diet and health.
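Permutation feature importance, as used above, can be sketched model-agnostically: shuffle one column at a time and measure how much the model's error grows. This toy version with a plug-in prediction function is illustrative only, not the thesis's conditional-inference-forest pipeline:

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """PFI sketch: shuffling an important feature breaks its link to the
    target, so the error increase measures that feature's importance."""
    rng = np.random.default_rng(seed)
    base = metric(y, predict(X))
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # permute column j in place
            drops.append(metric(y, predict(Xp)) - base)
        imp[j] = np.mean(drops)            # mean error increase over repeats
    return imp

mse = lambda y, p: np.mean((y - p) ** 2)
```

Features the model ignores get an importance of zero, and stronger predictors get larger values — the same ordering logic behind the median PFIs reported for supplement use and caffeine.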
29

Implementing Machine Learning in the Credit Process of a Learning Organization While Maintaining Transparency Using LIME

Malmberg, Jacob, Nystad Öhman, Marcus, Hotti, Alexandra January 2018 (has links)
To determine whether a credit limit for a corporate client should be changed, a financial institution writes a PM containing text and financial data, which is then assessed by a credit committee that decides whether or not to increase the limit. To make this process more efficient, machine learning algorithms were used to classify the credit PMs instead of a committee. Since most machine learning algorithms are black boxes, the LIME framework was used to find the most important features driving the classification. The results of this study show that credit memos can be classified with high accuracy and that LIME can indicate which parts of the memo had the biggest impact. This implies that the credit process could be improved by utilizing machine learning while maintaining transparency. However, machine learning may disrupt learning processes within the organization, so the introduction of these algorithms should be weighed against the importance of preserving and developing knowledge within the organization.
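Applied to text such as a credit PM, LIME's idea reduces to perturbing the input — here, dropping one word at a time — and recording how much the classifier's score changes. A toy sketch, with a hypothetical keyword scorer standing in for the real text classifier:

```python
def word_importance(score, text):
    """LIME-for-text sketch: the score drop from deleting a word is that
    word's contribution to the classification. 'score' is any black box."""
    words = text.split()
    base = score(text)
    return {w: base - score(" ".join(words[:i] + words[i + 1:]))
            for i, w in enumerate(words)}
```

The words with the largest drops are the "parts of the memo with the biggest impact" that the study surfaces to the credit committee.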
30

On the impact of geospatial features in real estate appraisal with interpretable algorithms / Om påverkan av geospatiala variabler i fastighetsvärdering med tolkbara algoritmer

Jäger, Simon January 2021 (has links)
Real estate appraisal is the means of defining the market value of land and the property affixed to it. Many different features determine the market value of a property; for example, the distance to the nearest park or the travel time to the central business district may be significant. The use of machine learning in real estate appraisal requires both algorithm accuracy and interpretability. Related research often frames these two properties as a trade-off and suggests that more complex algorithms may outperform intrinsically interpretable ones. This study tests these claims by examining the impact of geospatial features on interpretable algorithms in real estate appraisal. The experiments use property transactions from Oslo, Norway, and add relative and global geospatial features for all properties, such as the distance to the nearest park or to the city center, using geocoding and spherical distance calculations. The experiment implements three intrinsically interpretable algorithms: a linear regression algorithm, a decision tree algorithm, and a RuleFit algorithm. For comparison, it also implements two artificial neural network algorithms as a baseline. The impact of geospatial features is measured by the change in algorithm performance, using the coefficient of determination and the mean absolute error, between models fitted without and with geospatial features. The individual impact of each geospatial feature is then measured using four feature importance measures: mean decrease impurity, input variable importance, mean decrease accuracy, and Shapley values. The statistically significant results show that geospatial features improve algorithm performance. This improvement is not unique to interpretable algorithms but occurs for all algorithms. Furthermore, the results show that interpretable algorithms are not axiomatically inferior to the tested artificial neural network algorithms. The distance to the city center and to a nearby hospital are, on average, the most important geospatial features. While important for algorithm performance, precisely what the geospatial features capture remains a question for future examination.
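The spherical distance calculations used to derive such features are commonly the haversine formula, which treats the Earth as a sphere. A minimal stdlib sketch (any specific coordinates you feed it are your own assumption):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees,
    using a spherical Earth of mean radius 6371 km."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

Computing `haversine_km(property_lat, property_lon, center_lat, center_lon)` for each transaction yields exactly the kind of "distance to the city center" feature the study adds to its models.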
