1

On Effectively Creating Ensembles of Classifiers : Studies on Creation Strategies, Diversity and Predicting with Confidence

Löfström, Tuwe January 2015
An ensemble is a composite model, combining the predictions from several other models. Ensembles are known to be more accurate than single models. Diversity has been identified as an important factor in explaining the success of ensembles. In the context of classification, diversity has not been well defined, and several heuristic diversity measures have been proposed. The focus of this thesis is on how to create effective ensembles in the context of classification. Even though several effective ensemble algorithms have been proposed, there are still open questions regarding the role diversity plays when creating an effective ensemble. Open questions relating to creating effective ensembles that are addressed include: what to optimize when searching for a sub-ensemble, built from a subset of the original ensemble's models, that is more effective than the original ensemble; how effective it is to search for such a sub-ensemble; and how the neural networks used in an ensemble should be trained for the ensemble to be effective. The contributions of the thesis include several studies evaluating different ways to optimize which sub-ensemble would be most effective, including a novel approach using combinations of performance and diversity measures. The initial studies presented in the thesis eventually led to an investigation of the underlying assumption motivating the search for more effective sub-ensembles. The evaluation concluded that even if several more effective sub-ensembles exist, it may not be possible to identify which sub-ensembles would be the most effective using any of the evaluated optimization measures. An investigation of the most effective ways to train neural networks to be used in ensembles was also performed. The conclusion is that effective ensembles can be obtained by training neural networks in a number of different ways, and that either high average individual accuracy or high diversity can yield effective ensembles. Several findings regarding diversity and effective ensembles presented in the literature in recent years are also discussed and related to the results of the included studies. When creating confidence-based predictors using conformal prediction, there are several open questions regarding how data should be utilized effectively when using ensembles. Open questions related to predicting with confidence that are addressed include: how data can be utilized effectively to achieve more efficient confidence-based predictions using ensembles; and how problems with class imbalance affect the confidence-based predictions when using conformal prediction. The contributions include two studies: the first shows that using out-of-bag estimates with bagging ensembles results in more effective conformal predictors, and the second shows that a conformal predictor conditioned on the class labels, which avoids a strong bias towards the majority class, is more effective on problems with class imbalance. The research method used is mainly inspired by the design science paradigm, which is manifested by the development and evaluation of artifacts.
/ An ensemble is a composite model that combines the predictions from several different models. It is well known that ensembles are more accurate than individual models. Diversity has been identified as an important factor in explaining why ensembles are so successful. Until recently, diversity had not been unambiguously defined for classification, which has resulted in many heuristic diversity measures being proposed. This thesis focuses on how classification ensembles can be created in an effective way. The research method is mainly inspired by the design science paradigm, which is well suited to the development and evaluation of IT artifacts. Many successful ensemble algorithms already exist, but there are still open questions about the role diversity plays in creating effective ensemble models. Some of the questions concerning diversity that are addressed in the thesis include: what should be optimized when searching among the available models for a subset that forms an ensemble better than the ensemble consisting of all models; how well the strategy of searching for such sub-ensembles works; and how neural networks should be trained to work as well as possible in an ensemble. The contributions of the thesis include several studies evaluating different ways of finding sub-ensembles that are better than using the whole ensemble, including a novel approach that uses a combination of diversity and performance measures. The results of the initial studies led to an investigation of the underlying assumption motivating the search for sub-ensembles. The conclusion was that, even though several sub-ensembles were better than the whole ensemble, there was no way to identify from the available data which the better sub-ensembles were. Furthermore, how neural networks should be trained to cooperate as well as possible in an ensemble was investigated. The conclusion from that investigation is that well-performing ensembles can be created by having many models that are either good on average or different from each other (i.e., diverse). Insights presented in the literature in recent years are discussed and related to the results of the included studies. When creating confidence-based models using the conformal prediction framework, there are several questions about how data should best be utilized when using ensembles. The questions related to confidence-based prediction include: how data can best be utilized to achieve more efficient confidence-based predictions with ensembles; and how imbalanced data affects the confidence-based predictions when using conformal prediction. The contributions include two studies: the results of the first show that the most effective way to use data with a bagging ensemble is to use so-called out-of-bag estimates, and the results of the second show that imbalanced data needs to be handled with a class-conditional confidence-based model to avoid a strong tendency to favour the majority class. / At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 8: In press. / Dataanalys för detektion av läkemedelseffekter (DADEL)
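To make the two conformal prediction contributions above concrete, the sketch below shows one way out-of-bag estimates from a bagging ensemble can serve as calibration scores, combined with class-conditional (Mondrian) p-values to counter a majority-class bias. It is a minimal illustration using scikit-learn, not the implementation evaluated in the thesis; the dataset and all parameter choices are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Imbalanced toy data standing in for a real classification problem.
X, y = make_classification(n_samples=600, weights=[0.85, 0.15], random_state=0)
X_train, y_train, X_test = X[:500], y[:500], X[500:]

# Bagging ensemble; oob_score=True exposes out-of-bag class probabilities.
model = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
model.fit(X_train, y_train)

# Nonconformity of each training example: 1 - OOB probability of its true class.
oob_prob = model.oob_decision_function_
cal_scores = 1.0 - oob_prob[np.arange(len(y_train)), y_train]

def class_conditional_p_values(x):
    """p-value per candidate label, compared only against calibration
    examples with that label (class-conditional / Mondrian)."""
    prob = model.predict_proba(x.reshape(1, -1))[0]
    p = np.empty(len(model.classes_))
    for i, label in enumerate(model.classes_):
        same_class = cal_scores[y_train == label]
        p[i] = (np.sum(same_class >= 1.0 - prob[i]) + 1) / (len(same_class) + 1)
    return p

# Prediction set at 90% confidence: keep every label whose p-value exceeds 0.1.
p_vals = class_conditional_p_values(X_test[0])
print("p-values:", dict(zip(model.classes_.tolist(), np.round(p_vals, 3))))
print("prediction set:", [c for c, p in zip(model.classes_, p_vals) if p > 0.1])
```

Under the usual exchangeability assumption, the error rate of such prediction sets is bounded by the chosen significance level per class, which is the property the class-conditional study relies on.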
2

Aggregating predictions using Non-Disclosed Conformal Prediction

Carrión Brännström, Robin January 2019
When data are stored in different locations and pooling of the data is not allowed, there is a loss of information when doing predictive modeling. In this thesis, a new method called Non-Disclosed Conformal Prediction (NDCP) is adapted to a regression setting, such that predictions and prediction intervals can be aggregated from different data sources without exchanging any data. The method is built upon the conformal prediction framework, which produces predictions with confidence measures on top of any machine learning method. The method is evaluated on regression benchmark data sets using Support Vector Regression, with different sizes and settings for the data sources, to simulate real-life scenarios. The results show that the method produces conservatively valid prediction intervals even though, in some settings, the individual data sources do not manage to create valid intervals. NDCP also creates more stable intervals than the individual data sources. Thanks to its straightforward implementation, data owners who cannot share data but would like to contribute to predictive modeling would benefit from using this method.
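As a rough illustration of the setting, the sketch below lets each data source fit and calibrate its own split-conformal regressor locally, so that only prediction intervals, never raw data, leave a source. The median-of-endpoints aggregation and all other details are illustrative assumptions rather than the NDCP procedure itself.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def local_interval(X_source, y_source, x_new, alpha=0.1):
    """Split-conformal interval computed entirely inside one data source."""
    X_tr, X_cal, y_tr, y_cal = train_test_split(X_source, y_source,
                                                test_size=0.3, random_state=0)
    model = SVR().fit(X_tr, y_tr)
    residuals = np.sort(np.abs(y_cal - model.predict(X_cal)))
    k = int(np.ceil((len(residuals) + 1) * (1 - alpha)))
    q = residuals[min(k, len(residuals)) - 1]
    y_hat = model.predict(x_new.reshape(1, -1))[0]
    return y_hat - q, y_hat + q

X, y = make_regression(n_samples=900, n_features=10, noise=10.0, random_state=1)
x_new, X, y = X[-1], X[:-1], y[:-1]

# Three "sites" holding disjoint parts of the data; only intervals leave a site.
sources = np.array_split(np.arange(len(y)), 3)
intervals = [local_interval(X[idx], y[idx], x_new) for idx in sources]

# Illustrative aggregation: median of the interval endpoints across sites.
lower = np.median([lo for lo, _ in intervals])
upper = np.median([hi for _, hi in intervals])
print("per-source intervals:", [(round(lo, 1), round(hi, 1)) for lo, hi in intervals])
print("aggregated interval:", (round(lower, 1), round(upper, 1)))
```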
3

General image classifier for fluorescence microscopy using transfer learning

Öhrn, Håkan January 2019
Modern microscopy and automation technologies enable experiments that can produce millions of images each day. The valuable information is often sparse and requires clever methods to find useful data. In this thesis, a general image classification tool for fluorescence microscopy images was developed using features extracted from a general Convolutional Neural Network (CNN) trained on natural images. The user selects interesting regions in a microscopy image and then, through an iterative active learning process, continually builds a training data set to train a classifier that finds similar regions in other images. The classifier uses conformal prediction to find samples that, if labeled, would most improve the learned model, as well as to specify the frequency of errors the classifier commits. The results show that with an appropriate choice of significance level, one can reach a high confidence in the true positives. The active learning approach increased precision, with the downside of finding fewer examples.
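The sketch below illustrates the kind of conformal-prediction-driven sample selection described above, assuming the CNN features have already been extracted; the selection rule (ranking pool samples by low credibility and low confidence) is a plausible stand-in, not necessarily the exact criterion used in the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for CNN feature vectors of user-selected regions.
X, y = make_classification(n_samples=400, n_features=64, n_informative=20,
                           n_classes=3, random_state=0)
X_lab, X_pool, y_lab, _ = train_test_split(X, y, test_size=0.5, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X_lab, y_lab, test_size=0.3,
                                            random_state=0)

clf = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

def p_values(x):
    prob = clf.predict_proba(x.reshape(1, -1))[0]
    return np.array([(np.sum(cal_scores >= 1.0 - p) + 1) / (len(cal_scores) + 1)
                     for p in prob])

# Credibility = largest p-value; confidence = 1 - second largest p-value.
# Regions with low credibility and low confidence are good labelling candidates.
stats = []
for i, x in enumerate(X_pool):
    p = np.sort(p_values(x))[::-1]
    stats.append((i, p[0], 1.0 - p[1]))

query_order = sorted(stats, key=lambda t: t[1] + t[2])  # most uncertain first
print("next regions to label:", [i for i, _, _ in query_order[:5]])
```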
4

Normalized conformal prediction for time series data

Kowalczewski, Jakub January 2019
Every forecast is valid only if proper prediction intervals are stated. Currently, models focus mainly on the point forecast and neglect prediction intervals. Typically, a single estimate of the model's error is applied to every prediction in the same way, even though every case is different and a different error measure should be applied to each instance. One state-of-the-art technique that can address this behaviour is conformal prediction, with its variant normalized conformal prediction. In this thesis, we apply this technique to time series problems. Special focus is placed on the technique of estimating the difficulty of each instance using the errors of neighbouring instances. The thesis describes the entire process of adapting time series data to the normalized conformal prediction framework, and a comparison with other techniques is made. The final results do not show that the aforementioned method is superior to existing techniques; in different setups, different methods performed best. However, it is similar in terms of performance. It is therefore an interesting addition to the data science forecasting toolkit. / Every forecast is valid only if proper prediction intervals are stated. Currently, models focus mainly on the point forecast and neglect prediction intervals. An estimate of the model's error is made and applied to every prediction in the same way, even though we can see that every case is different and a different error measure should be applied to each instance. One of the state-of-the-art techniques that can address this behaviour is conformal prediction, with its variant normalized conformal prediction. In this thesis, we apply this technique to time series problems. Special focus is placed on examining the technique of estimating the difficulty of each instance using the errors of neighbouring instances. The thesis describes the entire process of adapting time series data to the normalized conformal prediction framework, and a comparison with other techniques is made. The final results do not show that the aforementioned method is superior to existing techniques; in different setups, different methods performed best. However, it is similar in terms of performance. It is therefore an interesting addition to the data science forecasting toolkit.
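A minimal sketch of the normalization idea described above follows: the difficulty of an instance is estimated as the mean absolute calibration error of its nearest neighbours, and the conformal interval is scaled accordingly. The lag-feature construction, the underlying model and the beta smoothing constant are illustrative assumptions, not the thesis's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
series = np.sin(np.arange(1200) / 20.0) + rng.normal(0, 0.1, 1200)

# Turn the series into a supervised problem with 10 lagged values as features.
lags = 10
X = np.array([series[i - lags:i] for i in range(lags, len(series))])
y = series[lags:]
X_tr, y_tr = X[:800], y[:800]
X_cal, y_cal = X[800:1100], y[800:1100]
x_new = X[1100]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
cal_err = np.abs(y_cal - model.predict(X_cal))

# Difficulty of an instance = mean absolute error of its k nearest calibration
# neighbours (for simplicity a calibration point counts as its own neighbour);
# normalized nonconformity = error / (difficulty + beta).
k, beta = 15, 0.01
nn = NearestNeighbors(n_neighbors=k).fit(X_cal)
_, idx = nn.kneighbors(X_cal)
sigma_cal = cal_err[idx].mean(axis=1)
scores = cal_err / (sigma_cal + beta)

alpha = 0.1
q_rank = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[min(q_rank, len(scores)) - 1]

_, idx_new = nn.kneighbors(x_new.reshape(1, -1))
sigma_new = cal_err[idx_new[0]].mean()
y_hat = model.predict(x_new.reshape(1, -1))[0]
print("interval:", (y_hat - q * (sigma_new + beta), y_hat + q * (sigma_new + beta)))
```

The effect is that instances whose neighbours were predicted poorly receive wider intervals, while easy instances receive tighter ones, instead of one global interval width.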
5

Anomaly detection in trajectory data for surveillance applications

Laxhammar, Rikard January 2011
Abnormal behaviour may indicate important objects and events in a wide variety of domains. One such domain is intelligence and surveillance, where there is a clear trend towards more and more advanced sensor systems producing huge amounts of trajectory data from moving objects, such as people, vehicles, vessels and aircraft. In the maritime domain, for example, abnormal vessel behaviour, such as unexpected stops, deviations from standard routes, speeding, traffic direction violations etc., may indicate threats and dangers related to smuggling, sea drunkenness, collisions, grounding, hijacking, piracy etc. Timely detection of these relatively infrequent events, which is critical for enabling proactive measures, requires constant analysis of all trajectories; this is typically a great challenge to human analysts due to information overload, fatigue and inattention. In the Baltic Sea, for example, there are typically 3000–4000 commercial vessels present that are monitored by only a few human analysts. Thus, there is a need for automated detection of abnormal trajectory patterns. In this thesis, we investigate algorithms appropriate for automated detection of anomalous trajectories in surveillance applications. We identify and discuss some key theoretical properties of such algorithms, which have not been fully addressed in previous work: sequential anomaly detection in incomplete trajectories, continuous learning based on new data requiring no or limited human feedback, a minimum of parameters and a low and well-calibrated false alarm rate. A number of algorithms based on statistical methods and nearest neighbour methods are proposed that address some or all of these key properties. In particular, a novel algorithm known as the Similarity-based Nearest Neighbour Conformal Anomaly Detector (SNN-CAD) is proposed. This algorithm is based on the theory of conformal prediction and is unique in the sense that it addresses all of the key properties above. The proposed algorithms are evaluated on real world trajectory data sets, including vessel traffic data, which have been complemented with simulated anomalous data. The experiments demonstrate the type of anomalous behaviour that can be detected at a low overall alarm rate. Quantitative results for learning and classification performance of the algorithms are compared. In particular, results from reproduced experiments on public data sets show that SNN-CAD, combined with Hausdorff distance for measuring dissimilarity between trajectories, achieves excellent classification performance without any parameter tuning. It is concluded that SNN-CAD, due to its general and parameter-light design, is applicable in virtually any anomaly detection application. Directions for future work include investigating sensitivity to noisy data, and investigating long-term learning strategies, which address issues related to changing behaviour patterns and increasing size and complexity of training data.
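The sketch below illustrates the core mechanism discussed above: a trajectory's nonconformity is its summed Hausdorff distance to the k most similar trajectories seen so far, and an alarm is raised when the conformal p-value falls below the threshold epsilon, which is directly interpretable as an expected alarm rate. It is a simplified, offline toy version, not the SNN-CAD algorithm as implemented in the thesis.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two 2-D point sequences.
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def knn_score(traj, others, k=3):
    # Nonconformity: sum of distances to the k most similar trajectories.
    d = sorted(hausdorff(traj, o) for o in others)
    return sum(d[:k])

rng = np.random.default_rng(0)

def normal_trajectory():
    t = np.linspace(0, 1, 50)
    return np.column_stack([t, t + rng.normal(0, 0.02, 50)])  # roughly straight

training = [normal_trajectory() for _ in range(40)]
baseline = [knn_score(tr, [o for o in training if o is not tr]) for tr in training]

def p_value(new_traj):
    s = knn_score(new_traj, training)
    return (sum(b >= s for b in baseline) + 1) / (len(baseline) + 1)

epsilon = 0.05  # anomaly threshold, interpretable as the expected alarm rate
loop = np.column_stack([np.cos(np.linspace(0, 6, 50)), np.sin(np.linspace(0, 6, 50))])
for name, traj in [("routine", normal_trajectory()), ("deviating", loop)]:
    p = p_value(traj)
    print(name, round(p, 3), "ALARM" if p < epsilon else "ok")
```

The key property shown is that the only tuning knob is epsilon, whose meaning does not depend on the distance measure or the data.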
6

Computational Modelling in Drug Discovery : Application of Structure-Based Drug Design, Conformal Prediction and Evaluation of Virtual Screening

Lindh, Martin January 2017
Structure-based drug design and virtual screening are areas of computational medicinal chemistry that use 3D models of target proteins. It is important to develop better methods in this field with the aim of increasing the speed and quality of early-stage drug discovery. The first part of this thesis focuses on the application of structure-based drug design in the search for inhibitors of the protein 1-deoxy-D-xylulose-5-phosphate reductoisomerase (DXR), one of the enzymes in the DOXP/MEP synthetic pathway. This pathway is found in many bacteria (such as Mycobacterium tuberculosis) and in the parasite Plasmodium falciparum. In order to evaluate and improve current virtual screening methods, a benchmarking data set was constructed using publicly available high-throughput screening data. The exercise highlighted a number of problems with current data sets as well as with the use of publicly available high-throughput screening data. We hope this work will help guide further development of well-designed benchmarking data sets for virtual screening methods. Conformal prediction is a new method in the computer-aided drug design toolbox that gives a prediction range at a specified level of confidence for each compound. To demonstrate the versatility and applicability of this method, we derived models of skin permeability using two different machine learning methods: random forest and support vector machines.
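To illustrate the conformal prediction point above, the sketch below wraps the same split-conformal step around the two learners mentioned (random forest and support vector machines), so that each compound receives a prediction range at the chosen confidence level; the synthetic descriptors and all settings are placeholders for the real skin-permeability data.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=30, noise=15.0, random_state=2)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=2)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.25,
                                                random_state=2)

def conformal_intervals(model, confidence=0.9):
    """Fit the learner, calibrate on held-out residuals, return test intervals."""
    model.fit(X_tr, y_tr)
    residuals = np.sort(np.abs(y_cal - model.predict(X_cal)))
    k = int(np.ceil((len(residuals) + 1) * confidence))
    q = residuals[min(k, len(residuals)) - 1]
    preds = model.predict(X_test)
    return preds - q, preds + q

# Same conformal wrapper, two different underlying learners.
for name, learner in [("random forest", RandomForestRegressor(random_state=2)),
                      ("SVM", SVR())]:
    lo, hi = conformal_intervals(learner)
    coverage = np.mean((y_test >= lo) & (y_test <= hi))
    print(f"{name}: mean width {np.mean(hi - lo):.1f}, coverage {coverage:.2f}")
```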
7

Conformal anomaly detection : Detecting abnormal trajectories in surveillance applications

Laxhammar, Rikard January 2014
Human operators of modern surveillance systems are confronted with an increasing amount of trajectory data from moving objects, such as people, vehicles, vessels, and aircraft. A large majority of these trajectories reflect routine traffic and are uninteresting. Nevertheless, some objects are engaged in dangerous, illegal or otherwise interesting activities, which may manifest themselves as unusual and abnormal trajectories. These anomalous trajectories can be difficult to detect by human operators due to cognitive limitations. In this thesis, we study algorithms for the automated detection of anomalous trajectories in surveillance applications. The main results and contributions of the thesis are two-fold. Firstly, we propose and discuss a novel approach for anomaly detection, called conformal anomaly detection, which is based on conformal prediction (Vovk et al.). In particular, we propose two general algorithms for anomaly detection: the conformal anomaly detector (CAD) and the computationally more efficient inductive conformal anomaly detector (ICAD). A key property of conformal anomaly detection, in contrast to previous methods, is that it provides a well-founded approach for the tuning of the anomaly threshold that can be directly related to the expected or desired alarm rate. Secondly, we propose and analyse two parameter-light algorithms for unsupervised online learning and sequential detection of anomalous trajectories based on CAD and ICAD: the sequential Hausdorff nearest neighbours conformal anomaly detector (SHNN-CAD) and the sequential sub-trajectory local outlier inductive conformal anomaly detector (SSTLO-ICAD), which is more sensitive to local anomalous sub-trajectories. We implement the proposed algorithms and investigate their classification performance on a number of real and synthetic datasets from the video and maritime surveillance domains. The results show that SHNN-CAD achieves competitive classification performance with minimum parameter tuning on video trajectories. Moreover, we demonstrate that SSTLO-ICAD is able to accurately discriminate realistic anomalous vessel trajectories from normal background traffic.
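As a rough sketch of the inductive idea described above, the example below computes a fixed set of calibration nonconformity scores once and then scores each new observation against them, which is what makes the anomaly threshold epsilon directly interpretable as an expected alarm rate. Plain feature vectors stand in for trajectories, and the k-nearest-neighbour distance score is an illustrative choice rather than the ICAD or SSTLO-ICAD nonconformity measures.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(1000, 4))
proper_train, calibration = normal[:700], normal[700:]

# Nonconformity: mean distance to the k nearest proper-training examples.
k = 10
nn = NearestNeighbors(n_neighbors=k).fit(proper_train)

def score(X):
    return nn.kneighbors(X)[0].mean(axis=1)

cal_scores = np.sort(score(calibration))

def detect(x, epsilon=0.05):
    s = score(x.reshape(1, -1))[0]
    p = (np.sum(cal_scores >= s) + 1) / (len(cal_scores) + 1)
    return p < epsilon, p

# Under exchangeability, roughly a fraction epsilon of normal data is flagged.
flags = [detect(x)[0] for x in rng.normal(0, 1, size=(500, 4))]
print("empirical alarm rate on normal data:", np.mean(flags))
print("clearly anomalous point flagged:", detect(np.full(4, 6.0))[0])
```

Because the calibration scores are fixed, each new observation costs only one nonconformity computation and a rank lookup, which is the efficiency gain of the inductive variant over re-scoring the whole training set.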
8

Improving ligand-based modelling by combining various features

Omran, Abir January 2021
Background: In drug discovery, morphological profiles can be used to identify and establish a drug's biological activity or mechanism of action. Quantitative structure-activity relationship (QSAR) modelling is an approach that uses chemical structures to predict properties, e.g., biological activity. Support Vector Machine (SVM) is a machine learning algorithm that can be used for classification. Confidence measures such as conformal prediction can be implemented on top of machine learning algorithms. There are several methods that can be applied to improve a model's predictive performance. Aim: The aim of this project is to evaluate whether ligand-based modelling can be improved by combining features from chemical structures, target predictions and morphological profiles. Method: The project was divided into three experiments. In experiment 1, five bioassay datasets were used. In experiments 2 and 3, a Cell Painting dataset was used that contained morphological profiles from three different classes of kinase inhibitors, and the classes were used as endpoints. Support vector machine (liblinear) models were built in all three experiments. A significance level of 0.2 was set to calculate the efficiency. The mean observed fuzziness and the efficiency were used as measures of model performance. Results: Similar trends were observed for all datasets in experiment 1. Signatures+CDK13+TP, the most complex model, obtained the lowest mean observed fuzziness in four out of five cases. With a confidence level of 0.8, TP+Signatures obtained the highest efficiency. Signatures+Morphological Profiles+TP obtained the lowest mean observed fuzziness in experiments 2 and 3. Signatures obtained the most correct single-label predictions at a confidence of 80%. Discussion: Fewer correct single-label predictions were observed for the active class than for the inactive class. This could be because they are harder to predict. The morphological profiles did not improve the models' predictive performance compared to Signatures. This could be due to the limited information obtained from the dataset. Conclusion: A combination of features from chemical structures and target predictions improved ligand-based modelling compared to models built on only one of the feature types. The combination of features from chemical structures and morphological profiles did not improve the ligand-based models compared to the model built only on chemical structures. Adding features from target predictions to a model built with features from chemical structures and morphological profiles decreased the mean observed fuzziness.
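Since the abstract leans on two conformal evaluation measures, the short example below computes them from a made-up p-value matrix, using their common definitions: observed fuzziness as the average sum of p-values assigned to the incorrect labels, and efficiency (at the 0.2 significance level used in the experiments) taken here as the fraction of prediction sets containing exactly one label. The numbers are purely illustrative.

```python
import numpy as np

# p_values[i, j]: conformal p-value of label j for test object i (made up).
p_values = np.array([[0.72, 0.08, 0.03],
                     [0.15, 0.60, 0.22],
                     [0.05, 0.09, 0.55],
                     [0.40, 0.35, 0.02]])
y_true = np.array([0, 1, 2, 0])
significance = 0.2  # i.e. 80% confidence, as in the experiments

n, _ = p_values.shape
true_mask = np.zeros_like(p_values, dtype=bool)
true_mask[np.arange(n), y_true] = True

# Observed fuzziness: mean sum of p-values given to the wrong labels (lower is better).
observed_fuzziness = p_values[~true_mask].reshape(n, -1).sum(axis=1).mean()

# Prediction sets: labels whose p-value exceeds the significance level.
prediction_sets = p_values > significance
set_sizes = prediction_sets.sum(axis=1)
efficiency = np.mean(set_sizes == 1)                       # share of singletons
correct_singletons = np.mean((set_sizes == 1) &
                             prediction_sets[np.arange(n), y_true])

print(f"mean observed fuzziness: {observed_fuzziness:.3f}")
print(f"efficiency (singleton rate): {efficiency:.2f}")
print(f"correct single-label predictions: {correct_singletons:.2f}")
```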
9

Conformal survival predictions at a user-controlled time point : The introduction of time point specialized Conformal Random Survival Forests

van Miltenburg, Jelle January 2018
The goal of this research is to expand the field of conformal predictions using Random Survival Forests. The standard Conformal Random Survival Forest can predict, with a fixed certainty, whether something will survive up until a certain time point. This research is the first to show that there is little practical use for the standard Conformal Random Survival Forest algorithm. It turns out that the confidence guarantees of the conformal prediction framework are violated if the standard algorithm makes predictions for a user-controlled fixed time point. To solve this challenge, this thesis proposes two algorithms that specialize in conformal predictions for a fixed point in time: a Fixed Time algorithm and a Hybrid algorithm. Both algorithms transform the survival data that is used by the split evaluation metric in the Random Survival Forest algorithm. The algorithms are evaluated and compared along six different set-prediction evaluation criteria. The Hybrid algorithm outperforms the Fixed Time algorithm in prediction performance in most cases. Furthermore, the Hybrid algorithm is more stable than the Fixed Time algorithm when the prediction task extends to various time points. The Hybrid Conformal Random Survival Forest should thus be considered by anyone who wants to make conformal survival predictions at user-controlled time points. / The goal of this thesis is to expand the field of conformal prediction using Random Survival Forests. The standard version of the Conformal Random Survival Forest can predict with a certain confidence whether something will survive up until a certain time point. This thesis is the first to show that there is little practical use for the standard Conformal Random Survival Forest algorithm. It turns out that the confidence guarantees of the conformal prediction framework are violated if the standard algorithm makes predictions for a user-controlled fixed time point. To solve this challenge, this thesis proposes two algorithms that specialize in conformal prediction for a fixed time point: a Fixed Time algorithm and a Hybrid algorithm. Both algorithms transform the survival data used by the split evaluation metric in the Random Survival Forest algorithm. The predictive performance of the Hybrid algorithm exceeds that of the Fixed Time algorithm in most cases. Moreover, the Hybrid algorithm is more stable than the Fixed Time algorithm when the prediction task extends to various time points. The Hybrid Conformal Random Survival Forest should therefore be preferred by anyone who wants to make conformal survival predictions at user-controlled time points.
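To make the fixed-time-point setting above concrete, the sketch below recasts synthetic survival data as a binary "event before the user-chosen time point t" problem and wraps it in a split-conformal classifier. This only illustrates the surrounding idea: the thesis's Fixed Time and Hybrid algorithms instead transform the data used by the split criterion inside the Random Survival Forest itself, and dropping cases censored before t is a simplifying assumption made here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1500
X = rng.normal(size=(n, 5))
risk = np.exp(0.8 * X[:, 0])                     # synthetic hazard driver
event_time = rng.exponential(10.0 / risk)
censor_time = rng.exponential(15.0, n)
observed = np.minimum(event_time, censor_time)
event = event_time <= censor_time

t = 8.0                                          # user-controlled time point
usable = event | (observed >= t)                 # drop cases censored before t
label = (observed[usable] < t) & event[usable]   # True = event occurred before t
Xu = X[usable]

X_tr, X_rest, y_tr, y_rest = train_test_split(Xu, label, test_size=0.4,
                                              random_state=3)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5,
                                                random_state=3)

clf = RandomForestClassifier(n_estimators=200, random_state=3).fit(X_tr, y_tr)
cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal.astype(int)]

def prediction_set(x, significance=0.1):
    """Possible outcomes at time t kept at 90% confidence."""
    prob = clf.predict_proba(x.reshape(1, -1))[0]
    keep = []
    for j, c in enumerate(clf.classes_):
        p = (np.sum(cal_scores >= 1.0 - prob[j]) + 1) / (len(cal_scores) + 1)
        if p > significance:
            keep.append(bool(c))
    return keep

print(prediction_set(X_test[0]))
```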
