341

Classification of Healthy and Alzheimer's Patients Using Electroencephalography and Supervised Machine Learning / Klassifiering av friska och alzheimers patienter med hjälp av elektroencefalografi och maskininlärning

Javanmardi, Ramtin, Rehman, Dawood January 2018 (has links)
Alzheimer’s disease is one of the most costly illnesses that exists today, and the number of people with the disease is expected to increase by about 100 million by the year 2050. The medication available today is most effective when Alzheimer’s is detected at an early stage, since current drugs do not cure the disease but only slow its progression. Electroencephalography (EEG) is a relatively cheap diagnostic method compared with, for example, Magnetic Resonance Imaging. However, it is not clear how a human analyst should deduce from EEG data alone whether a patient has Alzheimer’s disease. This is the underlying motivation for our investigation: can supervised machine learning methods, using only the spectral power of EEG data, recognize patterns that tell whether an individual has Alzheimer’s disease or not? The trained supervised machine learning models reached an average accuracy above 80%. This indicates that there is a difference in the neural oscillations of the brain between healthy individuals and Alzheimer’s patients which the machine learning methods are able to detect through pattern recognition.
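As a rough illustration of the pipeline this abstract describes, the sketch below computes per-band spectral power features from EEG epochs and cross-validates a classifier on them. The sampling rate, band limits, data shapes, and the choice of an SVM as the supervised model are assumptions for illustration, not the thesis setup.

```python
# Minimal sketch (not the authors' code): band-power features from EEG epochs
# followed by a supervised classifier, as the abstract describes.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256  # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Spectral power per band for one EEG epoch (n_channels x n_samples array)."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))  # mean power per channel in this band
    return np.concatenate(feats)

def evaluate(epochs, labels):
    """epochs: (n_epochs, n_channels, n_samples); labels: 0 = healthy, 1 = AD."""
    X = np.array([band_powers(e) for e in epochs])
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, X, labels, cv=5).mean()
```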
342

Realtidsövervakning av multicastvideoström / Monitoring of multicast video streaming in realtime

Hassan, Waleed, Hellström, Martin January 2017 (has links)
The enormous increase in multicast services has exposed the limitations of traditional network management tools for monitoring multicast quality. New monitoring techniques are needed that are not hardware-based solutions, such as increased link throughput, buffer length, and capacity, to enhance the quality of experience. This paper examines the use of FFmpeg and OpenCV, together with the no-reference image quality assessment algorithm BRISQUE, to improve both the quality of service (QoS) and the quality of experience (QoE). By detecting image quality deficiencies as well as bit errors in the video stream, QoS and QoE can be improved. The purpose of this project was to develop a monitoring system that detects fluctuations in image quality and bit errors in a multicast video stream in real time and then notifies the service provider using SNMP traps. The tests performed in this paper show positive results for the proposed hybrid solution; BRISQUE and FFmpeg alone are each insufficiently adapted for this purpose. FFmpeg can detect decoding errors, which usually occur due to serious bit errors, while the BRISQUE algorithm was developed to analyse images and estimate their subjective quality. According to the test results, BRISQUE can be used for multicast video analysis because the subjective image quality can be determined with good reliability. The combination of these methods has shown good results but needs to be investigated and developed further.
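A minimal sketch of the kind of hybrid frame monitoring described above: frames are read with OpenCV (whose FFmpeg backend can consume a multicast stream) and scored with BRISQUE from the opencv-contrib quality module. The model/range file names, the alarm threshold, and the stubbed notification are assumptions; the thesis sends SNMP traps, whose OID/MIB wiring is deployment-specific and therefore omitted here.

```python
# Sketch only: frame-level quality screening in the spirit of the hybrid approach above.
# Assumes opencv-contrib-python (cv2.quality) plus BRISQUE model/range files; higher
# BRISQUE scores indicate worse perceived quality.
import cv2

BRISQUE_MODEL = "brisque_model_live.yml"   # assumed file paths
BRISQUE_RANGE = "brisque_range_live.yml"
SCORE_THRESHOLD = 60.0                     # assumed alarm threshold

def notify_operator(score):
    # Placeholder: the thesis sends an SNMP trap to the service provider here.
    print(f"quality alarm: BRISQUE score {score:.1f}")

def monitor(url):
    cap = cv2.VideoCapture(url)            # e.g. a udp:// multicast URL
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:                         # decode failure, often caused by severe bit errors
            notify_operator(float("inf"))
            break
        score = cv2.quality.QualityBRISQUE_compute(frame, BRISQUE_MODEL, BRISQUE_RANGE)[0]
        if score > SCORE_THRESHOLD:
            notify_operator(score)
    cap.release()
```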
343

Learning to Grasp Unknown Objects using Weighted Random Forest Algorithm from Selective Image and Point Cloud Feature

Iqbal, Md Shahriar 01 January 2014 (has links)
This thesis demonstrates an approach to determining the best grasping location on an unknown object using a Weighted Random Forest algorithm. It uses the RGB-D values of an object as input and finds a suitable rectangular grasping region as the output. To accomplish this task, it uses a subspace of the most important features drawn from a very high-dimensional feature space that contains both image and point-cloud features. Using only the most important features makes the grasping algorithm computationally fast while preserving maximum information gain. The Random Forest operates with optimal parameters, e.g. the number of trees, the number of features considered at each node, and the information-gain criterion, which ensures effective learning with the highest possible accuracy in minimum time in a practical setting. The Weighted Random Forest, chosen over Support Vector Machine (SVM), Decision Tree, and AdaBoost for the grasping system, outperforms these machine learning algorithms in both training and testing accuracy and in other performance measures. The grasping system, which learns a score function, detects the rectangular grasping region by selecting the top rectangle with the largest score. The system is implemented and tested on a Baxter Research Robot with a parallel-plate gripper.
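The sketch below illustrates the general idea of scoring candidate grasp rectangles with a forest and picking the highest-scoring one. It uses scikit-learn's RandomForestClassifier with class weighting as a stand-in; the thesis's Weighted Random Forest weights the trees in the vote, which this approximation does not reproduce.

```python
# Illustrative sketch (not the thesis implementation): a random forest scores candidate
# grasp rectangles from a reduced image/point-cloud feature vector, and the top-scoring
# rectangle is selected.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(
    n_estimators=100,        # "number of trees"
    max_features="sqrt",     # "number of features at each node"
    criterion="entropy",     # information-gain criterion
    class_weight="balanced", # stand-in for the weighting scheme
)

def train(features, labels):
    # features: (n_rectangles, n_selected_features); labels: 1 = good grasp, 0 = bad
    forest.fit(features, labels)

def best_rectangle(candidate_features, candidate_rects):
    scores = forest.predict_proba(candidate_features)[:, 1]  # acts as the score function
    return candidate_rects[int(np.argmax(scores))]
```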
344

Practical Implementations Of The Active Set Method For Support Vector Machine Training With Semi-definite Kernels

Sentelle, Christopher 01 January 2014 (has links)
The Support Vector Machine (SVM) is a popular binary classification model due to its strong generalization performance, relative ease of use, and compatibility with kernel methods. SVM training entails solving an associated quadratic program (QP) that presents significant speed and memory challenges for very large datasets; research on numerical optimization techniques tailored to SVM training is therefore vast. Slow training times are especially of concern when one considers that re-training is often necessary at several values of the model's regularization parameter, C, as well as of associated kernel parameters. The active set method is suitable for solving the SVM problem and is in general ideal when the Hessian is dense and the solution is sparse, as is the case for the ℓ1-loss SVM formulation. There has recently been renewed interest in the active set method as a technique for exploring the entire SVM regularization path, since it has been shown to produce the SVM solution at all points along the regularization path (all values of C) in not much more time than it takes, on average, to train at a single value of C with traditional methods. Unfortunately, the majority of active set implementations used for SVM training require positive definite kernels, and those implementations that do allow semi-definite kernels tend to be complex and can exhibit instability or, worse, lack of convergence. This severely limits applicability, since it precludes the use of the linear kernel, can be an issue when duplicate data points exist, and does not allow the use of low-rank kernel approximations to improve tractability for large datasets. The difficulty, in the case of a semi-definite kernel, arises when a particular active set results in a singular KKT matrix (or the equality-constrained problem formed using the active set is semi-definite). Typically this is handled by explicitly detecting the rank of the KKT matrix. Unfortunately, this adds significant complexity to the implementation, and if care is not taken, numerical instability or, worse, failure to converge can result. This research shows that the singular KKT system can be avoided altogether with simple modifications to the active set method. The result is a practical, easy-to-implement active set method that does not need to explicitly detect the rank of the KKT matrix nor modify factorization or solution methods based upon that rank. Methods are given for both conventional SVM training and for computing the regularization path that are simple and numerically stable. First, a revised simplex method is efficiently implemented for SVM training (SVM-RSQP) with semi-definite kernels; it is shown to outperform competing active set implementations in training time and to perform on par with state-of-the-art SVM training algorithms such as SMO and SVMLight. Next, a new regularization path-following algorithm for semi-definite kernels (Simple SVMPath) is shown to be orders of magnitude faster, more accurate, and significantly less complex than competing methods, and does not require external solvers. Theoretical analysis reveals new insights into the nature of path-following algorithms.
Finally, a method is given for computing the approximate regularization path and the approximate kernel path using the warm-start capability of the proposed revised simplex method (SVM-RSQP); it provides orders-of-magnitude speed-ups relative to a traditional grid search in which re-training is performed at each parameter value. Surprisingly, even when the solution for the entire path is not desired, computing the approximate path can serve as a speed-up mechanism for obtaining the solution at a single parameter value. New insights are given concerning the limiting behaviors of the regularization and kernel paths as well as the use of low-rank kernel approximations.
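For context, the "associated quadratic program" mentioned above is, for the soft-margin (ℓ1-loss) SVM, the standard dual QP shown below; when the kernel matrix K is only positive semi-definite (for example the linear kernel, or duplicate training points), the equality-constrained subproblem built from an active set can become singular, which is the failure mode the thesis addresses.

```latex
\max_{\alpha \in \mathbb{R}^{n}} \;\; \sum_{i=1}^{n}\alpha_i
  \;-\; \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{subject to} \quad
\sum_{i=1}^{n} \alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C, \; i = 1,\dots,n.
```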
345

Remote Sensing of Urbanization and Environmental Impacts

Haas, Jan January 2013 (has links)
The unprecedented growth of urban areas around the globe is today perhaps most apparent in China, which has undergone rapid urbanization since the late 1970s. The need for new residential, commercial, and industrial areas leads to new urban regions, challenging sustainable development, the maintenance and creation of a high living standard, and the preservation of ecological functionality. Timely and reliable information on land-cover changes and their environmental impacts is therefore needed to support sustainable urban development. The objective of this research is to analyse land-cover changes, especially the development of urban areas in terms of speed, magnitude, and resulting implications for the natural and rural environment, using satellite imagery, and to quantify environmental impacts with the concepts of ecosystem services and landscape metrics. The study areas are the cities of Shanghai and Stockholm and the three highly urbanized Chinese regions Jing-Jin-Ji, the Yangtze River Delta, and the Pearl River Delta. The analyses are based on classification of optical satellite imagery (Landsat TM/ETM+ and HJ-1A/B) over the past two decades. The images were first co-registered and mosaicked, whereupon GLCM texture features were generated and tasseled cap transformations performed to improve class separability. The mosaics were classified with a pixel-based SVM and a random forest decision-tree ensemble classifier. Based on the classification results, two urbanization indices were derived that indicate both the absolute amount of urban land and the speed of urban development. The spatial composition and configuration of the landscape were analysed with landscape metrics. Environmental impacts were quantified by attributing ecosystem service values to the classifications and observing value changes over time. The results from the comparative study between Shanghai and Stockholm show a decrease in all natural land-cover classes and agricultural areas, whereas urban areas increased by approximately 120% in Shanghai, nearly ten times as much as in Stockholm, where no significant land-cover changes other than a 12% urban expansion could be observed. The landscape metrics indicate that fragmentation in both study regions occurred mainly through the growth of high-density built-up areas in previously more natural environments, while the expansion of low-density built-up areas was mostly in conjunction with pre-existing patches. Urban growth resulted in ecosystem service value losses of about 445 million US dollars in Shanghai, mostly due to a decrease in natural coastal wetlands. In Stockholm, a 4 million US dollar increase in ecosystem service values was observed, which can be explained by the maintenance and development of urban green spaces. Total urban growth in Shanghai was 1,768 km2 compared to 100 km2 in Stockholm. In the comparative study of urbanization in the three Chinese regions, a total increase in urban land of about 28,000 km2 was detected, with a simultaneous decrease in ecosystem service values corresponding to about 18.5 billion Chinese Yuan Renminbi. The speed and relative urban growth were highest in Jing-Jin-Ji, followed by the Yangtze River Delta and the Pearl River Delta. The increase in urban land occurred predominantly at the expense of cropland. Wetlands decreased due to land reclamation in all study areas.
An increase in landscape complexity in terms of land-cover composition and configuration could be detected. Urban growth in Jing-Jin-Ji contributed most to the decrease in ecosystem service values, closely followed by the Yangtze River Delta and the Pearl River Delta.
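As a toy illustration of the valuation step described above, the change in ecosystem service value can be computed as the sum, over land-cover classes, of area change multiplied by a per-area value coefficient. The coefficients and areas below are placeholders, not the figures used in the thesis.

```python
# Toy sketch of ecosystem service value (ESV) change from classified land-cover areas.
ESV_COEFF_USD_PER_KM2 = {          # hypothetical unit values, per km^2 per year
    "wetland": 1_500_000,
    "cropland": 90_000,
    "forest": 300_000,
    "urban": 0,
    "water": 850_000,
}

def esv_change(area_t1_km2, area_t2_km2):
    """Both inputs: dict of class name -> area in km^2 at the two dates."""
    return sum(
        (area_t2_km2.get(cls, 0.0) - area_t1_km2.get(cls, 0.0)) * coeff
        for cls, coeff in ESV_COEFF_USD_PER_KM2.items()
    )

# Example with made-up areas: a loss of coastal wetland dominates the total change.
delta_usd = esv_change({"wetland": 900, "urban": 1000}, {"wetland": 650, "urban": 2768})
```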
346

A Cloud-Based Intelligent and Energy Efficient Malware Detection Framework. A Framework for Cloud-Based, Energy Efficient, and Reliable Malware Detection in Real-Time Based on Training SVM, Decision Tree, and Boosting using Specified Heuristics Anomalies of Portable Executable Files

Mirza, Qublai K.A. January 2017 (has links)
The continuing financial and other related losses due to cyber-attacks demonstrate the substantial growth of malware and the lethality of its proliferation techniques. Every successful malware attack highlights weaknesses in the defence mechanisms responsible for securing the targeted computer or network. Recent cyber-attacks reveal sophistication and intelligence in malware behaviour: the ability to conceal code and operate autonomously within the targeted system. Conventional detection mechanisms not only lack adequate malware detection capabilities, they also consume a large amount of resources while scanning the system for malicious entities. Many recent reports have highlighted this issue, along with the challenges faced by alternative solutions and studies conducted in the same area. There is an unprecedented need for a resilient and autonomous solution that takes a proactive approach against modern malware with stealth behaviour. This thesis proposes a multi-aspect solution comprising an intelligent malware detection framework and an energy-efficient hosting model. The malware detection framework combines conventional and novel malware detection techniques. The proposed framework incorporates comprehensive feature heuristics of files generated by a bespoke static feature extraction tool. These heuristics are used to train the machine learning algorithms Support Vector Machine, Decision Tree, and Boosting to differentiate between clean and malicious files. The two techniques, feature heuristics and machine learning, are combined to form a two-factor detection mechanism. This thesis also presents a cloud-based, energy-efficient, and scalable hosting model, which combines multiple infrastructure components of Amazon Web Services to host the malware detection framework. The hosting model follows a client-server architecture, where the client is a lightweight service running on the host machine and the server is based in the cloud. The proposed framework and the hosting model were evaluated individually and in combination through purpose-designed experiments using separate repositories of clean and malicious files. The experiments were designed to evaluate malware detection capabilities and energy efficiency during operation within a system. The proposed malware detection framework and hosting model showed significant improvement in malware detection while consuming very little CPU during operation.
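A minimal sketch of the framework's training step as described above: the three classifiers are trained on static feature vectors extracted from PE files. The feature examples and hyperparameters are illustrative, boosting is represented here by gradient boosting, and the bespoke static feature extraction tool from the thesis is not reproduced.

```python
# Minimal sketch: SVM, decision tree, and boosting trained on static PE-file features.
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

MODELS = {
    "svm": SVC(kernel="rbf", C=10.0),
    "decision_tree": DecisionTreeClassifier(max_depth=12),
    "boosting": GradientBoostingClassifier(n_estimators=200),
}

def evaluate(X, y):
    """X: (n_files, n_heuristic_features), e.g. section entropy, import count,
    header anomalies (illustrative examples); y: 0 = clean, 1 = malicious."""
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in MODELS.items()}
```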
347

Mobile Machine Learning for Real-time Predictive Monitoring of Cardiovascular Disease

Boursalie, Omar January 2016 (has links)
Chronic cardiovascular disease (CVD) is increasingly becoming a burden for global healthcare systems. This burden can be attributed in part to traditional methods of managing CVD in an aging population, which involve periodic meetings between the patient and their healthcare provider. There is growing interest in developing continuous monitoring systems to assist in the management of CVD. Such systems can exploit advances in wearable devices and health records, which provide minimally invasive methods to monitor a patient's health. Despite these advances, the algorithms deployed to automatically analyze the wearable-sensor and health data are considered too computationally expensive to run on the mobile device itself. Instead, current mobile devices continuously transmit the collected data to a server for analysis, at great computational and data-transmission expense. In this thesis a novel mobile system designed for monitoring CVD is presented. Unlike existing systems, the proposed system allows continuous monitoring of physiological sensors and data from a patient's health record, with analysis of the data directly on the mobile device using machine learning algorithms (MLAs) to predict an individual's CVD severity level. The system demonstrated that a mobile device can act as a complete monitoring system without requiring constant communication with a server. A comparative analysis between the support vector machine (SVM) and the multilayer perceptron (MLP) explores the effectiveness of each algorithm for monitoring CVD. Both models were able to classify CVD risk, with the SVM achieving the highest accuracy (63%) and specificity (76%). Finally, unlike in current systems, the resource requirements of each component in the system were evaluated. The MLP was found to be more efficient than the SVM when running on the mobile device. The results of this thesis also show that the MLAs' complexity was not a barrier to deployment on a mobile device. / Thesis / Master of Applied Science (MASc) / In this thesis, a novel mobile system for monitoring cardiovascular disease (CVD) is presented. The system allows continuous monitoring of physiological sensors and data from a patient's health record, with analysis of the data directly on the mobile device using machine learning algorithms (MLAs) to predict an individual's CVD severity level. The system demonstrated that a mobile device can act as a complete monitoring system without requiring constant communication with a remote server. A comparative analysis between the support vector machine (SVM) and the multilayer perceptron (MLP) explores the effectiveness of each MLA for monitoring CVD. Both models were able to classify CVD severity, with the SVM achieving the highest accuracy (63%) and specificity (76%). Finally, the resource requirements of each component in the system were evaluated. The results show that the MLAs' complexity was not a barrier to deployment on a mobile device.
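A sketch of the comparative evaluation described above, assuming binary labels and tabular sensor/health-record features; the feature content, train/test split, and model sizes are illustrative rather than the thesis configuration.

```python
# Sketch: compare SVM and MLP on tabular features, reporting accuracy and specificity.
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

def compare(X, y):
    """X: (n_patients, n_features); y: binary CVD severity labels (assumption)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    results = {}
    models = {"svm": SVC(), "mlp": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)}
    for name, model in models.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
        results[name] = {
            "accuracy": accuracy_score(y_te, y_pred),
            "specificity": tn / (tn + fp),   # true-negative rate
        }
    return results
```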
348

DEVELOPMENT OF NOISE AND VIBRATION BASED FAULT DIAGNOSIS METHOD FOR ELECTRIFIED POWERTRAIN USING SUPERVISED MACHINE LEARNING CLASSIFICATION

Joohyun Lee (17552055) 06 December 2023 (has links)
<p dir="ltr">The industry's interest in electrified powertrain-equipped vehicles has increased due to environmental and economic reasons. Electrified powertrains, in general, produce lower sound and vibration level than those equipped with internal combustion engines, making noise and vibration (N&V) from other non-engine powertrain components more perceptible. One such N&V type that arouses concern to both vehicle manufacturers and passengers is gear growl, but the signal characteristics of gear growl noise and vibration and the threshold of those characteristics that can be used to determine whether a gear growl requires attention are not yet well understood. This study focuses on developing a method to detect gear-growl based on the N\&V measurements and determining thresholds on various severities of gear-growl using supervised machine learning classification. In general, a machine learning classifier requires sufficient high-quality training data with strong information independence to ensure accurate classification performance. In industrial practices, acquiring high-quality vehicle NVH data is expensive in terms of finance, time, and effort. A physically informed data augmentation method is, thus, proposed to generate realistic powertrain NVH signals based on high-quality measurements which not only provides a larger training data set but also enriches the signal feature variations included in the data set. More specifically, this method extracts physical information such as angular speed, tonal amplitudes distribution, and broadband spectrum shape from the measurement data. Then, it recreates a synthetic signal that mimics the measurement data. The measured and simulated (via data augmentation) are transformed into feature matrix representation so that the N\&V signals can be used in the classification model training process. Features describing signal characteristics are studied, extracted, and selected. While the root-mean-square (RMS) of the vibration signal and spectral entropy were sufficient for detecting gear-growl with a test accuracy of 0.9828, the acoustic signal required more features due to background noise, making data linearly inseparable. The minimum Redundancy Maximum Relevance (mRMR) feature scoring method was used to assess the importance of acoustic signal features in classification. The five most important features based on the importance score were the angular acceleration of the driveshaft, the time derivative of RMS, the tone-to-noise ratio (TNR), the time derivative of the spectral spread of the tonal component of the acoustic signal, and the time derivative of the spectral spread of the original acoustic signal (before tonal and broadband separation). A supervised classification model is developed using a support vector machine from the extracted acoustic signal features. Data used in training and testing consists of steady-state vehicle operations of 25, 35, 45, and 55 mph, with two vehicles with two different powertrain specs: axles with 4.56 and 6.14 gear ratios. The dataset includes powertrains with swapped axles (four different configurations). Techniques such as cost weighting, median filter, and hyperparameter tuning are implemented to improve the classification performance where the model classifies if a segment in the signal represents a gear-growl event or no gear-growl event. The average accuracy of test data was 0.918. 
A multi-class classification model is further implemented to classify different severities based on preliminary subjective listening studies. Data augmentation using signal simulation improved the binary classification results. In this study, only gear growl was used as a fault type; still, the data augmentation, feature extraction and selection, and classification methods can be generalized to other NVH signal-based fault diagnosis applications. Further listening studies are suggested to improve the multi-class severity classification.
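The sketch below illustrates the vibration-side feature step reported above (RMS and spectral entropy per segment) feeding an SVM with cost weighting. Sampling rate, segment length, and SVM settings are assumptions, not the study's configuration.

```python
# Sketch: per-segment RMS and spectral entropy features, then an SVM growl classifier.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 10_000          # assumed sampling rate (Hz)
SEG = FS // 2        # assumed 0.5 s segments

def segment_features(signal):
    """signal: 1-D numpy array of vibration samples; returns (n_segments, 2) features."""
    feats = []
    for start in range(0, len(signal) - SEG + 1, SEG):
        seg = signal[start:start + SEG]
        rms = np.sqrt(np.mean(seg ** 2))
        _, psd = welch(seg, fs=FS)
        p = psd / psd.sum()
        spectral_entropy = -np.sum(p * np.log2(p + 1e-12))
        feats.append([rms, spectral_entropy])
    return np.array(feats)

def train(signal, labels):
    """labels: one label per segment, 1 = gear-growl event, 0 = no event."""
    clf = SVC(kernel="rbf", class_weight="balanced")  # class weighting stands in for cost weighting
    clf.fit(segment_features(signal), labels)
    return clf
```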
349

Development of new data fusion techniques for improving snow parameters estimation

De Gregorio, Ludovica 26 November 2019 (has links)
Water stored in snow is a critical contribution to the world’s available freshwater supply and is fundamental to the sustenance of natural ecosystems, agriculture, and human societies. The importance of snow for the natural environment and for many socio-economic sectors in mid- to high-latitude mountain regions around the world leads scientists to continuously develop new approaches to monitor and study snow and its properties. The need for new monitoring methods arises from the limitations of in situ measurements, which are pointwise, only possible in accessible and safe locations, and do not allow continuous monitoring of the evolution of the snowpack and its characteristics. These limitations have been overcome by the increasingly used methods of remote monitoring with space-borne sensors, which capture the wide spatial and temporal variability of the snowpack. Snow models, which simulate the physical processes that occur in the snowpack, are an alternative to remote sensing for studying snow characteristics. However, the literature makes clear that both remote sensing and snow models have limitations as well as significant strengths that are worth exploiting jointly to obtain improved snow products. Accordingly, the main objective of this thesis is the development of novel methods for estimating snow parameters by exploiting the complementary properties of remote sensing and snow model data. In particular, the following specific novel contributions are presented in this thesis:
i. A novel data fusion technique for improving snow cover mapping. The proposed method exploits the snow cover maps derived from the AMUNDSEN snow model and the MODIS product, together with their quality layers, in a decision-level fusion approach by means of a machine learning technique, namely the Support Vector Machine (SVM).
ii. A new approach for improving the snow water equivalent (SWE) product obtained from AMUNDSEN model simulations. The proposed method exploits auxiliary information from optical remote sensing and from the topographic characteristics of the study area in an approach that differs from classical data assimilation: the AMUNDSEN error with respect to ground data is estimated through a k-NN algorithm. The new product has been validated against ground measurements and by comparison with MODIS snow cover maps. In a second step, the contribution of information derived from X-band SAR imagery acquired by the COSMO-SkyMed constellation has been evaluated, using simulations from a theoretical model to enlarge the dataset.
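A compact sketch of contribution (ii) as described above: a k-NN model learns the snow model's SWE error at ground stations from auxiliary predictors, and that estimate corrects the modeled SWE elsewhere. The predictor examples (e.g. elevation, aspect, optical snow cover fraction) and the neighbour count are illustrative assumptions, not the thesis feature set.

```python
# Sketch: k-NN estimation of the snow model's SWE error, then correction of modeled SWE.
from sklearn.neighbors import KNeighborsRegressor

def fit_error_model(aux_features, swe_model, swe_ground, k=5):
    """aux_features: (n_stations, n_predictors); error learned against ground SWE."""
    err = swe_model - swe_ground
    return KNeighborsRegressor(n_neighbors=k).fit(aux_features, err)

def corrected_swe(error_model, aux_features_grid, swe_model_grid):
    """Apply the estimated error to correct the modeled SWE over the study area."""
    return swe_model_grid - error_model.predict(aux_features_grid)
```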
350

Implementering av maskinginlärningsmodeller för detektering av ett objekt baserad på endimensionell elektromagnetisk strålningsdata / Implementation of machine learning models for detecting an object based on one-dimensional electromagnetic radiation data

Heinke, Simon, Åberg, Marcus January 2020 (has links)
Clinical trials are experiments or observations of patients’ responses to different medical treatments for curing diseases. Such trials are heavily regulated and must meet a certain quality standard, and clinical adherence is a determining factor in the success of a study. Historically, however, it has been difficult to systematically follow and understand patient adherence to medical ordinations, predominantly due to a lack of proper tools. One new type of tool is a digital pillbox that can be used to supply pills to participants in clinical trials. This paper examines the implementation of two supervised machine learning models to detect whether an object (a pill) is present in an encapsulated compartment (pillbox) based on electromagnetic radiation data from a proximity sensor. Support Vector Machine (SVM) and Random Forest (RF) models were evaluated on a data set of N=1,485 observations consisting of five classes: four different pills and ‘no pill’. RF performed best, with an accuracy of 98.0% and a weighted average precision of 98.0%; SVM reached 97.3% accuracy and 97.6% weighted average precision. The best performance was achieved at N=1,000 for RF and N=1,100 for SVM. The conclusion is that high accuracy and precision can be achieved using either RF or SVM. The classification model strengthens the value proposition of a digital pillbox and can help clinical trials achieve better data quality. However, for the model to contribute actual economic value, digital pillboxes must become common practice in clinical trials.
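As a rough sketch of the evaluation described above, the snippet below trains RF and SVM on feature vectors derived from the proximity sensor's one-dimensional readings and reports accuracy and weighted average precision over the five classes. Data shapes, split ratio, and hyperparameters are assumptions.

```python
# Sketch: RF vs. SVM on proximity-sensor feature vectors for five-class pill detection.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

def compare(X, y):
    """X: (N, n_features) sensor feature vectors; y: one of five classes (four pills, 'no pill')."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
    results = {}
    for name, model in {"rf": RandomForestClassifier(n_estimators=200), "svm": SVC()}.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        results[name] = {
            "accuracy": accuracy_score(y_te, y_pred),
            "weighted_precision": precision_score(y_te, y_pred, average="weighted"),
        }
    return results
```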
