131

Plug-in methods in classification / Méthodes de type plug-in en classification

Chzhen, Evgenii 25 September 2019 (has links)
This manuscript studies several problems of constrained classification. In this framework, our goal is to construct an algorithm that performs as well as the best classifier obeying some desired property. Plug-in type classifiers are well suited to this goal. Interestingly, it is shown that in several setups these classifiers can leverage unlabeled data, that is, they can be constructed in a semi-supervised manner. Chapter 2 describes two particular settings of binary classification: classification with the F-score as the performance measure, and fair classification under the equal opportunity constraint. For both problems, semi-supervised procedures are proposed and their theoretical properties are established. In the case of the F-score, the proposed procedure is shown to be minimax optimal over a standard non-parametric class of distributions. In the case of equal opportunity, the proposed algorithm is shown to be consistent in terms of the misclassification risk, and its asymptotic fairness is established; moreover, the proposed procedure outperforms state-of-the-art algorithms in the field. Chapter 3 describes the setup of confidence-set multi-class classification. Again, a semi-supervised procedure is proposed and its near-minimax optimality is established. It is additionally shown that no supervised algorithm can achieve a so-called fast rate of convergence, whereas the proposed semi-supervised procedure can, provided that the unlabeled sample is sufficiently large. Chapter 4 describes a setup of multi-label classification where one aims to minimize the false negative error subject to almost-sure constraints on the classification rules. Two specific constraints are considered: sparse predictions and predictions with control over the false negative errors. For the former, a supervised algorithm is provided and it is shown that this algorithm can achieve fast rates of convergence. For the latter, it is shown that extra assumptions are necessary in order to obtain theoretical guarantees on the classification risk.
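To make the plug-in principle concrete, here is a minimal sketch (not the thesis's procedure) of a plug-in rule for the F-score: first estimate the posterior eta(x) = P(Y = 1 | X = x), then plug the estimate into a thresholded rule and tune the threshold to maximize an empirical F-score. The dataset and model below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Illustrative data; any probabilistic classifier works as the eta-estimator.
X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

# Step 1: estimate eta(x) = P(Y=1 | X=x) on labeled data.
eta = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Step 2: plug the estimate into a thresholded rule and pick the
# threshold that maximizes the empirical F1 on held-out data.
probs = eta.predict_proba(X_cal)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
best_t = max(thresholds, key=lambda t: f1_score(y_cal, probs >= t))
print(f"plug-in threshold: {best_t:.2f}")
```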
132

Be More with Less: Scaling Deep-learning with Minimal Supervision

Yaqing Wang (12470301) 28 April 2022 (has links)
Large-scale deep learning models have reached previously unattainable performance on various tasks. However, the ever-growing resource consumption of neural networks generates a large carbon footprint, makes it difficult for academics to engage in research, and prevents emerging economies from enjoying the growing benefits of Artificial Intelligence (AI). To scale AI further and bring more benefits, two major challenges need to be solved. First, even though large-scale deep learning models have achieved remarkable success, their performance is still unsatisfactory when fine-tuning with only a handful of examples, hindering widespread adoption in real-world applications where large-scale labeled data is difficult to obtain. Second, current machine learning models are still mainly designed for tasks in closed environments where test datasets are highly similar to training datasets. When the deployment data has a distribution shift relative to the collected training data, we generally observe degraded model performance. How to build adaptable models becomes another critical challenge. To address these challenges, this dissertation focuses on two topics: few-shot learning, which aims to learn tasks with limited labeled data, and domain adaptation, which addresses the discrepancy between training data and testing data. Part 1 presents our few-shot learning studies. The proposed few-shot solutions are built upon large-scale language models, progressively improving supervision signals, incorporating unlabeled data, and improving few-shot learning ability with a lightweight fine-tuning design that reduces deployment costs. Part 2 introduces our domain adaptation studies. We develop a progressive series of domain adaptation approaches to transfer knowledge across domains efficiently and handle distribution shifts, including capturing common patterns across domains, adaptation with weak supervision, and adaptation to thousands of domains with limited labeled and unlabeled data.
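The lightweight fine-tuning idea can be illustrated with a minimal sketch: freeze a pretrained backbone and train only a small task head, so few-shot adaptation touches a tiny fraction of the parameters. The backbone name, head, and dimensions here are illustrative assumptions, not the dissertation's actual design.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

# Hypothetical backbone choice; the dissertation's models may differ.
backbone = AutoModel.from_pretrained("bert-base-uncased")
for p in backbone.parameters():
    p.requires_grad = False          # freeze all pretrained weights

head = nn.Linear(backbone.config.hidden_size, 2)  # small trainable head

def classify(input_ids, attention_mask):
    out = backbone(input_ids=input_ids, attention_mask=attention_mask)
    cls = out.last_hidden_state[:, 0]     # [CLS] token representation
    return head(cls)

# Only the head's parameters are optimized -> cheap per-task deployment.
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
```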
133

"Semi-supervised" trénování hlubokých neuronových sítí pro rozpoznávání řeči / Semi-Supervised Training of Deep Neural Networks for Speech Recognition

Veselý, Karel January 2018 (has links)
In this dissertation, we first present the theory of neural network training for speech recognition, together with the implementation of the 'nnet1' training recipe, which is part of the open-source Kaldi toolkit. The recipe consists of unsupervised RBM pre-training, frame-level classifier training with the cross-entropy criterion, and sequence-discriminative training over whole sentences with the sMBR criterion. We then turn to the main topic of the thesis: semi-supervised training on mixed data with and without transcriptions. Inspired by conference papers and initial experiments, we focused on several questions. First, whether confidences (i.e., the reliability of the automatically obtained annotations) are better computed per sentence, per word, or per frame. Second, whether the confidences should be used for data selection or data weighting; both approaches are compatible with training by stochastic gradient descent, where the frame gradients are multiplied by a weight. We further investigated improving semi-supervised training through confidence calibration, and approaches for further improving the model with correctly transcribed data. Finally, we proposed a simple recipe that avoids time-consuming tuning of training hyper-parameters and is practically applicable to various datasets. The experiments were conducted on several speech datasets: for a Vietnamese recognizer with 10 transcribed hours (Babel), the error rate was reduced by 2.5%; for English with 14 transcribed hours (Switchboard), the error rate was reduced by 3.2%. We found it rather difficult to further improve system accuracy by manipulating the confidences, but we are convinced that our conclusions have considerable practical value: untranscribed data are easy to collect, and our proposed solution brings good accuracy improvements and is easy to replicate.
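The selection-versus-weighting question above has a compact expression in code. A hedged sketch (generic PyTorch, not the Kaldi 'nnet1' recipe) of a frame-level loss that either gates or weights automatically transcribed frames by their confidence:

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(logits, pseudo_labels, confidence, mode="weight", threshold=0.7):
    """Frame-level cross-entropy on automatically transcribed data.

    logits:        (num_frames, num_classes) network outputs
    pseudo_labels: (num_frames,) labels from the seed recognizer
    confidence:    (num_frames,) reliability of each pseudo-label in [0, 1]
    """
    per_frame = F.cross_entropy(logits, pseudo_labels, reduction="none")
    if mode == "select":
        # Data selection: keep only frames whose confidence clears the threshold.
        mask = (confidence >= threshold).float()
        return (per_frame * mask).sum() / mask.sum().clamp(min=1.0)
    # Data weighting: each frame's gradient is multiplied by its confidence.
    return (per_frame * confidence).mean()
```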
134

Analyzing the performance of active learning strategies on machine learning problems

Werner, Vendela January 2023 (has links)
Digitalisation within industries is rapidly advancing and data possibilities are growing daily. Machine learning models need large amounts of well-annotated data for good performance. Obtaining well-annotated data requires an expert, which is expensive, and the annotation itself can be very time-consuming. Active learning has emerged as a solution: instead of labelling data points at random, active learning strategies select data points based on informativeness or uncertainty. The challenge lies in determining the most effective active learning strategy for a given combination of machine learning model and problem type. Although active learning has been around for a while, benchmarking of its strategies has not been widely explored. The aim of this thesis was to benchmark different active learning strategies and analyse their performance on the underlying machine learning problems and methods/models. For this purpose, an experiment was constructed to compare, in an unbiased way, different machine learning models in combination with different active learning strategies within the areas of computer vision, drug discovery, and natural language processing. Nine active learning strategies were analysed, with a random strategy serving as the baseline, tested on six machine learning methods/models. The result was that active learning had a positive effect in all problem areas and worked especially well for unbalanced data. The two main conclusions are that all active learning strategies work better for a smaller budget, owing to the importance of selecting informative data points, and that prediction-based strategies are the most successful for all problem types. / Imagine having a tool to cure a genetic disease. Today data is everywhere; even your DNA is considered full of valuable information and mysteries ready to be explored. Within our data there are endless connections and hidden relationships that not even the best human mind can find, and computing power has become a force to be reckoned with. Human-in-the-loop programming, where human and computer work together, has proven to be a winning concept; in machine learning this is called supervised learning. Normally, supervised learning requires a large amount of data and, for more complex tasks, an expert, since feedback from a human is expected. One can think of the computer as a detective and the expert as its boss, pointing it in the right direction. The direction is given by annotating data: you tell the computer which answer is correct so that it learns to pick out distinguishing features. For example, if you want a program that distinguishes dogs from cats, it is hard to know which is which if you have never seen an animal before: both have two ears, two eyes, four legs and, in many cases, fur. A human can tell the computer whether the picture shows a dog or a cat, and the computer will start learning to see patterns and distinctive characteristics. Annotating data takes a long time and costs a lot of money. So what do you do when the amount of data is too small and/or the cost of an expert is too high? Sam is a person with a rare genetic disease. They have heard of a program, based on supervised learning, that can suggest which medical treatments they could try to relieve their symptoms. Because Sam's genetic disease is so unique, there is little data about it, so the software will not work in Sam's case; remember that supervised learning needs a lot of well-annotated data to give reliable output. How can the programmer help Sam? With active learning, of course! Active learning is a collective name for strategies that select the most informative, or most uncertain, data points to annotate. Instead of making, say, 2000 annotations, better performance can be achieved with only 100. The difference is that in supervised learning without active learning, a fixed set of points is presented to the expert for annotation, whereas with active learning the points to annotate are chosen interactively. This results in more cost-effective learning that also performs well on a small data set. This thesis has studied the performance of active learning in the pharmaceutical industry, as well as on problems in computer vision and natural language processing. The result was that at least one of the applied active learning strategies led to improved performance in every area. Perhaps in the future we can actually use active learning to help people like Sam, and have the tool to solve the mystery and cure their genetic disease.
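A minimal uncertainty-sampling loop of the kind benchmarked here might look as follows. The model, budget, and batch size are illustrative assumptions, and replacing the acquisition step with a random draw gives the baseline strategy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_sampling(model, X_pool, batch_size=10):
    """Pick the pool points whose predicted class is least certain."""
    probs = model.predict_proba(X_pool)
    margin = probs.max(axis=1)            # low max-probability = high uncertainty
    return np.argsort(margin)[:batch_size]

def active_learning_loop(X, y, n_init=20, budget=100, batch_size=10):
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), size=n_init, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]
    model = RandomForestClassifier(random_state=0)
    while len(labeled) < budget and pool:
        model.fit(X[labeled], y[labeled])
        picks = uncertainty_sampling(model, X[pool], batch_size)
        for p in sorted(picks, reverse=True):
            labeled.append(pool.pop(p))   # "annotate" the selected points
    return model.fit(X[labeled], y[labeled])
```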
135

Semi-Supervised Plant Leaf Detection and Stress Recognition / Semi-övervakad detektering av växtblad och möjlig stressigenkänning

Antal Csizmadia, Márk January 2022 (has links)
One of the main limitations of training deep learning-based object detection models is the availability of large amounts of data annotations. When annotations are scarce, semi-supervised learning provides frameworks for improving object detection performance by utilising unlabelled data. This is particularly useful in plant leaf detection and possible leaf stress recognition, where annotations are expensive to obtain because specialised domain knowledge is required. This project investigates the feasibility of the Unbiased Teacher, a semi-supervised object detection algorithm, for detecting plant leaves and recognising possible leaf stress in experimental settings where few annotations are available during training. We build an annotated data set for this task, implement and optimise the Unbiased Teacher algorithm, and compare its performance to that of a baseline model. Finally, we investigate which hyperparameters of the Unbiased Teacher algorithm most significantly affect its performance and its ability to utilise unlabelled images. We find that the Unbiased Teacher algorithm outperforms the baseline model in experimental settings where limited annotated data are available during training. Amongst the hyperparameters considered, we identify the confidence threshold as having the greatest effect on the algorithm's performance and on its ability to leverage unlabelled data. Ultimately, we demonstrate that object detection performance in plant leaf detection and possible stress recognition can be improved with the Unbiased Teacher algorithm when few annotations are available. The improved performance reduces the amount of annotated data required for this task, lowering annotation costs and thereby widening its applicability to real-world tasks.
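The teacher-student mechanics of the Unbiased Teacher can be sketched as follows. This is a simplified illustration assuming a torchvision-style detection interface; `detection_loss` is a placeholder, not the algorithm's exact loss. The teacher pseudo-labels weakly augmented images, confident predictions supervise the student on strongly augmented views, and the teacher tracks the student as an exponential moving average (EMA).

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """Teacher weights follow the student as an exponential moving average."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def pseudo_label_step(teacher, student, weak_imgs, strong_imgs,
                      detection_loss, conf_threshold=0.7):
    # Teacher predicts boxes/labels/scores on weakly augmented images.
    with torch.no_grad():
        preds = teacher(weak_imgs)
    # Keep only confident detections as pseudo ground truth; this threshold
    # is the hyperparameter found most influential in the experiments above.
    pseudo = [{k: v[p["scores"] >= conf_threshold] for k, v in p.items()}
              for p in preds]
    # Student is trained on strongly augmented views against the pseudo-labels.
    return detection_loss(student(strong_imgs), pseudo)
```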
136

Quality monitoring of projection welding using machine learning with small data sets

Koal, Johannes, Hertzschuch, Tim, Zschetzsche, Jörg, Füssel, Uwe 19 January 2024 (has links)
Capacitor discharge welding is an efficient, cost-effective and stable process, used mostly for projection welding. Real-time monitoring is desired to ensure quality. Until now, the measured process quantities have been evaluated by expert systems. This approach takes much development time, is strongly restricted to specific welding tasks, and requires a deep understanding of the process. Another possibility is quality prediction from process data with machine learning, which can overcome the downsides of expert systems but requires classified welding experiments to achieve a high prediction probability. In industrial manufacturing, it is rarely possible to generate large sets of this type of data. Therefore, semi-supervised learning is investigated to enable model development on small data sets. Supervised learning is used to develop machine learning models on large amounts of data; these models serve as a comparison for the semi-supervised models. The time signals of the process parameters are evaluated in these investigations. A total of 389 classified weld tests were performed. With semi-supervised learning methods, the amount of training data necessary was reduced to 31 classified data sets.
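A generic self-training baseline for such small classified data sets can be assembled directly in scikit-learn. This sketch assumes illustrative feature shapes (unlabelled welds are marked with -1), not the authors' actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

# Feature vectors derived from the time signals of the process parameters
# (shapes are illustrative); y uses -1 for unlabelled weld tests.
X = np.random.rand(389, 16)
y = -np.ones(389, dtype=int)
y[:31] = np.random.randint(0, 2, 31)   # only 31 classified welds

base = RandomForestClassifier(random_state=0)
model = SelfTrainingClassifier(base, threshold=0.8)
model.fit(X, y)                        # pseudo-labels confident unlabelled welds
print(model.predict(X[:5]))
```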
137

Vicarious battering: The experience of intervening at a domestic violence-focused supervised visitation center

Parker, Tracee 27 February 2017 (has links)
No description available.
138

Label-Efficient Visual Understanding with Consistency Constraints

Zou, Yuliang 24 May 2022 (has links)
Modern deep neural networks are proficient at solving various visual recognition and understanding tasks, as long as a sufficiently large labeled dataset is available during training. However, progress on these visual tasks is limited by the number of manual annotations. On the other hand, annotating visual data is usually time-consuming and error-prone, making it challenging to scale up human labeling for many visual tasks. Fortunately, it is easy to collect large-scale, diverse unlabeled visual data from the Internet, and large amounts of annotated synthetic visual data can be acquired from game engines effortlessly. In this dissertation, we explore how to utilize unlabeled data and synthetic labeled data for various visual tasks, aiming to replace or reduce the direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions across different transformations (e.g., geometric, temporal, photometric). We organize the dissertation as follows. In Part I, we propose to use consistency across different geometric formulations and a cycle consistency over time to tackle low-level scene geometry perception tasks in a self-supervised learning setting. In Part II, we tackle high-level semantic understanding tasks in a semi-supervised learning setting, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly-augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains with a single forward pass, without model training or optimization at inference time. / Doctor of Philosophy / Recently, deep learning has emerged as one of the most powerful tools for solving various visual understanding tasks. However, the development of deep learning methods is significantly limited by the amount of manually labeled data. On the other hand, annotating visual data is usually time-consuming and error-prone, making the human labeling process hard to scale. Fortunately, it is easy to collect large-scale, diverse raw visual data from the Internet (e.g., search engines, YouTube, Instagram), and large amounts of annotated synthetic visual data can be acquired from game engines effortlessly. In this dissertation, we explore how to utilize raw visual data and synthetic data for various visual tasks, aiming to replace or reduce the direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions for the same visual input across different transformations (e.g., geometric, temporal, photometric). We organize the dissertation as follows. In Part I, we propose using consistency across different geometric formulations and a forward-backward cycle consistency over time to tackle low-level scene geometry perception tasks, using unlabeled visual data only. In Part II, we tackle high-level semantic understanding tasks using a small amount of labeled data and a large amount of unlabeled data jointly, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly-augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains.
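The consistency idea common to all three parts can be written in a few lines. A generic sketch (not the dissertation's models): predictions for two augmented views of the same input are pushed to agree, and the consistency term needs no labels.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, augment_a, augment_b):
    """Penalize disagreement between predictions on two views of x."""
    p_a = F.log_softmax(model(augment_a(x)), dim=-1)
    with torch.no_grad():                      # one view acts as the target
        p_b = F.softmax(model(augment_b(x)), dim=-1)
    return F.kl_div(p_a, p_b, reduction="batchmean")

# Total semi-supervised objective: supervised cross-entropy on labeled data
# plus the unlabeled consistency term, weighted by lambda_u (assumed name):
# loss = F.cross_entropy(model(x_lab), y_lab) + lambda_u * consistency_loss(...)
```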
139

From Pixels to Prices with ViTMAE : Integrating Real Estate Images through Masked Autoencoder Vision Transformers (ViTMAE) with Conventional Real Estate Data for Enhanced Automated Valuation / Från pixlar till priser med ViTMAE : Integrering av bostadsbilder genom Masked Autoencoder Vision Transformers (ViTMAE) med konventionell fastighetsdata för förbättrad automatiserad värdering

Ekblad Voltaire, Fanny January 2024 (has links)
This Master's thesis investigates the integration of Vision Transformers (ViTs) with Masked Autoencoder pre-training (ViTMAE) into real estate valuation, addressing the challenge of effectively analyzing visual information from real estate images. The integration aims to enhance the accuracy and efficiency of valuation, a task traditionally dependent on realtor expertise. The research involved developing a model that combines ViTMAE-extracted visual features from real estate images with traditional property data. Focusing on residential properties in Sweden, the study utilized a dataset of images and metadata from online real estate listings. An adapted ViTMAE model, accessed via the Hugging Face library, was trained on the dataset for feature extraction; the extracted features were then integrated with the metadata to create a comprehensive multimodal valuation model. The results indicate that including ViTMAE-extracted image features improves the prediction accuracy of real estate valuation models: the multimodal approach, merging visual features and traditional metadata, improved accuracy over metadata-only models. This thesis contributes to real estate valuation by showcasing the potential of advanced image processing techniques for enhancing valuation models, and lays the groundwork for future research into more refined holistic valuation models incorporating a wider range of factors beyond visual data.
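Feature extraction with a pre-trained ViTMAE through the Hugging Face library, as described above, might look roughly like this; the public checkpoint, image path, and mean-pooling choice are assumptions, since the thesis used its own adapted model:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEModel

# Assumed public checkpoint; the thesis trained its own adapted model.
processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEModel.from_pretrained("facebook/vit-mae-base")
model.config.mask_ratio = 0.0   # disable random masking for deterministic features

image = Image.open("listing_photo.jpg").convert("RGB")  # hypothetical file
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the patch embeddings into one feature vector, which can then
# be concatenated with tabular property metadata for valuation.
image_features = outputs.last_hidden_state.mean(dim=1)
print(image_features.shape)  # (1, hidden_size)
```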
140

A Study on the Use of Unsupervised, Supervised, and Semi-supervised Modeling for Jamming Detection and Classification in Unmanned Aerial Vehicles

Margaux Camille Marie Catafort--Silva (18477354) 02 May 2024 (has links)
<p dir="ltr">In this work, first, unsupervised machine learning is proposed as a study for detecting and classifying jamming attacks targeting unmanned aerial vehicles (UAV) operating at a 2.4 GHz band. Three scenarios are developed with a dataset of samples extracted from meticulous experimental routines using various unsupervised learning algorithms, namely K-means, density-based spatial clustering of applications with noise (DBSCAN), agglomerative clustering (AGG) and Gaussian mixture model (GMM). These routines characterize attack scenarios entailing barrage (BA), single- tone (ST), successive-pulse (SP), and protocol-aware (PA) jamming in three different settings. In the first setting, all extracted features from the original dataset are used (i.e., nine in total). In the second setting, Spearman correlation is implemented to reduce the number of these features. In the third setting, principal component analysis (PCA) is utilized to reduce the dimensionality of the dataset to minimize complexity. The metrics used to compare the algorithms are homogeneity, completeness, v-measure, adjusted mutual information (AMI) and adjusted rank index (ARI). The optimum model scored 1.00, 0.949, 0.791, 0.722, and 0.791, respectively, allowing the detection and classification of these four jamming types with an acceptable degree of confidence.</p><p dir="ltr">Second, following a different study, supervised learning (i.e., random forest modeling) is developed to achieve a binary classification to ensure accurate clustering of samples into two distinct classes: clean and jamming. Following this supervised-based classification, two-class and three-class unsupervised learning is implemented considering three of the four jamming types: BA, ST, and SP. In this initial step, the four aforementioned algorithms are used. This newly developed study is intended to facilitate the visualization of the performance of each algorithm, for example, AGG performs a homogeneity of 1.0, a completeness of 0.950, a V-measure of 0.713, an ARI of 0.557 and an AMI of 0.713, and GMM generates 1, 0.771, 0.645, 0.536 and 0.644, respectively. Lastly, to improve the classification of this study, semi-supervised learning is adopted instead of unsupervised learning considering the same algorithms and dataset. In this case, GMM achieves results of 1, 0.688, 0.688, 0.786 and 0.688 whereas DBSCAN achieves 0, 0.036, 0.028, 0.018, 0.028 for homogeneity, completeness, V-measure, ARI and AMI respectively. Overall, this unsupervised learning is approached as a method for jamming classification, addressing the challenge of identifying newly introduced samples.</p>
