1.
Algorithmic Distribution of Applied Learning on Big Data. Shukla, Manu, 16 October 2020.
Machine learning and graph techniques are complex and challenging to distribute. Generally, they are distributed by modeling the problem in the same way as single-node sequential techniques, except applied to smaller chunks of data and compute, with the results then combined. These techniques focus on stitching together the results from smaller chunks so that the outcome is as close as possible to the sequential result on the entire data. This approach is not feasible in numerous kernel, matrix, optimization, graph, and other techniques where the algorithm needs access to all the data during execution. In this work, we propose key-value pair based distribution techniques that are widely applicable to statistical machine learning techniques along with matrix, graph, and time series based algorithms. The crucial difference from previously proposed techniques is that all operations are modeled as key-value pair based fine- or coarse-grained steps. This allows flexibility in distribution with no compounding error in each step. The distribution is applicable not only in robust disk-based frameworks but also in in-memory systems without significant changes. Key-value pair based techniques also provide the ability to generate the same result as sequential techniques, with no edge or overlap effects to resolve in structures such as graphs or matrices.
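As a minimal single-process sketch of this modeling style (the data and function names here are our own illustration, not code from the thesis), a per-key mean can be expressed entirely as key-value map, shuffle, and reduce steps. Because the partial aggregates combine associatively, the chunked result equals the sequential one, with no compounding error:

```python
from collections import defaultdict

def map_step(chunk):
    """Emit (key, (partial_sum, count)) pairs from one data chunk."""
    return [(key, (value, 1)) for key, value in chunk]

def shuffle(mapped_pairs):
    """Group emitted pairs by key, as a framework's shuffle phase would."""
    grouped = defaultdict(list)
    for key, pair in mapped_pairs:
        grouped[key].append(pair)
    return grouped

def reduce_step(grouped):
    """Merge partial sums and counts; addition is associative, so the
    chunked result matches the sequential computation exactly."""
    return {
        key: sum(s for s, _ in pairs) / sum(c for _, c in pairs)
        for key, pairs in grouped.items()
    }

# Two "distributed" chunks of (entity, measurement) records.
chunks = [
    [("a", 1.0), ("b", 4.0)],
    [("a", 3.0), ("b", 6.0)],
]
mapped = [pair for chunk in chunks for pair in map_step(chunk)]
result = reduce_step(shuffle(mapped))
# result == {"a": 2.0, "b": 5.0}, identical to the sequential means
```

The same shape (map to keyed partials, shuffle by key, reduce associatively) carries over to the matrix, graph, and time series operations the thesis distributes.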
This thesis focuses on key-value pair based distribution of applied machine learning techniques on a variety of problems. In the first method, key-value pair distribution is used for storytelling at scale. Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. When performed sequentially, these computations become a bottleneck because the massive number of entities makes space and time complexity untenable. We present DISCRN, or DIstributed Spatio-temporal ConceptseaRch based StorytelliNg, a distributed framework for performing spatio-temporal storytelling. The framework extracts entities from microblogs and event data, and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing the key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and GDELT (Global Database of Events, Language and Tone) events show the efficiency of the techniques in DISCRN.
The second work determines brand perception directly from people's comments in social media. Current techniques for determining brand perception, such as surveys of handpicked users by mail, in person, by phone, or online, are time-consuming and increasingly inadequate. The proposed DERIV system distills storylines from open data representing the direct consumer voice into a brand perception. The framework summarizes the perception of a brand in comparison to peer brands with in-memory key-value pair based distributed algorithms utilizing supervised machine learning techniques. Experiments performed with open data, and models built with storylines of known peer brands, show the technique to be highly scalable and accurate in capturing brand perception from vast amounts of social data compared to sentiment analysis.
The third work performs event categorization and prospect identification in social media. The problem is challenging due to the endless amount of information generated daily. In our work, we present DISTL, an event processing and prospect identifying platform. It accepts as input a set of storylines (a sequence of entities and their relationships) and processes them as follows: (1) it uses different algorithms (LDA, SVM, information gain, rule sets) to identify themes from storylines;
(2) it identifies top locations and times in storylines and combines them with themes to generate events that are meaningful in a specific scenario for categorizing storylines; and (3) it extracts top prospects as people and organizations from data elements contained in storylines. The output comprises sets of events in different categories, the storylines under them, and the top prospects identified. DISTL utilizes in-memory key-value pair based distributed processing that scales to high data volumes and categorizes generated storylines in near real time.
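Step (1) lists information gain among the theme-identification criteria. As a self-contained illustration (the toy documents, labels, and function names are our own assumptions), information gain scores how well a term separates storyline categories:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of category labels."""
    total = len(labels)
    probs = [labels.count(c) / total for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(docs, labels, term):
    """Reduction in label entropy from splitting documents on whether
    they contain `term`; higher means the term is more theme-indicative."""
    base = entropy(labels)
    with_term = [lab for doc, lab in zip(docs, labels) if term in doc]
    without = [lab for doc, lab in zip(docs, labels) if term not in doc]
    remainder = sum(
        len(part) / len(labels) * entropy(part)
        for part in (with_term, without) if part
    )
    return base - remainder

# Toy storyline term sets with hand-assigned categories.
docs = [{"protest", "rally"}, {"protest", "march"},
        {"game", "score"}, {"game", "win"}]
labels = ["civil", "civil", "sports", "sports"]
# "protest" perfectly splits the categories; "win" appears in only one doc
```

A term like "protest" that cleanly partitions the categories attains the maximum gain, which is how such a criterion surfaces theme-defining terms.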
The fourth work builds flight paths for drones in a distributed manner to survey a large area, taking images to determine the growth of vegetation over power lines, while adjusting to the terrain and to the number of drones and their capabilities. Drones are increasingly being used to perform risky and labor-intensive aerial tasks cheaply and safely. To keep operating costs low and flights autonomous, their flight plans must be pre-built. In existing techniques, drone flight paths are not automatically pre-calculated based on drone capabilities and terrain information. We present details of an automated flight plan builder, DIMPL, that pre-builds flight plans for drones tasked with surveying a large area to take photographs of electric poles and identify ones with hazardous vegetation overgrowth. DIMPL employs a distributed in-memory key-value pair based paradigm to process subregions in parallel and build flight paths in a highly efficient manner.
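One way to picture the per-subregion path building is a lawn-mower sweep over a grid, with each subregion mapped independently to its path as a distributed framework would do per partition. This is a generic coverage pattern sketched under our own assumptions, not DIMPL's actual path construction:

```python
def build_flight_path(subregion):
    """Boustrophedon (lawn-mower) sweep over one rectangular subregion,
    given as (x0, y0, width, height) in grid cells. Returns the ordered
    waypoints at which the drone would photograph."""
    x0, y0, width, height = subregion
    path = []
    for row in range(height):
        # Alternate sweep direction each row to avoid dead travel.
        cols = range(width) if row % 2 == 0 else range(width - 1, -1, -1)
        for col in cols:
            path.append((x0 + col, y0 + row))
    return path

# Key-value flavor: map each subregion id to its independently built path.
subregions = {"r0": (0, 0, 3, 2), "r1": (3, 0, 3, 2)}
paths = {rid: build_flight_path(sub) for rid, sub in subregions.items()}
# paths["r0"] == [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```

Since each subregion's path depends only on that subregion, the per-partition work is embarrassingly parallel, which matches the key-value pair processing model described above.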
The fifth work highlights scaling graph operations, particularly pruning and joins. Linking topics to specific experts in technical documents and finding connections between experts are crucial for detecting the evolution of emerging topics and the relationships between their influencers in state-of-the-art research. Current techniques that make such connections are limited to similarity measures. Methods based on weights such as TF-IDF and frequency to identify important topics, and self-joins between topics and experts to identify connections between experts, are generally utilized. However, such approaches are inadequate for identifying emerging keywords and experts, since the most useful terms in technical documents tend to be infrequent and concentrated in just a few documents. This makes connecting experts through joins on large dense graphs challenging. We present DIGDUG, a framework that identifies emerging topics by applying graph operations to technical terms. The framework identifies connections between authors of patents and journal papers by performing joins on connected topics and topics associated with the authors at scale. The problem of scaling the graph operations for topics and experts is solved through dense graph pruning and graph joins, categorized under their own scalable separable dense graph class based on key-value pair distribution. Comparing our graph join and pruning technique against multiple graph and join methods in MapReduce revealed a significant improvement in performance using our approach. / Doctor of Philosophy / Distribution of machine learning and graph algorithms is commonly performed by modeling the core algorithm in the same way as the sequential technique, except implemented on a distributed framework. This approach is satisfactory in very few cases, such as depth-first search and subgraph enumeration in graphs, k nearest neighbors, and a few additional common methods.
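The dense-graph pruning and key-value self-join at the heart of the fifth work above can be sketched in a single process. The edge weights, threshold, and function names below are illustrative assumptions, not DIGDUG's implementation:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical expert-topic edges: (expert, topic, weight).
edges = [
    ("alice", "graph joins", 0.9),
    ("bob", "graph joins", 0.8),
    ("carol", "sentiment", 0.2),
    ("dave", "graph joins", 0.05),
]

def prune(edges, threshold):
    """Drop low-weight edges before joining, keeping the graph sparse."""
    return [e for e in edges if e[2] >= threshold]

def join_on_topic(edges):
    """Key-value self-join: key each edge by topic, then pair up the
    experts that share a topic key."""
    by_topic = defaultdict(list)
    for expert, topic, _ in edges:
        by_topic[topic].append(expert)
    return sorted(
        (a, b, topic)
        for topic, experts in by_topic.items()
        for a, b in combinations(sorted(experts), 2)
    )

connections = join_on_topic(prune(edges, 0.1))
# connections == [("alice", "bob", "graph joins")]
```

Pruning first shrinks the quadratic blow-up of the self-join, which is why the two operations are treated together when scaling to dense graphs.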
2.
Unifying Distillation with Personalization in Federated Learning. Siddharth Divi (10725357), 29 April 2021.
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data. In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients. In this paper, we address this problem with PersFL, a discrete two-stage personalized learning algorithm. In the first stage, PersFL finds the optimal teacher model for each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from the optimal teachers into each user's local model. The teacher model provides each client with a rich, high-level representation that the client can easily adapt to its local model, which overcomes the statistical heterogeneity present at different clients. We evaluate PersFL on the CIFAR-10 and MNIST datasets using three data-splitting strategies to control the diversity between clients' data distributions.

We empirically show that PersFL outperforms FedAvg and three state-of-the-art personalization methods, pFedMe, Per-FedAvg, and FedPer, on the majority of data splits with minimal communication cost. Further, we study the performance of PersFL on different distillation objectives, how this performance is affected by the equitable notion of fairness among clients, and the number of required communication rounds. We also build an evaluation framework with the following modules: Data Generator, Federated Model Generation, and Evaluation Metrics. We introduce new metrics for the domain of personalized FL and split these metrics into two perspectives: Performance and Fairness. We analyze the performance of all the personalized algorithms by applying these metrics to answer the following questions: Which personalization algorithm performs the best in terms of accuracy across all users, and which personalization algorithm is the fairest amongst them all? Finally, we make the code for this work available at https://tinyurl.com/1hp9ywfa for public use and validation.
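The second-stage distillation described above typically minimizes a divergence between temperature-softened teacher and student outputs. The sketch below shows the classic Hinton-style objective with made-up logits; PersFL's exact objective may differ:

```python
import numpy as np

def softened_probs(logits, temperature):
    """Temperature-softened softmax over class logits."""
    z = logits / temperature
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions: the standard knowledge-distillation objective."""
    p = softened_probs(teacher_logits, temperature)
    q = softened_probs(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.5])      # hypothetical optimal-teacher logits
aligned = np.array([3.8, 1.1, 0.4])      # student close to the teacher
mismatch = np.array([0.1, 2.5, 1.0])     # student far from the teacher
# minimizing this loss pulls the local student toward its teacher's
# high-level output distribution, so the aligned student scores lower
```

Gradient steps on this loss over a client's local data are what transfer the teacher's representation into the personalized local model.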
3.
Towards Peer-to-Peer Federated Learning: Algorithms and Comparisons to Centralized Federated Learning. Mäenpää, Dylan, January 2021.
Due to privacy and regulatory reasons, sharing data between institutions can be difficult. Because of this, real-world data are not fully exploited by machine learning (ML). An emerging method is to train ML models with federated learning (FL), which enables clients to collaboratively train ML models without sharing raw training data. We explored peer-to-peer FL by extending a prominent centralized FL algorithm called Fedavg to function in a peer-to-peer setting. We named this extended algorithm FedavgP2P. Deep neural networks at 100 simulated clients were trained to recognize digits using FedavgP2P and the MNIST data set. Scenarios with IID and non-IID client data were studied. We compared FedavgP2P to Fedavg with respect to the models' convergence behaviors and communication costs. Additionally, we analyzed the connection between local client computation, the number of neighbors each client communicates with, and how these affect performance. We also attempted to improve the FedavgP2P algorithm with heuristics based on client identities and per-class F1-scores. The findings showed that with FedavgP2P, the mean model convergence behavior was comparable to that of a model trained with Fedavg. However, this came with a varying degree of variation in the 100 models' convergence behaviors and much greater communication costs (at least 14.9x more communication with FedavgP2P). By increasing the amount of local computation up to a certain level, communication costs could be saved. When the number of neighbors a client communicated with increased, the variation in the models' convergence behaviors decreased. The FedavgP2P heuristics did not show improved performance. In conclusion, the overall findings indicate that peer-to-peer FL is a promising approach.
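A FedavgP2P-style round, local training followed by averaging with randomly chosen neighbors instead of a central server, can be sketched on toy scalar "models". Everything below (the quadratic client objectives, learning rate, and function names) is our simplification, not the thesis implementation:

```python
import random

def local_update(w, grad, lr=0.1):
    """Stand-in for a client's local training epochs on its private data."""
    return w - lr * grad(w)

def fedavgp2p_round(weights, grads, num_neighbors, rng):
    """One peer-to-peer round: every client trains locally, then replaces
    its model with the average of its own and `num_neighbors` random
    peers' freshly trained models."""
    trained = [local_update(w, g) for w, g in zip(weights, grads)]
    n = len(trained)
    merged = []
    for i in range(n):
        peers = rng.sample([j for j in range(n) if j != i], num_neighbors)
        group = [trained[i]] + [trained[j] for j in peers]
        merged.append(sum(group) / len(group))
    return merged

# Toy non-IID setup: client i's loss is (w - target_i)^2, so local
# gradients pull toward different targets (mean target = 3.0).
targets = [0.0, 2.0, 4.0, 6.0]
grads = [lambda w, t=t: 2.0 * (w - t) for t in targets]
rng = random.Random(1)
weights = [0.0] * len(targets)
for _ in range(100):
    weights = fedavgp2p_round(weights, grads, num_neighbors=2, rng=rng)
# the clients settle in a band around the mean of their targets
```

Raising `num_neighbors` averages each client over more peers per round, which mirrors the thesis finding that more neighbors reduce the variation between client models.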
4.
Ablation Programming for Machine Learning. Sheikholeslami, Sina, January 2019.
As machine learning systems are being used in an increasing number of applications, from analysis of satellite sensory data and health-care analytics to smart virtual assistants and self-driving cars, they are also becoming more and more complex. This means that more time and computing resources are needed to train the models, and the number of design choices and hyperparameters will increase as well. Due to this complexity, it is usually hard to explain the effect of each design choice or component of the machine learning system on its performance. A simple approach for addressing this problem is to perform an ablation study, a scientific examination of a machine learning system in order to gain insight into the effects of its building blocks on its overall performance. However, ablation studies are currently not part of standard machine learning practice. One of the key reasons for this is the fact that currently, performing an ablation study requires major modifications to the code as well as extra compute and time resources. On the other hand, experimentation with a machine learning system is an iterative process that consists of several trials. A popular approach is to run these trials in parallel on an Apache Spark cluster. Since Apache Spark follows the Bulk Synchronous Parallel model, parallel execution of trials consists of several stages, with barriers between them. This means that in order to execute a new set of trials, all trials from the previous stage must have finished. As a result, we usually end up wasting a lot of time and computing resources on unpromising trials that could have been stopped soon after their start. We have attempted to address these challenges by introducing MAGGY, an open-source framework for asynchronous and parallel hyperparameter optimization and ablation studies with Apache Spark and TensorFlow.
This framework allows for better resource utilization as well as ablation studies and hyperparameter optimization in a unified and extendable API.
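Setting Maggy's actual API aside, the core of an ablation study can be sketched as a loop that retrains with one component disabled at a time and attributes the score drop to that component. The component names and the stand-in `train_and_score` below are illustrative assumptions, not a real training run:

```python
def train_and_score(config):
    """Stand-in for a full training run returning a validation score.
    The fixed penalties are fabricated purely for illustration."""
    base = 0.90
    penalty = {"dropout": 0.02, "batch_norm": 0.05, "augmentation": 0.03}
    return base - sum(penalty[c] for c in config["ablated"])

components = ["dropout", "batch_norm", "augmentation"]
baseline = train_and_score({"ablated": []})

report = {}
for component in components:
    # Retrain with exactly one component removed and record the drop.
    score = train_and_score({"ablated": [component]})
    report[component] = round(baseline - score, 6)

# report == {"dropout": 0.02, "batch_norm": 0.05, "augmentation": 0.03}
```

In a framework like the one described, each of these trials would be dispatched asynchronously to cluster workers rather than run in a blocking loop, so unpromising trials can be stopped early.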
5.
Cluster Selection for Clustered Federated Learning Using Min-wise Independent Permutations and Word Embeddings. Raveen Bandara Harasgama, Pulasthi, January 2022.
Federated learning is a widely established modern machine learning methodology where training is done directly on the client device with local client data, and the local training results are shared to compute a global model. Federated learning emerged as a result of data ownership and privacy concerns with traditional machine learning methodologies, where data is collected and trained at a central location. However, in a distributed data environment, training suffers significantly when the client data is not identically distributed. Hence, clustered federated learning was proposed, in which similar clients are clustered and trained independently to form specialized cluster models, which are then used to compute a global model. In this approach, the cluster selection is a major factor that affects the effectiveness of the global model. This research presents two approaches for client clustering using local client data for clustered federated learning while preserving data privacy. The two proposed approaches use min-wise independent permutations to compute client signatures using text and word embeddings. These client signatures are then used as a representation of client data to cluster clients using agglomerative hierarchical clustering. Unlike previously proposed clustering methods, the two presented approaches do not use model updates, provide a better privacy-preserving mechanism, and have a lower communication overhead. With extensive experimentation, we show that the proposed approaches outperform the random clustering approach. Finally, we present a client clustering methodology that can be utilized in a practical clustered federated learning environment.
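The signature idea rests on MinHash: under min-wise independent permutations, the probability that two sets share a minimum equals their Jaccard similarity, so compact signatures can be compared instead of raw client data. The sketch below simulates the permutations with salted hashes on made-up token sets (the data and function names are our assumptions):

```python
import hashlib

def minhash_signature(tokens, num_perm=64):
    """Client signature via simulated min-wise independent permutations:
    each salted hash plays the role of one random permutation. Only the
    signature, never the raw tokens, would leave the client."""
    signature = []
    for salt in range(num_perm):
        signature.append(min(
            int(hashlib.md5(f"{salt}:{token}".encode()).hexdigest(), 16)
            for token in tokens
        ))
    return signature

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature slots estimates the Jaccard
    similarity of the underlying token sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Hypothetical token sets standing in for clients' local text data.
client_a = {"federated", "learning", "privacy", "cluster", "model"}
client_b = {"federated", "learning", "privacy", "cluster", "update"}
client_c = {"drone", "flight", "terrain", "survey", "imagery"}

sim_ab = estimated_jaccard(minhash_signature(client_a), minhash_signature(client_b))
sim_ac = estimated_jaccard(minhash_signature(client_a), minhash_signature(client_c))
# agglomerative hierarchical clustering on these pairwise similarities
# would group client_a with client_b, not with client_c
```

More permutations tighten the Jaccard estimate at the cost of a longer signature, a trade-off between clustering quality and communication overhead.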
6.
Scalable Gaussian Process Regression for Time Series Modelling. Boopathi, Vidhyarthi, January 2019.
Machine learning algorithms have applications in almost all areas of our daily lives, mainly due to their ability to learn complex patterns and insights from massive datasets. With data increasing at a high rate, it is becoming necessary that the algorithms are resource-efficient and scalable. Gaussian processes are one of the efficient techniques for non-linear modelling, but they have limited practical applications due to their computational complexity. This thesis studies how parallelism techniques can be applied to optimize the performance of Gaussian process regression, and empirically assesses parallel learning of a sequential GP as well as a distributed Gaussian process regression algorithm with Random Projection approximation implemented in the Spark framework. These techniques were tested on a dataset provided by Volvo Cars. The experiments show that training the GP model with 45k records, or 2^19 ≈ 10^6 data points, takes less than 30 minutes on a Spark cluster with 8 nodes. With sufficient computing resources, these algorithms can handle arbitrarily large datasets.
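The computational complexity mentioned above comes from the linear solve against the n-by-n kernel matrix in exact GP regression, which costs O(n^3) and motivates the distributed approximations. A minimal exact-GP sketch (the RBF kernel choice and toy data are our illustrative assumptions):

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    """Exact GP regression posterior mean. The n x n solve below is the
    O(n^3) bottleneck that approximation and distribution attack."""
    k_xx = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_sx = rbf_kernel(x_test, x_train)
    return k_sx @ np.linalg.solve(k_xx, y_train)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                              # toy time-series observations
pred = gp_posterior_mean(x, y, np.array([1.5]))
# the posterior mean at 1.5 lies close to the true sin(1.5)
```

Techniques such as the Random Projection approximation used in the thesis replace this exact solve with a cheaper low-rank computation that can be split across Spark workers.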
7.
Evaluating Distributed Machine Learning for Fog Computing IoT Scenarios: A Comparison Between Distributed and Cloud-based Training on Tensorflow. El Ghamri, Hassan, January 2022.
Day by day, internet of things (IoT) devices are becoming a bigger part of our lives. Currently, these devices are heavily dependent on cloud computing, which can be a privacy risk. The general aim of this report is to investigate alternatives to cloud computing; a quite fascinating alternative is fog computing. Fog computing is a structure that utilizes the processing power of devices at the edge of the network (local devices) rather than fully relying on cloud computing. A specific case of this structure is further investigated as the main objective of this report, namely distributed machine learning for IoT devices. This objective is achieved by answering the questions of what methods and tools are available to accomplish it, and how well they function in comparison to cloud computing. There are three main stages in this study. The first stage was information gathering on two levels: first on a basic level, exploring the field, and second, gathering further information about available tools for distributing machine learning and evaluating those tools. The second stage was implementing tests to verify the performance of each approach and tool chosen based on the information gathered. The last stage was to summarize the results and draw conclusions. The study has shown that distributed machine learning is still too immature to replace cloud computing, since the existing tools are not optimized for this use case. The best option for now is to stick with cloud computing, but if somewhat lower performance can be tolerated, then some IoT devices are powerful enough to process the machine learning task independently. Distributed machine learning is still quite a new concept, but it is growing fast; hopefully this growth will soon expand to support IoT devices.
8.
Decentralized Large-Scale Natural Language Processing Using Gossip Learning. Alkathiri, Abdul Aziz, January 2020.
The field of Natural Language Processing in machine learning has seen rising popularity and use in recent years. The nature of Natural Language Processing, which deals with natural human language and computers, has led to the research and development of many algorithms that produce word embeddings. One of the most widely used of these algorithms is Word2Vec. With the abundance of data generated by users and organizations and the complexity of machine learning and deep learning models, performing training using a single machine becomes unfeasible. The advancement of distributed machine learning offers a solution to this problem. Unfortunately, for reasons concerning data privacy and regulations, in some real-life scenarios the data must not leave its local machine. This limitation has led to the development of techniques and protocols that are massively parallel and data-private. The most popular of these protocols is federated learning. However, due to its centralized nature, it still poses some security and robustness risks. Consequently, this led to the development of massively parallel, data-private, decentralized approaches, such as gossip learning. In the gossip learning protocol, every once in a while each node in the network randomly chooses a peer for information exchange, which eliminates the need for a central node. This research intends to test the viability of gossip learning for large-scale, real-world applications. In particular, it focuses on the implementation and evaluation of a Natural Language Processing application using gossip learning. The results show that the application of Word2Vec in a gossip learning framework is viable and yields results comparable to its non-distributed, centralized counterpart for various scenarios, with an average loss in quality of 6.904%.
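The exchange-and-merge step of the gossip protocol described above can be sketched with toy one-weight "models"; the class and merge rule are our abstraction, not the thesis code:

```python
import random

class GossipNode:
    """A gossip-learning participant: it holds model weights, occasionally
    pushes them to a randomly chosen peer, and merges whatever it
    receives with its own copy. No central aggregator exists."""
    def __init__(self, weights):
        self.weights = list(weights)

    def on_receive(self, peer_weights):
        # Merge by element-wise averaging of the two models.
        self.weights = [(a + b) / 2.0
                        for a, b in zip(self.weights, peer_weights)]

def gossip_step(nodes, rng):
    """One exchange: a random node pushes its model to a random peer."""
    sender = rng.choice(nodes)
    receiver = rng.choice([n for n in nodes if n is not sender])
    receiver.on_receive(sender.weights)

rng = random.Random(42)
nodes = [GossipNode([float(i)]) for i in range(8)]  # toy one-weight models
for _ in range(300):
    gossip_step(nodes, rng)
values = [n.weights[0] for n in nodes]
# repeated random exchanges drive all copies toward a common consensus
```

In a full system each node would also train locally between exchanges (e.g., on its shard of a Word2Vec corpus), so the merged model accumulates knowledge from every node's private data.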
9.
Compression and Distribution of a Neural Network with IoT Applications. Backe, Hannes; Rydberg, David, January 2021.
In order to enable deployment of large neural network models on devices with limited memory capacity, refined methods for compressing these are essential. This project aims at investigating some possible solutions, namely pruning and partitioned logit-based knowledge distillation, using teacher-student learning methods. A cumbersome benchmark teacher neural network was developed and used as a reference. A special case of logit-based teacher-student learning was then applied, resulting not only in a compressed model, but also in a convenient way of distributing it. The individual student models were able to mimic the parts of the teacher model with small losses, while the network of student models achieved similar accuracy as the teacher model. Overall, the size of the network of student models was around 11% of the teacher. Another popular method of compressing neural networks was also tested: pruning. Pruning the teacher network resulted in a much smaller model, around 18% of the teacher model in terms of size, with similar accuracy. / Bachelor's thesis in electrical engineering, 2021, KTH, Stockholm.
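Magnitude pruning, one common way to realize the pruning compared above (the report's exact scheme may differ), zeroes the smallest-magnitude fraction of weights. A sketch with a fabricated weight matrix:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest
    absolute values, keeping the large-magnitude ones intact."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.05, -0.9, 0.02],
              [1.2, -0.01, 0.4]])        # fabricated layer weights
pruned = magnitude_prune(w, sparsity=0.5)
# the three smallest-magnitude weights (0.05, 0.02, -0.01) are zeroed,
# while -0.9, 1.2 and 0.4 survive
```

Zeroed weights compress well with sparse storage formats, which is how pruning shrinks the model footprint; in practice a short fine-tuning pass usually follows to recover any lost accuracy.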
10.
Multi-Agent-Based Collaborative Machine Learning in Distributed Resource Environments. Ahmad Esmaeili (19153444), 18 July 2024.
This dissertation presents decentralized and agent-based solutions for organizing machine learning resources, such as datasets and learning models. It aims to democratize the analysis of these resources through a simple yet flexible query structure, automate common ML tasks such as training, testing, model selection, and hyperparameter tuning, and enable privacy-centric building of ML models over distributed datasets. Based on networked multi-agent systems, the proposed approach represents ML resources as autonomous and self-reliant entities. This representation makes the resources easily movable, scalable, and independent of geographical locations, obviating the need for centralized control and management units. Additionally, as all machine learning and data mining tasks are conducted near their resources, providers can apply customized rules independently of other parts of the system.