Meta-learning, which enables models to learn how to learn from a collection of tasks, is currently one of the most active and essential topics in deep learning. Because of their widespread applicability, meta-learning algorithms are increasingly embedded in critical systems that affect human lives, so the need to test and debug such systems is apparent. We investigated the use of conventional software-testing techniques to generate test cases that assure the quality of meta-learning models. The goal of this study is to examine the challenges and benefits of the test approaches used to develop test cases for meta-learning models. As a case study, we apply a model-agnostic meta-learning (MAML) method together with a set of comparative experiments to extract the obstacles and benefits of each technique. By comparing post-train tests with pre-train tests, we highlight the challenges and drawbacks of each testing strategy in the black-box, white-box, and gray-box categories. The results suggest that traditional testing procedures can help analyze meta-learning models, and that such tests reduce testing time while improving the performance of meta-learner models.
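To make the pre-train versus post-train distinction concrete, below is a minimal Python sketch of the two kinds of test on a MAML-style learner. It is not the thesis's actual test suite: the toy sine-regression task, the tiny network, the helper names (sample_task, adapt, etc.), and the loss threshold are all illustrative assumptions. A pre-train test validates properties that should hold before any meta-training (here, that inner-loop adaptation reduces task loss, catching wiring bugs early); a post-train test validates the behavior of the trained meta-learner (here, that it adapts to an unseen task in a single gradient step).

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Toy regression task y = a*sin(x + b), as in common MAML demos.
    a, b = rng.uniform(0.5, 2.0), rng.uniform(0.0, np.pi)
    x = rng.uniform(-5.0, 5.0, size=(10, 1))
    return x, a * np.sin(x + b)

def init_weights(hidden=16):
    return {"W1": rng.normal(0.0, 0.5, (1, hidden)), "b1": np.zeros(hidden),
            "W2": rng.normal(0.0, 0.5, (hidden, 1)), "b2": np.zeros(1)}

def predict(w, x):
    # One-hidden-layer tanh network.
    return np.tanh(x @ w["W1"] + w["b1"]) @ w["W2"] + w["b2"]

def mse(w, x, y):
    return float(np.mean((predict(w, x) - y) ** 2))

def grad(w, x, y, eps=1e-5):
    # Central-difference numerical gradient: slow, but dependency-free.
    g = {}
    for k, v in w.items():
        gk = np.zeros_like(v)
        for i in np.ndindex(v.shape):
            v[i] += eps; hi = mse(w, x, y)
            v[i] -= 2 * eps; lo = mse(w, x, y)
            v[i] += eps  # restore the original weight
            gk[i] = (hi - lo) / (2 * eps)
        g[k] = gk
    return g

def adapt(w, x, y, lr=0.01, steps=5):
    # Inner-loop adaptation: a few gradient steps on a single task.
    w = {k: v.copy() for k, v in w.items()}
    for _ in range(steps):
        g = grad(w, x, y)
        w = {k: v - lr * g[k] for k, v in w.items()}
    return w

# Pre-train test: runs BEFORE meta-training and catches wiring bugs early,
# e.g. that one round of inner-loop adaptation actually reduces task loss.
def test_pretrain_adaptation_reduces_loss():
    w, (x, y) = init_weights(), sample_task()
    assert mse(adapt(w, x, y), x, y) < mse(w, x, y)

# Post-train test: runs AFTER meta-training and checks learned behavior.
# meta_weights would come from the (omitted) outer meta-training loop;
# the 0.5 loss threshold is an illustrative assumption, not a real spec.
def test_posttrain_single_step_adaptation(meta_weights, threshold=0.5):
    x, y = sample_task()  # a task the meta-learner has never seen
    assert mse(adapt(meta_weights, x, y, steps=1), x, y) < threshold

The pre-train check is a white-box or gray-box test (it inspects the learner's internals and training dynamics), whereas the post-train check can be run black-box, querying only the trained model's inputs and outputs, which mirrors the category comparison in the abstract.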
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:mdh-59257
Date | January 2022
Creators | Seyedshahi, Farzaneh Alsadat
Publisher | Mälardalens universitet, Akademin för innovation, design och teknik
Source Sets | DiVA Archive at Uppsala University
Language | English
Detected Language | English
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format | application/pdf
Rights | info:eu-repo/semantics/openAccess