1.
Neural Networks for Standardizing Ratings in League of Legends. Jansson, Andréas; Karlsson, Erik. January 2022.
In the game League of Legends (LoL) there are several regions globally, each with its own rating distribution. The purpose of this thesis is to examine whether there are any differences in playing strength between the regions and, if so, to quantify the offsets numerically.

Data on matches played online is publicly available. We extracted 8.7 million matches in total, with over 600 features per match. Each match is also annotated with a local rating, which represents the rank it was played at. All of these matches are between teams from the same region, not across regions; hence the rating is local rather than global. The absence of a global score prevents us from comparing matches across regions. Our goal is to rank the regions by developing a model that predicts a global score from the data available for local ratings.

We first develop a Deep Neural Network (DNN), trained on equal amounts of data from all regions, to predict a global rating. We then use a Siamese Neural Network (SNN) to generate a distribution comparable to the true distribution of ratings. In both of the above experiments we hide the region information from the network. We also develop a model that is given region information in a separate layer during training. The outcome of the DNN model is validated against the outcomes of the SNN and region-aware models. To further improve the results, we normalize the data with respect to match duration. In additional experiments, a model is trained on matches from one specific region and then used to predict ratings for matches from other regions.

The results allowed us to rank the regions by performance. Some of the results were surprising: for instance, the experiments suggest that Japan and Oceania, which have very little presence on the professional e-sports scene, are at the top.
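The shared-weight comparison at the heart of the SNN experiments can be sketched in a few lines. This is an illustrative toy, not the thesis model: the feature count, embedding size, and random weights are all assumptions, and a trained network would replace the random matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 600 features per match, a small embedding.
N_FEATURES, EMB_DIM = 600, 16

# One shared weight matrix plays the role of the twin sub-networks:
# both inputs pass through the *same* parameters.
W = rng.normal(scale=0.05, size=(N_FEATURES, EMB_DIM))

def embed(match_features: np.ndarray) -> np.ndarray:
    """Shared embedding: identical weights for both branches."""
    return np.tanh(match_features @ W)

def rating_gap(match_a: np.ndarray, match_b: np.ndarray) -> float:
    """Euclidean distance between embeddings as a comparable score."""
    return float(np.linalg.norm(embed(match_a) - embed(match_b)))

a = rng.normal(size=N_FEATURES)
b = rng.normal(size=N_FEATURES)
gap = rating_gap(a, b)  # symmetric, zero for identical inputs
```

Because both branches share weights, the gap depends only on the two matches, not on which region either came from, which is what makes cross-region scores comparable.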
2.
Content-based Recommender System for Detecting Complementary Products: Evaluating Siamese Neural Networks for Predicting Complementary Relationships among E-Commerce Products. Angelovska, Marina. January 2020.
As much as the diverse and rich offer on e-commerce websites helps users find what they need in one marketplace, online catalogs are sometimes overwhelming. Recommender systems play an important role on e-commerce websites, as they improve the customer journey by helping users find what they want at the right moment. These recommendations can be based on users' characteristics, demographics, or purchase and session history.

In this thesis we focus on identifying complementary relationships between products at the largest e-commerce company in the Netherlands. Complementary products are products that go well together: products that might be a necessity for the chosen product, or simply a nice addition to it. There is big potential at the company, as complementary products increase the average purchase value yet exist for less than 20% of the whole catalog.

We propose a content-based recommender system for detecting complementary products, using a supervised deep learning approach that relies on a Siamese Neural Network (SNN). The purpose of this thesis is threefold. Firstly, the main goal is to create an SNN model that can predict complementary products for any given product based on its content. For this purpose, we implement and compare two models: a Siamese Convolutional Neural Network and a Siamese Long Short-Term Memory (LSTM) Recurrent Neural Network. We feed these networks with pairs of the company's products that are either complementary or non-complementary. Secondly, the basic assumption of our approach is that most of a product's important features are included in its title, but we also conduct experiments that include the product description and brand. Lastly, we propose an extension of the SNN approach that handles millions of products in a matter of seconds. As a result of the experiments, we conclude that the Siamese LSTM can predict complementary products with the highest accuracy, 85%.
Our assumption that the title is the most valuable attribute was confirmed. In addition, transforming our solution into a K-nearest-neighbour problem in order to optimize it for millions of products gave promising results.
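The extension to millions of products reduces, as the abstract notes, to a K-nearest-neighbour search over precomputed embeddings. A minimal brute-force sketch, assuming unit-normalized title embeddings; the random vectors are stand-ins, and a real system would use the trained Siamese branch plus an approximate-NN index:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical catalog: one precomputed embedding row per product,
# normalized so a dot product equals cosine similarity.
catalog = rng.normal(size=(1000, 32))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

def top_k_complements(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Brute-force cosine K-nearest-neighbour lookup over the catalog."""
    q = query / np.linalg.norm(query)
    scores = catalog @ q  # cosine similarity against every product
    return np.argsort(-scores)[:k]

idx = top_k_complements(catalog[42], k=5)
```

The point of the transformation is that embeddings are computed once per product, so serving a recommendation costs one matrix-vector product instead of one Siamese forward pass per candidate pair.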
3.
Defect classification in LPBF images using semi-supervised learning. Göransson, Anton. January 2022.
Laser powder bed fusion (LPBF) is an additive manufacturing technique that builds metallic parts by spreading many layers of metal powder over a build surface and using a laser to melt specific sections of each layer. The part is built by melting consecutive layers on top of each other until the design is complete. During this process, however, defects can occur. These defects affect the part's physical properties, such as tensile strength, and it is important to detect them for quality assurance. A single part takes several hundred or thousand layers to build. As each layer is built, cameras and sensors create images of it. These images are used to identify and classify defects that could negatively affect a printed part's physical properties, and automated classification would reduce manual inspection of the printed part. The classification of defects in each layer must therefore be automated, as manually classifying every layer would be infeasible. Recently, machine learning has proven to be an effective method for automating defect classification in laser powder bed fusion. However, machine learning, and especially deep learning, generally requires a large amount of labeled training data, which is typically not available for LPBF-printed parts: labeling images requires manual labor and domain knowledge. One of the greatest obstacles in defect classification is how machine learning can be applied despite this absence of labeled data. A machine learning approach that shows potential for being trained with less data is the Siamese neural network. In this thesis, a novel approach for automating defect classification is developed using layer images from an LPBF printing process. To cope with the limited access to labeled data, the classifiers are based on the Siamese neural network structure.
Two Siamese neural network structures are developed: a one-shot classifier, which classifies an instance directly, and a hierarchical classifier, whose classification process follows the hierarchy of the defect classes. The classifiers are evaluated on a test set of images collected from the laser powder bed fusion process. The one-shot classifier classifies the images with an accuracy of 70% and the hierarchical classifier with an accuracy of 86%. For the hierarchical classifier, the areas under the ROC curves were 0.96 and 0.95 for the normal-vs-defect and overheating-vs-spattering stages, respectively. Unlabeled images were added to the training set of a new instance of the hierarchical classifier, which could then infer the test set without any major change in accuracy.
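The one-shot decision rule (assign the class of the nearest labeled support example in embedding space) can be sketched as follows. The identity "embedding" stands in for the trained Siamese branch, and the 2-D reference points are illustrative, not the thesis's actual defect data:

```python
import numpy as np

def one_shot_classify(embed, query, support):
    """Return the label of the support example nearest to the query."""
    dists = {label: np.linalg.norm(embed(query) - embed(example))
             for label, example in support.items()}
    return min(dists, key=dists.get)

# Toy setup: identity embedding, one reference example per class.
identity = lambda x: np.asarray(x, dtype=float)
support = {
    "normal":      [0.0, 0.0],
    "overheating": [1.0, 0.0],
    "spattering":  [0.0, 1.0],
}

label = one_shot_classify(identity, [0.9, 0.1], support)
```

This is why the approach copes with scarce labels: the network only has to learn a distance, and a single labeled example per class is enough to define the classifier.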
4.
Fine-Tuning Self-Supervised Model with Siamese Neural Networks for COVID-19 Image Classification. Antonio Moreira Pinto. 03 December 2024.
In recent years, self-supervised learning has demonstrated state-of-the-art performance in domains such as computer vision and natural language processing. However, fine-tuning these models for specific classification tasks, particularly with labeled data, remains challenging. This thesis introduces a novel approach to fine-tuning self-supervised models using Siamese Neural Networks, specifically leveraging a semi-hard triplet loss function. Our method aims to refine the latent space representations of self-supervised models to improve their performance on downstream classification tasks. The proposed framework employs Masked Autoencoders for pre-training on a comprehensive radiograph dataset, followed by fine-tuning with Siamese networks for effective feature separation and improved classification. The approach is evaluated on the COVIDx dataset for COVID-19 detection from frontal chest radiographs, achieving a new record accuracy of 98.5 percent, surpassing traditional fine-tuning techniques and COVID-Net CRX 3. The results demonstrate the effectiveness of our method in enhancing the utility of self-supervised models for complex medical imaging tasks. Future work will explore the scalability of this approach to other domains and the integration of more sophisticated embedding-space loss functions.
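The semi-hard triplet objective can be written out directly. A minimal numpy sketch of the loss and of the semi-hard mining condition (the negative lies farther from the anchor than the positive, but still inside the margin); the margin value here is an assumption, not taken from the thesis:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distances."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

def is_semi_hard(anchor, positive, negative, margin=0.2):
    """Semi-hard negative: d(a, p) < d(a, n) < d(a, p) + margin."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return d_ap < d_an < d_ap + margin

a = np.array([0.0, 0.0])   # anchor
p = np.array([1.0, 0.0])   # positive, d(a, p) = 1.0
n = np.array([1.1, 0.0])   # negative, d(a, n) = 1.1: semi-hard for margin 0.2
```

Semi-hard negatives give a small but nonzero loss, which is why mining them tends to refine the embedding without the training collapse that the hardest negatives can cause.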
5.
Utilizing energy-saving techniques to reduce energy and memory consumption when training machine learning models: Sustainable Machine Learning. El Yaacoub, Khalid. January 2024.
Emerging machine learning (ML) techniques are showing great potential in prediction performance. However, research and development are often conducted in environments with extensive computational resources and can be blinded by prediction performance. In reality, computation may have to run on constrained hardware, where energy and memory consumption must be restrained. Furthermore, a shortage of sufficiently large datasets for ML is a frequent problem, compounded by the cost of data retention. This generates a significant demand for sustainable ML. With sustainable ML, practitioners can train ML models on less data, which reduces memory and energy consumption during the training process. To explore solutions to these problems, this thesis examines several techniques introduced in the literature to achieve energy savings when training machine learning models: Quantization-Aware Training, Model Distillation, Quantized Distillation, and Continual Learning, with a deeper dive into Siamese Neural Networks (SNNs), one of the most promising techniques for sustainability. Empirical evaluations are conducted on several datasets to illustrate the potential of these techniques and their contribution to sustainable ML. The findings of this thesis show that the energy-saving techniques can in some cases be leveraged to make machine learning models more manageable and sustainable without significantly compromising prediction performance. In addition, the deeper dive into SNNs shows that they can outperform standard classification networks, in both the standard multi-class classification case and the Continual Learning case, while being trained on significantly less data.
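One reason SNNs can be trained on less data, consistent with the findings above, is that pairwise supervision multiplies the effective training set: k labeled examples yield k(k-1)/2 supervised pairs. A minimal sketch of the pair construction (the example data is illustrative, not from the thesis):

```python
from itertools import combinations

def make_pairs(labeled):
    """Build (pair, target) tuples: target 1 = same class, 0 = different."""
    return [((x1, x2), int(y1 == y2))
            for (x1, y1), (x2, y2) in combinations(labeled, 2)]

# Four labeled examples already give C(4, 2) = 6 supervised pairs.
data = [("img_a", 0), ("img_b", 0), ("img_c", 1), ("img_d", 1)]
pairs = make_pairs(data)
```

The quadratic growth in pairs is what lets a Siamese network extract more supervision, and hence spend less energy collecting and storing data, from the same small labeled set.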