1

TOWARD ROBUST AND INTERPRETABLE GRAPH AND IMAGE REPRESENTATION LEARNING

Juan Shu (14816524) 27 April 2023 (has links)
Although deep learning models continue to gain momentum, their robustness and interpretability remain a major concern because of the complexity of such models. In this dissertation, we studied several topics on the robustness and interpretability of convolutional neural networks (CNNs) and graph neural networks (GNNs). We first identified the structural problem of deep convolutional neural networks that leads to adversarial examples and defined DNN uncertainty regions. We also argued that the generalization error, the large-sample theoretical guarantee established for DNNs, cannot adequately capture the phenomenon of adversarial examples. Secondly, we studied dropout in GNNs, which is an effective regularization approach to prevent overfitting. In contrast to CNNs, GNNs usually have a shallow structure because deep GNNs normally see performance degradation. We studied different dropout schemes and established a connection between dropout and over-smoothing in GNNs, which led us to develop layer-wise compensation dropout, allowing GNNs to go deeper without suffering performance degradation. We also developed a heteroscedastic dropout that effectively handles a large number of missing node features due to heavy experimental noise or privacy issues. Lastly, we studied the interpretability of graph neural networks. We developed a self-interpretable GNN structure that denoises useless edges or features, leading to a more efficient message-passing process. Its prediction and explanation accuracy were boosted compared with baseline models.
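To make the dropout discussion above concrete, here is a minimal sketch of feature dropout inside a dense graph-convolution layer in PyTorch. It illustrates the generic regularization idea only; the layer-wise compensation and heteroscedastic dropout schemes developed in the dissertation are not reproduced, and the layer structure, dropout rate, and normalization are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


class DenseGCNLayer(torch.nn.Module):
    """A minimal dense graph-convolution layer with dropout on node features.

    Generic sketch only: not the dissertation's layer-wise compensation or
    heteroscedastic dropout; the rate and normalization are illustrative.
    """

    def __init__(self, in_dim: int, out_dim: int, p_drop: float = 0.5):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)
        self.p_drop = p_drop

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalize the adjacency matrix (with self-loops).
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1.0).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)

        # Dropout on node features; only active during training.
        x = F.dropout(x, p=self.p_drop, training=self.training)
        return torch.relu(self.linear(a_norm @ x))


# Tiny usage example on a random 5-node graph with 8-dimensional features.
layer = DenseGCNLayer(in_dim=8, out_dim=4, p_drop=0.5)
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()  # make the adjacency symmetric
print(layer(x, adj).shape)  # torch.Size([5, 4])
```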
2

The Dynamics of the Impacts of Automated Vehicles: Urban Form, Mode Choice, and Energy Demand Distribution

Wang, Kaidi 24 August 2021 (has links)
The commercial deployment of automated vehicles (AVs) is around the corner. With the development of automation technology, automobile and IT companies have started to test automated vehicles. Waymo, an automated driving technology development company, has recently opened its self-driving service to the public. The advancement of this emerging mobility option also drives transportation researchers and urban planners to conduct AV-related research, especially to gain insights into the impacts of AVs in order to inform policymaking. However, variation with urban form, heterogeneity of mode choice, and impacts at disaggregated levels make the impacts of AVs dynamic, and these dynamics are not yet comprehensively understood. Therefore, this dissertation extends the existing knowledge base by examining the dynamics of the impacts from three perspectives: (1) examining the role of urban form in the performance of shared AV (SAV) systems; (2) exploring the heterogeneity of AV mode choices across regions; and (3) investigating the distribution of energy consumption in the era of AVs. To examine the first aspect, SAV systems are simulated for 286 cities and the simulation outcomes are regressed on urban form variables that measure density, diversity, and design. The results suggest that compact development, a multi-core city pattern, a high level of diversity, and more pedestrian-oriented networks can promote the performance of SAVs, measured by service efficiency, trip-pooling success rate, and extra VMT generation. The AV mode choice behaviors of private conventional vehicle (PCV) users in the Seattle and Kansas City metropolitan areas are examined using an interpretable machine learning framework based on an AV mode choice survey. The results suggest that attitudes and trip- and mode-specific attributes are the most predictive. Positive attitudes can promote the adoption of privately owned AVs (PAVs). Longer PAV in-vehicle time encourages residents to keep their PCVs, while longer walking distance promotes the use of SAVs. In addition, the effects of in-vehicle time and walking distance vary across the two examined regions due to distinct urban form, transportation infrastructure, and cultural backgrounds: Kansas City residents tolerate shorter walking distances before switching to SAV choices because of the car-oriented environment, while Seattle residents are more sensitive to in-vehicle travel time because of local congestion levels. The final part of the dissertation examines the energy demand of AVs at disaggregated levels, incorporating the heterogeneity of AV mode choices. A three-step framework is employed, including the prediction of mode choice, the determination of vehicle trajectories, and the estimation of energy demand. The results suggest that the AV scenario can generate -0.36% to 2.91% extra emissions and consume 2.9% more energy if gasoline is used. The revealed distribution of traffic volume suggests that the demand for charging is concentrated around downtown areas and on highways if AVs consume electricity. In summary, the dissertation demonstrates that the impacts and performance of AVs are dynamic across regions, owing to differences in urban form, infrastructure, and cultural environment, and are spatially heterogeneous within cities. / Doctor of Philosophy / Automated vehicles (AVs) have been a hot topic in recent years, especially after various IT and automobile companies announced their plans for making AVs.
Waymo, an automated driving technology development company, has recently opened its self-driving service to the public. Automated vehicles, which are defined as being able to self-drive, self-park, and automate routing, create potential for new business models such as privately owned automated vehicles (PAVs) that serve trips within households, shared AVs (SAVs) that offer door-to-door service to the public through app-based platforms, and pooled SAVs, where multiple passengers may share a vehicle when sequential pick-ups and drop-offs do not require much detour. AVs can therefore transform the transportation system, especially by reducing vehicle ownership and increasing travel distance. To plan for a sustainable future, it is important to understand the impacts of AVs under various scenarios. Thus, a wealth of case studies has explored the system performance of SAVs, such as served trips per SAV per day. However, the impacts of AVs are not static: they tend to vary across cities, depend on heterogeneous mode choices within regions, and may not be evenly distributed within a city. Therefore, this dissertation fills these research gaps by (1) investigating how urban features such as density may influence the system performance of SAVs; (2) exploring the heterogeneity of key factors that influence decisions about using AVs across regions; and (3) examining the distribution of energy demand in the era of AVs. The first study simulates SAVs that serve trips within 286 cities and examines the relationship between the system performance of SAVs and city features such as density, diversity, and design. The system performance of SAVs is evaluated using served trips per SAV per day, the percentage of pooled trips that allow ridesharing, and the percentage of extra Vehicle Miles Traveled (VMT) compared to the VMT requested by the served trips. The results suggest that compact, diverse development patterns and pedestrian-oriented networks can promote the performance of SAVs. The second study uses an interpretable machine learning framework to understand the heterogeneous mode choice behaviors of private car users in the era of AVs in two regions. The framework trains machine learning models on an AV mode choice survey in which respondents complete mode choice experiments given attributes of the trips. Accumulated Local Effects (ALE) plots are used to analyze the model results; ALE outputs the accumulated change in the probability of choosing specific modes within small intervals across the range of the variable of interest. The results suggest that attitudes and trip-specific attributes such as in-vehicle time are the most important determinants. Positive attitudes, longer trips, and longer walking distances can promote the adoption of AV modes. In addition, the effects of in-vehicle time and walking distance vary across the two examined regions due to distinct urban form, transportation infrastructure, and cultural backgrounds: Kansas City residents tolerate shorter walking distances before switching to SAV choices because of the car-oriented environment, while Seattle residents are more sensitive to in-vehicle travel time because of local congestion levels. The final part of the dissertation examines the energy demand of AVs at disaggregated levels, incorporating the heterogeneity of AV mode choices. A three-step framework is employed, including the prediction of mode choice, the determination of vehicle trajectories, and the estimation of energy demand. The results suggest that the AV scenario can generate -0.36% to 2.91% extra emissions and consume 2.9% more energy than a business-as-usual (BAU) scenario if gasoline is used. The revealed distribution of traffic volume suggests that the demand for charging is concentrated around downtown areas and on highways if AVs consume electricity. In summary, the dissertation demonstrates that the impacts and performance of AVs are dynamic across regions, owing to differences in urban form, infrastructure, and cultural environment, and are spatially heterogeneous within cities.
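To illustrate the Accumulated Local Effects (ALE) analysis mentioned in the abstract above, the sketch below computes a simplified first-order ALE curve for one feature of a generic fitted classifier using NumPy and scikit-learn. It is not the dissertation's implementation; the bin count, the random-forest stand-in model, and the synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def ale_1d(model, X, feature_idx, n_bins=10):
    """First-order ALE for one numeric feature of a fitted binary classifier.

    Returns bin centers and the accumulated (centered) local effect on the
    predicted probability of the positive class. Simplified sketch.
    """
    x = X[:, feature_idx]
    # Bin edges from empirical quantiles so each bin holds similar mass.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    effects = np.zeros(n_bins)
    for b in range(n_bins):
        lo, hi = edges[b], edges[b + 1]
        if b == n_bins - 1:
            in_bin = (x >= lo) & (x <= hi)
        else:
            in_bin = (x >= lo) & (x < hi)
        if not in_bin.any():
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, feature_idx] = lo
        X_hi[:, feature_idx] = hi
        # Local effect: change in predicted probability across the bin.
        effects[b] = np.mean(
            model.predict_proba(X_hi)[:, 1] - model.predict_proba(X_lo)[:, 1]
        )
    ale = np.cumsum(effects)
    ale -= ale.mean()  # center the curve relative to the average effect
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, ale


# Toy usage with synthetic data standing in for the mode-choice survey.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
centers, ale = ale_1d(clf, X, feature_idx=0)
print(np.round(ale, 3))
```

Plotting `ale` against `centers` gives the kind of curve used in the dissertation to read off, for example, how predicted mode-choice probability accumulates as in-vehicle time or walking distance increases.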
3

Mohou stroje vysvětlit akciové výnosy? / Can Machines Explain Stock Returns?

Chalupová, Karolína January 2021 (has links)
Recent research shows that neural networks predict stock returns better than any other model. The networks' mathematically complicated nature is both their advantage, enabling them to uncover complex patterns, and their curse, making them less readily interpretable, which obscures their strengths and weaknesses and complicates their usage. This thesis is one of the first attempts at overcoming this curse in the domain of stock return prediction. Using some of the recently developed machine learning interpretability methods, it explains the networks' superior return forecasts. This gives new answers to the long-standing question of which variables explain differences in stock returns and clarifies the unparalleled ability of networks to identify future winners and losers among the stocks in the market. Building on 50 years of asset pricing research, this thesis is likely the first to uncover whether neural networks support the economic mechanisms proposed by the literature. To a finance practitioner, the thesis offers the transparency of decomposing any prediction into its drivers, while maintaining state-of-the-art profitability in terms of Sharpe ratio. Additionally, a novel metric is proposed that is particularly suited...
4

Towards Understanding Slag Build-up in a Grate-Kiln Furnace : A study of what parameters in the Grate-Kiln furnace lead to increased slag build-up in a modern pellet production kiln / Mot ökad förståelse av slagguppbyggnad i ett kulsintersverk

Olsson, Oscar, Österman, Uno January 2022 (has links)
As more data is being gathered in industrial production facilities, the interest in applying machine learning models to that data is growing. This includes the iron ore mining industry, and in particular the build-up of slag in grate-kiln furnaces. Slag is a byproduct of the pelletizing process within these furnaces that can cause production stops, quality issues, and unplanned maintenance. Previous studies on slag build-up have been done mainly by chemists and process engineers. While previous research has hypothesized contributing factors to slag build-up, the studies have mostly been conducted in simulation environments and thus have not applied machine learning models to real sensor data. Luossavaara-Kiirunavaara Aktiebolag (LKAB) has provided data from one of their grate-kiln furnaces: time-series sensor readings that were compressed before storage. A Scala package was built to ingest and interpolate the LKAB data and make it ready for machine learning experiments. The estimation of slag within the kiln was found to be too arbitrary to make accurate predictions. Therefore, three quality metrics tightly connected to the build-up of slag were selected as target variables instead. Independent and identically distributed (IID) units of data were created by isolating fuel usage, product type produced, and production rate. Further, another IID criterion was created by adjusting the time for each feature so that feature values could be compared for a single pellet in production. Specifically, the time it takes for a pellet to go from the feature sensor to the quality test was added to the original timestamp. This resulted in a table where each row represents multiple features and quality measures for the same small batch of pellets. An IID unit of interest was then used to find the most contributing features using principal component analysis (PCA) and lasso regression. Using these two methods, the number of features could be reduced to a smaller set of important features. Further, a decision tree regression trained on this subset of important features performed similarly to the lasso regression. Decision tree and lasso regression were chosen for interpretability, which was important in order to be able to discuss the contributing factors with LKAB process engineers. / Idag genereras allt mer data från industriella produktionsanläggningar och intresset att applicera maskininlärningsmodeller på denna data växer. Detta inkluderar även industrin för utvinning av järnmalm, i synnerhet uppbyggnaden av slagg i grate-kiln ugnar. Slagg är en biprodukt från pelletsproduktionen som kan orsaka produktionsstopp, kvalitetsbrister och oplanerat underhåll av ugnarna. Tidigare forskning kring slagguppbyggnad har i huvudsak gjorts av kemister och processingenjörer och ett antal bidragande faktorer till slagguppbyggnad har antagits. Däremot har dessa studier främst utförts i simulerad experimentmiljö och därför inte applicerat maskininlärningsmodeller på sensordata från produktion. Luossavaara-Kiirunavaara Aktiebolag (LKAB) har till denna studie framställt och försett data från en av deras grate-kiln ugnar, specifikt tidsseriedata från sensorer som har komprimerats innan lagring. Ett Scala-paket byggdes för att ladda in och interpolera LKAB:s data, för att sedan göra den redo och applicerbar för experiment med maskininlärningsmodeller.
Direkta mätningar för slagguppbyggnad och slaggnivå upptäcktes vara för slumpartade och bristfälliga för prediktion, därför användes istället tre kvalitetsmätningar, med tydligt samband till påföljderna från slagguppbyggnad, som målvariabler. Independent and identically distributed (IID) enheter skapades för all data genom att isolera bränsleanvändning, produkttyp och produktionstakt. Vidare skapades ytterligare ett kriterium för IID:er, en tidsjustering av varje variabel för att göra det möjligt att kunna jämföra variabler inbördes för en enskild pellet i produktion. Specifikt användes tiden det tar för en pellet från att den mäts av en enskild sensor till att kvalitetstestet tas. Tidsskillnaden adderades sedan till sensormätningens tidsstämpel. Detta resulterade i en tabell där varje rad representerade samma lilla mängd av pellets. En IID enhet av intresse analyserades sedan för att undersöka vilka variabler som har störst varians och påverkan genom en principalkomponentanalys (PCA) och lassoregression. Genom att använda dessa metoder konstaterades det att antalet variabler kunde reduceras till ett mindre antal variabler och ett nytt, mindre, dataset av de viktigaste variablerna skapades. Vidare, genom regression av beslutsträd med de viktigaste variablerna, konstaterades att beslutsträdsregression och lassoregression hade liknande prestanda när data med de viktigaste variablerna användes. Beslutsträdsregression och lassoregression användes för att experimentens resultat skulle ha en hög förklaringsgrad, vilket är viktigt för att kunna diskutera variabler med högst påverkan på slagguppbyggnaden och ge resultat som är tolkbara och användbara för LKAB:s processingenjörer.
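As a hedged sketch of the feature-selection pipeline described in the abstract above (PCA to inspect variance, lasso regression to shrink unimportant coefficients, then a shallow decision tree on the surviving features), the snippet below uses scikit-learn on placeholder data; the sensor features, targets, and model settings are illustrative, not LKAB's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 30))  # placeholder for interpolated sensor features
y = X[:, 3] * 2.0 - X[:, 7] + rng.normal(scale=0.5, size=1000)  # placeholder quality metric

X_std = StandardScaler().fit_transform(X)

# PCA: how many components are needed to explain most of the variance?
pca = PCA(n_components=0.95).fit(X_std)
print("components for 95% variance:", pca.n_components_)

# Lasso with cross-validated regularization shrinks unimportant coefficients to zero.
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print("features kept by lasso:", selected)

# A shallow decision tree on the selected subset keeps the model interpretable.
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_std[:, selected], y)
print("tree R^2 on training data:", round(tree.score(X_std[:, selected], y), 3))
```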
5

[en] APPROXIMATE BORN AGAIN TREE ENSEMBLES / [pt] ÁRVORES BA APROXIMADAS

28 October 2021 (has links)
[pt] Métodos ensemble como random forest, boosting e bagging foram extensivamente estudados e provaram ter uma acurácia melhor do que usar apenas um preditor. Entretanto, a desvantagem é que os modelos obtidos utilizando esses métodos podem ser muito mais difíceis de serem interpretados do que, por exemplo, uma árvore de decisão. Neste trabalho, nós abordamos o problema de construir uma árvore de decisão que aproximadamente reproduza um conjunto de árvores, explorando o tradeoff entre acurácia e interpretabilidade, que pode ser alcançado quando a reprodução exata do conjunto de árvores é relaxada. Primeiramente, nós formalizamos o problema de obter uma árvore de decisão de uma determinada profundidade que seja a mais aderente ao conjunto de árvores e propomos um algoritmo de programação dinâmica para resolver esse problema. Nós também provamos que a árvore de decisão obtida por esse procedimento satisfaz garantias de generalização relacionadas à generalização do modelo original de conjuntos de árvores, um elemento crucial para a efetividade dessa árvore de decisão na prática. Visto que a complexidade computacional do algoritmo de programação dinâmica é exponencial no número de features, nós propomos duas heurísticas para gerar árvores de uma determinada profundidade com boa aderência em relação ao conjunto de árvores. Por fim, nós conduzimos experimentos computacionais para avaliar os algoritmos propostos. Quando utilizados classificadores mais interpretáveis, os resultados indicam que em diversas situações a perda em acurácia é pequena ou inexistente: restringindo a árvores de decisão de profundidade 6, nossos algoritmos produzem árvores que em média possuem acurácias que estão a 1 por cento (considerando o algoritmo de programação dinâmica) ou 2 por cento (considerando os algoritmos heurísticos) do conjunto original de árvores. / [en] Ensemble methods in machine learning such as random forest, boosting, and bagging have been thoroughly studied and proven to have better accuracy than using a single predictor. However, their drawback is that they give models that can be much harder to interpret than those given by, for example, decision trees. In this work, we approach in a principled way the problem of constructing a decision tree that approximately reproduces a tree ensemble, exploring the tradeoff between accuracy and interpretability that can be obtained once exact reproduction is relaxed. First, we formally define the problem of obtaining the decision tree of a given depth that is most adherent to a tree ensemble and give a Dynamic Programming algorithm for solving this problem. We also prove that the decision trees obtained by this procedure satisfy generalization guarantees related to the generalization of the original tree ensembles, a crucial element for their effectiveness in practice. Since the computational complexity of the Dynamic Programming algorithm is exponential in the number of features, we also design heuristics to compute trees of a given depth with good adherence to a tree ensemble. Finally, we conduct a comprehensive computational evaluation of the algorithms proposed. The results indicate that in many situations, there is little or no loss in accuracy in working with more interpretable classifiers: even restricting to only depth-6 decision trees, our algorithms produce trees with average accuracies that are within 1 percent (for the Dynamic Programming algorithm) or 2 percent (heuristics) of the original random forest.
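A hedged illustration of the general born-again idea described above, not the thesis's dynamic-programming algorithm: train a tree ensemble, then fit a depth-limited decision tree to the ensemble's own predictions so that a single interpretable tree approximately reproduces it. The dataset, depth, and forest size below are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The ensemble we want to approximate.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# "Born-again"-style surrogate: fit a depth-6 tree to the forest's predictions,
# not to the original labels, so the tree mimics the ensemble's decision function.
surrogate = DecisionTreeClassifier(max_depth=6, random_state=0)
surrogate.fit(X_tr, forest.predict(X_tr))

print("forest accuracy:   ", round(forest.score(X_te, y_te), 3))
print("surrogate accuracy:", round(surrogate.score(X_te, y_te), 3))
print("fidelity to forest:", round(np.mean(surrogate.predict(X_te) == forest.predict(X_te)), 3))
```

The fidelity line measures how closely the single tree reproduces the forest, which is the adherence quantity the thesis's exact and heuristic algorithms optimize for a given depth.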
6

Applying Machine Learning to Explore Nutrients Predictive of Cardiovascular Disease Using Canadian Linked Population-Based Data / Machine Learning to Predict Cardiovascular Disease with Nutrition

Morgenstern, Jason D. January 2020 (has links)
McMaster University MASTER OF PUBLIC HEALTH (2020) Hamilton, Ontario (Health Research Methods, Evidence, and Impact) TITLE: Applying Machine Learning to Determine Nutrients Predictive of Cardiovascular Disease Using Canadian Linked Population-Based Data AUTHOR: Jason D. Morgenstern, B.Sc. (University of Guelph), M.D. (Western University) SUPERVISOR: Professor L.N. Anderson, NUMBER OF PAGES: xv, 121 / The use of big data and machine learning may help to address some challenges in nutritional epidemiology. The first objective of this thesis was to explore the use of machine learning prediction models in a hypothesis-generating approach to evaluate how detailed dietary features contribute to CVD risk prediction. The second objective was to assess the predictive performance of the models. A population-based retrospective cohort study was conducted using linked Canadian data from 2004–2018. Study participants were adults aged 20 and older (n=12 130) who completed the 2004 Canadian Community Health Survey, Cycle 2.2, Nutrition (CCHS 2.2). Statistics Canada has linked the CCHS 2.2 data to the Discharge Abstracts Database and the Canadian Vital Statistics Death database, which were used to determine cardiovascular outcomes (stroke or ischemic heart disease events or deaths). Conditional inference forests were used to develop models. Then, permutation feature importance (PFI) and accumulated local effects (ALEs) were calculated to explore contributions of nutrients to predicted disease. Supplement-use (median PFI (M)=4.09 × 10^-4, IQR=8.25 × 10^-7 to 1.11 × 10^-3) and caffeine (M=2.79 × 10^-4, IQR=-9.11 × 10^-5 to 5.86 × 10^-4) had the highest median PFIs for nutrition-related features. Supplement-use was associated with decreased predicted risk of CVD (accumulated local effects range (ALER)=-3.02 × 10^-4 to 2.76 × 10^-4) and caffeine was associated with increased predicted risk (ALER=-9.96 × 10^-4 to 0.035). The best-performing model had a logarithmic loss of 0.248. Overall, many non-linear relationships were observed, including threshold, J-shaped, and U-shaped relationships. The results of this exploratory study suggest that applying machine learning to the nutritional epidemiology of CVD, particularly using big datasets, may help elucidate risks and improve predictive models. Given the limited application thus far, work such as this could lead to improvements in public health recommendations and policy related to dietary behaviours. / Thesis / Master of Public Health (MPH) / This work explores the potential for machine learning to improve the study of diet and disease. In chapter 2, opportunities are identified for big data to make diet easier to measure. Also, we highlight how machine learning could find new, complex relationships between diet and disease. In chapter 3, we apply a machine learning algorithm, called conditional inference forests, to a unique Canadian dataset to predict whether people developed strokes or heart attacks. This dataset included responses to a health survey conducted in 2004, where participants’ responses have been linked to administrative databases that record when people go to hospital or die up until 2017. Using these techniques, we identified aspects of nutrition that predicted disease, including caffeine, alcohol, and supplement-use. This work suggests that machine learning may be helpful in our attempts to understand the relationships between diet and health.
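The permutation-feature-importance step described above can be sketched as follows with scikit-learn, using a random forest as a stand-in for conditional inference forests (which are usually fitted with R packages such as party/partykit); the synthetic data and scoring choice are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the linked CCHS 2.2 nutrition features.
X, y = make_classification(n_samples=3000, n_features=15, n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Random forest as a stand-in for the conditional inference forest.
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# Permutation feature importance: shuffle one feature at a time on held-out data
# and measure how much the log-loss degrades.
result = permutation_importance(
    model, X_te, y_te, scoring="neg_log_loss", n_repeats=10, random_state=1
)
ranking = np.argsort(result.importances_mean)[::-1]
for idx in ranking[:5]:
    print(f"feature {idx}: median PFI = {np.median(result.importances[idx]):.4g}")
```

Reporting the median importance over repeats mirrors the median PFI values quoted in the abstract; ALE curves for the top-ranked features would then describe the direction and shape of each association.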
7

Insurance Fraud Detection using Unsupervised Sequential Anomaly Detection / Detektion av försäkringsbedrägeri med oövervakad sekvensiell anomalitetsdetektion

Hansson, Anton, Cedervall, Hugo January 2022 (has links)
Fraud is a common crime within the insurance industry, and insurance companies want to quickly identify fraudulent claimants, as fraud often results in higher premiums for honest customers. Due to the digital transformation, where the sheer volume and complexity of available data have grown, manual fraud detection is no longer suitable. This work aims to automate the detection of fraudulent claimants and gain practical insights into fraudulent behavior using unsupervised anomaly detection, which, compared to supervised methods, allows for a more cost-efficient and practical application in the insurance industry. To obtain interpretable results and benefit from the temporal dependencies in human behavior, we propose two variations of LSTM-based autoencoders to classify sequences of insurance claims. Autoencoders can provide feature importances that give insight into the models' predictions, which is essential when models are put into practice. This approach relies on the assumption that outliers in the data are fraudulent. The models were trained and evaluated on a dataset we engineered using data from a Swedish insurance company, where the few labeled frauds that existed were solely used for validation and testing. Experimental results show state-of-the-art performance, and further evaluation shows that the combination of autoencoders and LSTMs is efficient but has similar performance to the employed baselines. This thesis provides an entry point for interested practitioners to learn key aspects of anomaly detection within fraud detection by thoroughly discussing the subject at hand and the details of our work. / Gjordes digitalt via Zoom.
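A minimal, hedged sketch of an LSTM autoencoder of the kind proposed above, written in PyTorch: claim sequences are encoded into a fixed-size vector, reconstructed, and scored by reconstruction error, with high-error sequences flagged as potential fraud. The dimensions, training loop, and random data are placeholders rather than the authors' architecture.

```python
import torch
import torch.nn as nn


class LSTMAutoencoder(nn.Module):
    """Encode a sequence to a single hidden vector, then reconstruct it."""

    def __init__(self, n_features: int, hidden_dim: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        seq_len = x.size(1)
        _, (h, _) = self.encoder(x)                   # h: (1, batch, hidden)
        z = h[-1].unsqueeze(1).repeat(1, seq_len, 1)  # repeat code across time steps
        decoded, _ = self.decoder(z)
        return self.output(decoded)                   # reconstructed sequence


# Toy usage: 64 claim sequences, 10 time steps, 6 features each.
x = torch.randn(64, 10, 6)
model = LSTMAutoencoder(n_features=6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)
    loss.backward()
    optimizer.step()

# Per-sequence reconstruction error as an anomaly score; high scores are flagged.
with torch.no_grad():
    scores = ((model(x) - x) ** 2).mean(dim=(1, 2))
print(scores.topk(3).indices)  # indices of the most anomalous sequences
```

In practice, the flagging threshold on the reconstruction error would be chosen on a validation set containing the few labeled frauds, mirroring the evaluation setup described in the abstract.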
8

Survivability Prediction and Analysis using Interpretable Machine Learning : A Study on Protecting Ships in Naval Electronic Warfare

Rydström, Sidney January 2022 (has links)
Computer simulation is a commonly applied technique for studying electronic warfare duels. This thesis aims to apply machine learning techniques to convert simulation output data into knowledge and insights regarding defensive actions for a ship facing multiple hostile missiles. The analysis may support tactical decision-making, hence interpretability of the predictions is necessary to allow for human evaluation and understanding of the impacts of the explanatory variables. The final distance from the threats to the target and the probability of the threats hitting the target were modeled using a multi-layer perceptron with a multi-task approach, including custom loss functions. The results generated in this study show that the selected methodology is more successful than a baseline using regression models. Modeling the outcome with artificial neural networks results in a black box for decision-making. Therefore, the concept of interpretable machine learning was applied using a post-hoc approach. Given the learned model, the features considered, and the multiple threats, the feature contributions to the model were interpreted using Kernel SHAP (SHapley Additive exPlanations). The method consists of local linear surrogate models for approximating Shapley values. The analysis primarily showed that an increased seeker activation distance was important and that increased time for defensive actions improved the outcomes. Further, the final distance to the ship predicted at the beginning of a simulation is important and, in general, provides guidance on the actual outcome. The action of firing chaff grenades in the tracking gate was also important: more chaff grenades influenced the missiles' tracking and produced a preferable outcome from the defended ship's point of view.
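The post-hoc Kernel SHAP step described above can be sketched with the `shap` package as follows, using a small scikit-learn MLP on synthetic engagement features as a stand-in for the thesis's multi-task network; the features, model, and sample sizes are illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

# Placeholder simulation output: features describing a missile engagement,
# target: final miss distance (a stand-in for one of the multi-task outputs).
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 5))  # e.g. seeker activation distance, bearing, ... (hypothetical)
y = 2.0 * X[:, 0] - X[:, 2] + rng.normal(scale=0.3, size=500)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=7).fit(X, y)

# Kernel SHAP: local linear surrogate models approximate Shapley values
# for individual predictions of the otherwise black-box network.
background = shap.sample(X, 50)  # small background set for speed
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5], nsamples=200)

print(np.round(shap_values, 3))  # per-feature contributions for 5 engagements
```

Each row of `shap_values` decomposes one predicted outcome into additive feature contributions, which is the kind of per-prediction explanation the thesis uses to support tactical evaluation.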
9

Zero/Few-Shot Text Classification : A Study of Practical Aspects and Applications / Textklassificering med Zero/Few-Shot Learning : En Studie om Praktiska Aspekter och Applikationer

Åslund, Jacob January 2021 (has links)
State-of-the-art (SOTA) language models have demonstrated remarkable capabilities in tackling NLP tasks they have not been explicitly trained on – given a few demonstrations of the task (few-shot learning), or even none at all (zero-shot learning). The purpose of this Master’s thesis has been to investigate practical aspects and potential applications of zero/few-shot learning in the context of text classification. This includes topics such as combined usage with active learning, automated data labeling, and interpretability. Two different methods for zero/few-shot learning have been investigated, and the results indicate that:
• Active learning can be used to marginally improve few-shot performance, but it seems to be mostly beneficial in settings with very few samples (e.g. fewer than 10).
• Zero-shot learning can be used to produce reasonable candidate labels for classes in a dataset, given knowledge of the classification task at hand.
• It is difficult to trust the predictions of zero-shot text classification without access to a validation dataset, but interpretable machine learning (IML) methods such as saliency maps could be used for debugging zero-shot models.
/ Ledande språkmodeller har uppvisat anmärkningsvärda förmågor i att lösa NLP-problem de inte blivit explicit tränade på – givet några exempel av problemet (few-shot learning), eller till och med inga alls (zero-shot learning). Syftet med det här examensarbetet har varit att undersöka praktiska aspekter och potentiella tillämpningar av zero/few-shot learning inom kontext av textklassificering. Detta inkluderar kombinerad användning med aktiv inlärning, automatiserad datamärkning, och tolkningsbarhet. Två olika metoder för zero/few-shot learning har undersökts, och resultaten indikerar att:
• Aktiv inlärning kan användas för att marginellt förbättra textklassificering med few-shot learning, men detta verkar vara mest fördelaktigt i situationer med väldigt få datapunkter (t.ex. mindre än 10).
• Zero-shot learning kan användas för att hitta lämpliga etiketter för klasser i ett dataset, givet kunskap om klassifikationsuppgiften av intresse.
• Det är svårt att lita på robustheten i textklassificering med zero-shot learning utan tillgång till valideringsdata, men metoder inom tolkningsbar maskininlärning såsom saliency maps skulle kunna användas för att felsöka zero-shot modeller.
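As a hedged example of the zero-shot text classification studied above, the snippet below uses the Hugging Face transformers zero-shot pipeline built on a natural-language-inference model; the model choice, input text, and candidate labels are illustrative assumptions rather than the specific setup evaluated in the thesis.

```python
from transformers import pipeline

# Zero-shot classification via natural language inference: the model scores how
# well each candidate label, phrased as a hypothesis, is entailed by the text.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The router keeps dropping the connection every few minutes."
candidate_labels = ["networking issue", "billing question", "feature request"]

result = classifier(text, candidate_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```

Given knowledge of the classification task, the top-scoring labels can serve as candidate labels for automated data labeling, in line with the findings summarized above.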
