51

Covariate Model Building in Nonlinear Mixed Effects Models

Ribbing, Jakob January 2007 (has links)
Population pharmacokinetic-pharmacodynamic (PK-PD) models can be fitted using nonlinear mixed effects modelling (NONMEM). This is an efficient way of learning about drugs and diseases from data collected in clinical trials. Identifying covariates which explain differences between patients is important for discovering patient subpopulations at risk of sub-therapeutic or toxic effects and for treatment individualization. Stepwise covariate modelling (SCM) is commonly used to this end. The aim of the current thesis work was to evaluate SCM and to develop alternative approaches. A further aim was to develop a mechanistic PK-PD model describing fasting plasma glucose, fasting insulin, insulin sensitivity and beta-cell mass. The lasso is a penalized estimation method that performs covariate selection simultaneously with shrinkage estimation. The lasso was implemented within NONMEM as an alternative to SCM and is discussed in comparison with that method. Further, various ways of incorporating information and propagating knowledge from previous studies into an analysis were investigated. To compare the different approaches, investigations were made under varying, replicated conditions; in the course of these investigations, more than one million NONMEM analyses were performed on simulated data. Due to selection bias, SCM performed poorly when analysing small datasets or rare subgroups. In these situations, the lasso method in NONMEM performed better, was faster, and additionally validated the covariate model. Alternatively, the performance of SCM can be improved by propagating knowledge or incorporating information from previously analysed studies and by population optimal design. A model was also developed on a physiological/mechanistic basis to fit data from three phase II/III studies of the investigational drug tesaglitazar. This model described fasting glucose and insulin levels well, despite heterogeneous patient groups ranging from non-diabetic insulin-resistant subjects to patients with advanced diabetes. The model predictions of beta-cell mass and insulin sensitivity agreed well with values reported in the literature.
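As a rough illustration of the selection behaviour described above, here is a minimal sketch (not the thesis's NONMEM implementation) of lasso-based covariate selection on simulated data; the covariate names, effect sizes, and penalty value are invented for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated covariates for 60 subjects: weight, age, and a spurious one.
# All names and effect sizes here are hypothetical.
X = rng.normal(size=(60, 3))
true_coef = np.array([0.8, 0.0, 0.0])  # only "weight" truly affects clearance
log_cl = 1.0 + X @ true_coef + rng.normal(scale=0.3, size=60)

# The L1 penalty shrinks small effects exactly to zero, so estimation and
# covariate selection happen in a single step (unlike stepwise search).
model = Lasso(alpha=0.1).fit(StandardScaler().fit_transform(X), log_cl)
for name, coef in zip(["weight", "age", "spurious"], model.coef_):
    print(f"{name}: {coef:.3f}")  # non-zero coefficients are "selected"
```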
52

Utilizing Data-Driven Approaches to Evaluate and Develop Air Traffic Controller Action Prediction Models

Jeongjoon Boo (9106310) 27 July 2020 (has links)
Air traffic controllers (ATCos) monitor flight operations and resolve predicted aircraft conflicts to ensure safe flights, making them essential human operators in air traffic control systems. Researchers have studied ATCos with human-subject approaches to understand their tasks and air traffic management processes, and models have been developed to predict ATCo actions. However, there is a gap between this knowledge and the real world: the developed models have never been validated against real-world operations, which creates uncertainty in our understanding of how ATCos detect and resolve predicted aircraft conflicts. Moreover, we do not know how information from air traffic control systems affects their actions. This Ph.D. dissertation introduces methods to evaluate existing ATCo action prediction models, and it develops a prediction model based on flight contextual information (information describing flight operations) to explain the relationship between ATCo actions and information. Unlike conventional approaches, this work takes a data-driven approach, collecting large-scale flight tracking data. From the collected real-world data, ATCo actions and the corresponding predicted aircraft conflicts were identified by purpose-built algorithms. Comparison methods were developed to measure both qualitative and quantitative differences between the solutions produced by the existing prediction models and the ATCo actions taken on the same aircraft conflicts. The collected data were further used to develop an ATCo action prediction model: a hierarchical structure found by analysing the collected ATCo actions was used to structure the model, and the flight contextual information generated from the collected data was used to predict the actions. The results show that the collected ATCo actions exhibit no preference among methods of resolving aircraft conflicts, and that the evaluated existing prediction model does not reflect the real world; a large portion of the real conflicts could not be solved by the model, either physically or operationally. Lastly, the developed prediction model showed a clear relationship between ATCo actions and the applied flight contextual information. These results suggest the following takeaways: first, human actions can be identified from closed-loop data, offering an alternative to collecting human-subject data; second, models should be evaluated before they are put into practice; third, flight contextual information has potential for building high-performing prediction models.
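A core step described above is identifying predicted conflicts from flight tracking data. The sketch below shows one plausible way to flag a loss-of-separation conflict between two aircraft positions, using the standard en-route separation minima (5 NM horizontal, 1000 ft vertical); the data layout is an assumption for illustration, not taken from the dissertation.

```python
import math

H_SEP_NM = 5.0     # standard en-route horizontal separation minimum
V_SEP_FT = 1000.0  # standard vertical separation minimum

def horizontal_distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine)."""
    r_nm = 3440.065  # Earth radius in NM
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def is_conflict(ac1, ac2):
    """ac1, ac2: (lat, lon, altitude_ft) for two aircraft at the same timestamp."""
    horiz = horizontal_distance_nm(ac1[0], ac1[1], ac2[0], ac2[1])
    vert = abs(ac1[2] - ac2[2])
    return horiz < H_SEP_NM and vert < V_SEP_FT

# Two aircraft about 3 NM apart at the same flight level -> conflict.
print(is_conflict((40.00, -75.00, 35000), (40.05, -75.00, 35000)))
```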
53

Evaluating Innovative Business Models: Development of a Simulation Model and Application to the ‘Function on Demand’ Concept of the Connected Car

Ziegenfuss, Katharina 26 April 2021 (has links)
Business model innovations provide powerful levers for creating sustainable competitive advantage and thus have a positive impact on the value of an enterprise. However, due to the complexity of business models, no practically applicable framework has been established for evaluating an innovative business model with regard to its effect on a company’s success. Hence, a simulation model assessing the value contribution of a business model innovation is developed. Using the mathematical modeling technique ‘System Dynamics’ to frame the value drivers of a business allows for simulation experiments that reveal the effect of the business model’s design on its profitability, thereby guiding decision-makers towards better choices. As a result, the simulation model reports the net present value of a business model. In addition, the success indicators customer lifetime value and the value of the enterprise’s capabilities are identified as important assets that have to be monitored closely, as they determine the company’s prospective performance. By combining standard financial calculations with the operationalization of non-financial measures, the simulation model represents a comprehensive approach to business model evaluation. Contents: 1 Introduction; 2 Business models and business model evaluation; 3 Development process of the system dynamics model of demand-based function extension; 4 Structure of the system dynamics model; 5 Simulation of the system dynamics model; 6 Conclusion.
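To make the valuation logic concrete, here is a minimal sketch, assuming a simple stock-and-flow customer model and invented parameter values, of how a system-dynamics-style simulation can report the net present value of a subscription-like ‘Function on Demand’ business. It illustrates the general approach, not the dissertation’s actual model.

```python
# Hypothetical stock-and-flow simulation: customers are a stock,
# acquisitions and churn are flows; yearly cash flows are discounted to an NPV.
ACQ_PER_YEAR = 10_000    # new subscribers per year (assumed)
CHURN_RATE = 0.15        # fraction of customers lost per year (assumed)
REVENUE_PER_CUST = 120   # annual revenue per customer (assumed)
DISCOUNT_RATE = 0.08
YEARS = 10

customers = 0.0
npv = 0.0
for year in range(1, YEARS + 1):
    customers += ACQ_PER_YEAR - CHURN_RATE * customers  # stock update
    cash_flow = customers * REVENUE_PER_CUST
    npv += cash_flow / (1 + DISCOUNT_RATE) ** year      # discounting
    print(f"year {year}: customers={customers:,.0f}, cash flow={cash_flow:,.0f}")

print(f"NPV over {YEARS} years: {npv:,.0f}")
```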
54

Structure Oriented Evaluation Model for E-Learning

Tudevdagva, Uranchimeg 21 July 2014 (has links)
Volume 14 of the publication series EINGEBETTETE, SELBSTORGANISIERENDE SYSTEME is devoted to the structure-oriented evaluation of e-learning. The future knowledge society requires, besides the creation of intelligent technologies, adapted methods of knowledge transfer. In this context, e-learning becomes a key technology for the development of any education system. E-learning is a complex process involving many different groups with specific tasks and roles. The dynamics of an e-learning process require adjusted quality management, and for that, corresponding evaluation methods are needed. In the present work, Dr. Tudevdagva develops a new evaluation approach for e-learning. The advantage of her method is that, in contrast to linear evaluation methods, no weight factors are needed, and the logical goal structure of an e-learning process can be incorporated into the evaluation. Based on general measure theory, structure-oriented score calculation rules are derived. The resulting score function satisfies the same calculation rules as normalised measures. In statistical generalisation, these rules allow the structure-oriented calculation of empirical evaluation scores from checklist data. These scores describe the quality with which an e-learning process has reached its overall goal; moreover, a consistent evaluation of embedded partial processes of an e-learning offering becomes possible. The presented score calculation rules are part of an eight-step evaluation model, which is illustrated with pilot samples. Through its embedding in general measure theory, U. Tudevdagva’s structure-oriented evaluation model (SURE model) is quite universally applicable; in a similar manner, an evaluation of the efficiency of administrative or organisational processes becomes possible.
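As a hedged illustration of structure-oriented scoring in the spirit of normalised measures: assume sequential key goals combine multiplicatively (all must be reached), while parallel alternative sub-goals combine like a union (reaching any one contributes). The combination rules and checklist values below are assumptions for the sketch, not quoted from the book.

```python
from math import prod

def parallel_score(scores):
    """Union-like rule for alternative sub-goals: 1 - prod(1 - q_i)."""
    return 1.0 - prod(1.0 - q for q in scores)

def series_score(scores):
    """Intersection-like rule for sequential key goals: prod(Q_j)."""
    return prod(scores)

# Hypothetical e-learning goal structure evaluated from checklist data:
# three key goals in series; the second has two alternative sub-goals.
q1 = 0.9                          # e.g. "content available"
q2 = parallel_score([0.6, 0.7])   # e.g. "forum OR chat support used"
q3 = 0.8                          # e.g. "assessment completed"

total = series_score([q1, q2, q3])
print(f"Q2 = {q2:.3f}, total score = {total:.3f}")  # scores stay in [0, 1]
```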
55

Evaluation of Machine Learning Methods for Time Series Forecasting on E-commerce Data

Abrahamsson, Peter, Ahlqvist, Niklas January 2022 (has links)
Within demand forecasting, and specifically within the field of e-commerce, the available data often contain erratic behaviours that are difficult to explain. This contradicts the common assumptions underlying classical approaches to time series analysis; yet classical and naive approaches are still commonly used. Machine learning could alleviate such problems. This thesis evaluates four models together with the Swedish fintech company QLIRO AB: an MLR (Multiple Linear Regression) model, a classic Box-Jenkins model (SARIMAX), an XGBoost model, and an LSTM (Long Short-Term Memory) network. The provided data consist of aggregated total daily reservations by e-merchants within the Nordic market from 2014 onwards. Some data pre-processing was required, and a smoothed version of the data set was created for comparison. Each model was constructed according to its specific requirements but with similar feature engineering. Evaluation was then made on a monthly level with a forecast horizon of 30 days during 2021. The results show that the MLR and XGBoost models provide the most consistent results, with the added benefit of being easy to use. After these two, the LSTM network showed the best results for November and December on the original data set but the worst results overall; on the smoothed data set it performed well and was comparable to the first two. The SARIMAX model performed worst of all the models considered in this thesis and was not as easy to implement.
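As a rough sketch of the kind of feature engineering and model fitting described above (the features, data, and hyperparameters here are invented; the thesis's exact setup may differ), an XGBoost regressor can be trained on calendar and lag features to forecast daily demand:

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

# Synthetic daily series with weekly seasonality, standing in for reservations data.
rng = np.random.default_rng(1)
idx = pd.date_range("2014-01-01", "2021-12-31", freq="D")
y = 100 + 20 * np.sin(2 * np.pi * idx.dayofweek / 7) + rng.normal(0, 5, len(idx))
df = pd.DataFrame({"y": y}, index=idx)

# Calendar and lag features: a typical minimal setup for tree-based forecasters.
df["dow"] = df.index.dayofweek
df["month"] = df.index.month
for lag in (1, 7, 30):
    df[f"lag_{lag}"] = df["y"].shift(lag)
df = df.dropna()

train, test = df[:"2020"], df["2021":]
features = [c for c in df.columns if c != "y"]
model = XGBRegressor(n_estimators=200, max_depth=4).fit(train[features], train["y"])

# One-step-ahead style predictions over a 30-day evaluation window.
pred = model.predict(test[features][:30])
print("MAE over 30 days:", np.abs(pred - test["y"][:30]).mean())
```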
56

Benchmarking bias mitigation algorithms in representation learning through fairness metrics

Reddy, Charan 07 1900 (has links)
The rapid adoption and success of deep learning models in various application domains have raised significant concerns about the fairness of these models when they are used in the real world. Recent research has exposed the biases incorporated within representation learning algorithms, raising doubts about the dependability of such decision-making systems. As a result, there is growing interest in identifying the sources of bias in learning algorithms and developing bias-mitigation techniques. Bias-mitigation algorithms aim to reduce the impact of sensitive data attributes on eligibility decisions. Sensitive features are private and protected features of a dataset, such as gender or race, that should not influence eligibility decisions, i.e., the criteria that determine whether or not an individual is qualified for a particular activity, such as lending or hiring. Bias-mitigation models are designed to make eligibility decisions on dataset samples without bias toward sensitive input attributes. The difficulty of bias-mitigation tasks is determined by the dataset distribution, which in turn is a function of potential label and feature imbalance, the correlation of potentially sensitive features with other features in the data, the distribution shift from the training to the deployment phase, and other factors. Without evaluating bias-mitigation models in a variety of challenging setups, the merits of deep learning approaches to these tasks remain unclear. A systematic analysis is therefore required that compares different bias-mitigation procedures under various fairness criteria and ensures that reported results can be replicated. To this end, this thesis offers a unified framework for comparing bias-mitigation methods. To better understand how these methods work, we evaluate alternative fairness algorithms trained with deep neural networks on a common synthetic dataset and a real-world dataset. We train around 3000 distinct models in various setups, including imbalanced and correlated data configurations, to probe the limits of current models and better understand which setups are prone to failure. Our findings show that model bias increases as datasets become more imbalanced or dataset attributes become more correlated, that the level of dominance of correlated sensitive features influences bias, and that sensitive information remains in the latent representation even after bias-mitigation algorithms are applied. In summary, we present a dataset, propose multiple challenging evaluation scenarios, rigorously analyse recent promising bias-mitigation techniques in a common framework, and publicly release this benchmark, hoping the research community will treat it as a common entry point for fair deep learning.
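One concrete ingredient of such a benchmark is the fairness metric itself. Below is a minimal sketch of two widely used metrics, demographic parity difference and equalized odds difference, computed from binary predictions and a binary sensitive attribute; the variable names and data are invented for the example.

```python
import numpy as np

def demographic_parity_diff(y_pred, sensitive):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)|; 0 means parity."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def equalized_odds_diff(y_true, y_pred, sensitive):
    """Max gap in TPR and FPR across the two sensitive groups."""
    gaps = []
    for label in (1, 0):  # TPR gap, then FPR gap
        mask = y_true == label
        rate0 = y_pred[mask & (sensitive == 0)].mean()
        rate1 = y_pred[mask & (sensitive == 1)].mean()
        gaps.append(abs(rate0 - rate1))
    return max(gaps)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
sensitive = rng.integers(0, 2, 1000)
# A deliberately biased predictor: more positives for group 1.
y_pred = (rng.random(1000) < 0.4 + 0.2 * sensitive).astype(int)

print("DP diff:", demographic_parity_diff(y_pred, sensitive))
print("EO diff:", equalized_odds_diff(y_true, y_pred, sensitive))
```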
57

Predicting Customer Churn in a Subscription-Based E-Commerce Platform Using Machine Learning Techniques

Aljifri, Ahmed January 2024 (has links)
This study investigates the performance of Logistic Regression, k-Nearest Neighbors (KNN), and Random Forest algorithms in predicting customer churn within an e-commerce platform. These algorithms were chosen for the unique characteristics of the dataset and the distinct perspective and value provided by each algorithm. Iterative model examinations, encompassing pre-processing techniques, feature engineering, and rigorous evaluations, were conducted. Logistic Regression showed moderate predictive capability but lagged in accurately identifying potential churners because of its assumption of linearity between the log odds and the predictors. KNN emerged as the most accurate classifier, achieving the best sensitivity and specificity (98.22% and 96.35%, respectively) and outperforming the other models. Random Forest, with sensitivity and specificity of 91.75% and 95.83% respectively, excelled in specificity but lagged slightly in sensitivity. Feature-importance analysis highlighted "Tenure" as the most impactful variable for churn prediction. Pre-processing techniques differed in performance across models, emphasizing the importance of tailored pre-processing. The study's findings underscore the significance of continuous model refinement and optimization in addressing complex business challenges like customer churn. The insights serve as a foundation for businesses to implement targeted retention strategies, mitigate customer attrition, and promote growth in e-commerce platforms.
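A minimal sketch of the comparison workflow described above, with synthetic data and default hyperparameters standing in for the study's dataset and tuning:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a churn dataset: 1 = churned, 0 = retained.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LogReg": make_pipeline(StandardScaler(), LogisticRegression()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    tn, fp, fn, tp = confusion_matrix(y_te, model.fit(X_tr, y_tr).predict(X_te)).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)  # the study's two headline metrics
    print(f"{name}: sensitivity={sens:.2%}, specificity={spec:.2%}")
```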
58

Evaluating Approaches for Developers' Ethical Reasoning and Communication about Machine Learning Models

JOSE LUIZ NUNES 30 November 2021 (has links)
Machine learning models have become widespread across a wide array of tasks. However, there is still no established way to deal with the ethical issues involved in their development and design. Some techniques have been proposed in the literature to support reflection on and/or documentation of the design and development of machine learning models, including ethical considerations, such as (i) Model Cards and (ii) the Extended Metacommunication Template. We conducted a qualitative study to evaluate the use of these tools. We present our results concerning participants' use of the Model Card, with the objective of understanding how these actors interacted with the tool and the ethical dimension of their reflections during our interviews. Our goal is to improve and support techniques for developers to disclose information about their models and to reflect ethically on the systems they design. Furthermore, we aim to contribute to a more ethically informed and fairer use of machine learning.
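For context, a Model Card (Mitchell et al., 2019) is a structured document that accompanies a trained model. A minimal sketch of such a card as a data structure follows; the section names track the original proposal, while every value is an invented example, not from this study.

```python
# Hypothetical, minimal model card following the section headings proposed
# by Mitchell et al. (2019); all values below are illustrative only.
model_card = {
    "model_details": {"name": "loan-default-clf", "version": "0.1", "type": "gradient boosting"},
    "intended_use": "Ranking loan applications for manual review; not for automated denial.",
    "factors": ["age group", "region"],            # groups the evaluation is sliced by
    "metrics": ["recall", "false positive rate"],
    "evaluation_data": "Held-out 2022 applications (hypothetical).",
    "training_data": "2018-2021 applications (hypothetical).",
    "quantitative_analyses": {"recall_by_region": {"north": 0.81, "south": 0.74}},
    "ethical_considerations": "Region correlates with protected attributes; monitor disparity.",
    "caveats_and_recommendations": "Re-evaluate quarterly; do not deploy outside retail lending.",
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```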
59

Capital market theories and pricing models : evaluation and consolidation of the available body of knowledge

Laubscher, Eugene Rudolph 05 1900 (has links)
The study investigates whether the main capital market theories and pricing models provide a reasonably accurate description of the working and efficiency of capital markets, of the pricing of shares and options, and of the effect the risk/return relationship has on investor behaviour. The capital market theories and pricing models included in the study are Portfolio Theory, the Efficient Market Hypothesis (EMH), the Capital Asset Pricing Model (CAPM), the Arbitrage Pricing Theory (APT), Options Theory and the Black-Scholes (B-S) Option Pricing Model. The main conclusion of the study is that the main capital market theories and pricing models, as reviewed in the study, do provide a reasonably accurate description of reality, but a number of anomalies and controversial issues still need to be resolved. The main recommendation of the study is that research into these theories and models should continue unabated, while the specific recommendations in a South African context are the following: (1) the benefits of global diversification for South African investors should continue to be investigated; (2) the level and degree of efficiency of the JSE Securities Exchange SA (JSE) should continue to be monitored, and it should be established whether alternative theories to the EMH provide complementary or better descriptions of the efficiency of the South African market; (3) both the CAPM and the APT should continue to be tested, both individually and jointly, in order to better understand the pricing mechanism of, and risk/return relationship on, the JSE; (4) much South African research still needs to be conducted on the efficiency of the relatively new options market and the application of the B-S Option Pricing Model under South African conditions. / Financial Accounting / M. Com. (Accounting)
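Since the B-S model is central here, a short sketch of the standard Black-Scholes price for a European call may help; the parameter values are arbitrary examples.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Example: spot 100, strike 105, 6 months to expiry, 5% rate, 20% volatility.
print(f"call price: {bs_call(100, 105, 0.5, 0.05, 0.20):.2f}")
```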
60

On the evaluation of regional climate model simulations over South America

Lange, Stefan 28 October 2015 (has links)
This dissertation is about regional climate modeling over South America, the analysis of model sensitivities to cloud parameterizations, and the development of novel model evaluation techniques based on climate networks. In the first part we examine simulations with the COnsortium for Small scale MOdeling weather prediction model in CLimate Mode (COSMO-CLM) and provide the first thorough evaluation of this dynamical regional climate model over South America. We focus our analysis on the sensitivity of simulated tropical precipitation to the parameterizations of subgrid-scale cumuliform and stratiform clouds. It is shown that COSMO-CLM is strongly sensitive to both cloud parameterizations over tropical land. Using nondefault cumulus and stratus parameterization schemes, we are able to considerably reduce long-standing precipitation and radiation biases that have plagued COSMO-CLM across tropical domains. In the second part we introduce new performance metrics for climate model evaluation with respect to spatial covariabilities. In essence, these metrics consist of dissimilarity measures for climate networks constructed from simulations and observations. We develop both local and global dissimilarity measures to facilitate the depiction of local dissimilarities in the form of bias maps, as well as the aggregation of those local dissimilarities into global ones for the purposes of climate model intercomparison and ranking. The new measures are then applied in a comparative evaluation of regional climate simulations with COSMO-CLM and the STatistical Analogue Resampling Scheme (STARS) over South America. We compare model rankings obtained with the new performance metrics to those obtained with conventional root-mean-square errors of climatological mean values and variances, and analyse how these rankings depend on season, variable, reference data set, and climate network type.
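A hedged sketch of the climate-network idea underlying these metrics: build one network per data set by thresholding correlations between grid-point time series, then compare the simulated and observed networks with a simple local dissimilarity. The specific construction, threshold, and measure here are illustrative assumptions, not the dissertation's exact definitions.

```python
import numpy as np

def climate_network(data, threshold=0.5):
    """data: (time, nodes) array of grid-point series -> boolean adjacency matrix."""
    corr = np.corrcoef(data.T)
    np.fill_diagonal(corr, 0.0)
    return np.abs(corr) > threshold

rng = np.random.default_rng(0)
t, n = 500, 20
observed = rng.normal(size=(t, n)) + 0.5 * rng.normal(size=(t, 1))  # shared signal
simulated = observed + 0.8 * rng.normal(size=(t, n))                # imperfect model

A_obs = climate_network(observed)
A_sim = climate_network(simulated)

# Local dissimilarity: fraction of mismatched links per node (a 1-D "bias map");
# the global score aggregates the local ones by averaging over nodes.
local = (A_obs != A_sim).mean(axis=1)
print("local dissimilarities:", np.round(local, 2))
print("global dissimilarity :", local.mean())
```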
