141

Une famille de distributions symétriques et leptocurtiques représentée par la différence de deux variables aléatoires gamma / A family of symmetric and leptokurtic distributions represented as the difference of two gamma random variables

Augustyniak, Maciej January 2008
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
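No abstract accompanies this record, but the family named in the title is easy to check: if G1 and G2 are independent Gamma(k, θ) variables, X = G1 − G2 is symmetric about zero with excess kurtosis 3/k, hence leptokurtic. A quick simulation sketch (parameter values are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, theta, n = 0.8, 1.0, 200_000              # shape, scale, sample size (illustrative)
x = rng.gamma(k, theta, n) - rng.gamma(k, theta, n)

print(stats.skew(x))      # ~ 0: the difference is symmetric
print(stats.kurtosis(x))  # ~ 3/k > 0: leptokurtic (heavier tails than the normal)
```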
142

The Double Pareto-Lognormal Distribution and its applications in actuarial science and finance

Zhang, Chuan Chuan 01 1900
The purpose of this Master's thesis is to describe the double Pareto-lognormal (dPlN) distribution, to show how the model can be extended by introducing explanatory variables, and to present its wide range of applications in actuarial science and finance. First, we define the double Pareto-lognormal distribution and present some of its properties, following the work of Reed and Jorgensen (2004); the parameters can be estimated by the method of moments or by maximum likelihood. Next, we add an explanatory variable to the model and discuss the corresponding estimation procedure. Finally, numerical applications of the model are illustrated and some useful statistical tests are conducted.
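Reed and Jorgensen's representation makes the distribution easy to simulate: the logarithm of a dPlN variable is a normal variable plus an asymmetric Laplace variable, and the latter is a difference of two scaled exponentials. A minimal sketch under that representation (parameter names α, β, ν, τ follow Reed and Jorgensen; the values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, nu, tau, n = 2.5, 1.5, 0.0, 0.5, 100_000  # illustrative values

# ln X = N(nu, tau^2) + asymmetric Laplace, where the Laplace part is E1/alpha - E2/beta
log_x = (rng.normal(nu, tau, n)
         + rng.exponential(1 / alpha, n)    # drives the upper Pareto tail, index alpha
         - rng.exponential(1 / beta, n))    # drives the lower Pareto tail, index beta
x = np.exp(log_x)

# Both tails of X are power laws: P(X > x) ~ c * x**(-alpha) for large x.
print(np.percentile(x, [50, 99, 99.9]))
```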
143

Sur les tests lisses d'ajustement dans le contexte des séries chronologiques / On smooth goodness-of-fit tests in the context of time series

Tagne Tatsinkou, Joseph Francois 12 1900
Several phenomena in the natural and social sciences rely on distributional assumptions, among which the normal distribution is the most popular. The validity of that assumption is needed to set up forecast intervals or to check the adequacy of the underlying model, and goodness-of-fit procedures are the tools for assessing it. Autoregressive moving average time series models are often used to describe such phenomena, especially in finance, and they rest on assumptions that include normally distributed innovations; stronger conclusions can be drawn from the fitted model if the white noise can be assumed Gaussian. In this work, goodness-of-fit tests for normality of the innovations of autoregressive moving average models are proposed for both the univariate and the multivariate case (ARMA and VARMA models). In the first project, a smooth test of normality for ARMA models with unknown mean, based on a least-squares-type estimator, is proposed, and the asymptotic null distribution of the test statistic is derived. The result extends Ducharme and Lafaye de Micheaux (2004), who assumed the mean known and equal to zero. We use the least-squares-type estimator of Brockwell and Davis (1991, Section 10.8) and give a rigorous proof that it is almost surely convergent, and we show that the covariance matrix of the test statistic is nonsingular whether or not the mean is known. We also study a data-driven, AIC-type approach for choosing the dimension of the family of alternatives, built here on Legendre polynomials, and give a finite-sample approximation of the null distribution. Finally, the finite-sample and asymptotic properties of the proposed statistic are studied in a small simulation study. In the second project, goodness-of-fit tests for multivariate normality of the innovations of vector autoregressive moving average models are proposed. Since these models may involve a large number of parameters, structured parameterizations are allowed; the standard unstructured case is included. The methodology again relies on the smooth-test paradigm and on families of functions orthonormal with respect to the multivariate normal density, and the smooth test statistics are shown to converge to convenient chi-squared distributions asymptotically. An important special case uses Hermite polynomials, and in that situation the tests are shown to be invariant under linear transformations and to generalize many existing tests (invariance fails with Legendre polynomials). A consistent data-driven method for choosing the order of the family from the data is discussed. In a simulation study, exact levels are examined and the empirical powers of the smooth tests are compared with those of other methods. Finally, applications to real data are provided: annual global mean temperature (univariate) and Canadian labour market data (bivariate). These results were presented at several meetings (see Tagne, Duchesne and Lafaye de Micheaux (2013a, 2013b, 2014) for details), and a paper based on the first project has been submitted to a refereed journal (see Duchesne, Lafaye de Micheaux and Tagne (2016)).
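For intuition about the smooth-test paradigm invoked above, here is a generic Neyman-type smooth test of normality built from normalized Hermite polynomials. It treats the residuals as an i.i.d. sample and ignores the correction for estimated ARMA parameters that the thesis develops, so it is a sketch of the idea rather than the thesis's statistic:

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermitenorm   # probabilists' Hermite polynomials He_k
from scipy.stats import chi2

def smooth_normality_test(resid, K=6):
    """Smooth-test components for normality of standardized residuals.

    Components k = 3..K are used: standardizing the residuals forces the
    first two empirical components to (near) zero. Under normality the
    statistic is asymptotically chi-squared with K - 2 degrees of freedom.
    """
    z = (resid - resid.mean()) / resid.std(ddof=1)
    n = len(z)
    u = np.array([eval_hermitenorm(k, z).sum() / np.sqrt(n * factorial(k))
                  for k in range(3, K + 1)])
    stat = float((u ** 2).sum())
    return stat, chi2.sf(stat, df=K - 2)

rng = np.random.default_rng(2)
print(smooth_normality_test(rng.normal(size=500)))        # large p-value expected
print(smooth_normality_test(rng.standard_t(3, size=500))) # small p-value expected
```

With K = 4 this reduces to the familiar Jarque-Bera combination of skewness and kurtosis components.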
144

Análise do impacto de perturbações sobre medidas de qualidade de ajuste para modelos de equações estruturais / Analysis of the impact of disturbances on goodness-of-fit measures for structural equation models

Renata Trevisan Brunelli 11 May 2012
Structural Equation Modeling (SEM) is a multivariate methodology for studying cause-and-effect relationships and correlations among a set of variables (observed or latent) simultaneously. The technique has become increasingly widespread in recent years across different fields of knowledge; one of its main applications is the confirmation of theoretical models proposed by the researcher (confirmatory factor analysis). The literature suggests several measures of the goodness of fit of a SEM model. However, few texts relate the values of these measures to possible problems in the sample or in the model specification, that is, which problems of this kind impact which measures (and which they do not), and how. This information matters because it helps explain why a model may be judged badly fitted. The objective of this work is to investigate how different disturbances of the sample, the model specification, and the estimation of a SEM model impact the goodness-of-fit measures, and whether sample size influences this impact. We also investigate how such disturbances affect the parameter estimates, since there are disturbances under which some measures indicate a bad fit while the parameters remain well estimated, and, conversely, occasions on which the measures indicate a good fit while the parameter estimates are distorted. These investigations are carried out by simulating samples of different sizes for each type of disturbance; SEM models with different specifications are then fitted to each sample, with parameters estimated by two methods, generalized least squares and maximum likelihood. Knowing these results, a researcher applying SEM can take precautions and, among the available goodness-of-fit measures, choose those best suited to the characteristics of the study.
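For reference, two of the standard quantities that such fit studies track can be stated compactly: the maximum-likelihood discrepancy between the sample covariance matrix $S$ and the model-implied covariance $\Sigma(\theta)$ for $p$ observed variables, and the RMSEA built from the resulting test statistic. These are the usual textbook definitions, not something specific to this thesis:

$$
F_{\mathrm{ML}}(\theta) = \ln\lvert\Sigma(\theta)\rvert - \ln\lvert S\rvert + \operatorname{tr}\!\bigl(S\,\Sigma(\theta)^{-1}\bigr) - p,
\qquad
\mathrm{RMSEA} = \sqrt{\max\!\Bigl(\frac{T - \nu}{\nu\,(n-1)},\, 0\Bigr)},
$$

where $T = (n-1)\hat{F}_{\mathrm{ML}}$ is the chi-squared test statistic and $\nu$ its degrees of freedom.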
145

Distribuição generalizada de chuvas máximas no Estado do Paraná. / Local and regional frequency analysis by LH-moments and generalized distributions

Pansera, Wagner Alessandro 07 December 2013
The purpose of hydrologic frequency analysis is to relate the magnitude of events to their frequency of occurrence through a probability distribution. Three generalized distributions are commonly used in the study of extreme hydrological events: the generalized extreme-value, generalized logistic, and generalized Pareto distributions. Several methodologies exist for estimating the parameters of these distributions; L-moments are often used because of their computational convenience. The reliability of quantiles with a high return period can be increased by using LH-moments, higher-order generalizations of L-moments. L-moments have been widely studied, but the literature on LH-moments is limited, so further research is needed. In this study, LH-moments were therefore examined under the two approaches commonly used in hydrology: (i) local frequency analysis (LFA) and (ii) regional frequency analysis (RFA). A database of annual maximum daily rainfall was assembled from 227 rainfall stations in the State of Paraná, covering 1976 to 2006. The LFA had two steps: (i) Monte Carlo simulations and (ii) application of the results to the database. The main finding of the Monte Carlo simulations was that LH-moments make the 0.99 and 0.995 quantiles less biased; the simulations also supported the development of an algorithm for performing LFA with the generalized distributions, which was applied to the database and fitted all 227 series. In the RFA, the 227 stations were divided into 11 groups, regional growth curves were obtained, and local quantiles were derived from them. The difference between local quantiles obtained via LFA and via RFA was quantified and can reach approximately 33 mm for a 100-year return period.
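For context, LH-moments (introduced by Wang, 1997) shift the L-moment definitions toward the upper order statistics, which is what makes the high quantiles less biased. Writing $X_{j:m}$ for the $j$-th smallest of $m$ observations, the first two LH-moments of level $\eta \ge 0$ are

$$
\lambda_1^{\eta} = E\bigl[X_{(\eta+1):(\eta+1)}\bigr], \qquad
\lambda_2^{\eta} = \tfrac{1}{2}\,E\bigl[X_{(\eta+2):(\eta+2)} - X_{(\eta+1):(\eta+2)}\bigr],
$$

with $\eta = 0$ recovering the ordinary L-moments.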
147

The use of effect sizes in credit rating models

Steyn, Hendrik Stefanus 12 1900
The aim of this thesis was to investigate the use of effect sizes to report the results of statistical credit rating models in a more practical way. Rating systems in the form of statistical probability models, such as logistic regression models, are used to forecast the behaviour of clients and to guide business in rating clients as "high" or "low" risk borrowers. Model results were therefore reported in terms of statistical significance as well as in business language (practical significance) that business experts can understand and interpret. In this thesis, statistical results were expressed as effect sizes, such as Cohen's d, which put the results into standardised and measurable units that can be reported practically. These effect sizes indicated the strength of correlations between variables, the contribution of variables to the odds of defaulting, the overall goodness of fit of the models, and the models' ability to discriminate between high- and low-risk customers. / Statistics / M. Sc. (Statistics)
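As a reminder of the headline measure, Cohen's d standardises a difference of group means by the pooled standard deviation. A minimal sketch comparing, say, model scores of defaulting and non-defaulting clients (the scores below are simulated purely for illustration):

```python
import numpy as np

def cohens_d(a, b):
    """Standardised mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(3)
scores_default = rng.normal(0.4, 1.0, 300)    # hypothetical scores of defaulters
scores_good = rng.normal(-0.2, 1.0, 2_000)    # hypothetical scores of non-defaulters
print(cohens_d(scores_default, scores_good))  # ~ 0.6: a "medium" effect
```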
148

列聯表中離群細格偵測探討 / Detecting Outlying Cells in Cross-Classified Tables

施苑玉, Shi, Yuan Yu Unknown Date
Chi-squared goodness-of-fit tests are usually employed to test whether a model fits a contingency table well. When the test is significant, we would like to identify the sources of the significance; the existence of outlying cells, whose observed counts are inconsistent with the rest of the table and contribute heavily to the test statistic, may be one of them. In the literature, outlying cells are typically detected through residuals of various definitions, but these discussions are largely confined to two-way tables. Brown (1974) offered a stepwise criterion that finds possible outlying cells in turn until the quasi-independence model is no longer rejected. In an attempt to simplify the lengthy calculations that Brown's method requires, we suggest an alternative procedure in this study. Based on simulation results, we find that the procedure performs reasonably well and even outperforms Brown's method on several occasions. In addition, some extensions and issues regarding three-way contingency tables are addressed.
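Residual-based screening is the common starting point for such procedures. A minimal sketch using Haberman-style adjusted residuals under the independence model, which flags candidate cells in a single pass; Brown's stepwise method, with its successive quasi-independence refits, goes further than this:

```python
import numpy as np

def adjusted_residuals(table):
    """Haberman adjusted residuals under independence; ~ N(0,1) for non-outlying cells."""
    n = table.sum()
    p_row = table.sum(axis=1) / n
    p_col = table.sum(axis=0) / n
    expected = np.outer(p_row, p_col) * n
    denom = np.sqrt(expected * np.outer(1 - p_row, 1 - p_col))
    return (table - expected) / denom

table = np.array([[25, 10,  5],
                  [12, 30,  8],
                  [40,  9, 61]])             # illustrative counts
res = adjusted_residuals(table)
print(np.argwhere(np.abs(res) > 2.0))        # candidate outlying cells
```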
149

拔靴法在線性結構關係模式適合度指標之應用 / Bootstrap procedures for evaluating goodness-of-fit indices of linear structural equation models

羅靖霖, Lo, Chin Lin Unknown Date
Linear structural equation modeling is a statistical method that analyzes cause-and-effect relationships among variables through a system of linear equations, combining the strengths of path analysis and factor analysis in one overall model. After the parameters of such a model have been estimated, the fit of the whole model must be evaluated, and many researchers have proposed goodness-of-fit indices for this purpose, such as the commonly used chi-squared test, the root mean square residual, the goodness-of-fit index, the adjusted goodness-of-fit index, and the normed fit index. Some of these indices are affected by sample size or by the sample's distribution; some are affected by the number of latent variables or of factor indicators in the model; some require strict conditions (for example, normally distributed data) before they can be applied; and the distributions of some indices are unknown, making interval estimation, hypothesis testing, or tests of significant differences based on them impossible. Given these shortcomings, this thesis applies bootstrap resampling to obtain bootstrap distributions of the indices and thereby address these problems. Because the traditional bootstrap is not applicable to linear structural equation models, a modified bootstrap procedure is proposed; the resulting bootstrap distributions serve as the basis for evaluating model fit, for testing significant differences between nested models, and for assessing fit through the concepts of sampling and non-sampling error.
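The traditional bootstrap fails here because resampling the raw data reproduces the sample covariance structure, under which the hypothesized model generally does not hold, so bootstrap replications of a fit statistic do not mimic its null distribution. The standard remedy, due to Bollen and Stine (1992), rotates the data so the model fits exactly before resampling; a sketch of that transformation follows (offered as context, not necessarily the modification this thesis proposes):

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def bollen_stine_transform(X, sigma_hat):
    """Rotate centred data so its sample covariance equals the model-implied one.

    After the transformation, bootstrap resamples are drawn from a population
    in which the hypothesized model holds exactly, so the bootstrap
    distribution of a fit statistic approximates its null distribution.
    """
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    A = np.real(sqrtm(inv(S))) @ np.real(sqrtm(sigma_hat))
    return Xc @ A

# Usage sketch: resample rows of the transformed data B times, refit the SEM on
# each resample, and collect the fit index of interest into a bootstrap
# distribution (the SEM fitting step itself is omitted here).
```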
150

賽局理論與學習模型的實證研究 / An empirical study of game theory and learning model

陳冠儒, Chen, Kuan Lu Unknown Date
In game theory, the optimal strategy (or equilibrium) of a one-shot game can usually be derived theoretically, but the optimal strategies of repeated games need not be unique and are harder to find. For example, defection is the optimal decision in the one-shot Prisoner's Dilemma (PD); in the repeated PD, if the players can benefit from cooperation across rounds, defection is no longer the only candidate for an optimal rule. In recent years, economists have designed game experiments, similar to statistical designed experiments, to explore the gap between theory and observed behaviour in repeated games, and have used learning models to describe the players' choices. Most evaluation criteria for such models are based on estimation and prediction errors, but the results are likely to be data dependent. In this study, we adapt the model selection process and residual checking of regression analysis and apply the idea to evaluating learning models, using empirical data together with Monte Carlo simulation to demonstrate the process. The empirical data come from repeated PD games under four different experimental settings, with players drawn from undergraduates at National Chengchi University in Taiwan. We consider four learning models: the Reinforcement Learning (RL) model, the Extended Reinforcement Learning (ERL) model, the Belief Learning (BL) model, and the Experience-Weighted Attraction (EWA) model. We find that the RL model is the most appropriate for describing the PD data, giving both smaller errors and better goodness of fit. In addition, the behaviour of players across settings is not consistent, and separating the players into different sets reduces the estimation errors.
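The basic reinforcement learning model evaluated here is, in the spirit of Roth and Erev, driven by action propensities: the chosen action is reinforced by its realized payoff, and choice probabilities are proportional to the propensities. A minimal sketch for a single player in a repeated PD (the payoff matrix, initial propensities, and the randomly playing opponent are illustrative assumptions):

```python
import numpy as np

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
ACTIONS = ["C", "D"]

rng = np.random.default_rng(4)
q = np.array([1.0, 1.0])                 # initial propensities (assumed)

def choose():
    p = q / q.sum()                      # choice probabilities ~ propensities
    return rng.choice(2, p=p)

for t in range(100):
    me, other = choose(), rng.integers(2)           # opponent plays at random here
    q[me] += PAYOFF[(ACTIONS[me], ACTIONS[other])]  # reinforce the chosen action

print(q / q.sum())  # long-run choice probabilities implied by the propensities
```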
