161

Tópicos em mecânica estatística de sistemas complexos: uma abordagem mecânico-estatística de dois tópicos de interesse em finanças, economia e sociologia / Topics in statistical physics of complex systems: a statistical mechanical approach to two topics of interest in finance, economics and sociology

Rafael Sola de Paula de Angelo Calsaverini 26 April 2013 (has links)
In the present work we explore two topics of interest in finance, economics and social anthropology through the application of techniques from information theory and statistical mechanics. In the first topic we study the connection between statistical dependency theory, information theory and copula theory. The concept of a copula distribution is reviewed and applied to a reformulation of the dependency measures defined by Rényi. It is then shown that mutual information satisfies all the requirements of a good dependency measure. We derive an identity between mutual information and the entropy of the copula distribution, and a more specific decomposition of the mutual information of an elliptical distribution into its linear and non-linear parts. We evaluate the risk of using naive quantities as measures of statistical dependency, showing that linear correlation can grossly underestimate dependence. These results are used to develop a method for detecting deviations from Gaussian dependence in pairs of random variables, which we apply to financial time series. Finally, we discuss a method for fitting t-copulas to empirical data through the mutual information and Kendall's tau. In the second topic we develop a model for the emergence of authority in pre-agricultural human societies. We discuss empirical motivations, rooted in neuroscience, primatology and anthropology, for a mathematical model able to explain the wide variability of forms of human social organization along the egalitarian-hierarchical axis. The model results from applying information theory to a hypothesis about the evolutionary costs involved in social life. It generates a rich phase diagram, with different regimes that can be interpreted as different types of social organization, from egalitarian to hierarchical. The control parameters of the model are linked to the cognitive capacity of the species in question, the size of the group, and ecological and social pressures.
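Two standard results underlie the calibration step mentioned in this abstract: for elliptical (Gaussian or t) copulas the correlation parameter and Kendall's tau are related by ρ = sin(πτ/2), and for a bivariate Gaussian copula the mutual information equals −½log(1−ρ²), i.e. minus the copula entropy. The sketch below illustrates these textbook relations only; it is not the thesis's procedure, and the function names and use of numpy/scipy are assumptions for illustration.

```python
import numpy as np
from scipy.stats import kendalltau

def elliptical_rho_from_tau(x, y):
    """Rank-based estimate of the correlation parameter of a Gaussian/t copula.

    Uses rho = sin(pi * tau / 2), valid for elliptical copulas, so the estimate
    is invariant to monotone transformations of the margins.
    """
    tau, _ = kendalltau(x, y)
    return np.sin(np.pi * tau / 2.0)

def gaussian_copula_mutual_information(rho):
    """Mutual information (nats) implied by a Gaussian copula with parameter rho.

    Equals minus the entropy of the Gaussian copula density, illustrating the
    identity between mutual information and copula entropy discussed above.
    """
    return -0.5 * np.log(1.0 - rho ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: bivariate normal with linear correlation 0.6,
    # then a monotone (nonlinear) transform of the second margin.
    cov = [[1.0, 0.6], [0.6, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
    x, y = z[:, 0], np.exp(z[:, 1])          # margins differ, copula unchanged

    rho_hat = elliptical_rho_from_tau(x, y)
    print("rank-based rho:", rho_hat)
    print("Pearson correlation:", np.corrcoef(x, y)[0, 1])  # attenuated by the transform
    print("implied mutual information:", gaussian_copula_mutual_information(rho_hat))
```

The toy example also shows the point about naive measures: the Pearson correlation of the transformed pair is smaller than the rank-based estimate, even though the underlying copula is unchanged.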
162

Modelo bayesiano para dados de sobrevivência com riscos semicompetitivos baseado em cópulas / Bayesian model for survival data with semicompeting risks based on copulas

Elizabeth González Patiño 23 March 2018 (has links)
Motivated by a dataset of patients with chronic kidney disease (CKD), we propose a new Bayesian model combining an Archimedean copula with a mixed model for survival data with semicompeting risks. The semicompeting risks structure arises frequently in clinical studies in which two types of events are of interest, a nonterminal and a terminal event, such that the occurrence of the terminal event precludes the occurrence of the nonterminal event but not vice versa. We prove that the posterior distribution is proper when the Clayton copula is used. We implement a data augmentation algorithm and Gibbs sampling for the Bayesian inference, together with the model comparison criteria LPML, DIC and BIC. A simulation study is carried out to assess the performance of the model, and the proposed methodology is then applied to the chronic kidney disease data as well as to data from patients who received bone marrow transplants.
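As a hedged illustration of the dependence structure used in this record (not the thesis's Bayesian sampler), the sketch below draws pairs from a Clayton copula via the standard gamma-frailty construction and checks Kendall's tau against its closed form θ/(θ+2); exponential margins stand in for the nonterminal and terminal event times, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import kendalltau

def sample_clayton(n, theta, rng):
    """Draw n pairs (u1, u2) from a Clayton copula, theta > 0.

    Gamma-frailty construction: with V ~ Gamma(1/theta, 1) and E_i ~ Exp(1),
    U_i = (1 + E_i / V) ** (-1 / theta) has the Clayton copula.
    """
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    theta = 2.0
    u = sample_clayton(20000, theta, rng)

    # Exponential margins as stand-ins for nonterminal / terminal event times.
    t_nonterminal = -np.log(1.0 - u[:, 0]) / 0.5   # rate 0.5
    t_terminal = -np.log(1.0 - u[:, 1]) / 0.2      # rate 0.2

    tau_hat, _ = kendalltau(t_nonterminal, t_terminal)
    print("empirical Kendall tau:", tau_hat)
    print("theoretical tau theta/(theta+2):", theta / (theta + 2.0))
```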
163

Empirical Likelihood Method for Ratio Estimation

Dong, Bin 22 February 2011 (has links)
Empirical likelihood, which was pioneered by Thomas and Grunkemeier (1975) and Owen (1988), is a powerful nonparametric method of statistical inference that has been widely used in the statistical literature. In this thesis, we investigate the merits of empirical likelihood for various problems arising in ratio estimation. First, motivated by the smooth empirical likelihood (SEL) approach proposed by Zhou & Jing (2003), we develop empirical likelihood estimators for diagnostic test likelihood ratios (DLRs), and derive the asymptotic distributions of suitable likelihood ratio statistics under certain regularity conditions. To skirt the bandwidth selection problem that arises in smooth estimation, we propose an empirical likelihood estimator for the same DLRs that is based on non-smooth estimating equations (NEL). Via simulation studies, we compare the statistical properties of these empirical likelihood estimators (SEL, NEL) to certain natural competitors, and identify situations in which SEL and NEL provide superior estimation capabilities. Next, we focus on deriving an empirical likelihood estimator of a baseline cumulative hazard ratio with respect to covariate adjustments under two nonproportional hazards model assumptions. Under typical regularity conditions, we show that suitable empirical likelihood ratio statistics each converge in distribution to a χ² random variable. Through simulation studies, we investigate the advantages of this empirical likelihood approach compared with use of the usual normal approximation. Two examples from previously published clinical studies illustrate the use of the empirical likelihood methods we have described. Empirical likelihood has obvious appeal in deriving point and interval estimators for time-to-event data. However, when this method and its asymptotic critical value are used to construct simultaneous confidence bands for survival or cumulative hazard functions, very large sample sizes are typically needed to achieve reliable coverage accuracy. We propose using a bootstrap method to recalibrate the critical value of the sampling distribution of the sample log-likelihood ratios. Via simulation studies, we compare our EL-based bootstrap estimator for the survival function with the EL-HW and EL-EP bands proposed by Hollander et al. (1997), and apply this method to obtain a simultaneous confidence band for the cumulative hazard ratios in the two clinical studies mentioned above. While copulas have been a popular statistical tool for modeling dependent data in recent years, selecting a parametric copula is a nontrivial task that may lead to model misspecification, because different copula families involve different correlation structures. This observation motivates us to use empirical likelihood to estimate a copula nonparametrically. With this EL-based estimator of a copula, we derive a goodness-of-fit test for assessing a specific parametric copula model. By means of simulations, we demonstrate the merits of our EL-based testing procedure, and we illustrate the method using the data from Wieand et al. (1989). In the final chapter of the thesis, we provide a brief introduction to several areas for future research involving the empirical likelihood approach.
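The χ² calibration referred to in this abstract is the Wilks-type limit of the empirical likelihood ratio. The sketch below is a generic illustration of that idea in its simplest setting, Owen's empirical likelihood for a scalar mean, rather than the DLR or hazard-ratio estimators developed in the thesis; numpy/scipy and the function names are assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean of x at value mu.

    Solves the Lagrange-multiplier equation sum z_i / (1 + lam * z_i) = 0 with
    z_i = x_i - mu, then returns -2 * sum log(1 + lam * z_i).  Asymptotically
    chi-square with 1 degree of freedom under the true mean (Owen, 1988).
    """
    z = np.asarray(x) - mu
    if z.max() <= 0 or z.min() >= 0:          # mu outside the convex hull of the data
        return np.inf
    eps = 1e-10
    lo = -1.0 / z.max() + eps                 # keep all 1 + lam * z_i > 0
    hi = -1.0 / z.min() - eps

    def score(lam):
        return np.sum(z / (1.0 + lam * z))

    lam = brentq(score, lo, hi)
    return -2.0 * np.sum(np.log1p(lam * z))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.exponential(scale=2.0, size=200)  # true mean 2.0
    print("-2 log R(mu=2):", el_log_ratio(x, mu=2.0))
    print("chi2(1) 95% critical value:", chi2.ppf(0.95, df=1))
```

Comparing the statistic with the χ²(1) critical value gives the asymptotic test; the bootstrap recalibration discussed in the abstract replaces that critical value with a resampled one.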
164

Optimal designs for statistical inferences in nonlinear models with bivariate response variables

Hsu, Hsiang-Ling 27 January 2011 (has links)
Bivariate or multivariate correlated data may be collected on a sample of units in many applications. When the experimenters are concerned with the failure times of two related subjects, for example paired organs or two chronic diseases, bivariate binary data are often acquired. This type of data consists of an observation point x and indicators of whether the failure times occurred before or after the observation point. In this work, the observed bivariate data can be written in the form {x, δ1 = I(X1 ≤ x), δ2 = I(X2 ≤ x)}. The corresponding optimal design problems for parameter estimation under this type of bivariate data are discussed. For such multivariate responses with explanatory variables, the marginal distributions may come from different families. Copula models are a way to formulate the relationship between these responses and the association between pairs of responses, and copula models for bivariate binary data are considered useful in practice because of their flexibility. In this dissertation the marginal distributions are assumed to be exponential or Weibull, and two assumptions about the joint distribution of the variables, independence or correlation, are considered. When the bivariate binary data are assumed correlated, the Clayton copula is used as the joint cumulative distribution function. Few works have addressed optimal design problems for bivariate binary data with copula models. The D-optimal designs aim at minimizing the volume of the confidence ellipsoid for estimating the unknown parameters, including the association parameter of the bivariate copula model, and are used to determine the best observation points. Moreover, the Ds-optimal designs are mainly used for estimation of the important association parameter in the Clayton model. The D- and Ds-optimal designs for the above copula model are found through the general equivalence theorem with a numerical algorithm. Under the different model assumptions, the numerical results show that the number of support points of a D-optimal design is at most the number of model parameters. When the difference between the marginal distributions and the association are significant, the association becomes an influential factor that increases the number of support points. Simulation studies show that estimation based on the optimal designs performs reasonably well. In survival experiments, the experimenter customarily takes trials at specific points such as the 25th, 50th and 75th percentiles of the distributions. Hence, we consider the design efficiencies when the design points are placed at three or four particular percentiles. Although it is common in practice to take trials at several quantile positions, the allocation of the proportion of the sample size also has a great influence on the experimental results. To use a locally optimal design in practice, prior information about the models or parameters is needed. When there is not enough prior knowledge about the models or parameters, it is more flexible to use sequential experiments to obtain information in several stages. Hence, with robustness in mind, a sequential procedure is proposed that combines D- and Ds-optimal designs under the independent or correlated distributions in different stages of the experiment, and the simulation results based on the sequential procedure are compared with those of the one-step procedures. When the optimal designs are obtained from incorrect prior parameter values or distributions, the results may have poor efficiencies; the sample means of the estimators and the corresponding optimal designs obtained from the sequential procedure are close to the true values, and the corresponding efficiencies are close to 1. Huster (1989) analyzed the corresponding modeling problems for paired survival data and applied them to the Diabetic Retinopathy Study, considering the exponential and Weibull distributions as possible marginal distributions and the Clayton model as the joint function. The Diabetic Retinopathy Study was conducted by the National Eye Institute to assess the effectiveness of laser photocoagulation in delaying the onset of blindness in patients with diabetic retinopathy; it can be viewed as a prior experiment that provides the experimenter with useful guidelines for collecting data in future studies. As an application to the Diabetic Retinopathy Study, we develop optimal designs to collect suitable data and information for estimating the unknown model parameters.

In the second part of this work, optimal design problems for parameter estimation are considered for proportional data. The dispersion model of Jorgensen (1997) provides a flexible class of non-normal distributions and is considered in this research; it can be applied to binary or count responses, as well as proportional outcomes. For continuous proportional data, where responses are confined to the interval (0,1), the simplex dispersion model is considered here. D-optimal designs obtained through the corresponding equivalence theorem and the numerical results are presented. In the development of classical optimal design theory, weighted polynomial regression models with variance functions that depend on the explanatory variable have played an important role. The problem of constructing locally D-optimal designs for the simplex dispersion model can be viewed as a weighted polynomial regression model with a specific variance function. Because the weight function in the information matrix takes a complicated rational form, an approximation of the weight function is used and the corresponding optimal designs are obtained for different parameter values. These optimal designs are compared with those obtained using the original weight function.
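A hedged sketch of the likelihood building block behind such designs (not the thesis's algorithm): at an observation point x the data are the indicators δ1, δ2, whose four cell probabilities come from exponential margins joined by a Clayton copula, and a per-point Fisher information matrix can be assembled from those cell probabilities and compared across candidate points. A full D-optimal design optimizes points and weights jointly; the parameter values and the finite-difference shortcut below are illustrative assumptions.

```python
import numpy as np

def cell_probs(x, lam1, lam2, theta):
    """Cell probabilities of (delta1, delta2) = (I(X1<=x), I(X2<=x)).

    Exponential margins F_i(x) = 1 - exp(-lam_i * x) joined by the Clayton
    copula C(u, v) = (u**-theta + v**-theta - 1) ** (-1/theta), theta > 0.
    Order of the returned cells: (1,1), (1,0), (0,1), (0,0).
    """
    u = 1.0 - np.exp(-lam1 * x)
    v = 1.0 - np.exp(-lam2 * x)
    c = (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)
    return np.array([c, u - c, v - c, 1.0 - u - v + c])

def fisher_info(x, params, h=1e-5):
    """Per-point Fisher information for the multinomial observation at x.

    I(x) = sum over cells of grad(p) grad(p)^T / p, with gradients taken
    numerically with respect to params = (lam1, lam2, theta).
    """
    p0 = cell_probs(x, *params)
    grads = np.zeros((len(p0), len(params)))
    for j in range(len(params)):
        up, dn = np.array(params, float), np.array(params, float)
        up[j] += h
        dn[j] -= h
        grads[:, j] = (cell_probs(x, *up) - cell_probs(x, *dn)) / (2.0 * h)
    return sum(np.outer(g, g) / p for g, p in zip(grads, p0))

if __name__ == "__main__":
    params = (1.0, 0.7, 2.0)                   # lam1, lam2, theta (illustrative)
    for x in (0.5, 1.0, 2.0):                  # candidate observation points
        print(x, "det I(x):", np.linalg.det(fisher_info(x, params)))
```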
165

The Acquisition Of The Copula Be In Present Simple Tense In English By Native Speakers Of Russian

Antonova Unlu, Elena 01 June 2010 (has links) (PDF)
This thesis investigates the acquisition of the copula be in Present Simple Tense in English by native speakers of Russian. The aim of the study is to determine whether or not Russian students with different levels of English proficiency encounter any problems while using the copula be in Present Simple Tense in English. The study also identifies the domains related to the use of the copula be that appear to be most problematic for native speakers of Russian. To carry out the research, two diagnostic tests measuring receptive and productive skills related to the use of the copula be in Present Simple Tense in English were developed. The data were collected from three groups of Russian students who were in the first, fourth and eighth years of learning English. The data in each of the domains related to the use of the copula be in Present Simple Tense in English were classified under four main categories: (i) correct use, (ii) omission, (iii) misinformation, (iv) addition. Both quantitative and qualitative analyses were used in the study. The results indicated that all the native speakers of Russian who participated in the study had difficulties with the acquisition of the copula be in Present Simple Tense in English. The findings revealed that along with the developmental mistakes/errors (i.e., omission of the copula be and misuse of its forms), which seem to disappear with continued exposure to English, there are other mistakes/errors in the performance of the native speakers of Russian that are persistent. Negative transfer at the morphological level and incomplete understanding and application of the rule are suggested as the underlying reasons for the persistent mistakes/errors made by the Russian learners.
166

Financial Derivatives Pricing and Hedging - A Dynamic Semiparametric Approach

Huang, Shih-Feng 26 June 2008 (has links)
A dynamic semiparametric pricing method is proposed for financial derivatives, including European and American options and convertible bonds. The proposed method is an iterative procedure that uses nonparametric regression to approximate derivative values and parametric asset models to derive the continuation values. An extension to higher-dimensional option pricing is also developed, in which the dependence structure of the financial time series is modeled by copula functions. In the simulation study, we value one-dimensional American options and convertible bonds as well as multi-dimensional American geometric average options and max options. The one-dimensional underlying asset models considered include the Black-Scholes, jump-diffusion, and nonlinear asymmetric GARCH models; for the multivariate case we study copula models such as the Gaussian, Clayton and Gumbel copulae. Convergence of the method is proved under a continuity assumption on the transition densities of the underlying asset models, and the orders of the sup-norm errors are derived. Both the theoretical findings and the simulation results show that the proposed approach is tractable for numerical implementation and provides a unified and accurate technique for financial derivative pricing. The second part of this thesis studies option pricing and hedging problems for conditionally leptokurtic returns, an important feature of financial data. The risk-neutral models for log-return and simple-return models with heavy-tailed innovations are derived by an extended Girsanov change of measure. The result is applicable to option pricing under the GARCH model with t innovations (GARCH-t) for simple return series. The dynamic semiparametric approach is extended to compute the option prices of conditionally leptokurtic returns. A hedging strategy consistent with the extended Girsanov change of measure is constructed and is shown to have smaller cost variation than the commonly used delta hedging under the risk-neutral measure. Simulation studies are also performed to show the effect of using GARCH-normal models to compute the option prices and delta hedges of a GARCH-t model for plain vanilla and exotic options. The results indicate that there is little difference in pricing and hedging between the normal and t innovations for plain vanilla and Asian options, yet significant disparities arise for barrier and lookback options due to improper distributional settings of the GARCH innovations.
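The pricing procedure described here — regression-based approximation of continuation values inside a backward induction over simulated paths — belongs to the same family as regression-based Monte Carlo methods. The sketch below is a generic Longstaff-Schwartz-style American put pricer under Black-Scholes dynamics, offered only as an illustration of that family, not the thesis's dynamic semiparametric algorithm; polynomial regression stands in for the nonparametric smoother, and all parameter values are assumptions.

```python
import numpy as np

def american_put_lsm(s0, k, r, sigma, t, n_steps, n_paths, seed=0, degree=3):
    """Regression-based Monte Carlo price of an American put (Black-Scholes paths).

    Backward induction: at each exercise date, regress discounted continuation
    values on the asset price (in-the-money paths only) and exercise when the
    immediate payoff exceeds the fitted continuation value.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    disc = np.exp(-r * dt)

    # Simulate risk-neutral geometric Brownian motion paths.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    s = s0 * np.exp(log_paths)

    cash = np.maximum(k - s[:, -1], 0.0)           # payoff at maturity
    for i in range(n_steps - 2, -1, -1):
        cash *= disc                               # discount back one step
        itm = k - s[:, i] > 0.0
        if itm.sum() > degree + 1:
            coef = np.polyfit(s[itm, i], cash[itm], degree)
            continuation = np.polyval(coef, s[itm, i])
            exercise = (k - s[itm, i]) > continuation
            idx = np.where(itm)[0][exercise]
            cash[idx] = k - s[idx, i]
    return disc * cash.mean()

if __name__ == "__main__":
    # Illustrative parameters only.
    print(american_put_lsm(s0=100.0, k=100.0, r=0.03, sigma=0.25, t=1.0,
                           n_steps=50, n_paths=20000))
```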
167

Aspect and the categorization of states: the case of ser and estar in Spanish

Roby, David Brian, 1972- 28 August 2008 (has links)
In this work, the primary goal will be to construct the most descriptively and explanatorily adequate analysis possible to account for the complementary distribution of the Spanish copula verbs ser and estar. Over the past several decades, numerous theoretical accounts have been put forth in an attempt to accomplish this goal. Though such accounts accurately predict most types of stative sentences with the two copulas, they often fall short of predicting a significant number of them that are used in everyday speech. The first chapters of this dissertation will be devoted to reviewing a number of existing approaches that have been taken to account for the uses of ser and estar by testing their theoretical viability and descriptive adequacy. Among these are traditional conventions such as the inherent qualities vs. current condition distinction and the analysis of estar as an indicator of change. Those of a more recent theoretical framework, which will receive the most attention, include the application of Kratzer's (1995) individual-level vs. stage-level distinction to stative predicates and Maienborn's (2005) discourse-based interpretation of Spanish copulative predication. Schmitt's (2005) compositionally-based analysis of Portuguese ser and estar, which treats only estar as an aspectual copula, will be of special interest. After testing each of these analyses, it will be shown that the least costly and most accurate course to take for analyzing ser and estar is to treat both verbs as aspectual morphemes along the lines of Luján (1981). As aspectual copulas, ser and estar denote the aspectual distinction [±Perfective]. In my proposed analysis, I will argue that aspect applies to both events and states, but does so internally and externally respectively. By adapting Verkuyl's (2004) feature algebra to states, I will posit that aspect for stative predication is compositionally calculated, and the individual aspectual values for ser and estar remain constant in co-composition. In light of its descriptive adequacy for Spanish stative sentences and universality in natural language, it will also be shown that the [±Perfective] aspectual distinction is very strong in terms of explanatory adequacy as well.
168

以技術分析指標建構台灣股票市場最適資產配置 / The Optimal Asset Allocation According to Technical Indicators in Taiwan Stock Market

陳怡如, Chen, I Ju Unknown Date (has links)
This study takes all listed and OTC stocks in the Taiwan stock market from 2006 to April 30, 2015 as its sample. First, using the quarterly financial statements, six indicators, including market capitalization, monthly stock turnover, earnings per share, return on equity and the price-earnings ratio, serve as the first-stage stock screening criteria. In the second screening stage, ASKSR is first used to select the best-performing stocks, numbering twice the target portfolio size, and the stocks matching the portfolio size are then chosen according to their total technical-indicator scores. After the stocks are selected, multivariate Gaussian Copula-GARCH(1,1)-t and multivariate Gaussian Copula-GJR(1,1)-t models are estimated and simulated by the Monte Carlo method, and the investment weights are optimized using a CRRA utility function, a mean-variance utility function, the Sharpe ratio and a CARA utility function. Over the sample period a rolling-window approach is used to continually adjust the portfolio until the end of the period. This thesis thus explores stock selection that combines financial information indicators, stock-scoring indicators and technical indicators, and compares the asset allocation performance of the multivariate Gaussian Copula-GARCH(1,1)-t asset model with that of the multivariate Gaussian Copula-GJR(1,1)-t asset model, aiming for steady profits.
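A hedged sketch of the final allocation step described above: given Monte Carlo scenarios of next-period returns, weights are chosen to maximize expected CRRA utility under a budget and no-short-sale constraint. The scenarios below come from a plain Gaussian model purely as a placeholder for the Copula-GARCH(1,1)-t / GJR(1,1)-t scenarios of the study, and the library calls (numpy, scipy.optimize) and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def crra_weights(scenarios, gamma=5.0):
    """Long-only weights maximizing expected CRRA utility over return scenarios.

    scenarios: (n_scenarios, n_assets) array of simulated simple returns.
    Utility of wealth w: w**(1 - gamma) / (1 - gamma), gamma != 1.
    """
    n_assets = scenarios.shape[1]

    def neg_expected_utility(w):
        wealth = 1.0 + scenarios @ w
        return -np.mean(wealth ** (1.0 - gamma) / (1.0 - gamma))

    w0 = np.full(n_assets, 1.0 / n_assets)
    res = minimize(neg_expected_utility, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n_assets,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Placeholder scenarios from a Gaussian model, NOT the Copula-GARCH(1,1)-t
    # model of the study.
    corr = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.2], [0.1, 0.2, 1.0]])
    mu = np.array([0.006, 0.004, 0.002])
    vol = np.array([0.06, 0.04, 0.02])
    scenarios = mu + rng.multivariate_normal(np.zeros(3), corr, size=10000) * vol
    print("CRRA-optimal weights:", crra_weights(scenarios))
```

Swapping the objective for the mean-variance utility, the Sharpe ratio, or a CARA utility only changes `neg_expected_utility`; the constrained optimization step stays the same.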
170

極值理論與整合風險衡量 / Extreme Value Theory and Integrated Risk Measurement

黃御綸 Unknown Date (has links)
Since the 1990s, improper handling of financial instruments by many institutions and the shocks of financial crises have repeatedly destabilized global financial markets, making risk management ever more important and placing growing emphasis on the accuracy of quantitative risk models. In view of properties of financial data such as heteroskedasticity and heavy tails, this thesis combines three modeling ingredients — the AR(1)-GARCH(1,1) model, extreme value theory and copula functions — for the estimation of Value at Risk (VaR), and distinguishes three assumptions on the return distribution: (i) the nonparametric historical simulation method; (ii) a parametric model under the normality assumption that accounts for stochastic volatility; and (iii) an extreme value theory approach that fits the tail distribution to historical data. One-day VaR, ten-day VaR and portfolio VaR are tested on four individual stocks (UMC, Hon Hai, Cathay Financial Holding and China Steel) and three exchange rate series (TWD/USD, JPY/USD and GBP/USD). The empirical results show that for one-day VaR the dynamic methods perform relatively well at the 95% confidence level, while at the 99% level both the dynamic extreme value theory method and dynamic historical simulation give good estimates. Ten-day VaR is harder to estimate because asset returns over the next ten days may be affected by specific events; overall, at the 99% confidence level the conditional GPD combined with Monte Carlo simulation performs relatively well. For portfolio VaR, simulating the joint distribution of the stock or foreign exchange portfolios with copulas (a Clayton copula with GPD marginals) gives the best VaR estimates at both the 95% and 99% confidence levels. Although individual Taiwanese stock prices are subject to a 7% daily price limit and the TWD/USD exchange rate is subject to central bank intervention, describing the tail of the asset distribution with extreme value theory still yields better estimates than the other two distributional assumptions.
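As a hedged illustration of the extreme-value-theory leg of this approach — the standard peaks-over-threshold estimator, not the thesis's full AR(1)-GARCH(1,1)+EVT+copula pipeline — the sketch below fits a generalized Pareto distribution to losses above a high threshold and converts the fit into a tail VaR estimate; scipy is assumed, and the data are simulated Student-t losses rather than the stock or FX series studied here.

```python
import numpy as np
from scipy.stats import genpareto, t

def evt_var(losses, threshold_quantile=0.90, confidence=0.99):
    """Peaks-over-threshold Value-at-Risk estimate.

    Fits a GPD to exceedances over a high threshold u and uses
    VaR_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1).
    """
    losses = np.asarray(losses)
    u = np.quantile(losses, threshold_quantile)
    exceedances = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceedances, floc=0.0)   # shape, loc (fixed), scale
    n, n_u = len(losses), len(exceedances)
    return u + (beta / xi) * (((n / n_u) * (1.0 - confidence)) ** (-xi) - 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Heavy-tailed stand-in for daily losses (Student-t, 4 degrees of freedom).
    losses = t.rvs(df=4, scale=0.01, size=5000, random_state=rng)
    print("99% EVT VaR:", evt_var(losses))
    print("99% empirical quantile:", np.quantile(losses, 0.99))
```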
