151

The Effect of Exchange Rate Uncertainty on Taiwan's Export Volatility

郭佩婷, Kuo, Pei Ting Unknown Date (has links)
This thesis investigates the effect of exchange rate uncertainty on the volatility of Taiwan's exports, applying the theoretical framework of Barkoulas et al. (2002) to monthly Taiwanese data from 1989 to 2007. The empirical results show that the volatility of the NTD/USD and NTD/JPY exchange rates had no significant effect on the volume of Taiwan's exports to the United States or Japan. NTD/USD volatility did, however, have a positive effect on the volatility of Taiwan's exports to the United States, while NTD/JPY volatility had no significant effect on the volatility of exports to Japan. The thesis argues that the dominant force behind NTD/USD volatility is variation in the information advantage held by monetary policymakers, whereas NTD/JPY volatility has no single dominant source. This asymmetry is attributed to the policy credibility that monetary policymakers have built up over time, which dampens the other two forces driving NTD/USD volatility. The volatility of the NTD/USD exchange rate therefore depends on the monetary authority's grasp of true economic conditions and on the direction of its monetary policy.
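
As a rough illustration of this kind of empirical design (my sketch with synthetic data, not the thesis's model or dataset), one can build a rolling volatility measure from monthly exchange rate returns and regress an export volatility proxy on it:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 228  # months, mirroring a 1989-2007 sample

# Synthetic stand-ins for a log exchange rate and monthly export growth
log_fx = pd.Series(np.cumsum(rng.normal(0, 0.01, n)) + 3.4)
export_growth = pd.Series(rng.normal(0.02, 0.05, n))

# Volatility proxies: 12-month rolling standard deviations
fx_vol = log_fx.diff().rolling(12).std()
export_vol = export_growth.rolling(12).std()

df = pd.DataFrame({"export_vol": export_vol, "fx_vol": fx_vol}).dropna()
ols = sm.OLS(df["export_vol"], sm.add_constant(df["fx_vol"])).fit()
print(ols.params, ols.pvalues)  # sign/significance of fx_vol is the question of interest
```
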
152

A Goodness-of-Fit Test for the Bivariate Poisson Distribution Based on the Characteristic Function

Koné, Fangahagnian 09 1900 (has links)
Goodness-of-fit tests are among the tools statisticians use to decide whether a parametric distribution is appropriate for a sample. This thesis applies the goodness-of-fit test based on the empirical characteristic function proposed by Jiménez-Gamero et al. (2009) to the bivariate Poisson distribution. The test is first developed for the univariate Poisson distribution, where its estimated type I error probabilities are found to be close to the nominal values. It is then extended to the bivariate Poisson distribution, and its power is computed and compared with that of the dispersion index test, Crockett's Quick test, and the two families of tests proposed by Novoa-Muñoz and Jiménez-Gamero (2014). The simulations show that the test holds its level better than the dispersion index test and Crockett's Quick test, but that it is generally less powerful than the other tests. They also reveal that the dispersion index test should be two-sided, whereas it rejects only for large values of the test statistic. Finally, the p-values of all these tests are computed on a soccer dataset and the conclusions compared: with a p-value of 0.009, the test rejects the hypothesis that the data come from a bivariate Poisson distribution, while the tests proposed by Novoa-Muñoz and Jiménez-Gamero (2014) lead to a different conclusion.
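
For the univariate case, the dispersion index test mentioned above has a simple closed form; this sketch (my illustration, not code from the thesis) computes the statistic together with the two-sided p-value the thesis argues for:

```python
import numpy as np
from scipy import stats

def dispersion_index_test(x):
    """Two-sided dispersion index test of H0: x ~ Poisson.

    Under H0, D = (n - 1) * s^2 / xbar is approximately chi-square
    with n - 1 degrees of freedom.
    """
    x = np.asarray(x)
    n = x.size
    d = (n - 1) * x.var(ddof=1) / x.mean()
    cdf = stats.chi2.cdf(d, df=n - 1)
    p_two_sided = 2 * min(cdf, 1 - cdf)  # reject for small OR large D
    return d, p_two_sided

rng = np.random.default_rng(1)
print(dispersion_index_test(rng.poisson(3.0, 200)))               # should not reject
print(dispersion_index_test(rng.negative_binomial(5, 0.5, 200)))  # overdispersed: rejects
```
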
153

Using Monotone Splines to Condense Mortality Tables in a Bayesian Framework

Patenaude, Valérie 04 1900 (has links)
This thesis models two-way tables that are monotone along rows and/or columns, with a view to applications to mortality (life) tables. A nonparametric Bayesian approach is adopted in which the functional form of the data is represented by bidimensional splines. The goal is to condense a life table, that is, to reduce its storage space while minimizing the loss of information, and to study the time needed to reconstruct the table from the fit. The approximation must preserve the properties of the reference table, in particular the monotonicity of the entries. A basis of monotone splines is used because the flexible structure of splines and their easily manipulated derivatives make monotonicity constraints straightforward to impose. After a review of the univariate modelling of monotone functions, the approach is generalized to the bivariate case. The monotonicity constraints are incorporated into the prior distributions of a hierarchical Bayesian model, and a posterior estimator is obtained by Markov chain Monte Carlo methods. Finally, the estimator's behaviour is studied by modelling tables of the standard normal and Student's t distributions; the data of interest, the life table, are then estimated to assess the improvement in their accessibility.
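
As a small frequentist stand-in for the monotone spline machinery (a sketch under my own assumptions, not the thesis's hierarchical Bayesian model), a monotone increasing curve can be fitted by giving non-negative weights to integrated B-splines, whose columns are themselves non-decreasing:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import nnls

def ispline_design(x, knots, degree=3):
    """Design matrix of integrated B-splines (I-splines): each column is
    non-decreasing in x, so non-negative weights give a monotone fit."""
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    n_basis = len(t) - degree - 1
    cols = []
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0
        ib = BSpline(t, c, degree).antiderivative()
        cols.append(ib(x) - ib(knots[0]))
    return np.column_stack(cols)

# Noisy monotone data (synthetic)
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 200))
y = 2 * x**2 + 0.1 * rng.normal(size=x.size)

knots = np.linspace(0, 1, 8)
A = ispline_design(x, knots)
# Free intercept via a +/- column pair; the spline weights stay non-negative
A_full = np.column_stack([np.ones_like(x), -np.ones_like(x), A])
coef, _ = nnls(A_full, y)
fit = A_full @ coef
print(np.all(np.diff(fit) >= -1e-10))  # True: fitted values are non-decreasing
```
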
154

Statistical Methods for Genetic Analysis of Correlated Quantitative Traits: Application to the Study of Bone Mineral Density

Saint Pierre, Aude 03 January 2011 (has links)
Most human diseases have a complex etiology in which genetic and environmental factors interact. Using correlated phenotypes can increase the power to detect the underlying quantitative trait loci (QTLs). This work evaluates and compares bivariate methods for detecting QTLs in correlated phenotypes through both linkage and association analyses, measuring their gain relative to univariate analyses. The methods are applied to data on bone mineral density (BMD) variation, measured at two skeletal sites, in a cohort of men selected for extreme phenotypic values. The results demonstrate the value of bivariate approaches, particularly for association analysis. In addition, within the GAW16 working group, the relative performance of three association methods is compared on simulated family data.
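
To illustrate why a joint test of two correlated phenotypes can outperform separate univariate tests (a sketch on simulated data, not one of the thesis's methods), statsmodels' MANOVA can test a marker's effect on both traits at once:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n = 500
snp = rng.binomial(2, 0.3, n)        # additive genotype coding 0/1/2
shared = rng.normal(size=n)          # shared polygenic/environmental factor
bmd_hip = 0.10 * snp + shared + rng.normal(size=n)
bmd_spine = 0.10 * snp + shared + rng.normal(size=n)

df = pd.DataFrame({"snp": snp, "bmd_hip": bmd_hip, "bmd_spine": bmd_spine})
mv = MANOVA.from_formula("bmd_hip + bmd_spine ~ snp", data=df)
print(mv.mv_test())  # Wilks' lambda etc. for the joint effect of snp on both traits
```
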
155

State Space Model for Time Series with Bivariate Poisson Distribution: An Application of the Durbin-Koopman Methodology

Contreras Espinoza, Sergio Eduardo 15 September 2004 (has links)
This thesis considers a state space model for bivariate count data. The approach used to solve the non-analytical integrals that arise in the resulting non-Gaussian filter is a natural extension of the methodology advocated by Durbin and Koopman (DK): the approximating Gaussian model (AGM) is required to have a diagonal covariance matrix, where the original DK formulation allows a full matrix. This modification makes it possible to construct the AGM with the univariate Kalman recursions, which are known to be more efficient than the usual multivariate treatment, yielding a computationally cheaper estimation of the bivariate Poisson model; it also facilitates the exact initialization of those recursions. The state vector is specified using the structural approach, so its elements have direct interpretations as components such as trend and seasonality, and the dependence between the two series is induced by common components that drive both. Simulated and real examples illustrate the model.
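
The bivariate Poisson distribution at the heart of the model is commonly constructed by trivariate reduction; this sketch (an illustration, not the thesis's filter) simulates it and checks the implied correlation:

```python
import numpy as np

def rbivariate_poisson(lam1, lam2, lam0, size, rng):
    """X1 = Z1 + Z0, X2 = Z2 + Z0 with independent Poisson Z's:
    marginals are Poisson(lam1 + lam0) and Poisson(lam2 + lam0),
    and Cov(X1, X2) = lam0."""
    z0 = rng.poisson(lam0, size)
    x1 = rng.poisson(lam1, size) + z0
    x2 = rng.poisson(lam2, size) + z0
    return x1, x2

rng = np.random.default_rng(4)
x1, x2 = rbivariate_poisson(1.5, 2.0, 1.0, 100_000, rng)
print(np.corrcoef(x1, x2)[0, 1])                 # sample correlation
print(1.0 / np.sqrt((1.5 + 1.0) * (2.0 + 1.0)))  # theory: lam0 / sqrt(...)
```
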
156

A General Approach for the Analysis and Filtering of Bivariate Signals

Flamant, Julien 27 September 2018 (has links)
Bivariate signals appear in a broad range of applications (optics, seismology, oceanography, EEG, etc.) whenever the joint analysis of two real-valued signals is required. A simple bivariate signal traces out an ellipse whose properties (size, shape, orientation) may evolve with time. This geometric feature has a natural physical interpretation, polarization, a notion fundamental to the analysis and understanding of bivariate signals. Existing approaches, however, do not describe bivariate signals or filtering operations directly in terms of polarization or ellipse properties. This thesis addresses that limitation by introducing a new, generic approach for the analysis and filtering of bivariate signals. It rests on two key ingredients: (i) the natural embedding of bivariate signals, viewed as complex-valued signals, into the set of quaternions H, and (ii) the definition of a dedicated quaternion Fourier transform that gives these signals a meaningful spectral representation. The proposed framework defines the usual signal processing quantities (spectral densities, linear time-invariant filters, spectrograms) so that they are directly interpretable in terms of polarization attributes; it sacrifices no mathematical guarantees, and the new tools admit computationally fast implementations. Numerical experiments illustrate the approach throughout, in particular its potential for the nonparametric characterization of the polarization of gravitational waves.
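
Not the quaternion machinery itself, but a hint of what polarization attributes look like in practice: under one common convention (an assumption on my part, not notation from the thesis), time-resolved Stokes-like parameters of a bivariate signal x = u + iv follow from the analytic signals of its components:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 2000)
u = np.cos(2 * np.pi * 50 * t)        # horizontal component
v = 0.4 * np.sin(2 * np.pi * 50 * t)  # vertical component: elliptical motion

ua, va = hilbert(u), hilbert(v)       # analytic signals
s0 = np.abs(ua) ** 2 + np.abs(va) ** 2  # total instantaneous power
s1 = np.abs(ua) ** 2 - np.abs(va) ** 2  # preference for u- vs v-axis
s2 = 2 * np.real(ua * np.conj(va))      # +/-45 degree linear polarization
s3 = -2 * np.imag(ua * np.conj(va))     # circular polarization (rotation sense)

theta = 0.5 * np.arctan2(s2, s1)        # instantaneous ellipse orientation
print(theta[1000], s3[1000] / s0[1000])  # orientation and degree of circularity
```
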
157

A Study of Weibull Regression Models with Bivariate Frailty

余立德, Yu, Li-Ta Unknown Date (has links)
This thesis considers survival analysis for clustered samples in which each cluster contains two groups. Individuals within the same cluster and the same group are assumed to share a common but unobservable random frailty, so the analysis concerns multivariate survival data with a bivariate frailty. First, expressions are derived showing how the bivariate frailty affects the correlations between the two log survival times and between the two survival times. Then, with the bivariate frailty following a bivariate log-normal distribution and the conditional survival times following a Weibull regression model, a modified EM algorithm is developed to estimate the parameters of the model. Keywords: bivariate frailty, Weibull regression model, log-normal distribution, EM algorithm.
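
To make the data-generating process concrete (a simulation sketch with assumed parameter values, not the thesis's EM estimator), clustered Weibull survival times with correlated log-normal frailties can be drawn by inverse transform:

```python
import numpy as np

rng = np.random.default_rng(6)
n_clusters, n_per_group = 100, 5
shape, rate, beta = 1.5, 0.1, 0.5  # Weibull shape, baseline rate, covariate effect

times, cluster, group = [], [], []
for c in range(n_clusters):
    # One log-normal frailty per (cluster, group) pair, correlated across groups
    log_w = rng.multivariate_normal([0, 0], [[0.5, 0.3], [0.3, 0.5]])
    for g in range(2):
        x = rng.binomial(1, 0.5, n_per_group)  # a binary covariate
        scale = rate * np.exp(log_w[g]) * np.exp(beta * x)
        u = rng.uniform(size=n_per_group)
        # Cumulative hazard H(t) = scale * t^shape  =>  T = (-log U / scale)^(1/shape)
        t = (-np.log(u) / scale) ** (1 / shape)
        times.extend(t)
        cluster.extend([c] * n_per_group)
        group.extend([g] * n_per_group)

print(np.mean(times))  # simulated clustered survival times, ready for model fitting
```
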
158

Frequency Analysis of Droughts Using Stochastic and Soft Computing Techniques

Sadri, Sara January 2010 (has links)
In the Canadian Prairies, recurring droughts are a reality with significant economic, environmental, and social impacts; the droughts of 1997 and 2001, for example, cost various sectors over $100 million. Drought frequency analysis is a technique for estimating how frequently a drought event of a given magnitude may be expected to occur. This thesis reviews the state of the science in drought frequency analysis. Its main contributions include a Matlab model that uses Fuzzy C-Means (FCM) clustering, in which each site has a degree of membership in each cluster, and then corrects the resulting regions to meet the criteria for effective hydrological regions. The algorithm takes the number of regions and the return period as inputs and outputs the final corrected clusters in most scenarios. Because drought is a bivariate phenomenon whose duration and severity must be analyzed simultaneously, an important step is extending the initial model to correct regions based on L-comoment statistics (as opposed to L-moments). Implementing a reasonably straightforward approach to bivariate drought frequency analysis using bivariate L-comoments and copulas is another contribution. Quantile estimation at ungauged sites for return periods of interest is studied by introducing two classes of neural-network and machine-learning models, Radial Basis Function (RBF) networks and Support Vector Machine Regression (SVM-R), chosen for their strong record in the literature on function estimation and nonparametric regression. Their performance is compared with traditional nonlinear regression (NLR), as well as with a regionalized nonlinear regression in which catchments are first grouped by FCM. Drought data from 36 natural catchments in the Canadian Prairies are used. The study provides a methodology for bivariate drought frequency analysis that can be applied anywhere in the world.
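
A minimal fuzzy c-means iteration (my own Python illustration; the thesis works in Matlab) shows the membership and centre updates that the regionalization step builds on:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Basic FCM: soft memberships U (n_samples x n_clusters) and centres V."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))  # rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzily weighted centres
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # membership update
    return U, V

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
U, V = fuzzy_c_means(X, n_clusters=2)
print(V)               # two centres, near (0, 0) and (5, 5)
print(U[:3].round(2))  # soft memberships of the first three sites
```
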
159

A Study of Gamma Distributions and Some Related Works

Chou, Chao-Wei 11 May 2004 (has links)
Characterization of distributions has been an important topic in statistical theory for decades. Although many well-known results have already been developed, it remains of great interest to find new characterizations of commonly used distributions, such as the normal or gamma distribution. In practice, we sometimes guess at the distribution to be fitted to the observed data, and sometimes use the characteristic properties of candidate distributions to choose one. This thesis restricts attention to characterizations of the gamma distribution, together with related studies of the corresponding parameter estimation based on these characterization properties. Simulation studies are also given.
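
As a concrete example of gamma parameter estimation of the kind such simulation studies compare (my sketch, with assumed true parameters), method-of-moments estimates follow from the first two moments and can be checked against scipy's maximum likelihood fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
x = rng.gamma(shape=2.5, scale=1.8, size=5000)  # assumed true parameters

# Method of moments: mean = k * theta, var = k * theta^2
mean, var = x.mean(), x.var(ddof=1)
k_mom = mean ** 2 / var
theta_mom = var / mean

# Maximum likelihood via scipy (location fixed at 0)
k_mle, _, theta_mle = stats.gamma.fit(x, floc=0)

print(f"MoM: shape={k_mom:.3f}, scale={theta_mom:.3f}")
print(f"MLE: shape={k_mle:.3f}, scale={theta_mle:.3f}")
```
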
