251 |
Quantile regression in risk calibration. Chao, Shih-Kang, 05 June 2015.
Quantile regression studies the conditional quantile function Q_{Y|X}(τ), which satisfies F_{Y|X}(Q_{Y|X}(τ)) = τ for all τ ∈ (0,1), where F_{Y|X} is the conditional CDF of Y given X. Quantile regression allows for a closer inspection of the conditional distribution beyond the conditional moments. This technique is particularly useful, for example, for the Value-at-Risk (VaR), which the Basel accords (2011) require all banks to report, and for the "quantile treatment effect" and "conditional stochastic dominance (CSD)", economic concepts for measuring the effectiveness of a government policy or a medical treatment.
For all its applicability, however, quantile regression is more challenging to develop than mean regression: one must be adept with general regression problems and M-estimators, and must additionally handle non-smooth loss functions. In this dissertation, Chapter 2 is devoted to empirical risk management during financial crises using quantile regression. Chapters 3 and 4 address high dimensionality and nonparametric techniques in quantile regression.
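The defining identity above can be checked numerically in a toy setting; the sketch below assumes a Gaussian conditional model Y | X=x ~ N(2 + 3x, 1) purely for illustration (it is not an example from the thesis):

```python
import numpy as np
from scipy import stats

# Toy check of F_{Y|X}(Q_{Y|X}(tau)) = tau for Y | X=x ~ N(2 + 3x, 1):
# plugging the conditional quantile back into the conditional CDF
# recovers tau exactly.
x, tau = 1.5, 0.95
q = 2 + 3 * x + stats.norm.ppf(tau)           # Q_{Y|X}(tau)
recovered = stats.norm.cdf(q, loc=2 + 3 * x)  # F_{Y|X}(Q_{Y|X}(tau))
assert abs(recovered - tau) < 1e-12
```

The same identity holds for every τ in (0, 1), which is what makes the quantile function a full description of the conditional distribution.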
252 |
Verallgemeinerte Maximum-Likelihood-Methoden und der selbstinformative Grenzwert / Generalised maximum likelihood methods and the selfinformative limit. Johannes, Jan, 16 December 2002.
We observe a random variable X with unknown probability distribution P. One major goal of mathematical statistics is the estimation of a parameter theta(P) based on an observation X=x. Under the assumption that P belongs to a dominated family of probability distributions, we can apply the maximum likelihood principle (MLP). Alternatively, the Bayes approach can be used to estimate the parameter. Under some regularity conditions it turns out that the maximum likelihood estimate (MLE) is the limit of a sequence of Bayes estimates (BEs). Note that BEs can be defined even in situations where no dominating measure exists, which allows us to derive an extension of the MLP via the Bayes approach. Moreover, two versions of a generalised MLE (gMLE) are presented, introduced by Kiefer and Wolfowitz and by Gill, respectively. Based on these known results, we define a selfinformative limit and a selfinformative posterior carrier. In the special case of a model with a dominated distribution family, we state sufficient conditions under which the set of MLEs is a selfinformative posterior carrier or, in the case of a unique MLE, a selfinformative limit. The result for the posterior carrier is extended to a more general model without dominated distributions; in particular, we show that the set of gMLEs of Kiefer and Wolfowitz is a posterior carrier. Furthermore, we calculate the selfinformative limit and posterior carrier, respectively, in a model with possibly nonidentifiable parameters. The thesis focuses on a multivariate semiparametric linear model. We first show that, in a purely nonparametric model with a Dirichlet process prior, the selfinformative limit exists and coincides with the gMLE of Kiefer and Wolfowitz as well as with that of Gill. We then determine both versions of the gMLE and the selfinformative limit in the multivariate semiparametric linear model, where the prior for the latter estimator is given by a Dirichlet process and a normal-Wishart distribution. In general the resulting estimators differ; in the special case of a semiparametric location model, however, the three estimates coincide again.
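The claim that the MLE arises as a limit of Bayes estimates is easy to see in the simplest dominated case; a toy sketch with a binomial likelihood and a Beta(c, c) prior (chosen for illustration, not taken from the thesis):

```python
# For x successes in n Binomial trials with a Beta(c, c) prior, the Bayes
# estimate (posterior mean) is (x + c) / (n + 2c), which tends to the
# MLE x / n as the prior mass c -> 0.
x, n = 7, 10
mle = x / n
gaps = [abs((x + c) / (n + 2 * c) - mle) for c in (1.0, 0.1, 0.01, 0.001)]
assert gaps == sorted(gaps, reverse=True)  # monotone approach to the MLE
assert gaps[-1] < 1e-3
```

The thesis works in settings without a dominating measure, where no likelihood is available and this limiting construction has to be replaced by the selfinformative limit.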
253 |
Décodage neuronal dans le système auditif central à l'aide d'un modèle bilinéaire généralisé et de représentations spectro-temporelles bio-inspirées / Neural decoding in the central auditory system using bio-inspired spectro-temporal representations and a generalized bilinear model. Siahpoush, Shadi, January 2015.
In this project, Bayesian neural decoding is performed on the neural activity recorded from the inferior colliculus of the guinea pig following the presentation of a vocalization. In particular, we study the impact of different encoding models on the accuracy of reconstruction of different spectro-temporal representations of the input stimulus. First, voltages recorded from the inferior colliculus are read and spike trains are extracted by spike sorting. Then, an encoding model is fitted to the stimulus and the associated spike trains. Finally, neural decoding is performed on the pairs of stimuli and neural activities using maximum a posteriori (MAP) estimation to obtain the reconstructed spectro-temporal representation of the signal. Two encoding models, a generalized linear model (GLM) and a generalized bilinear model (GBM), are compared along with three different spectro-temporal representations of the input stimuli: a spectrogram and two bio-inspired representations, a gammatone filter bank (GFB) and a spikegram. The parameters of the GLM and GBM, namely the spectro-temporal receptive field, the post-spike filter and the input nonlinearity (only for the GBM), are fitted using maximum likelihood (ML) optimization. The signal-to-noise ratio between the reconstructed and original representations is used to evaluate the decoding, i.e. the reconstruction accuracy. We show experimentally that reconstruction accuracy is better with the spikegram representation than with the spectrogram or GFB representations, and that using a GBM instead of a GLM further increases it: the SNR of a spikegram reconstruction with GBM fitting is 3.3 dB higher than that of the standard approach of reconstructing a spectrogram with GLM fitting.
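The ML fitting step for an encoding model can be sketched in a few lines. The following is a deliberately simplified Poisson GLM on simulated data; the feature dimensions, rates and sizes are illustrative assumptions, not the paper's spectro-temporal receptive fields:

```python
import numpy as np
from scipy.optimize import minimize

# Fit a Poisson GLM encoding model lambda_t = exp(w . s_t) by maximum
# likelihood on simulated spike counts.
rng = np.random.default_rng(1)
S = rng.normal(size=(500, 4))            # stimulus features per time bin
w_true = np.array([0.8, -0.5, 0.3, 0.0])
y = rng.poisson(np.exp(S @ w_true))      # simulated spike counts

def nll(w):
    # negative Poisson log-likelihood, dropping the log(y!) constant
    eta = S @ w
    return np.sum(np.exp(eta) - y * eta)

w_hat = minimize(nll, np.zeros(4)).x
assert np.allclose(w_hat, w_true, atol=0.2)
```

In the thesis this fitted encoding model is then inverted by MAP estimation to decode the stimulus representation; the GBM adds a bilinear input nonlinearity to the same likelihood.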
254 |
門檻式自動迴歸模型參數之近似信賴區間 / Approximate confidence sets for parameters in a threshold autoregressive model. Chen, Shen Chien (陳慎健), date unknown.
Threshold autoregressive (TAR) models are a popular nonlinear extension of the linear autoregressive (AR) models. Though the asymptotic theory for parameter estimates in TAR models is well developed, there have been fewer studies of their finite-sample properties. Woodroofe (1989) and Woodroofe and Coad (1997) developed a very weak approximation and used it to construct corrected confidence sets for parameters in an adaptive linear model. This approximation was further developed by Woodroofe and Coad (1999) and Weng and Woodroofe (2006), who derived corrected confidence sets for parameters in AR(p) models and other adaptive models. The approach starts with an approximate pivot and employs very weak expansions to determine the mean and variance corrections of the pivot; the renormalized pivot, which is approximately standard normal, is then used to form corrected confidence sets. The correction terms have simple forms, and for AR(p) models they involve only the first two moments of the process and the derivatives of these moments. For TAR models, however, analytic forms for the moments are known only in some cases where the autoregression function has special structure. The goal of this research is to extend the very weak method to TAR models to form corrected confidence sets when the sample size is moderate. We propose using the difference quotient method and Monte Carlo simulations to approximate the derivatives. Simulation studies are provided to assess the accuracy of the method, and the approach is applied to real U.S. GDP data, fitted with a first-order TAR model.
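A minimal sketch of the model class in question: a two-regime TAR(1) with threshold 0, simulated and then estimated by least squares within each regime (the parameter values and sample size are illustrative, not from the thesis):

```python
import numpy as np

# Simulate y_t = a1*y_{t-1} + e_t if y_{t-1} <= 0, else a2*y_{t-1} + e_t,
# then estimate (a1, a2) by regime-wise least squares.
rng = np.random.default_rng(2)
a1, a2, n = 0.3, 0.7, 20_000
y = np.zeros(n)
for t in range(1, n):
    y[t] = (a1 if y[t - 1] <= 0 else a2) * y[t - 1] + rng.normal()

prev, curr = y[:-1], y[1:]
low = prev <= 0
a1_hat = prev[low] @ curr[low] / (prev[low] @ prev[low])
a2_hat = prev[~low] @ curr[~low] / (prev[~low] @ prev[~low])
assert abs(a1_hat - a1) < 0.05 and abs(a2_hat - a2) < 0.05
```

The thesis is concerned with how far the sampling distribution of such estimates departs from its asymptotic normal limit at moderate sample sizes, and with correcting the resulting confidence sets.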
255 |
遺漏值存在時羅吉斯迴歸模式分析之研究 / Logistic Regression Analysis with Missing Value. Liu, Chang Ming (劉昌明), date unknown.
(No abstract available.)
256 |
Essays on Consumption: Aggregation, Asymmetry and Asset Distributions. Bjellerup, Mårten, January 2005.
The dissertation consists of four self-contained essays on consumption. Essays 1 and 2 consider different measures of aggregate consumption, and Essays 3 and 4 consider how the distributions of income and wealth affect consumption from a macro and micro perspective, respectively. Essay 1 considers the empirical practice of seemingly interchangeable use of two measures of consumption: total consumption expenditure and consumption expenditure on nondurable goods and services. Using data from Sweden and the US in an error correction model, it is shown that consumption functions based on the two measures exhibit significant differences in several aspects of econometric modelling. Essay 2, coauthored with Thomas Holgersson, considers the derivation of univariate and multivariate versions of a test for asymmetry based on the third central moment. The logic behind the test is that the dependent variable should correspond to the specification of the econometric model: symmetric with linear models and asymmetric with non-linear models. The main result of the empirical application is that orthodox theory seems to be supported for both nondurable and durable consumption. The consumption of durables shows little deviation from symmetry in the four-country sample, while the consumption of nondurables is asymmetric in two of the four cases, the UK and the US. Essay 3 departs from the observation that introducing income uncertainty makes the consumption function concave, implying that the distributions of wealth and income are omitted variables in aggregate Euler equations. This implication is tested by estimating the distributions over time and augmenting the consumption functions, using Swedish data for 1963-2000.
The results show that only the dispersion of wealth is significant, which is explained by marked changes in the group of households with negative wealth, the group that according to a concave consumption function has the highest marginal propensity to consume. Essay 4 attempts to specify empirically the nature of the alleged concavity of the consumption function. Using grouped household-level Swedish data for 1999-2001, it is shown that the marginal propensity to consume out of current resources, i.e. current income and net wealth, is strictly decreasing in current resources and net wealth, but approximately constant in income. An empirical counterpart to the stylized theoretical consumption function is also estimated and shown to bear a close resemblance to the theoretical version.
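The idea behind the asymmetry test, that a symmetric series should have a negligible third central moment while an asymmetric one should not, can be illustrated with sample skewness; this is a sketch of the principle only, not the essay's exact test statistic:

```python
import numpy as np
from scipy import stats

# A symmetric series has sample skewness (standardized third central
# moment) near zero; a right-skewed one (centred exponential draws,
# population skewness 2) does not.
rng = np.random.default_rng(8)
symmetric = rng.normal(size=2000)
asymmetric = rng.exponential(size=2000) - 1.0
assert abs(stats.skew(symmetric)) < 0.2
assert stats.skew(asymmetric) > 1.0
```

In the essay, rejecting symmetry for a consumption series is taken as evidence that a non-linear rather than linear model is the appropriate specification.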
257 |
Evaluación en el modelado de las respuestas de recuento [Evaluation in the modelling of count responses]. Llorens Aleixandre, Noelia, 10 June 2005.
This paper presents two lines of research developed in recent years around the evaluation stage for count data. The areas of study are count data, specifically the Poisson regression model and its extensions, and the evaluation stage as a turning point in the statistical modelling process. The results demonstrate the importance of applying a model appropriate to the characteristics of the data, as well as of evaluating its fit. In addition, comparisons of tests, indices, estimators and models aim to indicate the suitability of, or preference for, one over the others in particular circumstances and according to the researcher's objectives.
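The evaluation stage for a Poisson regression can be sketched as follows on simulated data, using the residual deviance as a simple fit check; this illustrates the kind of evaluation discussed, not the paper's specific indices:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import xlogy

# Fit a Poisson regression by ML, then use the residual deviance
# (roughly chi-square with n - p degrees of freedom under a correct
# model) as a rough goodness-of-fit check.
rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

nll = lambda b: np.sum(np.exp(X @ b) - y * (X @ b))  # drops log(y!) term
b_hat = minimize(nll, np.zeros(2)).x
mu = np.exp(X @ b_hat)
deviance = 2 * np.sum(xlogy(y, y / mu) - (y - mu))   # xlogy(0, .) == 0
assert np.allclose(b_hat, beta_true, atol=0.3)
assert 0.6 < deviance / (n - 2) < 1.5  # no gross lack of fit expected here
```

A deviance-to-degrees-of-freedom ratio far above 1 would instead point to overdispersion, the situation in which the paper's extensions of the Poisson model become relevant.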
258 |
Impact Of Large-Scale Coupled Atmospheric-Oceanic Circulation On Hydrologic Variability And Uncertainty Through Hydroclimatic Teleconnection. Maity, Rajib, 01 January 2007.
In the present context of climate change, the natural variability and uncertainty associated with hydrologic variables are of great concern to the community. This thesis opens up a new area of multi-disciplinary research, a promising field in hydrology and water resources that draws on information from atmospheric science, and establishes a new way to identify and capture that variability and uncertainty. The broad aim of the thesis is to assess hydroclimatic teleconnections for the Indian subcontinent and to use them in basin-scale hydrologic time series analysis and forecasting.
The initial part of the thesis investigates and establishes the dependence of Indian summer monsoon rainfall (ISMR) on large-scale ocean-atmosphere circulation phenomena in the tropical Pacific and Indian Ocean regions. El Niño-Southern Oscillation (ENSO) is the well-established coupled ocean-atmosphere mode of the tropical Pacific Ocean, whereas the Indian Ocean Dipole (IOD) is a recently identified coupled ocean-atmosphere mode of the tropical Indian Ocean; the Equatorial Indian Ocean Oscillation (EQUINOO) is the atmospheric component of the IOD mode. The potential of ENSO and EQUINOO for predicting ISMR is investigated with a Bayesian dynamic linear model (BDLM). A major advantage of this method is that it captures the dynamic nature of the cause-effect relationship between large-scale circulation information and hydrologic variables, which is to be expected under climate change. Another new method, proposed to capture the dependence between teleconnected hydroclimatic variables, is based on the theory of copulas, itself quite new to the field of hydrology. The dependence of ISMR on ENSO and EQUINOO is captured and its potential for predicting the monthly variation of ISMR is investigated using the proposed method.
The association of the monthly variation of ISMR with the combined information of ENSO and EQUINOO, denoted by a monthly composite index (MCI), is also investigated and established, along with its spatial variability. The MCI is found to be significantly associated with monthly rainfall variation all over India, except over North-East (NE) India, where the association is poor.
Having established the hydroclimatic teleconnection at a comparatively large scale, the thesis then turns to basin-scale hydrologic variables. The association of large-scale atmospheric circulation with monsoon-season inflow into the Hirakud reservoir, located in the state of Orissa in India, is investigated, and the strong predictive potential of the composite index of ENSO and EQUINOO is established for extreme inflow conditions. Since this approach uses no information about rainfall in the catchment, inflow prediction based on hydroclimatic teleconnection is well suited even to ungauged or poorly gauged watersheds.
Recognizing the basin-scale hydroclimatic association with both ENSO and EQUINOO at the seasonal scale, the teleconnection information is used for streamflow forecasting in the Mahanadi River basin in the state of Orissa, India, at both seasonal and monthly scales. It is established that basin-scale streamflow is influenced by large-scale atmospheric circulation phenomena, and that information on streamflow from previous months alone, as used in most traditional modeling approaches, is inadequate. Incorporating large-scale atmospheric circulation information significantly improves prediction performance at the monthly scale, while the prevailing watershed conditions remain important. Thus both previous streamflow and large-scale atmospheric circulation should be considered for basin-scale streamflow prediction at the monthly time scale.
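The gain from adding circulation information to a purely autoregressive forecast can be sketched on synthetic data; the index, coefficients and sample size below are assumptions for illustration, not the Mahanadi results:

```python
import numpy as np

# Monthly flow driven by both last month's flow and a climate index; a
# regression that includes the index fits strictly better than AR(1) alone.
rng = np.random.default_rng(7)
n = 600
idx = rng.normal(size=n)              # ENSO-like index, observed each month
flow = np.zeros(n)
for t in range(1, n):
    flow[t] = 0.5 * flow[t - 1] + 0.8 * idx[t] + rng.normal()

y = flow[1:]
X_ar = np.column_stack([np.ones(n - 1), flow[:-1]])    # AR(1) only
X_full = np.column_stack([X_ar, idx[1:]])              # AR(1) + index
rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
assert rss(X_full) < 0.8 * rss(X_ar)  # large error reduction from the index
```

The residual-sum-of-squares gap mirrors the thesis finding that previous streamflow alone is an inadequate predictor when a genuine teleconnection is present.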
By adopting the developed approach of using hydroclimatic teleconnection information, hydrologic variables can be predicted with better accuracy, providing a very useful input for better management of water resources.
259 |
Essays in Municipal Finance. Found, Adam, 18 July 2014.
Chapter 1:
I analyze economies of scale for fire and police services by considering how per-household costs are affected by a municipality’s size. Using 2005-2008 municipal data for the Province of Ontario, I employ a partial-linear model to non-parametrically estimate per-household cost curves for each service. The results show that cost per household is a U-shaped function of municipal size for each service. For fire services, these costs are minimized at a population of about 20,000 residents, while for police services they are minimized at about 50,000 residents. Based on these results, implications are drawn for municipal amalgamation policy.
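The shape-finding exercise can be sketched with simulated data: fit a per-household cost curve that is quadratic in log population and read off the minimizing size (the chapter's actual estimation is non-parametric, and the numbers below are invented around its fire-services result of roughly 20,000 residents):

```python
import numpy as np

# U-shaped per-household cost in log population; the cost-minimizing
# municipal size is the vertex of the fitted parabola.
rng = np.random.default_rng(4)
pop = rng.uniform(1_000, 200_000, size=400)
x = np.log(pop)
cost = 500 + 30 * (x - np.log(20_000)) ** 2 + rng.normal(0, 20, size=400)

c2, c1, c0 = np.polyfit(x, cost, 2)   # cost ~ c2*x^2 + c1*x + c0
pop_min = np.exp(-c1 / (2 * c2))      # vertex, back in population units
assert 15_000 < pop_min < 26_000
```

A U-shape of this kind is what makes the amalgamation question non-trivial: merging very small municipalities lowers per-household costs, while merging past the vertex raises them.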
Chapter 2:
I review how the literature has continued to exclude the business property tax (BPT) from the marginal effective tax rate (METR) on capital investment for over 25 years. I recast the METR theory as it relates to the BPT and compute 2013 estimates of the METR for all 10 provinces in Canada with provincial BPTs included. Building on these estimates, I compute the METR inclusive of municipal BPTs for the largest municipality in each province. I find the BPT to be substantially damaging to municipal, provincial and international competitiveness. With the business property tax representing over 60% of the Canadian METR, among the various capital taxes it is by far the largest contributor to Canada’s investment barrier.
Chapter 3:
I estimate the responsiveness of structure investment and the tax base to commercial property taxes, taking a new step toward resolving the "benefit view" vs. "capital tax view" debate within the literature. Using a first-difference structural model to analyze 2006-2013 municipal data for the Province of Ontario, I improve upon past studies and build on the literature in a number of ways. I find that commercial structure investment and the tax base are highly sensitive to the property tax, with Ontario's assessment-weighted average tax elasticity (and tax-base elasticity) ranging from -0.80 to -0.90 at 2011 taxation levels. The results support the capital tax view of the business property tax, adding to the growing consensus that business property taxes substantially impact investment in structures and the value of the tax base.
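The elasticity concept in play can be sketched as a first-difference log-log regression on synthetic data; the -0.85 value is chosen only to sit inside the reported -0.80 to -0.90 range, and nothing else here is from the chapter:

```python
import numpy as np

# Regress changes in the log tax base on changes in the log tax rate;
# the slope of this first-difference regression is the tax-base elasticity.
rng = np.random.default_rng(5)
n = 1_000
d_log_tax = rng.normal(0, 0.10, size=n)      # change in log tax rate
elasticity = -0.85
d_log_base = elasticity * d_log_tax + rng.normal(0, 0.05, size=n)

slope = d_log_tax @ d_log_base / (d_log_tax @ d_log_tax)
assert abs(slope - elasticity) < 0.1
```

First-differencing removes time-invariant municipal characteristics, which is why the chapter adopts a first-difference structural model rather than a levels regression.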
260 |
From group to patient-specific analysis of brain function in arterial spin labelling and BOLD functional MRI. Maumet, Camille, 29 May 2013.
This thesis deals with the analysis of brain function in Magnetic Resonance Imaging (MRI) using two sequences: BOLD functional MRI (fMRI) and Arterial Spin Labelling (ASL). In this context, group statistical analyses are of great importance for understanding the general mechanisms underlying a pathology, but there is also increasing interest in patient-specific analyses that draw conclusions at the level of the individual patient. Both kinds of analysis are studied in this thesis. We first present a group analysis in BOLD fMRI for the study of specific language impairment, a pathology that has been little investigated in neuroimaging, and outline atypical patterns of functional activity and lateralisation in language regions. We then move to patient-specific analysis. We propose the use of robust estimators to compute cerebral blood flow maps in ASL, and analyse the validity of the assumptions underlying standard statistical analyses in the context of ASL. Finally, we propose a new locally multivariate statistical method based on an a contrario approach and apply it to the detection of atypical perfusion patterns in ASL and to activation detection in BOLD fMRI.
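Why robust estimators help in ASL can be seen in miniature: the voxelwise perfusion signal is an average over many control-label difference volumes, and a few motion-corrupted volumes drag the plain mean while a robust estimator such as the median holds (all numbers below are invented for illustration):

```python
import numpy as np

# 60 control-label differences at one voxel, 3 of them corrupted by motion:
# the mean is badly biased upward, the median (a robust estimator) is not.
rng = np.random.default_rng(6)
diffs = rng.normal(2.0, 0.5, size=60)   # true perfusion signal ~ 2.0 (a.u.)
diffs[:3] = 40.0                        # motion-corrupted volumes
assert abs(np.median(diffs) - 2.0) < 0.3
assert np.mean(diffs) - 2.0 > 1.0
```

The thesis proposes robust estimators in exactly this spirit for cerebral blood flow maps, where such artefacted volumes are common.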