11 |
Sensory Integration During Goal Directed Reaches: The Effects of Manipulating Target Availability
Khanafer, Sajida, 19 October 2012 (has links)
When using visual and proprioceptive information to plan a reach, it has been proposed that the brain combines these cues to estimate the object's and/or limb's location. Specifically, according to the maximum-likelihood estimation (MLE) model, more reliable sensory inputs are assigned a greater weight (Ernst & Banks, 2002). In this research we examined whether the brain can adjust which sensory cue it weights the most; specifically, we asked whether the brain changes how it weights sensory information when the availability of a visual cue is manipulated. Twenty-four healthy subjects reached to visual (V), proprioceptive (P), or visual + proprioceptive (VP) targets under different visual delay conditions: on V and VP trials, the visual target was available for the entire reach, was removed at the go-signal, or was removed 1, 2, or 5 seconds before the go-signal. Subjects completed 5 blocks of trials, with 90 trials per block. For 12 subjects the visual delay was kept consistent within a block of trials, while for the other 12 subjects different visual delays were intermixed within a block. To establish which sensory cue subjects weighted the most, we compared endpoint positions achieved on V and P reaches to those on VP reaches. Results indicated that all subjects weighted sensory cues in accordance with the MLE model across all delay conditions and that these weights were similar regardless of the visual delay. Moreover, while errors increased with longer visual delays, there was no change in reaching variance. Thus, manipulating the visual environment was not enough to change subjects' weighting strategy.
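The MLE cue-combination rule referred to above (Ernst & Banks, 2002) weights each cue by its reliability, i.e. inversely to its variance. A minimal sketch in Python; the positions and variances below are made-up illustration values, not data from this study:

```python
def mle_combine(est_v, var_v, est_p, var_p):
    """Combine a visual and a proprioceptive position estimate.

    Each cue is weighted by its reliability (inverse variance), as in
    the maximum-likelihood estimation (MLE) model of Ernst & Banks (2002).
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)  # weight of the visual cue
    w_p = 1 - w_v                                # weight of the proprioceptive cue
    combined = w_v * est_v + w_p * est_p
    combined_var = 1 / (1 / var_v + 1 / var_p)   # fused variance is never larger
    return combined, combined_var

# Illustration: a reliable visual cue dominates the fused estimate.
pos, var = mle_combine(est_v=10.0, var_v=1.0, est_p=14.0, var_p=4.0)
# pos = 0.8 * 10 + 0.2 * 14 = 10.8, var = 0.8
```

Note that the fused variance (0.8) is smaller than either single-cue variance, which is the MLE model's signature prediction.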
|
12 |
Estimação em modelos funcionais com erro normais e repetições não balanceadas / Estimation in functional models with normal errors and unbalanced replications
Joan Neylo da Cruz Rodriguez, 29 April 2008 (has links)
This dissertation studies the efficiency of parameter estimators in the functional errors-in-variables model, where the lack of identification is resolved by considering replications. Procedures based on maximum likelihood and on the corrected-score approach are discussed; the two methods lead to similar estimates. Comparisons between the approaches are illustrated using simulated data.
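The role of replications in this kind of model can be illustrated with a simple moment-style correction (a sketch only, not the dissertation's exact ML or corrected-score estimator): replicate measurements identify the measurement-error variance, which then de-attenuates the naive slope. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 3                       # n latent values, r replications each
beta, sigma_d, sigma_e = 2.0, 0.5, 0.3
xi = rng.normal(0.0, 1.0, size=n)                  # fixed (functional) true values
x = xi[:, None] + rng.normal(0, sigma_d, (n, r))   # replicated noisy covariate
y = beta * xi[:, None] + rng.normal(0, sigma_e, (n, r))

xbar, ybar = x.mean(axis=1), y.mean(axis=1)
# Within-replication spread identifies the measurement-error variance.
sigma_d2_hat = ((x - xbar[:, None]) ** 2).sum() / (n * (r - 1))
# The naive slope is attenuated; subtracting the error variance of the
# replicate mean (sigma_d2 / r) from the denominator corrects it.
s_xx = xbar.var()
s_xy = np.cov(xbar, ybar, bias=True)[0, 1]
beta_naive = s_xy / s_xx
beta_corrected = s_xy / (s_xx - sigma_d2_hat / r)
```

With these settings the naive slope is biased toward zero while the corrected slope recovers the true value of 2 up to sampling noise.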
|
13 |
The ACD Model with an Application to the Brazilian Interbank Rate Futures Market
Assunção, Adão Vone Teixeira de, 31 March 2016 (has links)
We applied the basic Autoregressive Conditional Duration (ACD) model to the Brazilian Interbank Rate Futures Market. The sample was built using contracts in the month prior to expiration to replicate a one-month bond curve, and the period studied runs from July 2013 to September 2015. We used maximum likelihood estimation based on the most popular probability distributions in the ACD literature: exponential, gamma, and Weibull. We found that estimation based on the exponential distribution was the best option to model the data.
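The exponential ACD(1,1) likelihood used here has a simple closed form: with conditional expected duration psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}, the log-likelihood is -sum(log psi_i + x_i / psi_i). A minimal sketch on simulated durations (illustrative values, not the thesis sample):

```python
import numpy as np
from scipy.optimize import minimize

def exp_acd_negloglik(params, x):
    """Negative log-likelihood of an exponential ACD(1,1):
    psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
    x_i | past ~ Exponential(mean = psi_i)."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                      # stay in the stationary region
    psi = np.empty_like(x)
    psi[0] = x.mean()                      # common initialisation choice
    for i in range(1, len(x)):
        psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
    return np.sum(np.log(psi) + x / psi)

# Simulate durations from a known ACD(1,1) and recover the parameters.
rng = np.random.default_rng(1)
omega, alpha, beta = 0.1, 0.2, 0.7
x = np.empty(5000)
psi = omega / (1 - alpha - beta)           # unconditional mean duration
for i in range(len(x)):
    x[i] = psi * rng.exponential()
    psi = omega + alpha * x[i] + beta * psi

fit = minimize(exp_acd_negloglik, x0=[0.5, 0.1, 0.5], args=(x,),
               method="Nelder-Mead")
```

A derivative-free optimizer is used so the infinite penalty outside the stationary region needs no special handling.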
|
14 |
Estimação em modelos funcionais com erro normais e repetições não balanceadas / Estimation in functional models with normal errors and unbalanced replications
Cruz Rodriguez, Joan Neylo da, 29 April 2008 (has links)
This dissertation studies the efficiency of parameter estimators in the functional errors-in-variables model, where the lack of identification is resolved by considering replications. Procedures based on maximum likelihood and on the corrected-score approach are discussed; the two methods lead to similar estimates. Comparisons between the approaches are illustrated using simulated data.
|
15 |
Extensões do Modelo Potência Normal / Power Normal Model Extensions
Siroky, Andressa Nunes, 29 March 2019 (has links)
In the analysis of data exhibiting some degree of asymmetry, kurtosis, or bimodality, the assumption of normality is not valid, and models that capture these characteristics of the data are required. In this context, a new class of bimodal asymmetric distributions generated by a mixture mechanism is proposed. Some properties are studied and presented for the particular case that takes the normal distribution as the base family of this class; this case results in the so-called Power Normal Mixture Model. Two simulation algorithms are developed to obtain random variables with this new distribution. The frequentist approach is used for inference on the model parameters, and simulation studies assess the behavior of the maximum likelihood estimates. In addition, a regression model for bimodal data is proposed, using the power normal mixture distribution as the response variable within the Generalized Additive Models for Location, Scale and Shape (GAMLSS) framework; simulation studies are also performed for this regression model. In both cases, the proposed model is illustrated using a real data set on players' scores in the 2014/2015 Male Brazilian Volleyball Superliga. On this data set, the power normal mixture model fits better than existing models for bimodal data.
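The power normal (with the normal as base family) has density alpha * phi(z) * Phi(z)**(alpha - 1), and a two-component mixture of such densities can be bimodal. A sketch of this mixture mechanism follows; the exact parameterization in the thesis may differ, and all parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def power_normal_pdf(x, alpha, mu=0.0, sigma=1.0):
    """Power normal density: alpha * phi(z) * Phi(z)**(alpha - 1) / sigma,
    with z = (x - mu) / sigma. Reduces to the normal when alpha = 1."""
    z = (x - mu) / sigma
    return alpha * norm.pdf(z) * norm.cdf(z) ** (alpha - 1) / sigma

def mixture_pdf(x, p, a1, mu1, s1, a2, mu2, s2):
    """Two-component power normal mixture (illustrative construction)."""
    return (p * power_normal_pdf(x, a1, mu1, s1)
            + (1 - p) * power_normal_pdf(x, a2, mu2, s2))

# Well-separated components produce a bimodal density.
xs = np.linspace(-6, 6, 2001)
fx = mixture_pdf(xs, p=0.4, a1=2.0, mu1=-2.0, s1=1.0, a2=0.5, mu2=2.0, s2=1.0)
area = fx.sum() * (xs[1] - xs[0])   # Riemann-sum check that it integrates to 1
```

The density integrates to one because d/dz [Phi(z)**alpha] = alpha * phi(z) * Phi(z)**(alpha - 1), so each component is a proper density.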
|
16 |
Essays on random effects models and GARCH
Skoglund, Jimmy, January 2001 (has links)
This thesis consists of four essays, three in the field of random effects models and one in the field of GARCH.

The first essay, ''Maximum likelihood based inference in the two-way random effects model with serially correlated time effects'', considers maximum likelihood estimation and inference in the two-way random effects model with serial correlation. We derive a straightforward maximum likelihood estimator when the time-specific component follows an AR(1) or MA(1) process. The estimator is also easily generalized to allow for arbitrary stationary and strictly invertible ARMA processes. In addition, we consider the model selection problem and derive tests of the null hypothesis of no serial correlation, as well as tests for discriminating between the AR(1) and MA(1) specifications. A Monte Carlo experiment evaluates the finite-sample properties of the estimators, test statistics, and model selection procedures.

The second essay, ''Asymptotic properties of the maximum likelihood estimator of random effects models with serial correlation'', considers the large-sample behavior of the maximum likelihood estimator of random effects models with serial correlation in the form of AR(1) for the idiosyncratic or time-specific error component. Consistent estimation and asymptotic normality are established for a comprehensive specification which nests these models as well as all commonly used random effects models.

The third essay, ''Specification and estimation of random effects models with serial correlation of general form'', is also concerned with maximum likelihood based inference in random effects models with serial correlation. Allowing for individual effects, we introduce serial correlation of general form in the time effects as well as the idiosyncratic errors. A straightforward maximum likelihood estimator is derived, and a coherent model selection strategy is suggested for determining the orders of serial correlation as well as the importance of time or individual effects. The methods are applied to the estimation of a production function using a sample of 72 Japanese chemical firms observed during 1968-1987.

The fourth essay, ''A simple efficient GMM estimator of GARCH models'', considers efficient GMM-based estimation of GARCH models. Sufficient conditions for the estimator to be consistent and asymptotically normal are established for the GARCH(1,1) conditional variance process. In addition, efficiency results are obtained for the GARCH(1,1)-M model, in which the conditional variance is allowed to enter the mean as well. An application to returns on the S&P 500 index illustrates the results. / Diss. Stockholm : Handelshögskolan, 2001
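The covariance structure underlying the first essay's likelihood can be written down directly: with y_it = mu_i + lam_t + nu_it, individual effects induce correlation within each unit, while AR(1) time effects induce correlation shared across all units. A sketch of the stacked NT x NT covariance and the Gaussian log-likelihood it feeds into (illustrative construction, not the essay's notation):

```python
import numpy as np

def two_way_ar1_cov(N, T, s_mu2, s_lam2, s_nu2, rho):
    """Covariance of the stacked NT observations (individual-major order) in
    a two-way random effects model with AR(1) time effects."""
    # AR(1) correlation matrix for the time effects.
    psi = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    J_T = np.ones((T, T))
    return (s_mu2 * np.kron(np.eye(N), J_T)            # individual effect: within unit i
            + s_lam2 * np.kron(np.ones((N, N)), psi)   # time effect: shared across units
            + s_nu2 * np.eye(N * T))                   # idiosyncratic error

def gaussian_loglik(y, cov):
    """Log-likelihood of a zero-mean multivariate normal with covariance cov."""
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(y) * np.log(2 * np.pi) + logdet
                   + y @ np.linalg.solve(cov, y))

cov = two_way_ar1_cov(N=4, T=5, s_mu2=1.0, s_lam2=0.5, s_nu2=1.0, rho=0.6)
rng = np.random.default_rng(5)
y = rng.multivariate_normal(np.zeros(20), cov)
ll = gaussian_loglik(y, cov)
```

Maximizing `gaussian_loglik` over the variance and correlation parameters (after projecting out regressors) is the ML estimation the essay studies; in practice one exploits the Kronecker structure rather than solving the dense system.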
|
17 |
Credit Risk in Corporate Securities and Derivatives: Valuation and Optimal Capital Structure Choice
Ericsson, Jan, January 1997 (has links)
This volume consists of four papers, which in principle could be read in any order. The common denominator is that they deal with contingent claims models of a firm's securities or related derivatives.

A Framework for Valuing Corporate Securities. Early applications of contingent claims analysis to the pricing of corporate liabilities tend to restrict themselves to situations where debt is perpetual or where financial distress can only occur at debt maturity. This paper relaxes these restrictions and provides an exposition of how most corporate liabilities can be valued as packages of two fundamental barrier contingent claims: a down-and-out call and a binary option. Furthermore, it is shown how the comparative statics of the resulting pricing formulae can be derived.

A New Compound Option Pricing Model. This paper extends the Geske (1979) compound option pricing model to the case where the security on which the option is written is a down-and-out call rather than a standard Black and Scholes call. Furthermore, we develop a general and flexible framework for valuing options on more complex packages of contingent claims - any claim that can be valued using the ideas in chapter 1. This allows us to study the interaction between the detailed characteristics of a firm's capital structure and the prices of, for example, stock options.

Implementing Firm Value Based Models. This paper evaluates an implementation procedure for contingent claims models suggested by Duan (1994). Duan's idea is to use time-series data on traded securities, such as shares of common stock, to estimate the dynamics of the firm's asset value. Furthermore, we provide an argument which allows us to relax the common assumption that the firm's assets may be continuously traded; it is sufficient to assume that the firm's assets are traded at one particular point in time.

Asset Substitution, Debt Pricing, Optimal Leverage and Maturity. Chapters 1-3 have focused on the problem of pricing corporate securities and have thus abstracted from strategic aspects of corporate finance theory. This paper is an attempt to combine the contingent claims literature with the non-dynamic corporate finance literature. I allow the management of the firm to alter its investment policy strategically. This yields a model which allows us to examine the relationship between bond prices, agency costs, optimal leverage, and maturity. / Diss. Stockholm : Handelshögsk.
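One of the two building blocks named in the first paper, the down-and-out call, has a classical closed form when the barrier lies at or below the strike. This is the textbook reflection-principle result for geometric Brownian motion (no dividends, zero rebate), not a formula taken from the thesis itself:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def down_and_out_call(S, K, H, r, sigma, T):
    """Down-and-out call with barrier H <= K, via the reflection principle:
    C_do(S) = C_bs(S) - (H/S)**(2r/sigma^2 - 1) * C_bs(H^2 / S)."""
    assert H <= K and S > H
    k = 2.0 * r / sigma ** 2 - 1.0
    return bs_call(S, K, r, sigma, T) - (H / S) ** k * bs_call(H * H / S, K, r, sigma, T)

c_plain = bs_call(100, 100, 0.05, 0.2, 1.0)       # approx 10.45
c_do = down_and_out_call(100, 100, 90, 0.05, 0.2, 1.0)
```

As the barrier moves far below the spot, the knockout correction vanishes and the barrier price converges to the plain Black-Scholes call.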
|
18 |
Sensory Integration During Goal Directed Reaches: The Effects of Manipulating Target Availability
Khanafer, Sajida, 19 October 2012 (has links)
When using visual and proprioceptive information to plan a reach, it has been proposed that the brain combines these cues to estimate the object's and/or limb's location. Specifically, according to the maximum-likelihood estimation (MLE) model, more reliable sensory inputs are assigned a greater weight (Ernst & Banks, 2002). In this research we examined whether the brain can adjust which sensory cue it weights the most; specifically, we asked whether the brain changes how it weights sensory information when the availability of a visual cue is manipulated. Twenty-four healthy subjects reached to visual (V), proprioceptive (P), or visual + proprioceptive (VP) targets under different visual delay conditions: on V and VP trials, the visual target was available for the entire reach, was removed at the go-signal, or was removed 1, 2, or 5 seconds before the go-signal. Subjects completed 5 blocks of trials, with 90 trials per block. For 12 subjects the visual delay was kept consistent within a block of trials, while for the other 12 subjects different visual delays were intermixed within a block. To establish which sensory cue subjects weighted the most, we compared endpoint positions achieved on V and P reaches to those on VP reaches. Results indicated that all subjects weighted sensory cues in accordance with the MLE model across all delay conditions and that these weights were similar regardless of the visual delay. Moreover, while errors increased with longer visual delays, there was no change in reaching variance. Thus, manipulating the visual environment was not enough to change subjects' weighting strategy.
|
19 |
Carrier Recovery in Burst-Mode 16-QAM
Chen, Jingxin, 30 June 2004
Wireless communication systems such as multipoint communication systems (MCS) are becoming attractive as cost-effective means for providing network access in sparsely populated, rugged, or developing areas of the world. Since the radio spectrum is limited, it is desirable to use spectrally efficient modulation methods such as quadrature amplitude modulation (QAM) for high data rate channels. Many MCS employ time division multiple access (TDMA) and/or time division duplexing (TDD) techniques, in which transmissions operate in bursts. In many cases, a preamble of known symbols is appended to the beginning of each burst for carrier and symbol timing recovery (symbol timing is assumed known in this thesis). Preamble symbols consume bandwidth and power and are not used to convey information. In order for burst-mode communications to provide efficient data throughput, the synchronization time must be short compared to the user data portion of the burst.

Traditional methods of communication system synchronization, such as phase-locked loops (PLLs), have demonstrated reduced performance when operated in burst-mode systems. In this thesis, a feedforward (FF) digital carrier recovery technique to achieve rapid carrier synchronization is proposed. Estimation algorithms are described for determining carrier offsets during acquisition and tracking in a linear channel corrupted by additive white Gaussian noise (AWGN). The algorithms are derived from the theory of maximum likelihood (ML) parameter estimation and comprise data-aided (DA) carrier frequency and phase estimation in acquisition and non-data-aided (NDA) carrier phase estimation in tracking. The DA frequency and phase estimation algorithms are based on oversampling of a known preamble; the NDA phase estimation makes use of symbol timing knowledge, with estimates extracted from the random data portion of the burst.

The algorithms have been simulated and tested using Matlab® to verify their functionality. The performance of the estimators is evaluated in burst-mode operation for 16-QAM and compared in the presence of non-ideal conditions (frequency offset, phase offset, and AWGN). The simulation results show that the carrier recovery techniques presented in this thesis are applicable to 16-QAM, provide fast carrier acquisition using a short preamble (about 111 symbols), and are suitable for burst-mode communication systems.
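The core data-aided ML estimate is easy to state: the carrier-phase estimate over a known preamble is the angle of the correlation between the received samples and the known symbols. A toy numpy sketch (QPSK-like preamble as a stand-in for the thesis's 16-QAM burst; the phase and noise values are illustrative, and frequency estimation is omitted):

```python
import numpy as np

def da_phase_estimate(received, preamble):
    """Data-aided ML carrier-phase estimate over a known preamble:
    theta_hat = arg( sum_k r_k * conj(a_k) )."""
    return np.angle(np.sum(received * np.conj(preamble)))

rng = np.random.default_rng(3)
# Known unit-modulus preamble of 111 symbols (matching the preamble length
# mentioned above; the constellation here is a simplifying assumption).
preamble = rng.choice(np.array([1, 1j, -1, -1j]), size=111)
true_phase = 0.3                                    # radians, illustrative
noise = 0.05 * (rng.standard_normal(111) + 1j * rng.standard_normal(111))
received = preamble * np.exp(1j * true_phase) + noise
theta_hat = da_phase_estimate(received, preamble)
```

Correlating against the known symbols strips the modulation, so the residual phase of the sum is the ML estimate of the carrier-phase offset.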
|
20 |
Essays on Agricultural Adaptation to Climate Change and Ethanol Market Integration in the U.S.
Aisabokhae, Ruth, 1980-, 14 March 2013 (has links)
Climate factors such as precipitation and temperature are closely intertwined with agriculture, making a changing climate a major concern for the entire human race and its basic survival. Adaptation to climate is a long-running characteristic of agriculture, evidenced by the varying types and forms of agricultural enterprises associated with differing climatic conditions. Nevertheless, climate change poses a substantial additional adaptation challenge for agriculture. Mitigation encompasses efforts to reduce the current and future extent of climate change; biofuels production, for instance, expands agriculture's role in climate change mitigation.
This dissertation addresses adaptation and mitigation strategies as responses to climate change in the U.S. by comprehensively examining scientific findings on agricultural adaptation to climate change; developing information on the costs and benefits of selected adaptations to determine which are most desirable and thus where society can best devote its resources; and studying how ethanol prices are interrelated across, and transmitted within, the U.S., and which markets play an important role in these dynamics.
Quantitative analysis using the Forest and Agricultural Sector Optimization Model (FASOM) shows adaptation to be highly beneficial to agriculture. The contributions of on-farm varietal and other adaptations significantly outweigh a northward shift in crop mix, implying progressive technical change; adaptation research and investment focused on farm management and varietal adaptations could therefore be quite beneficial over time. The observed northward shift of corn-acre-weighted centroids indicates that substantial production potential may shift across regions, with the possibility of less production in the South and more in the North, and thereby a potential redistribution of income. Time series techniques employed to study ethanol price dynamics show that the markets studied are co-integrated and strongly related, with high levels of interaction among all nine cities; information is transmitted rapidly between these markets. Prices appear to be discovered (that is, shocks originate) in regions of high demand and possibly shortages, such as Los Angeles and Chicago (metropolitan population centers). The maximum likelihood approach following Spiller and Huang's model, however, shows that the cities may not all belong to the same economic market and that the possibility of arbitrage does not exist between all markets.
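The co-integration finding can be illustrated with a minimal Engle-Granger-style check in numpy (simulated prices sharing a common stochastic trend, not the dissertation's ethanol data): regress one city's price on another and verify that the residual is mean-reverting rather than a random walk.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 2000
common = np.cumsum(rng.standard_normal(T))        # shared stochastic trend (random walk)
city_a = common + 0.2 * rng.standard_normal(T)    # two co-integrated price series
city_b = 1.5 * common + 0.3 * rng.standard_normal(T) + 2.0

# Step 1: OLS of city_b on city_a (with intercept) - the co-integrating regression.
X = np.column_stack([np.ones(T), city_a])
beta_hat, *_ = np.linalg.lstsq(X, city_b, rcond=None)
resid = city_b - X @ beta_hat

# Step 2: the residual's AR(1) coefficient; a value well below 1 indicates
# mean reversion (a formal test would compare against ADF critical values).
rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
```

With a genuine shared trend the regression residual is stationary, so `rho` sits far from unity; for independent random walks it would hover near 1.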
|