1 |
Transaction size and effective spread: an informational relationship
Xiao, Yuewen, Banking & Finance, Australian School of Business, UNSW, January 2008 (has links)
The relationship between quantity traded and transaction costs has been one of the main focuses of financial scholars and practitioners. The purpose of this thesis is to investigate the informational relationship between these variables. Following insights and results of Milgrom (1981), Feldman (2004), and Feldman and Winer (2004), we use New York Stock Exchange (NYSE) data and kernel estimation methods to construct the distribution of one variable conditional on the other. We then study the information in these conditional distributions: the extent to which they are ordered by first order stochastic dominance (FOSD) and by the monotone likelihood ratio property (MLRP). We find that transaction size and effective spread are statistically significantly correlated. FOSD, a necessary condition for a "separating signaling equilibrium", holds under certain conditions. We start with the two-subsample case: we choose a cut-off point in transaction size and assign the observations with transaction sizes smaller than the cut-off point to group "low"; the remaining data are classified as "high". We repeat this procedure for all possible transaction size cut-off points. It turns out that FOSD holds nowhere. However, once we eliminate transactions at the quote midpoint, the "crossings" executed between exchange members rather than with specialists, FOSD holds for all cut-off points below 15,800 shares. MLRP, a necessary and sufficient condition for the separating equilibrium to hold point by point in the conditional density functions, does not hold, but cannot be ruled out given the error in the estimates. We also find that large trades are not necessarily associated with large spreads. Instead, larger trades are more likely than smaller trades to be transacted at the quote midpoint (again, the non-specialist "crossings").
Our results confirm the findings of Barclay and Warner (1993) regarding the informativeness of medium-size transactions: we identify informational relationships between mid-size transactions and spreads but not for trades at the quote midpoint and large-size transactions. That is, we identify two regimes, an informational one and a non-informational/liquidity one.
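The two-subsample FOSD check described in the abstract can be sketched numerically. A minimal illustration on synthetic data (not the NYSE sample used in the thesis): split effective spreads at a transaction-size cut-off and test whether the empirical CDF of the "high" group lies everywhere at or below that of the "low" group, which is the defining inequality of first order stochastic dominance.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at each point of `grid`."""
    sample = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(sample, grid, side="right") / len(sample)

def fosd_dominates(high, low, tol=1e-12):
    """True if `high` first-order stochastically dominates `low`,
    i.e. F_high(x) <= F_low(x) at every pooled observation."""
    grid = np.union1d(high, low)
    return bool(np.all(ecdf(high, grid) <= ecdf(low, grid) + tol))

def fosd_by_cutoff(sizes, spreads, cutoff):
    """Split spreads at a transaction-size cut-off and test whether the
    'high' group's spread distribution FOSD-dominates the 'low' group's."""
    sizes, spreads = np.asarray(sizes, float), np.asarray(spreads, float)
    lo, hi = spreads[sizes < cutoff], spreads[sizes >= cutoff]
    return fosd_dominates(hi, lo)
```

Repeating `fosd_by_cutoff` over every observed transaction size reproduces the thesis's exhaustive cut-off scan; the kernel-smoothed conditional distributions used there would replace the raw empirical CDFs.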
|
2 |
Modeling synthetic aperture radar image data
Pianto, Donald Matthew, 31 January 2008 (has links)
Funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). In this thesis we study maximum likelihood (ML) estimation of the roughness parameter of the G⁰_A distribution for speckled images (Frery et al., 1997). We find that, when a certain condition on the sample moments is satisfied, the likelihood function is monotone and the ML estimates are infinite, implying a flat region. We implement four bias-correction estimators in an attempt to obtain finite ML estimates. Three of the estimators are taken from the literature on monotone likelihood (Firth, 1993; Jeffreys, 1946) and one, based on resampling, is proposed by the author. We run Monte Carlo numerical experiments to compare the four estimators and find that there is no clear favorite, except when a parameter (given prior to estimation) takes a specific value. We also apply the estimators to real synthetic aperture radar data. This analysis shows that the estimators should be compared on their ability to correctly classify regions as rough, flat, or intermediate, rather than on their biases and mean squared errors.
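The G⁰ model and its roughness likelihood can be sketched in simplified form. A hedged illustration, not the thesis's setting: it uses the single-look (L = 1) intensity form G⁰_I rather than the amplitude form G⁰_A studied in the thesis, and it fixes the scale γ as known, in which case the density reduces to f(z) = -α γ^(-α) (γ + z)^(α-1) for α < 0 and the grid-search ML estimate is well behaved; the monotone-likelihood pathology the thesis analyzes arises in less forgiving settings. The parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_g0(alpha, gamma, n, rng):
    """Simulate single-look G0 intensity returns as gamma-distributed
    speckle times inverse-gamma backscatter (the G0 construction of
    Frery et al., 1997)."""
    backscatter = gamma / rng.gamma(-alpha, 1.0, n)  # InvGamma(-alpha, gamma)
    speckle = rng.exponential(1.0, n)                # Gamma(1, 1): L = 1
    return backscatter * speckle

def loglik(alpha, gamma, z):
    """Single-look G0 intensity log-likelihood in the roughness alpha < 0,
    with scale gamma known: f(z) = -alpha * gamma^(-alpha) * (gamma+z)^(alpha-1)."""
    n = len(z)
    return (n * np.log(-alpha) - n * alpha * np.log(gamma)
            + (alpha - 1.0) * np.sum(np.log(gamma + z)))

alpha_true, gamma = -3.0, 2.0
z = sample_g0(alpha_true, gamma, 5000, rng)

# Grid search for the ML estimate of the roughness parameter.
grid = np.linspace(-15.0, -0.2, 400)
alpha_hat = grid[np.argmax([loglik(a, gamma, z) for a in grid])]
```

Large negative α corresponds to flat regions and α near zero to rough ones, which is why classification into rough, flat, and intermediate regions is the natural yardstick for comparing estimators.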
|
3 |
Linear programming algorithms for detecting separated data in binary logistic regression models
Konis, Kjell Peter, January 2007 (has links)
This thesis is a study of the detection of separation among the sample points in binary logistic regression models. We propose a new algorithm for detecting separation and demonstrate empirically that it can be computed fast enough to be used routinely as part of the fitting process for logistic regression models. The parameter estimates of a binary logistic regression model fit using the method of maximum likelihood sometimes do not converge to finite values. This phenomenon (also known as monotone likelihood or infinite parameters) occurs because of a condition among the sample points known as separation. There are two classes of separation. When complete separation is present among the sample points, iterative procedures for maximizing the likelihood tend to break down, at which point it is clear that there is a problem with the model. However, when quasicomplete separation is present among the sample points, the iterative procedures for maximizing the likelihood tend to satisfy their convergence criterion before revealing any indication of separation. The new algorithm is based on a linear program with a nonnegative objective function that has a positive optimal value when separation is present among the sample points. We compare several approaches for solving this linear program and find that a method based on determining the feasibility of the dual to this linear program provides a numerically reliable test for separation among the sample points. A simulation study shows that this test can be computed in about the same time as fitting the binary logistic regression model using the method of iteratively reweighted least squares; hence the test is fast enough to be used routinely as part of the fitting procedure. An implementation of our algorithm (as well as the other methods described in this thesis) is available in the R package safeBinaryRegression.
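The linear-programming idea can be sketched compactly. A minimal illustration in Python with scipy (not the thesis's R implementation in safeBinaryRegression, and a simplified formulation rather than the dual-feasibility method the thesis recommends): maximize Σᵢ (2yᵢ − 1) xᵢ'β subject to (2yᵢ − 1) xᵢ'β ≥ 0 with β box-bounded so the LP stays finite; the optimal value is positive exactly when the sample points are separated, and zero when the classes overlap.

```python
import numpy as np
from scipy.optimize import linprog

def is_separated(X, y, tol=1e-8):
    """LP test for (complete or quasicomplete) separation in binary
    logistic regression: the data are separated iff the maximum of
    sum_i (2*y_i - 1) * x_i' beta, subject to (2*y_i - 1) * x_i' beta >= 0
    and -1 <= beta_j <= 1, is strictly positive."""
    X = np.asarray(X, dtype=float)
    s = 2.0 * np.asarray(y, dtype=float) - 1.0   # map y in {0,1} to {-1,+1}
    Z = s[:, None] * X                           # signed design matrix
    res = linprog(
        c=-Z.sum(axis=0),                        # linprog minimizes, so negate
        A_ub=-Z, b_ub=np.zeros(len(y)),          # enforce Z @ beta >= 0 row-wise
        bounds=[(-1.0, 1.0)] * X.shape[1],       # box bounds keep the LP bounded
        method="highs",
    )
    return bool(res.status == 0 and -res.fun > tol)
```

With an intercept column included, `is_separated([[1, 0], [1, 1], [1, 2], [1, 3]], [0, 0, 1, 1])` flags the perfectly split sample, while the interleaved labels `[0, 1, 0, 1]` on the same design are reported as overlapping.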
|