81 |
Inferência via Bootstrap na Conjoint Analysis / Inference by Bootstrap in Conjoint Analysis. Barbosa, Eduardo Campana. 14 December 2017
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The aim of this thesis was to introduce the Bootstrap resampling method into Conjoint Analysis. The text presents a conceptual review (Literature Review) of this methodology (Conjoint Analysis) and of the proposed method (Bootstrap). In addition, Chapters I and II define the theoretical and methodological aspects of Conjoint Analysis and of the Bootstrap method, illustrating the joint operation of these approaches in a real application with data from the food technology area. Additional inferences that were not previously available in the classical or frequentist setting can now be obtained by analysing the empirical distributions of the estimators of the Relative Importances (ratings-based approach) and of the Choice Probabilities and Choice Ratios (choice-based approach). Overall, the results showed that the Bootstrap method provided more precise point estimates and made both Conjoint Analysis approaches more informative, since standard error measures and, above all, confidence intervals could easily be obtained for certain quantities of interest, making it possible to perform statistical tests or comparisons on them.
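The resampling machinery behind these interval estimates can be sketched in a few lines. A minimal illustration in Python: the ratings data and the choice of statistic below are hypothetical placeholders, not the thesis's data.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05):
    """Nonparametric bootstrap: resample with replacement and take
    percentiles of the statistic's empirical distribution."""
    boots = np.array([
        stat(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    # Bootstrap standard error and percentile confidence interval.
    return boots.std(ddof=1), (lo, hi)

# Hypothetical ratings for one attribute; the statistic is its mean.
ratings = np.array([6.1, 5.8, 7.2, 6.5, 5.9, 6.8, 7.0, 6.3])
se, (lo, hi) = bootstrap_ci(ratings, np.mean)
print(se, lo, hi)
```

Two quantities' intervals overlapping (or not) then gives the kind of informal statistical comparison the abstract mentions.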
|
82 |
Robust spectrum sensing techniques for cognitive radio networks. Huang, Qi. January 2016
Cognitive radio is a promising technology that improves spectral utilisation by allowing unlicensed secondary users to access underutilised frequency bands in an opportunistic manner. This task is carried out through spectrum sensing: the secondary user periodically monitors the radio spectrum for the presence of primary users, to avoid harmful interference to the licensed service. Traditional energy-based sensing methods assume the noise power is known a priori. They suffer from the noise uncertainty problem, as even a mild noise level mismatch leads to significant performance loss. Hence, developing an efficient robust detection method is important. In this thesis, a novel sensing technique using the F-test is proposed. Assuming a receiver with multiple antennas, this detector uses the F-statistic as the test statistic, which offers absolute robustness against noise variance uncertainty. In addition, since the channel state information (CSI) must be known, the impact of CSI uncertainty is also discussed. Results show that the F-test based sensing method performs better than the energy detector and has a constant false alarm probability, independent of the accuracy of the CSI estimate. The other main topic of this thesis is the sensing problem under non-Gaussian noise. Most current sensing techniques assume Gaussian noise, as suggested by the central limit theorem (CLT) and convenient for mathematical tractability. However, this assumption sometimes fails to model the noise in practical wireless communication systems, which often exhibits non-Gaussian, heavy-tailed behaviour. In this thesis, several sensing algorithms are proposed for non-Gaussian noise. Firstly, a non-parametric eigenvalue-based detector is developed by exploiting the eigenstructure of the sample covariance matrix. This detector is blind, as no information about the noise, signal or channel is required.
In addition, the conventional energy detector and the aforementioned F-test based detector are generalised to non-Gaussian noise; these require the noise power and the CSI to be known, respectively. A major concern with these detection methods is controlling the false alarm probability. Although the test statistics are easy to evaluate, the corresponding null distributions are difficult to obtain, as they depend on the noise type, which may be unknown and non-Gaussian. In this thesis, we apply the powerful bootstrap technique to overcome this difficulty. The key idea is to reuse the data through resampling instead of repeating the experiment a large number of times. By using the nonparametric bootstrap to estimate the null distribution of the test statistic, the assumptions on the data model are minimised and no large-sample assumption is invoked. In addition, for the F-statistic based method, we also propose a degrees-of-freedom modification approach for approximating the null distribution. This method assumes a known noise kurtosis and yields closed-form solutions. Simulation results show that in non-Gaussian noise, all three detectors maintain the desired false alarm probability using the proposed algorithms. The F-statistic based detector performs best: for example, to achieve a 90% detection probability in Laplacian noise, it provides a 2.5 dB and a 4 dB signal-to-noise ratio (SNR) gain over the eigenvalue-based detector and the energy-based detector, respectively.
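The bootstrap route to false-alarm control can be sketched generically: resample an observed noise-only record to approximate the null distribution of the test statistic, then read the detection threshold off a quantile. A minimal sketch in Python with a simple energy statistic and Laplacian noise; the sample sizes, scale, and statistic are illustrative stand-ins, not the thesis's detectors.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B, pfa = 500, 1000, 0.1   # record length, bootstrap reps, target Pfa

# Observed noise-only record with heavy-tailed (Laplacian) noise.
record = rng.laplace(scale=1.0, size=N)

def energy(x):
    """Simple energy test statistic (mean square)."""
    return np.mean(x ** 2)

# Nonparametric bootstrap: resample the record with replacement to
# approximate the statistic's null distribution without any
# Gaussianity assumption, then set the threshold at the (1 - pfa)
# quantile of that empirical distribution.
boot = np.array([energy(rng.choice(record, N, replace=True))
                 for _ in range(B)])
threshold = np.quantile(boot, 1 - pfa)

# Empirical false-alarm rate on fresh noise-only records; this should
# come out roughly near the target pfa.
rate = np.mean([energy(rng.laplace(scale=1.0, size=N)) > threshold
                for _ in range(2000)])
print(threshold, rate)
```

The appeal is exactly what the abstract states: the same recipe works whatever the (unknown) noise distribution, since no closed-form null distribution is needed.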
|
83 |
Modelo ARFIMA Espaço-Temporal em Estudos de Poluição do Ar / Space-Time ARFIMA Model in Air Pollution Studies. MONROY, N. A. J. 28 August 2013
In air pollution studies it is common to observe data measured at different positions in space and time, as in the measurement of pollutant concentrations at a collection of monitoring stations. The dynamics of such observations can be represented by statistical models that account for the dependence between the observations at each location or region and those at neighbouring regions, as well as the dependence between sequentially measured observations. In this context, the class of Space-Time Autoregressive Moving Average (STARMA) models is very useful, since it explains the uncertainty in systems that exhibit complex variability on the temporal and spatial scales. A process with a STARMA representation extends the ARMA models for univariate time series: besides modelling a single series through time, its evolution over a spatial grid is also considered. The application of STARMA models in air pollution studies remains little explored. In this direction, this thesis proposes a class of space-time models that accounts for the long-memory features commonly observed in time series of atmospheric pollutant concentrations. The model is applied to real series of daily mean PM10 and SO2 concentrations in the Greater Vitória Region, ES, Brazil. The results showed that the dispersion dynamics of the studied pollutants can be well described by the STARMA and STARFIMA models proposed in this thesis. These model classes made it possible to estimate the influence of the pollutants on pollution levels in neighbouring regions. The STARFIMA process proved appropriate for the series under study, since they exhibit long memory in time. Incorporating this property into the model led to a significant improvement in fit and in forecasts, in both time and space.
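The long-memory component of ARFIMA-type models enters through the fractional difference operator (1-B)^d, whose series expansion has coefficients w_0 = 1, w_k = w_{k-1}(k-1-d)/k. A minimal univariate sketch in Python; the thesis's STARFIMA models additionally carry a spatial weighting structure that is omitted here.

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1-B)^d expanded as a power series in the
    backshift operator B: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the fractional difference (1-B)^d to a series x,
    truncating the expansion at the available sample."""
    w = frac_diff_weights(d, len(x))
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

w = frac_diff_weights(0.3, 5)
print(w)
```

For 0 < d < 0.5 the weights decay hyperbolically rather than geometrically, which is exactly the slow autocorrelation decay that motivates fitting a fractional d to pollutant series; d = 1 recovers the ordinary first difference.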
|
84 |
Hipótese de mercados eficientes / Efficient markets hypothesis. Cunha, Jefferson da. January 2002
Master's dissertation - Universidade Federal de Santa Catarina, Centro Sócio-Econômico, Programa de Pós-Graduação em Economia.
Perhaps the most controversial discussion in finance is the efficient markets hypothesis (EMH). This hypothesis implies that the current price of an asset fully reflects all publicly available information about the economic fundamentals that affect the asset's value; that is, modern finance theory, econometric studies and even models based on artificial intelligence would be useless for obtaining returns above the market average. However, there are few studies of the weak-form EMH that use strategies other than moving averages, regressions or stochastic oscillators, leaving more than a hundred technical analysis (TA) indicators and patterns without empirical study. The purpose of this work is to use candlestick patterns as an investment strategy in the Brazilian stock, foreign exchange and futures markets, using daily data between 01.04.1997 and 31.03.2002. The series studied consist of vectors of daily high, low, opening and closing quotes. Nothing resembling the present proposal was found in the literature. The bootstrap methodology is used for statistical validation of the results, and an overlapping procedure is used to extend the analysis period and assess the incidence of shocks. The study includes assets with different liquidity levels as well as the actual transaction costs of the Brazilian markets. The results challenge the EMH, showing strong evidence of the strategy's predictive power and surpassing the results of important academic works that have tested TA.
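One common way to bootstrap-validate a trading rule is the following: under the null of no predictive power, the mean return on signal days should look like the mean of an equally sized random draw from all days. A minimal sketch in Python with hypothetical returns and a hypothetical signal series; this is not the dissertation's data, candlestick definitions, or exact resampling design.

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_pvalue(returns, signal, n_boot=2000):
    """One-sided bootstrap p-value for the mean return on signal days,
    against a null built by unconditional resampling."""
    n_sig = int(signal.sum())
    observed = returns[signal].mean()
    null = np.array([rng.choice(returns, n_sig, replace=True).mean()
                     for _ in range(n_boot)])
    return observed, (null >= observed).mean()

# Hypothetical daily returns and a hypothetical pattern signal.
returns = rng.normal(0.0005, 0.02, size=1250)   # ~5 years of daily data
signal = rng.random(1250) < 0.05                # ~5% of days flagged
obs, p = bootstrap_pvalue(returns, signal)
print(obs, p)
```

A small p-value would indicate that returns following the pattern are larger than resampling chance allows, which is the kind of evidence against the weak-form EMH the abstract reports (after transaction costs, which this sketch ignores).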
|
85 |
Identificação de pontos influentes em uma amostra aleatória de pré-formas da distribuição Bingham complexa (distância de Cook e métodos bootstrap) / Identification of influential points in a random sample of preshapes from the complex Bingham distribution (Cook's distance and bootstrap methods). Patrícia Reyes Flórez, Olga. 31 January 2009
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The objective of this dissertation is to evaluate and apply influence analysis methods in statistical shape analysis. From the case deletion model (CDM), a Cook's distance measure is obtained when the data set follows the complex Bingham distribution. Monte Carlo simulations and the bootstrap method are used to estimate a confidence region for Cook's distance on different data sets with the complex Bingham distribution. The dissertation also shows that other influence analysis methods can work in shape analysis, and the effectiveness of Cook's distance relative to the presented methods is evaluated.
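For readers unfamiliar with the diagnostic, the classical OLS form of Cook's distance, which the dissertation extends to complex Bingham preshape data, can be sketched as follows. The regression data below are synthetic, with one influential case planted deliberately.

```python
import numpy as np

rng = np.random.default_rng(1)

def cooks_distance(X, y):
    """Classical Cook's distance for OLS: the influence of deleting
    each case, computed from leverages and residuals."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat (projection) matrix
    h = np.diag(H)                            # leverages
    resid = y - H @ y
    s2 = resid @ resid / (n - p)              # residual variance
    return resid ** 2 / (p * s2) * h / (1 - h) ** 2

X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=30)
y[0] += 5.0                                   # plant one influential case
d = cooks_distance(X, y)
print(np.argmax(d))
```

The planted case dominates the distances; the bootstrap's role in the dissertation is then to calibrate how large a distance must be before a case is flagged, instead of relying on a fixed cut-off.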
|
86 |
Aplicação do método Bootstrap na estimação da variância do estimador de razão / Application of the Bootstrap method in estimating the variance of the ratio estimator. Biscola, Jair. 20 March 1985
Advisor: Sebastião de Amorim / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Ciência da Computação
Abstract: Not informed. / Master's degree in Statistics (Mestre em Estatística)
|
87 |
Statistical inference and efficient portfolio investment performance. Liu, Shibo. January 2014
Two main methods have been used in mutual fund evaluation. One is portfolio evaluation; the other is data envelopment analysis (DEA). The history of portfolio evaluation dates from the 1960s, with emphasis on both expected return and risk. However, traditional portfolio analysis has drawn many criticisms, which focus on its sensitivity to the chosen benchmarks. Imperfections in portfolio analysis models have led to the exploration of other methodologies for evaluating fund performance, in particular data envelopment analysis (DEA), a non-parametric methodology for measuring relative performance based on mathematical programming. Based on the unique characteristics of investment trusts, Morey and Morey (1999) developed a mutual fund efficiency measure in a traditional mean-variance model. It was based on Markowitz portfolio theory and related the non-parametric methodologies to the foundations of traditional performance measurement in mean-variance space. The first application in this thesis applies the non-linear programming calculation of the efficient frontier in mean-variance space, outlined in Morey and Morey (1999), to a new modern data set comprising a multi-year sample of investment funds. One limitation of DEA is the absence of sampling error from the methodology. The second innovation in this thesis therefore extends the Morey and Morey (1999) model through bootstrapped probability density functions, in order to develop confidence intervals for the relative performance indicators. This had not previously been achieved for the DEA frontier in mean-variance space, so the DEA efficiency scores obtained through the Morey and Morey (1999) model had not hitherto been tested for statistical significance. The third application in this thesis examines the efficiency of investment trusts in order to analyse the factors contributing to investment trusts' performance and to detect the determinants of inefficiency.
Robust-OLS regression, Tobit models and Papke-Wooldridge (PW) models are estimated and compared to evaluate the contextual variables affecting the performance of investment funds. The thesis presents new and original Matlab code designed for the Morey and Morey (1999) models; the code not only produces the results but also shows clearly, in full detail, how the quadratic model is programmed.
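The bootstrap step can be illustrated generically. Below, the same resampling idea is applied to a simple mean-variance performance score: a Sharpe-ratio stand-in, not the Morey and Morey DEA efficiency score, and computed on hypothetical fund returns. The point is only how a confidence interval attaches to a performance indicator that has no closed-form sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

def score(r, rf=0.0):
    """Stand-in mean-variance performance score (a Sharpe ratio);
    the thesis instead bootstraps DEA efficiency scores obtained
    from a quadratic program in mean-variance space."""
    return (r.mean() - rf) / r.std(ddof=1)

def bootstrap_score_ci(r, n_boot=3000, alpha=0.05):
    """Percentile bootstrap confidence interval for the score."""
    boots = np.array([score(rng.choice(r, len(r), replace=True))
                      for _ in range(n_boot)])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

# Hypothetical monthly excess returns for one fund (10 years).
r = rng.normal(0.006, 0.04, size=120)
lo, hi = bootstrap_score_ci(r)
print(score(r), lo, hi)
```

With intervals like these, two funds' scores can be compared for statistical significance rather than ranked on point estimates alone, which is the gap in the DEA literature the thesis addresses.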
|
88 |
Quasi-Monte Carlo methods for bootstrap. Yam, Chiu Yu. 01 January 2000
No description available.
|
89 |
Some problems in the theory of nuclear interactions. Barrett, S. M. January 1967
No description available.
|
90 |
Extremal and probabilistic bootstrap percolation. Przykucki, Michał Jan. January 2013
In this dissertation we consider several extremal and probabilistic problems in bootstrap percolation on various families of graphs, including grids, hypercubes and trees. Bootstrap percolation is one of the simplest cellular automata. The most widely studied model is the so-called r-neighbour bootstrap percolation, in which we consider the spread of infection on a graph G according to the following deterministic rule: infected vertices of G remain infected forever and in successive rounds healthy vertices with at least r already infected neighbours become infected. Percolation is said to occur if eventually every vertex is infected. In Chapter 1 we consider a particular extremal problem in 2-neighbour bootstrap percolation on the n \times n square grid. We show that the maximum time an infection process started from an initially infected set of size n can take to infect the entire vertex set is equal to the integer nearest to (5n^2-2n)/8. In Chapter 2 we relax the condition on the size of the initially infected sets and show that the maximum time for sets of arbitrary size is 13n^2/18+O(n). In Chapter 3 we consider a similar problem, namely the maximum percolation time for 2-neighbour bootstrap percolation on the hypercube. We give an exact answer to this question showing that this time is \lfloor n^2/3 \rfloor. In Chapter 4 we consider the following probabilistic problem in bootstrap percolation: let T be an infinite tree with branching number \br(T) = b. Initially, infect every vertex of T independently with probability p > 0. Given r, define the critical probability, p_c(T,r), to be the value of p at which percolation becomes likely to occur. Answering a problem posed by Balogh, Peres and Pete, we show that if b \geq r then the value of b itself does not yield any non-trivial lower bound on p_c(T,r). In other words, for any \varepsilon > 0 there exists a tree T with branching number \br(T) = b and critical probability p_c(T,r) < \varepsilon. 
However, in Chapter 5 we prove that this is false if we limit ourselves to the well-studied family of Galton--Watson trees. We show that for every r \geq 2 there exists a constant c_r>0 such that if T is a Galton--Watson tree with branching number \br(T) = b \geq r then \[ p_c(T,r) > \frac{c_r}{b} e^{-\frac{b}{r-1}}. \] We also show that this bound is sharp up to a factor of O(b) by describing an explicit family of Galton--Watson trees with critical probability bounded from above by C_r e^{-\frac{b}{r-1}} for some constant C_r>0.
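The 2-neighbour process on a grid is easy to simulate directly. A minimal sketch in Python; the diagonal seed below is the classic percolating set of size n, while the extremal times quoted in the abstract are proved results, not simulation outputs.

```python
import numpy as np

def percolation_time(infected, r=2):
    """Run r-neighbour bootstrap percolation on a grid with
    4-neighbour adjacency. Returns (percolated, rounds)."""
    grid = infected.copy()
    t = 0
    while True:
        # Count infected orthogonal neighbours of every cell.
        nb = np.zeros_like(grid, dtype=int)
        nb[1:, :] += grid[:-1, :]
        nb[:-1, :] += grid[1:, :]
        nb[:, 1:] += grid[:, :-1]
        nb[:, :-1] += grid[:, 1:]
        # Infected cells stay infected; healthy cells with at least
        # r infected neighbours become infected.
        new = grid | (nb >= r)
        if (new == grid).all():
            return grid.all(), t
        grid, t = new, t + 1

n = 10
diag = np.eye(n, dtype=bool)       # diagonal seed of size n
ok, t = percolation_time(diag)
print(ok, t)
```

The diagonal infects one further off-diagonal per round, so it percolates in n-1 rounds; the sets achieving the maximum time in Chapter 1 are much slower than this.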
|