21

Applications of conic finance on the South African financial markets / by Masimba Energy Sonono.

Sonono, Masimba Energy, January 2012
Conic finance is a new quantitative finance theory. This thesis concerns applications of conic finance to the South African financial markets. Conic finance offers a new perspective on the way financial markets should be perceived: particularly in incomplete markets, where prices are non-unique and residual risk is rampant, it plays a crucial role in providing prices that are acceptable at a given stress level. The theory assumes that price depends on the direction of trade, so there are two prices: one for buying from the market, the ask price, and one for selling to the market, the bid price. The bid-ask spread reflects the substantial cost of the unhedgeable risk present in the market. The hypothesis considered in this thesis is whether conic finance can reduce residual risk. Conic finance models bid-ask prices of cashflows by applying the theory of acceptability indices to those cashflows. The theory of acceptability combines elements of arbitrage pricing theory and expected utility theory: the set of arbitrage opportunities is extended to the set of all opportunities that a wide range of market participants are prepared to accept. The preferences of the market participants are captured by utility functions, which lead to the concepts of acceptance sets and the associated coherent risk measures. The acceptance sets (market preferences) are modelled using sets of probability measures; the set accepted by all market participants is the intersection of all these sets, which is convex. The size of this set is characterized by an index of acceptability, which allows one to speak of cashflows acceptable at a given level, known as the stress level. The relevant set of probability measures that can value the cashflows properly is found through the use of distortion functions.
In the first chapter, we introduce the theory of conic finance and build a foundation that leads to the problem and objectives of the thesis. Chapter two builds on this foundation and explains in depth the theory of acceptability indices and coherent risk measures; coherent risk measures are discussed briefly, since the theory of acceptability indices builds on them, and some new acceptability indices are also introduced in this chapter. Chapter three shifts the focus to mathematical tools for financial applications. It can be seen as a prerequisite, as it bridges the gap from mathematical tools in complete markets to incomplete markets, the setting that conic finance theory aims to exploit; the chapter ends with the models used for continuous-time modelling and the simulation of stochastic processes. Chapter four focuses on the numerical methods relevant to the thesis: obtaining parameters by the maximum likelihood method and calibrating them to market prices, option pricing by Fourier transform methods, and finally the bid-ask formulas relevant to the thesis. Most of the numerical implementations were carried out in Matlab. Chapter five introduces option trading strategies, with illustrations and explanations of the possible scenarios at the expiration date for the different strategies. Chapter six is the apex of the thesis, where results from possible real-market scenarios are presented and discussed. Only numerical results are reported, as empirical experiments could not be done owing to the limited availability of real market data.
The findings from the numerical experiments show that the spreads from conic finance are reduced, resulting in reduced residual risk and a lower cost of entering the trading strategies. The thesis ends, in chapter seven, with a formal discussion of the findings and possible directions for further research. / Thesis (MSc (Risk Analysis))--North-West University, Potchefstroom Campus, 2013.
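The distorted-expectation bid-ask mechanism described in this abstract can be illustrated in a few lines. The sketch below is not the thesis's own code: it uses the MINMAXVAR distortion of Cherny and Madan, a standard choice in conic finance, on a small hypothetical payoff sample. The bid is the expectation of the sorted payoffs under the distorted empirical distribution, and the ask is minus the bid of the negated cashflow.

```python
def minmaxvar(u, gamma):
    # MINMAXVAR concave distortion: psi(u) = 1 - (1 - u**(1/(1+gamma)))**(1+gamma).
    # gamma is the stress level; gamma = 0 gives the identity (risk-neutral pricing).
    return 1.0 - (1.0 - u ** (1.0 / (1.0 + gamma))) ** (1.0 + gamma)

def bid_price(payoffs, gamma):
    # Expectation of the sorted payoffs under the distorted empirical CDF:
    # low outcomes receive inflated weight, so the bid is conservative.
    xs = sorted(payoffs)
    n = len(xs)
    price = 0.0
    for i, x in enumerate(xs):
        w = minmaxvar((i + 1) / n, gamma) - minmaxvar(i / n, gamma)
        price += w * x
    return price

def ask_price(payoffs, gamma):
    # Ask is minus the bid of the negated cashflow.
    return -bid_price([-x for x in payoffs], gamma)

# Hypothetical residual-cashflow sample (not market data).
payoffs = [-2.0, -0.5, 0.0, 1.0, 3.0]
b, a = bid_price(payoffs, 0.5), ask_price(payoffs, 0.5)
mean = sum(payoffs) / len(payoffs)
# The concave distortion overweights losses, so bid <= risk-neutral mean <= ask.
```

At stress level zero the distortion is the identity and both prices collapse to the sample mean; raising gamma widens the spread, which is the quantity the thesis studies.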
23

Modelos não lineares para dados de contagem longitudinais / Nonlinear models for longitudinal count data

Ana Maria Souza de Araujo, 16 February 2007
Experiments in which measurements are taken repeatedly on the same experimental unit are common in agronomy. The statistical techniques used to analyse data from such experiments are called repeated measures analyses; a particular case is the longitudinal study, in which the same response variable is observed on several occasions over time. The longitudinal behaviour may follow a nonlinear pattern, as frequently happens in growth studies. Experiments in which the response variable is a count are also common. This work addresses the modelling of count data obtained from experiments with measurements repeated over time, in which the longitudinal behaviour of the response variable is nonlinear. The multivariate Poisson distribution, with equal covariances between measurements, was used to account for the dependence between the components of the repeated-measures observation vector in each experimental unit. The model proposed by Karlis and Meligkotsidou (2005) was extended to longitudinal data from completely randomized experiments. Models for randomized block experiments, assuming fixed or random block effects, were also proposed. The occurrence of overdispersion was considered and modelled through the mixed multivariate Poisson distribution. Parameter estimation was carried out by the maximum likelihood method, via the EM algorithm.
The methodology was applied to simulated data for each of the situations studied and to a data set from a randomized block experiment in which the number of leaves of bromeliads was observed at six instants in time. The method proved efficient in estimating the parameters for the completely randomized design, including under overdispersion, and for the randomized block design with fixed block effects without overdispersion and with random block effects. However, the estimation for the model with fixed block effects in the presence of overdispersion, and for the variance parameter of the random block effect, still needs to be improved.
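The common-covariance multivariate Poisson construction this abstract relies on can be sketched by adding one shared Poisson component to independent ones; the shared component is what induces the equal covariances between repeated measurements. The rates below are illustrative, not from the thesis.

```python
import math
import random

def rpois(lam, rng):
    # Knuth's inversion-by-multiplication Poisson sampler; fine for small lam.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def multivariate_poisson(lams, lam0, rng):
    # Common-covariance construction: X_j = Y_j + Z with a shared Z ~ Poisson(lam0),
    # so Cov(X_i, X_j) = lam0 for every pair i != j, while X_j ~ Poisson(lam_j + lam0).
    z = rpois(lam0, rng)
    return [rpois(lam, rng) + z for lam in lams]

rng = random.Random(42)
# Three "occasions" per experimental unit, with illustrative rates.
sample = [multivariate_poisson([2.0, 3.0, 4.0], 1.5, rng) for _ in range(20000)]
n = len(sample)
means = [sum(x[j] for x in sample) / n for j in range(3)]
cov01 = sum(x[0] * x[1] for x in sample) / n - means[0] * means[1]
# E[X_j] = lam_j + lam0 (first component ~3.5); Cov(X_0, X_1) ~ lam0 = 1.5.
```

The EM algorithm used in the thesis treats the unobserved shared component Z as the missing data; this simulation only demonstrates the dependence structure being estimated.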
24

Estimativa do valor da taxa de penetrância em doenças autossômicas dominantes: estudo teórico de modelos e desenvolvimento de um programa computacional / Penetrance rate estimation for autosomal dominant diseases: study of models and development of a computer program

Andréa Roselí Vançan Russo Horimoto, 17 September 2009
The main objective of this dissertation was the development of a computer program, in Microsoft Visual Basic 6.0 (executable version), for estimating the penetrance rate from the analysis of genealogies with cases of autosomal dominant diseases. Although many of the algorithms used in the program are based on ideas already published in the literature (mostly by researchers and graduate students of the Laboratory of Human Genetics, Institute of Biosciences, University of São Paulo), we developed some new methods to deal with situations found fairly frequently in published pedigrees, such as: a) absence of information on the phenotype of the individual generating the genealogy; b) grouping of trees of normal individuals without a description of the distribution of offspring among the parents; c) analysis of genealogy structures containing consanguineous unions, using a method alternative to the one described in the literature; d) determination of general solutions for the likelihood functions of trees of normal individuals with regular branching and for the heterozygosity probabilities of any individual belonging to those trees. Besides the executable version, the program, named PenCalc, is also presented in a web version (PenCalc Web), prepared in collaboration with the dissertation supervisor and the undergraduate student Marcio T. Onodera (main author of that version), which additionally provides the heterozygosity probabilities and the offspring risk for all individuals of the genealogy. The web version can be accessed freely at http://www.ib.usp.br/~otto/pencalcweb.
We also developed a model with a generation-dependent penetrance rate, since inspection of families with some autosomal dominant diseases, such as the ectrodactyly-tibial hemimelia syndrome (ETH), suggests the existence of a phenomenon similar to anticipation in relation to the penetrance rate. The models with constant and variable penetrance rates, and practically all the methods developed in this dissertation, were applied to 21 pedigrees of families affected by ETH and to the pooled information from all these genealogies (meta-analysis), obtaining penetrance rate estimates in all cases.
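The core of penetrance estimation from pedigrees can be illustrated in miniature. This is not the PenCalc algorithm, which handles whole genealogies; it is the textbook special case of offspring from affected-heterozygote by unaffected matings, where each child is affected with probability K/2 and the binomial maximum likelihood estimate of the penetrance K is closed-form. The counts are hypothetical.

```python
import math

def penetrance_mle(affected, total):
    # Offspring of Aa x aa matings under autosomal dominant inheritance are
    # affected with probability K/2, K being the penetrance rate, so the
    # binomial MLE is K_hat = 2 * affected / total (capped at 1).
    return min(2.0 * affected / total, 1.0)

def log_likelihood(k, affected, total):
    # Binomial log-likelihood of the affected count, up to the binomial coefficient.
    p = k / 2.0
    return affected * math.log(p) + (total - affected) * math.log(1.0 - p)

# Hypothetical pooled sibship data: 14 affected among 40 at-risk offspring.
k_hat = penetrance_mle(14, 40)  # 0.7
```

Real pedigree likelihoods multiply such terms over all matings and sum over the unknown genotypes of unaffected carriers, which is what makes the general solutions mentioned in the abstract nontrivial.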
25

Užití modelů diskrétních dat / Application of count data models

Reichmanová, Barbora, January 2018
When analysing data on plant growth in a row of given length, we should consider both the probability that a seed germinates successfully and the random number of seeds that were sown. The whole thesis is therefore devoted to the analysis of random sums, in which the number of independent identically distributed summands is a random variable independent of them. The first part covers the theoretical background: it defines the notion of a random sum and presents its properties, such as numerical measures of location and the functional characteristics describing the distribution. Parameter estimation by the maximum likelihood method and generalized linear models are then discussed; the quasi-likelihood method is also briefly mentioned. This part is illustrated with examples related to the motivating problem. The last chapter is devoted to an application to real data and the subsequent analysis.
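The random-sum model motivating this thesis, with a Poisson number of sown seeds each germinating independently, can be sketched as follows. By Poisson thinning the total is again Poisson, which gives a simple check of Wald's identities; the rates here are illustrative, not from the thesis.

```python
import random

def random_sum(lam, p, rng):
    # N ~ Poisson(lam) seeds sown (N counted via unit-rate exponential arrivals
    # before time lam); each seed germinates independently with probability p,
    # and the random sum S = X_1 + ... + X_N counts the successes.
    n = 0
    t = rng.expovariate(1.0)
    while t < lam:
        n += 1
        t += rng.expovariate(1.0)
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(7)
lam, p = 10.0, 0.3
draws = [random_sum(lam, p, rng) for _ in range(20000)]
m = sum(draws) / len(draws)
v = sum((d - m) ** 2 for d in draws) / len(draws)
# Wald: E[S] = E[N]*E[X] = lam*p; Var(S) = E[N]*Var(X) + Var(N)*E[X]^2 = lam*p.
# Equivalently, by Poisson thinning S ~ Poisson(lam*p), so both are ~3 here.
```

The Bernoulli summand matches the germination story; for general summands only the two Wald identities survive, and S is no longer Poisson.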
26

Statistická analýza výběrů ze zobecněného exponenciálního rozdělení / Statistical analysis of samples from the generalized exponential distribution

Votavová, Helena, January 2014
The diploma thesis studies the generalized exponential distribution as an alternative to the Weibull and log-normal distributions. The basic characteristics of this distribution and methods of parameter estimation are described, and a separate chapter is devoted to goodness-of-fit tests. The second part of the thesis deals with censored samples, with worked examples given for the exponential distribution. The case of type I left censoring, which has not yet been published, is then studied: for this special case, simulations are carried out with a detailed description of its properties and behaviour, the EM algorithm is derived for this distribution, and its efficiency is compared with the maximum likelihood method. The developed theory is applied to the analysis of environmental data.
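A minimal sketch of maximum likelihood fitting for the (uncensored) generalized exponential distribution F(x) = (1 - exp(-lam*x))**alpha: for fixed lam the MLE of alpha is closed-form, so the profile likelihood can be maximized in lam by a one-dimensional search. This is an illustration with simulated data and assumed true parameters, not the thesis's censored-data EM algorithm.

```python
import math
import random

def ge_sample(alpha, lam, n, rng):
    # Inverse CDF of F(x) = (1 - exp(-lam*x))**alpha.
    return [-math.log(1.0 - rng.random() ** (1.0 / alpha)) / lam for _ in range(n)]

def profile_loglik(lam, xs):
    # For fixed lam the MLE of alpha is closed-form: alpha = -n / s with
    # s = sum(log(1 - exp(-lam*x))); plug it back into the log-likelihood.
    s = sum(math.log(1.0 - math.exp(-lam * x)) for x in xs)
    n = len(xs)
    alpha = -n / s
    ll = n * math.log(alpha) + n * math.log(lam) - lam * sum(xs) + (alpha - 1.0) * s
    return ll, alpha

def ge_fit(xs, lo=0.01, hi=20.0, iters=80):
    # Golden-section search on the profile log-likelihood in lam.
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if profile_loglik(c, xs)[0] > profile_loglik(d, xs)[0]:
            b = d
        else:
            a = c
    lam = 0.5 * (a + b)
    return profile_loglik(lam, xs)[1], lam  # (alpha_hat, lam_hat)

rng = random.Random(1)
xs = ge_sample(2.0, 1.5, 4000, rng)   # assumed true values: alpha = 2, lam = 1.5
alpha_hat, lam_hat = ge_fit(xs)
```

Under left censoring the factor for a censored observation becomes F(c) itself, which breaks the closed form for alpha; that is the gap the thesis's EM algorithm fills.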
27

Asymptotic Analysis for Nonlinear Spatial and Network Econometric Models

Xu, Xingbai, 28 September 2016
No description available.
28

Métodos de estimação de parâmetros em modelos geoestatísticos com diferentes estruturas de covariâncias: uma aplicação ao teor de cálcio no solo. / Parameter estimation methods in geostatistical models with different covariance structures: an application to the soil calcium content.

Oliveira, Maria Cristina Neves de, 17 March 2003
Understanding the spatial dependence of soil properties is increasingly required by researchers who aim to improve the interpretation of field-experiment results, thus providing support for new research at reduced cost. In general, variables such as the soil calcium content studied in this work show great variability, which most of the time prevents the detection of real statistical differences between treatment effects. Considering georeferenced samples is an important approach in the analysis of data of this nature, since closer samples are more similar than distant ones and each realization of the variable therefore carries information about its neighbourhood.
In this work, geostatistical methods based on the modelling of spatial dependence, on Gaussian assumptions and on maximum likelihood estimators are used to analyse and interpret the variability of the soil calcium content from an experiment carried out at Fazenda Angra, in the State of Rio de Janeiro, Brazil. The experimental area was divided into three regions according to the different fertilization periods. Calcium content data from the 0-20 cm and 20-40 cm soil layers were used, indexed by north and east coordinates. Linear mixed models, appropriate for data with this structure, were used; they allow different covariance structures and the incorporation of region and of a linear trend in the coordinates. The covariance structures used were the exponential and the Matérn. Maximum likelihood, restricted maximum likelihood and the profile likelihood were used to estimate the parameters and evaluate their variability. The identification of the dependence and the prediction were carried out by means of variograms and kriging maps. Model selection was based on the Akaike information criterion and the likelihood ratio test. With the maximum likelihood method, the best model was the one with the region covariate; with restricted maximum likelihood, it was the model with the region covariate and a linear trend in the coordinates (model 2). For the calcium content in the 0-20 cm layer with the exponential covariance structure, the smallest nugget variances and the largest spatial variance (sill minus nugget) were obtained. With the maximum likelihood method and model 2, more precise prediction variances were observed, and the profile likelihood showed smaller variability of the variogram parameters fitted with model 2. When several models and covariance structures are used, care is needed, because the precision of the estimates depends on the sample size and on the specification of the model for the mean. The results were obtained with the geoR package developed by Ribeiro Junior & Diggle (2000), which provided reliable estimates for the parameters of the different models.
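The likelihood machinery behind such geostatistical fits can be sketched directly: build the exponential covariance matrix from the sampling coordinates and evaluate the Gaussian log-likelihood that ML (and, on transformed data, REML) maximizes over the covariance parameters. The coordinates, data and parameter values below are hypothetical, and this is a sketch of the computation rather than of the geoR package itself.

```python
import numpy as np

def exp_cov(coords, sill, range_, nugget):
    # Exponential covariance: C(h) = sill * exp(-h / range_), plus a nugget
    # effect on the diagonal (measurement error / micro-scale variation).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sill * np.exp(-d / range_) + nugget * np.eye(len(coords))

def gaussian_loglik(y, X, beta, cov):
    # Log-likelihood of y ~ N(X beta, cov); ML profiles or maximizes this
    # jointly over beta and the covariance parameters.
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(cov)
    n = len(y)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + r @ np.linalg.solve(cov, r))

# Hypothetical sampling locations (metres) and calcium values; constant-mean model.
coords = np.array([[0.0, 0.0], [0.0, 20.0], [20.0, 0.0], [20.0, 20.0], [10.0, 10.0]])
C = exp_cov(coords, sill=2.0, range_=15.0, nugget=0.5)
X = np.ones((5, 1))
y = np.array([4.1, 3.8, 4.5, 4.0, 4.2])
ll = gaussian_loglik(y, X, np.array([4.1]), C)
```

Swapping `exp_cov` for a Matérn kernel, or adding coordinate columns to `X` for the linear trend, reproduces the model variants compared in the abstract.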
29

Revision Moment for the Retail Decision-Making System

Juszczuk, Agnieszka Beata; Tkacheva, Evgeniya, January 2010
In this work we address problems of loan-origination decision-making systems. In accordance with the basic principles of the loan origination process, we consider the main rules for estimating a client's parameters, a change-point problem for given data, and a disorder-moment detection problem for real-time observations. In the first part of the work, the main principles of parameter estimation are given, and the change-point problem is considered for a given sample in discrete and continuous time using the maximum likelihood method. In the second part, the disorder-moment detection problem for real-time observations is treated as a disorder problem for a non-homogeneous Poisson process. The corresponding optimal stopping problem is reduced to a free-boundary problem with a complete analytical solution for the case in which the intensity of defaults increases. A scheme for real-time detection of the disorder moment is then given.
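The change-point part of such an analysis can be illustrated with an offline maximum likelihood split for Poisson counts, a simplification of the sequential disorder problem treated in the thesis; the counts below are synthetic.

```python
import math

def poisson_changepoint(counts):
    # Maximum-likelihood change point for a sequence of Poisson counts:
    # split at k, estimate a rate on each side, keep the best split.
    # Segment log-likelihood up to terms not depending on the split:
    # s*log(lam_hat) - n*lam_hat with lam_hat = s/n.
    def seg_ll(seg):
        n, s = len(seg), sum(seg)
        if s == 0:
            return 0.0  # limit of the log-likelihood as lam_hat -> 0
        lam = s / n
        return s * math.log(lam) - n * lam
    best_k, best = None, -math.inf
    for k in range(1, len(counts)):
        v = seg_ll(counts[:k]) + seg_ll(counts[k:])
        if v > best:
            best_k, best = k, v
    return best_k

# Synthetic default counts: the intensity jumps from ~1 to ~5 after index 6.
counts = [1, 0, 2, 1, 0, 1, 5, 6, 4, 7, 5, 6]
k = poisson_changepoint(counts)  # -> 6
```

The sequential (real-time) version replaces this retrospective scan with an optimal stopping rule, which is what the free-boundary reduction in the abstract solves.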
30

Projeção diamétrica com base em dados observados antes e após o desbaste em povoamentos de eucalipto / Diameter projection based on data observed before and after thinning in eucalyptus stands

Lacerda, Talles Hudson Souza, 16 February 2017
Área de concentração (area of concentration): Manejo florestal e silvicultura (forest management and silviculture). / Funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
The objective of this work was to evaluate, from the statistical and biological points of view, simulations performed by two diameter-distribution models, fitted by the linear approximation and maximum likelihood methods, in eucalyptus plantations submitted to thinning. The data came from a hybrid stand of Eucalyptus grandis x Eucalyptus urophylla under a thinning regime, located in the northeast of Bahia and belonging to the company BAHIA SPECIALTY CELLULOSE. Measurements were taken at ages 27, 40, 50, 61, 76, 87, 101, 112, 122, 137, 147, 158 and 165 months. The stand was submitted to selective removal treatments of 20%, 35% and 50% at ages 58 and 142 months. Two diameter-distribution models were used, with data bases observed at 27 months (before the first thinning), at 61 months (after the first thinning) and at 147 months (after the second thinning). From the models, three systems were generated, which differed in the method used to fit the Weibull function: in system 1 the Weibull parameters were fitted by the linear approximation method, while in systems 2 and 3 they were fitted by the maximum likelihood method. The projections produced by the systems were compared with the observed diameter distributions by means of the Kolmogorov-Smirnov goodness-of-fit test at 1% significance and Graybill's F test at 5% significance. The three systems produced projected diameter distributions statistically similar to those observed, before and after the thinnings. System 2 showed the highest percentage of non-significant projections for the two statistical tests employed. The simulations showed statistical realism and reproduced the growth trend of the diameter distribution for the different thinning percentages. The models were more efficient when the diameter distributions observed at ages immediately before thinning were used. Projections that took as their initial base the distributions observed before the first thinning and immediately after each thinning (simulations 1, 2 and 3) were more precise than projections that used only the distribution observed before the first thinning as the initial base, then simulated the thinnings at the scheduled ages and, finally, used the estimated post-thinning distribution as the base for projecting the distributions to subsequent ages (simulations 4, 5 and 6).
/ Dissertação (Mestrado) - Programa de Pós-Graduação em Ciência Florestal, Universidade Federal dos Vales do Jequitinhonha e Mucuri, 2017.
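The maximum likelihood fit of the Weibull function used by systems 2 and 3 can be sketched with the standard profile score equation for the shape parameter; this is a generic illustration on simulated diameters with assumed true parameters, not the thesis's code.

```python
import math
import random

def weibull_mle(xs, c_lo=0.05, c_hi=20.0, iters=60):
    # Standard ML profile for the two-parameter Weibull: solve the score
    # equation for the shape c,
    #   sum(x^c ln x)/sum(x^c) - 1/c - mean(ln x) = 0,
    # by bisection (the left-hand side is increasing in c), then recover
    # the scale as b = (mean(x^c))^(1/c).
    n = len(xs)
    mean_ln = sum(math.log(x) for x in xs) / n
    def score(c):
        sc = sum(x ** c for x in xs)
        scl = sum((x ** c) * math.log(x) for x in xs)
        return scl / sc - 1.0 / c - mean_ln
    lo, hi = c_lo, c_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    c = 0.5 * (lo + hi)
    b = (sum(x ** c for x in xs) / n) ** (1.0 / c)
    return c, b

# Synthetic "diameters" in cm: Weibull(shape=3, scale=15) via the inverse CDF.
rng = random.Random(3)
xs = [15.0 * (-math.log(1.0 - rng.random())) ** (1.0 / 3.0) for _ in range(3000)]
c_hat, b_hat = weibull_mle(xs)
```

Projection systems like those in the abstract re-fit or update these two parameters at each age; the linear approximation method of system 1 replaces this score-equation solve with a regression on order statistics.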
