  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Análise de contagens multivariadas. / Multivariate count analysis.

Linda Lee Ho 15 September 1995
Regression models are presented for the statistical analysis of multivariate counts arising from several populations. Two classes of probability models are considered for the response vector: the multivariate Poisson distribution and the multivariate Poisson log-normal distribution. The latter admits both negative and positive correlations between components of the response vector, whereas the usual distributions for count data (such as the multivariate Poisson) admit only positive correlation. Estimation methods and hypothesis tests for the model parameters are discussed for the bivariate case. The regression models are applied to counts of two types of defects in 100 g of textile fibres produced by four cracking machines, two from one manufacturer and two from another, and the results of the different regression models are compared. Finally, to study the behaviour of the parameter estimates of the Poisson log-normal distribution, samples were simulated from this distribution.
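A minimal Python sketch, not taken from the thesis, of the mechanism the abstract highlights: because the latent log-rates are bivariate normal, their correlation, positive or negative, carries over to the Poisson counts. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def rpoisson_lognormal(n, mu, sigma, rho):
    """Simulate n bivariate Poisson log-normal pairs.

    mu, sigma : length-2 sequences with the mean and std. dev. of the latent
                log-rates; rho is their correlation (may be negative).
    """
    cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1]**2]])
    log_rates = rng.multivariate_normal(mu, cov, size=n)
    return rng.poisson(np.exp(log_rates))

# Negative latent correlation -> negatively correlated counts,
# something the multivariate Poisson model cannot represent.
counts = rpoisson_lognormal(5000, mu=[1.0, 1.2], sigma=[0.5, 0.4], rho=-0.7)
print(np.corrcoef(counts[:, 0], counts[:, 1])[0, 1])  # clearly negative
```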
92

Estudo da estabilidade da reação industrial de formação de óxido de etileno a partir do gerenciamento das variáveis críticas de processo. / Stability study of ethylene oxide industrial reaction from the management of critical process variables.

Luciano Gonçalves Ribeiro 03 October 2013
The performance of an ethylene oxide production process is normally evaluated through the selectivity of the reaction. In this work, a production unit was studied with the aim of maximizing selectivity by acting on the main process variables. A statistical analysis of a set of process data showed that four variables (oxygen flow, recycle gas flow, reaction temperature and chloride content) have the greatest influence on selectivity and explain more than 60% of the variation observed in the production process. Based on this analysis, multiple linear regression models were developed and tested to represent the behaviour of the process as a function of these four variables alone. The proposed empirical model was validated both statistically and phenomenologically, showing consistency with the process data. The model was also broken down into 24 sub-models representing possible operating conditions of the unit, and response surfaces were built for each of them, making it possible to define how the four critical variables should be managed jointly so as to obtain the maximum possible selectivity for the reaction under each operating scenario.
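A hedged sketch, not the author's actual model, of the mechanics described above: fit a multilinear regression of selectivity on the four critical variables, then sweep two of them over a grid to build a response surface. The data, units and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
o2 = rng.normal(100, 5, n)          # oxygen flow
recycle = rng.normal(500, 20, n)    # recycle gas flow
temp = rng.normal(230, 4, n)        # reaction temperature
chl = rng.normal(2.0, 0.3, n)       # chloride content
sel = (80 + 0.03 * o2 - 0.004 * recycle
       - 0.10 * (temp - 230) - 0.8 * chl + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), o2, recycle, temp, chl])
beta, *_ = np.linalg.lstsq(X, sel, rcond=None)      # OLS coefficients

# Response surface: vary temperature and chloride content, hold the two flows
# at their mean values, and locate the grid maximum of predicted selectivity.
T, C = np.meshgrid(np.linspace(temp.min(), temp.max(), 50),
                   np.linspace(chl.min(), chl.max(), 50))
surface = (beta[0] + beta[1] * o2.mean() + beta[2] * recycle.mean()
           + beta[3] * T + beta[4] * C)
i, j = np.unravel_index(np.argmax(surface), surface.shape)
print(f"max predicted selectivity {surface[i, j]:.2f} at T={T[i, j]:.1f}, Cl={C[i, j]:.2f}")
```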
93

Modelagem da biomassa e da quantidade de carbono de clones de Eucalyptus da Chapada do Araripe-PE / Modelling the biomass and carbon content of Eucalyptus clones from the Chapada do Araripe, Pernambuco

SILVA, José Wesley Lima 25 February 2016
The objective of this study was to quantify and test different regression models for estimating the biomass and the amount of carbon in the aerial parts of Eucalyptus clones planted in the Northeastern semi-arid region of Brazil, and to select the best equations based on the adjusted R², the Akaike information criterion (AIC), the Furnival index (FI), graphical analysis of the residuals, and the Shapiro-Wilk, Breusch-Pagan and Durbin-Watson tests. The data came from an experiment with 15 Eucalyptus spp. clones conducted at the Experimental Station of the Agricultural Research Institute of Pernambuco (IPA), in the municipality of Araripina, Pernambuco. Seventy-five trees were selected through a completely random sampling process; their fresh weights were determined and samples of leaves, branches, bark and bole were collected to determine average wood density, biomass and carbon content. The most productive clone in terms of biomass and carbon was the natural-crossing hybrid of E. urophylla. Average biomass accumulation in the plantation was 59.64 t ha−1 and the amount of carbon 24.96 t ha−1. The fitted regression models showed that each partition had its own pattern of dry biomass and total carbon production, so it was not possible to select a single model to represent all parts of the trees. For biomass, the Schumacher and Hall, Spurr, logistic and exponential (model 11) equations gave the best fits; for the amount of organic carbon, model 6 and exponential model 11 fitted most of the aerial components best.
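A self-contained sketch, with made-up tree measurements rather than the thesis data, of fitting the Schumacher and Hall biomass model mentioned above in its usual log-linear form, ln(B) = b0 + b1·ln(dbh) + b2·ln(h), and computing an AIC-type criterion for model selection.

```python
import numpy as np

# Hypothetical measurements: dbh (cm), total height (m), dry biomass (kg)
dbh = np.array([6.1, 7.4, 8.0, 9.3, 10.5, 11.2, 12.8, 14.0])
height = np.array([7.2, 8.1, 8.8, 9.9, 10.7, 11.4, 12.6, 13.5])
biomass = np.array([5.3, 8.9, 11.2, 16.8, 23.5, 28.1, 40.2, 52.7])

X = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(height)])
y = np.log(biomass)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

n, k = len(y), X.shape[1]
resid = y - X @ coef
sigma2 = np.sum(resid**2) / n
aic = n * np.log(sigma2) + 2 * (k + 1)   # Gaussian AIC up to an additive constant
print("b0, b1, b2:", coef, " AIC:", round(aic, 2))
```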
94

Um teste baseado em influência local para avaliar qualidade do ajuste em modelos de Regressão Beta / A local influence-based test for assessing goodness of fit in beta regression models

RIBEIRO, Terezinha Késsia de Assis 12 February 2016
The class of beta regression models introduced by Ferrari & Cribari-Neto (2004) is very useful for modelling rates and proportions. The model is based on the assumption that the response variable follows a beta distribution, under a parameterization indexed by the mean and a precision parameter. After a regression model has been fitted, diagnostic analysis is essential in order to check for possible departures from the model assumptions and to detect observations that exert a disproportionate influence on the parameter estimates. Local influence analysis, introduced by Cook (1986), is one approach for assessing the influence of observations. Based on the local influence method, Zhu & Zhang (2004) proposed a hypothesis test to detect the degree of discrepancy between the assumed model and the underlying model from which the data are generated. In this work, that test is developed for beta regression models with fixed and with varying dispersion. In addition, an improvement of the test based on the bootstrap is proposed, together with a new test, also based on local influence but under a different perturbation scheme: perturbation of the precision parameter in the beta regression model with fixed dispersion. The performance of these tests is evaluated in terms of size and power, and the theory is applied to a real data set.
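An illustrative sketch, not the dissertation's code, of the fixed-dispersion beta regression setup referred to above: a logit link for the mean, a common precision parameter phi, and maximum likelihood estimation with scipy. The simulated data and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist
from scipy.special import expit

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
true_beta, true_phi = np.array([0.3, 1.1]), 30.0
mu = expit(X @ true_beta)
y = rng.beta(mu * true_phi, (1 - mu) * true_phi)          # simulated proportions

def negloglik(theta):
    b, log_phi = theta[:-1], theta[-1]
    phi = np.exp(log_phi)                                  # keep phi > 0
    m = expit(X @ b)
    return -np.sum(beta_dist.logpdf(y, m * phi, (1 - m) * phi))

fit = minimize(negloglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
beta_hat, phi_hat = fit.x[:-1], np.exp(fit.x[-1])
print("beta:", beta_hat, "phi:", phi_hat)
```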
95

Predicting time-since-fire from forest inventory data in Saskatchewan, Canada

Schulz, Rueben J. 05 1900
Time-since-fire data are used to describe wildfire disturbances, the major disturbance type in the boreal forest, over a landscape. These data can be used to calculate various parameters of wildfire disturbances, such as size, shape and severity. Collecting time-since-fire data is expensive and time consuming; the ability to derive it from existing forest inventory data would make fire data available over larger areas. The objective of this thesis was to explore the use of forest inventory information for predicting time-since-fire in the mixedwood boreal forests of Saskatchewan. Regression models were used to predict time-since-fire from forest inventory variables for each inventory polygon with a stand age. Non-water polygons with no stand-age value were assigned values from neighbouring polygons, after splitting long polygons that potentially crossed many historic fire boundaries; this procedure filled gaps that prevented polygons from being grouped together in later analysis. The predicted time-since-fire ages were used to generate wildfire parameters such as age-class distributions and the fire cycle. Three methods were examined for grouping forest inventory polygons into predicted fire-event polygons: simple partitions, hierarchical clustering, and spatially constrained clustering. The predicted fire-event polygons were used to generate polygon-size-distribution wildfire metrics. I found that there was a relationship between time-since-fire and forest inventory variables at this study site, although the relationship was not strong. As expected, the strongest relationship was between the age of trees in a stand, as indicated by the inventory, and time-since-fire. This relationship was moderately improved by including tree species composition, the harvest modification value, and the ages of the surrounding polygons. Assigning neighbouring values to no-age polygons and grouping the forest inventory polygons improved the predicted time-since-fire results when compared spatially with the observed time-since-fire data. However, a satisfactory method of comparing polygon shapes was not found, and the map outputs were highly dependent on the grouping method and parameters used. Overall, forest inventory data were found to lack the detail and accuracy needed to derive high-quality time-since-fire information.
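A simplified sketch, under assumed data structures (a small dictionary of polygon records and a hypothetical adjacency list, not the thesis inventory), of two of the steps described above: filling missing stand ages from neighbouring polygons, then regressing observed time-since-fire on stand age.

```python
import numpy as np

polygons = {
    1: {"stand_age": 80, "tsf": 85},
    2: {"stand_age": None, "tsf": None},   # no-age polygon to be filled
    3: {"stand_age": 60, "tsf": 62},
    4: {"stand_age": 95, "tsf": 101},
}
neighbours = {2: [1, 3]}                    # adjacency for no-age polygons

# Step 1: assign missing stand ages from the mean of aged neighbours.
for pid, nbrs in neighbours.items():
    ages = [polygons[n]["stand_age"] for n in nbrs if polygons[n]["stand_age"]]
    if ages:
        polygons[pid]["stand_age"] = float(np.mean(ages))

# Step 2: simple regression of time-since-fire on stand age (observed pairs).
obs = [(p["stand_age"], p["tsf"]) for p in polygons.values() if p["tsf"] is not None]
x, y = np.array(obs).T
slope, intercept = np.polyfit(x, y, 1)

# Predict time-since-fire for every polygon, filled ages included.
for pid, p in polygons.items():
    p["tsf_pred"] = intercept + slope * p["stand_age"]
print({pid: round(p["tsf_pred"], 1) for pid, p in polygons.items()})
```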
96

A Statistical Analysis of the Lake Levels at Lake Neusiedl

Leodolter, Johannes January 2008
A long record of daily data is used to study the lake levels of Lake Neusiedl, a large steppe lake at the eastern border of Austria. Daily lake level changes are modeled as functions of precipitation, temperature, and wind conditions. The occurrence and the amount of daily precipitation are modeled with logistic regressions and generalized linear models.
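A small sketch, on synthetic data with hypothetical covariates, of the two-part structure the abstract describes: a logistic regression for whether precipitation occurs and a Gamma GLM with log link for the amount on wet days, both fitted with statsmodels.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
temp = rng.normal(15, 8, n)                       # daily mean temperature (assumed)
wind = rng.gamma(2.0, 2.0, n)                     # daily mean wind speed (assumed)
X = sm.add_constant(np.column_stack([temp, wind]))

# Synthetic truth: cooler, windier days rain more; wet-day amounts are Gamma.
p_wet = 1 / (1 + np.exp(-(-0.5 - 0.03 * temp + 0.15 * wind)))
wet = rng.random(n) < p_wet
amount = np.where(wet, rng.gamma(2.0, np.exp(0.5 + 0.02 * wind)), 0.0)

occ_model = sm.GLM(wet.astype(float), X, family=sm.families.Binomial()).fit()
amt_model = sm.GLM(amount[wet], X[wet],
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(occ_model.params, amt_model.params)
```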
97

Air Pollution and Health: Toward Improving the Spatial Definition of Exposure, Susceptibility and Risk

Parenteau, Marie-Pierre January 2011
The role of spatial representation in the relation between chronic exposure to NO2 and respiratory health outcomes is studied through a spatial approach encompassing three conceptual components: the geography of susceptibility, the geography of exposure and the geography of risk. A spatially explicit methodology that defined natural neighbourhoods for the city of Ottawa is presented; these neighbourhoods became the geography of analysis in this research. A land use regression (LUR) model for Ottawa is developed to study the geography of exposure. Sensitivity of the model to the spatial representation of population showed that dasymetric population mapping did not significantly improve the LUR model over population represented at the dissemination-block level; both, however, were significantly better than population represented at the dissemination-area level. Spatial representation in the geography of exposure was also evaluated by comparing four kriging and cokriging interpolation models with the LUR; the geostatistically derived NO2 concentration maps were only weakly correlated with the LUR results. The relationship between mean NO2 concentrations and respiratory health outcomes was assessed within the natural neighbourhoods. A statistically significant association between NO2 concentrations and respiratory health outcomes was found, as measured by global bivariate Moran's I. However, in regression model building NO2 had to be forced into the model, demonstrating that NO2 is not one of the main variables contributing to respiratory health outcomes in Ottawa; the results point instead to the importance of socioeconomic status for the health condition of individuals. Finally, the role of spatial representation was assessed using three different spatial structures, which also allowed a better understanding of the role of the modifiable areal unit problem (MAUP) in studying the relationship between exposure to NO2 and health. The results confirm that NO2 concentration is not a major contributing factor to respiratory health in Ottawa, but clearly demonstrate the implications that the use of convenient administrative boundaries can have for the results of exposure studies. Both effects of the MAUP, the scale effect and the zoning effect, were observed, indicating that a spatial structure embodying the scale of the major social processes behind the health condition of individuals should be used when possible.
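A numpy sketch of a global bivariate Moran's I of the kind mentioned above, using one common formulation (row-standardized weights and z-scored variables); the neighbourhood values and contiguity matrix are made up for illustration.

```python
import numpy as np

no2 = np.array([18.2, 22.5, 15.1, 30.4, 26.7, 12.9])     # mean NO2 per neighbourhood
resp = np.array([4.1, 5.0, 3.2, 6.8, 5.9, 2.7])          # respiratory outcome rate

# Binary contiguity between six hypothetical neighbourhoods, then
# row-standardize so each row of W sums to one.
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)

zx = (no2 - no2.mean()) / no2.std()
zy = (resp - resp.mean()) / resp.std()
moran_biv = zx @ W @ zy / len(zx)      # spatial cross-correlation of NO2 and health
print(round(moran_biv, 3))
```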
98

Sociálne tržné hospodárstvo v SRN - súčasný stav / Social market economy in Germany - current state of things

Valkovič, Tomáš January 2008
In its theoretical part, this master's thesis explains the functioning of the social market economy in Germany and describes the individual reform steps aimed at increasing the competitiveness of the German economy in an era of growing global competition. In the practical part, the author concentrates on the impact of the economic crisis on the German social market economy. He identifies the revival of private consumption as the key factor in overcoming the crisis and uses a regression model to assess which economic indicators most influence the private consumption of German households. From the results of this analysis he derives six reform measures that should help Germany overcome the current economic crisis.
99

Aplikace modelů diskrétní volby / The Application of Discrete Choice Models

Čejková, Tereza January 2008
This thesis deals with the theory, interpretation and application of discrete choice models. The theoretical part covers fitting the logistic regression model, testing the significance of the coefficients, and testing the significance of the model; multiple logistic regression is also discussed. The model was applied to interview data from the international research project Reflex.
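A compact sketch, on synthetic data, of the steps the abstract lists: fit a multiple logistic regression, read Wald tests for the individual coefficients, and run a likelihood-ratio test for the significance of the model as a whole.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(3)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
p = 1 / (1 + np.exp(-(-0.2 + 0.8 * x1 - 0.5 * x2)))
y = (rng.random(n) < p).astype(float)

X_full = sm.add_constant(np.column_stack([x1, x2]))
full = sm.Logit(y, X_full).fit(disp=0)
null = sm.Logit(y, np.ones((n, 1))).fit(disp=0)   # intercept-only model

print(full.summary())                               # Wald z-tests per coefficient
lr_stat = 2 * (full.llf - null.llf)                 # model significance (LR test)
p_value = chi2.sf(lr_stat, df=X_full.shape[1] - 1)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")
```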
100

Specification testing of GARCH regression models

Shadat, Wasel Bin January 2011
This thesis analyses, derives and evaluates specification tests of Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) regression models, both univariate and multivariate. Of particular interest in the first half of the thesis is the derivation of robust test procedures designed to assess the Constant Conditional Correlation (CCC) assumption often employed in multivariate GARCH (MGARCH) models. New asymptotically valid conditional moment tests are proposed which are simple to construct, easily implementable following full or partial Quasi Maximum Likelihood (QML) estimation, and robust to non-normality. In doing so, a non-normality-robust version of Tse's (2000) LM test is provided. In addition, new and easily programmable expressions for the expected Hessian matrix associated with the QMLE are obtained. The finite-sample performance of these tests is investigated in an extensive Monte Carlo study programmed in GAUSS. In the second half of the thesis, attention turns to nonparametric testing of GARCH regression models. First, simultaneous consistent nonparametric tests of the conditional mean and conditional variance structure of univariate GARCH models are considered. The approach is developed from the Integrated Generalized Spectral (IGS) and Projected Integrated Conditional Moment (PICM) procedures proposed recently by Escanciano (2008 and 2009, respectively) for time series models. Extending Escanciano (2008), a new and simple wild bootstrap procedure is proposed to implement these tests. A Monte Carlo study compares the performance of these nonparametric tests and four parametric tests of nonlinearity and/or asymmetry under a wide range of alternatives. Although the proposed bootstrap scheme does not strictly satisfy the asymptotic requirements, the simulation results demonstrate its ability to control size extremely well, so the power comparison seems justified; this also suggests there may exist weaker conditions under which the tests are implementable. The simulation exercise also presents new evidence of the effect of conditional-mean misspecification on various parametric tests of conditional variance. The testing procedures are illustrated using S&P 500 data. Finally, the PICM and IGS approaches are extended to the MGARCH case. The procedure is illustrated with a bivariate CCC-GARCH model, but can be generalized to other MGARCH specifications. A simulation exercise shows that these tests have satisfactory size and are robust to non-normality. The marginal mean and variance tests have excellent power; however, the marginal covariance tests lack power for some alternatives.
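A plain numpy/scipy sketch of a univariate GARCH(1,1) fitted by quasi-maximum likelihood, illustrating the kind of model whose specification the thesis tests; the simulated returns, starting values and parameter constraints are illustrative only and do not reproduce the thesis procedures.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def simulate_garch(n, omega=0.05, alpha=0.08, beta=0.90):
    """Simulate a Gaussian GARCH(1,1) series of length n."""
    eps, sig2 = np.zeros(n), np.zeros(n)
    sig2[0] = omega / (1 - alpha - beta)            # unconditional variance
    eps[0] = np.sqrt(sig2[0]) * rng.standard_normal()
    for t in range(1, n):
        sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
        eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    return eps

def neg_qll(params, eps):
    """Negative Gaussian quasi log-likelihood (up to a constant)."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return 1e10                                  # crude stationarity guard
    sig2 = np.empty_like(eps)
    sig2[0] = eps.var()
    for t in range(1, len(eps)):
        sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
    return 0.5 * np.sum(np.log(sig2) + eps ** 2 / sig2)

returns = simulate_garch(2000)
fit = minimize(neg_qll, x0=[0.1, 0.1, 0.8], args=(returns,), method="Nelder-Mead")
print("omega, alpha, beta:", fit.x)
```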
