121

Hodnocení finanční situace vybraného podniku a návrhy na její zlepšení / Evaluation of the Financial Situation of the Selected Company and Proposals for Its Improvement

Polášková, Lucie January 2015 (has links)
This thesis evaluates the financial situation of the company LIPOELASTIC a.s. using selected tools of financial analysis. The data necessary for the analysis come from the company's financial statements. The thesis is divided into three parts. The theoretical part defines the key terms, methods, and procedures needed to carry out the individual parts of a financial analysis; these methods are then applied in the practical part. The final part presents proposals and measures for improving the company's financial situation.
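The abstract does not list the specific indicators used, so the following is only a sketch of the ratio calculations a financial analysis of this kind commonly relies on; all field names and figures are assumed placeholders, not data from the thesis.

```python
# Illustrative sketch only: standard liquidity, leverage, and profitability ratios.
# Field names (current_assets, ebit, ...) are assumed placeholders.

def financial_ratios(bs: dict, pl: dict) -> dict:
    """Compute a few common ratios from balance-sheet (bs) and P&L (pl) items."""
    return {
        "current_ratio": bs["current_assets"] / bs["current_liabilities"],
        "quick_ratio": (bs["current_assets"] - bs["inventory"]) / bs["current_liabilities"],
        "debt_ratio": bs["total_liabilities"] / bs["total_assets"],
        "roa": pl["ebit"] / bs["total_assets"],        # return on assets
        "ros": pl["net_income"] / pl["sales"],         # return on sales
        "asset_turnover": pl["sales"] / bs["total_assets"],
    }

# Example with made-up numbers (thousands of CZK)
bs = {"current_assets": 52_000, "current_liabilities": 24_000, "inventory": 18_000,
      "total_liabilities": 40_000, "total_assets": 110_000}
pl = {"ebit": 9_500, "net_income": 7_200, "sales": 120_000}
print(financial_ratios(bs, pl))
```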
122

Investigation of Warm Convective Cloud Fields with Meteosat Observations and High Resolution Models

Bley, Sebastian 07 November 2017 (has links)
The high spatio-temporal variability of convective clouds has a substantial impact on the quantification of the cloud radiative effect. Since convective clouds usually have to be parameterized in atmospheric models, observational data are needed to quantify their variability as well as model uncertainties. The aim of this dissertation is to characterize the spatio-temporal variability of warm convective cloud fields using Meteosat observations and to assess their applicability for model evaluation. Several metrics were examined to quantify uncertainties in model and satellite data as well as their limitations. Using the high-resolution visible (HRV) channel of Meteosat, a cloud mask was developed which, at 1×2 km², clearly surpasses the 3×6 km² resolution of the operational cloud mask. It enables an improved characterization of small-scale clouds and provides an important basis for the further development of satellite-based cloud algorithms. A tracking algorithm was developed to investigate the life cycles of convective cloud fields. The spatio-temporal evolution of the liquid water path (LWP) was analyzed both in an Eulerian framework and along Lagrangian trajectories. A characteristic length scale of 7 km was obtained for the cloud fields, and a Lagrangian decorrelation time of 31 min was found as a measure of cloud lifetime. When the HRV channel is taken into account, the decorrelation scales decrease significantly, indicating a sensitivity to spatial resolution. To quantify this sensitivity, simulations with the ICON-LEM model at resolutions of up to 156 m were considered. At this high resolution, the simulated clouds have considerably larger LWP values, associated with a convective cloud fraction that is two to four times smaller. These differences largely disappear when the simulated cloud fields are averaged to the optical resolution of Meteosat. The cloud size distributions show a marked drop-off for sizes below 8 to 10 times the model grid spacing, which corresponds to the effective resolution of the model. This implies that an even higher resolution would be desirable for ICON-LEM to realistically simulate cloud processes below the 1 km scale. Fortunately, this scale will be covered by the third-generation Meteosat in the future, which will be a decisive step toward an improved understanding of small-scale cloud effects and toward the parameterization of convection in NWP and climate models.
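The abstract quotes a Lagrangian decorrelation time of 31 min as a measure of cloud lifetime. Below is a minimal sketch of how such a decorrelation time can be estimated from an LWP series along a tracked trajectory, assuming the common definition of the first lag at which the autocorrelation drops below 1/e; the data and the 5-minute sampling interval are invented placeholders, not the thesis's Meteosat retrievals.

```python
import numpy as np

def decorrelation_time(series: np.ndarray, dt_minutes: float) -> float:
    """Lag (in minutes) at which the autocorrelation first falls below 1/e.

    'series' is e.g. the liquid water path sampled along one Lagrangian
    trajectory at a fixed interval dt_minutes.
    """
    x = series - series.mean()
    var = np.dot(x, x)
    for lag in range(1, len(x)):
        r = np.dot(x[:-lag], x[lag:]) / var   # autocorrelation at this lag
        if r < 1.0 / np.e:
            return lag * dt_minutes
    return len(x) * dt_minutes                # never decorrelated within the record

# Synthetic red-noise LWP-like series sampled every 5 minutes
rng = np.random.default_rng(0)
lwp = np.zeros(200)
for i in range(1, lwp.size):
    lwp[i] = 0.9 * lwp[i - 1] + rng.normal()
print(decorrelation_time(lwp, dt_minutes=5.0))
```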
123

Bankruptcy prediction models on Swedish companies.

Charraud, Jocelyn, Garcia Saez, Adrian January 2021 (has links)
Bankruptcies have been a sensitive topic around the world for over 50 years. The authors found that only a few bankruptcy studies have been conducted in Sweden, and even fewer on bankruptcy prediction models. This thesis investigates the performance of the Altman, Ohlson, and Zmijewski bankruptcy prediction models on all active Swedish companies during the years 2017 and 2018. The study intends to shed light on some of the most famous bankruptcy prediction models and to explore their predictive abilities and usability in Sweden. A second purpose is to build two new models from the most significant variables of the three models studied and to test their prediction power, with the aim of creating models designed specifically for Swedish companies. We identified a research gap in Sweden, where bankruptcy prediction models, and these three models in particular, have been largely unexplored. We also identified a second gap regarding the time period of the research: only a few studies on bankruptcy prediction models have been conducted after the financial crisis of 2007/08. To achieve the purpose of the study we conducted a quantitative study using secondary data gathered from the Serrano database, following an abductive approach within a positivist paradigm. This research contributes to the current field of knowledge by analysing the results of the models on Swedish companies in the light of liquidity theory, solvency and insolvency theory, the pecking order theory, profitability theory, cash flow theory, and the contagion effect. The results aligned with the liquidity, solvency and insolvency, and profitability theories. We found that the Altman model performs worst of the three, followed by the Ohlson model, which shows mixed results depending on the statistical analysis, while the Zmijewski model performs best. The performance and prediction power of the two new models were significantly higher than those of the three models studied.
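The abstract names the Altman, Ohlson, and Zmijewski models without reproducing them. As an illustration only, the sketch below computes the original Altman (1968) Z-score for publicly traded manufacturing firms; the thesis may well use a revised variant (e.g. the Z'-score for private firms), and the input figures are invented.

```python
def altman_z_1968(wc, re, ebit, mve, sales, ta, tl):
    """Original Altman (1968) Z-score for publicly traded manufacturing firms.

    wc: working capital, re: retained earnings, ebit: earnings before interest
    and taxes, mve: market value of equity, sales: annual sales,
    ta: total assets, tl: book value of total liabilities.
    """
    x1 = wc / ta
    x2 = re / ta
    x3 = ebit / ta
    x4 = mve / tl
    x5 = sales / ta
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Made-up balance-sheet figures (kSEK), purely for illustration
z = altman_z_1968(wc=1_500, re=2_000, ebit=900, mve=4_000, sales=12_000,
                  ta=10_000, tl=6_000)
print(z, "distress" if z < 1.81 else "grey zone" if z < 2.99 else "safe")
```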
124

Evaluation and Optimization of Deep Learning Networks for Plant Disease Forecasting And Assessment of their Generalizability for Early Warning Systems

Hannah Elizabeth Klein (15375262) 05 May 2023 (has links)
This research focused on developing adaptable models, protocols, and datasets for early warning systems that forecast plant diseases. It compared the performance of deep learning models in predicting soybean rust disease outbreaks using three years of public epidemiological data and gridded weather data. The models selected were a dense network and a Long Short-Term Memory (LSTM) network. The objectives included evaluating the effectiveness of small citizen-science datasets and gridded meteorological data in sequential forecasting, assessing the ideal window size and important inputs, and exploring the generalizability of the model protocol and models to other diseases. The model protocol was developed using a soybean rust dataset. Both the dense and the LSTM networks produced accuracies of over 90% during optimization. When tested for forecasting, both networks could forecast with an accuracy of 85% or higher over various window sizes. Experiments on window size indicated a minimum input of 8-11 days. Generalizability was demonstrated by applying the same protocol to a southern corn rust dataset, resulting in 87.8% accuracy. In addition, transfer learning and pre-trained models were tested: direct transfer learning between diseases was not successful, while pre-training models produced both positive and negative results. Preliminary results are reported for building generalizable disease models using epidemiological and weather data that researchers could apply to generate forecasts for new diseases and locations.
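The abstract does not give architecture details, so the following is only a minimal Keras sketch of an LSTM classifier that takes a window of daily weather features and predicts a disease-outbreak label; the window length, feature count, and layer sizes are invented placeholders, not the thesis's configuration.

```python
import numpy as np
import tensorflow as tf

# Assumed placeholders: 10-day input window, 6 daily weather features, binary label.
WINDOW, N_FEATURES = 10, 6

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(32),                        # sequence encoder
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(outbreak)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic data just to show the shapes; real inputs would be gridded weather
# windows aligned with scouted disease observations.
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))
```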
125

[pt] AJUSTE ÓTIMO POR LEVENBERG-MARQUARDT DE MÉTODOS DE PREVISÃO PARA INICIAÇÃO DE TRINCA / [en] OPTIMAL FIT BY LEVENBERG-MARQUARDT OF PREDICTION METHODS FOR CRACK INITIATION

GABRIELA WEGMANN LIMA 01 November 2022 (has links)
[pt] A grande maioria das estruturas que trabalham sob cargas alternadas precisa ser dimensionada para evitar a iniciação de trincas por fadiga, o principal mecanismo de dano mecânico nesses casos. Os vários parâmetros dos modelos de previsão de dano à fadiga usados nesses projetos devem ser preferencialmente medidos a partir do ajuste otimizado de suas equações a dados experimentais medidos de forma adequada. Na realidade, a precisão das previsões baseadas nesses modelos depende diretamente da qualidade dos ajustes utilizados para obtenção desses parâmetros. Sendo assim, o objetivo principal deste trabalho é estudar a melhor maneira de se obter os parâmetros dos principais modelos de previsão da iniciação de trincas por fadiga através de ajustes de dados experimentais baseados no algoritmo de Levenberg-Marquardt. Primeiro, foram realizados diversos ensaios εN em uma liga de alumínio 6351-T6 para averiguar o desempenho do ajuste proposto para as equações de Coffin-Manson e de Ramberg-Osgood. Em seguida, foram usados dados da literatura de outros oito materiais para ajustar modelos deformação-vida clássicos, assim como com o expoente de Walker, para assim avaliar o efeito de cargas médias não-nulas em testes εN. Por fim, foi estudado o ajuste de um modelo SN com expoente de Walker que considera limites de fadiga e efeitos de carga média. Esse estudo também inclui considerações estatísticas para quantificar o fator de confiabilidade a partir de diferentes hipóteses de funções densidade de probabilidade, baseadas em dez conjuntos de dados da literatura. / [en] Most structures working under alternate loadings must be dimensioned to prevent fatigue crack initiation, the main mechanism of mechanical damage in these cases. The various parameters from the fatigue damage prediction models used in these projects should preferably be measured by optimally fitting their equations to well-measured experimental data. In fact, the accuracy of the predictions based on these models depends directly on the quality of the adjustments used to obtain these parameters. As a result, the main purpose of this work is to study the best way to obtain the parameters of the leading prediction models of fatigue crack initiation through experimental data fittings based on the Levenberg-Marquardt algorithm. First, several εN tests were performed on a 6351-T6 aluminum alloy to verify the performance of the proposed fit for the Coffin-Manson and Ramberg-Osgood equations. Then, data from the literature of eight other materials were used to fit classic strain-life models, as well as models based on the Walker exponent, to evaluate the effect of non-zero mean loads in εN tests. Finally, the fitting of an SN model including the Walker exponent was studied, which considers fatigue limits and mean load effects. This study also includes statistical considerations to quantify the reliability factor from different probability density function assumptions, based on ten data sets from the literature.
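As an illustrative sketch only, the combined Basquin/Coffin-Manson strain-life relation, εa = (σf'/E)(2N)^b + εf'(2N)^c, can be fitted with SciPy's Levenberg-Marquardt implementation (curve_fit with method="lm"); the modulus, starting values, and synthetic data below are assumptions, not the thesis's 6351-T6 measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

E = 68_000.0  # assumed elastic modulus [MPa], typical for aluminum alloys

def coffin_manson(two_n, sigma_f, b, eps_f, c):
    """Strain amplitude eps_a = (sigma_f'/E)*(2N)^b + eps_f'*(2N)^c."""
    return (sigma_f / E) * two_n**b + eps_f * two_n**c

# Synthetic eps-N data generated from assumed parameters, with noise
rng = np.random.default_rng(1)
two_n = np.logspace(2, 6, 8)                       # reversals to failure, 2N
true = (820.0, -0.095, 0.45, -0.62)
eps_a = coffin_manson(two_n, *true) * (1 + 0.03 * rng.normal(size=two_n.size))

p0 = (900.0, -0.09, 0.5, -0.6)                     # starting guesses
params, cov = curve_fit(coffin_manson, two_n, eps_a, p0=p0, method="lm")
print(dict(zip(["sigma_f'", "b", "eps_f'", "c"], params)))
```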
126

Using Portable X-ray Fluorescence to Predict Physical and Chemical Properties of California Soils

Frye, Micaela D 01 August 2022 (has links) (PDF)
Soil characterization provides the basic information necessary for understanding the physical, chemical, and biological properties of soils. Knowledge about soils can in turn be used to inform management practices, optimize agricultural operations, and ensure the continuation of ecosystem services provided by soils. However, current analytical standards for identifying each distinct property are costly and time-consuming. The optimization of laboratory-grade technology for wide-scale use is demonstrated by advances in a proximal soil sensing technique known as portable X-ray fluorescence spectrometry (pXRF). pXRF analyzers use high-energy X-rays that interact with a sample to cause characteristic fluorescence, which the analyzer distinguishes by energy and intensity to determine the chemical composition of the sample. While pXRF only measures total elemental abundance, the concentrations of certain elements have been used as a proxy to develop models capable of predicting soil characteristics. This study aimed to evaluate existing models and model-building techniques for predicting soil pH, texture, cation exchange capacity (CEC), soil organic carbon (SOC), total nitrogen (TN), and C:N ratio from pXRF spectra, and to assess their suitability for California soils by comparing predictions to results from laboratory methods. Multiple linear regression (MLR) and random forest (RF) models were created for each property using a training subset of data and evaluated by R2, RMSE, RPD, and RPIQ on an unseen test set. The California soils sample set comprised 480 soil samples from across the state that were subject to laboratory and pXRF analysis in GeoChem mode. Results showed that existing data models applied to the CA soils dataset lacked predictive ability. In comparison, data models generated using MLR with 10-fold cross-validation for variable selection improved predictions, while algorithmic modeling produced the best estimates for all properties besides pH. The best models produced for each property gave RMSE values of 0.489 for pH, 10.8 for sand %, 6.06 for clay % (together predicting the correct texture class 74% of the time), 6.79 for CEC (cmolc/kg soil), 1.01 for SOC %, 0.062 for TN %, and 7.02 for C:N ratio. Whereas R2 and RMSE fluctuated inconsistently with changes in the random train/test splits, RPD and RPIQ were more stable, which may indicate a more useful representation of out-of-sample applicability. RF modeling for TN content provided the best predictive model overall (R2 = 0.782, RMSE = 0.062, RPD = 2.041, and RPIQ = 2.96). RF models for CEC and TN % achieved RPD values >2, indicating stable predictive models (Cheng et al., 2021). Lower RPD values between 1.75 and 2 and RPIQ >2 were also found for MLR models of CEC and TN %, as well as RF models for SOC. Better estimates for chemical properties (CEC, N, SOC) compared to physical properties (texture) may be attributable to a correlation between elemental signatures and organic matter. All models were improved with the addition of categorical variables (land use and sample set), but at a considerable statistical cost (9 extra predictors). Separating models by land type and lab characterization method revealed some improvements within land types, but these effects could not be fully untangled from sample set. Thus, the consortia of characterizing bodies for 'true' lab data may have been a drawback in model performance, by confounding inter-lab errors with predictive errors.
Future studies using pXRF analysis for soil property estimation should investigate how predictive models are affected by characterization method and lab body. While statewide models for California soils provided what may be an acceptable level of error for some applications, models calibrated for a specific site using consistent lab characterization methods likely provide a higher degree of accuracy for indirect measurements of some key soil properties.
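A minimal sketch of the kind of pipeline described: random forest regression of a soil property on pXRF elemental concentrations, evaluated on a held-out test set with RMSE, RPD, and RPIQ. The feature columns and data are placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder data: rows = soil samples, columns = pXRF elemental totals
rng = np.random.default_rng(42)
X = rng.random((480, 12))                                     # e.g. Fe, Ca, K, Ti, Zr, ...
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.1, 480)   # e.g. total N %

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
rpd = y_te.std(ddof=1) / rmse                         # ratio of performance to deviation
iqr = np.subtract(*np.percentile(y_te, [75, 25]))     # interquartile range
rpiq = iqr / rmse                                     # ratio of performance to IQR
print(f"R2={r2_score(y_te, pred):.3f} RMSE={rmse:.3f} RPD={rpd:.2f} RPIQ={rpiq:.2f}")
```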
127

Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification

Zavar Moosavi, Azam Sadat 13 March 2018 (has links)
Simulations and modeling of large-scale systems are vital to understanding real-world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty, including the parameters of the model, input data, or some missing physics that is not included in the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve the model accuracy by decreasing the overall uncertainties in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. This study proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein are inherently stochastic, and numerical approximations suffer from stability and accuracy issues. The second class of models are partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality, and have uncertainties due to missing dynamics which are not captured by the models. The third class are low-fidelity models, which are fast approximations of very expensive high-fidelity models. The reduced-order models have uncertainty due to loss of information in the dimension reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble-based methods where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes. / Ph. D. / Computational models are used to understand the behavior of natural phenomena. They approximate the evolution of the true phenomenon, or reality, in time, and we obtain more accurate forecasts by combining the model approximation with observations from reality. Weather forecasting, oceanography, and geoscience models are some examples. However, models can only approximate reality to some extent, and this approximation is not perfect due to several sources of error or uncertainty. Noise in measurements or observations from nature, uncertainty in some model components, missing components in models, and the interaction between different components of the model all cause model forecasts to differ from reality. The aim of this study is to explore techniques and approaches for modeling the error and uncertainty of computational models, and to provide solutions and remedies that reduce the error of the model forecast and ultimately improve it. Taking the discrepancy, or error, between the model forecast and reality over time and mining that error provides valuable information about the origin of uncertainty in models as well as the hidden dynamics that are not considered in the model. Statistical and machine learning based solutions are proposed in this study to identify the source of uncertainty, capture the uncertainty, and use that information to reduce the error and enhance the model forecast.
We studied error modeling, uncertainty quantification, and reduction techniques in several frameworks, from chemical models to weather forecast models. For each model, we tried to provide an appropriate solution to detect the origin of the uncertainty, model the error, and reduce the uncertainty to improve the model forecast.
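The abstract mentions ensemble-based data assimilation in which sampling error is alleviated by localization. Below is a minimal sketch of distance-based covariance localization, tapering a noisy sample covariance with a Gaussian-shaped correlation function via an element-wise (Schur) product. The grid, ensemble, and length scale are invented, and operational systems typically use the Gaspari-Cohn taper rather than this simple Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_ens = 60, 20                    # state size and (small) ensemble size
grid = np.arange(n_grid, dtype=float)     # 1-D grid coordinates

# Small ensemble -> sample covariance is noisy far from the diagonal
ensemble = rng.normal(size=(n_ens, n_grid))
P_sample = np.cov(ensemble, rowvar=False)

# Distance-based taper (Gaussian shape; assumed localization length scale L)
L = 5.0
dist = np.abs(grid[:, None] - grid[None, :])
taper = np.exp(-0.5 * (dist / L) ** 2)

P_localized = P_sample * taper            # Schur (element-wise) product
print("max spurious far-field covariance before:", np.abs(P_sample[dist > 3 * L]).max())
print("after localization:", np.abs(P_localized[dist > 3 * L]).max())
```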
128

Konkursprognostisering : En tillämpning av tre internationella modeller

Malm, Hanna, Rodriguez, Edith January 2015 (has links)
Bakgrund: Varje år går många företag i konkurs och detta innebär stora kostnader på kort sikt. Kreditgivare, ägare, investerare, borgenärer, företagsledning, anställda samt samhället är de som i störst utsträckning drabbas av detta. För att kunna bedöma ett företags ekonomiska hälsa är det därför en viktig del att kunna prognostisera risken för en konkurs. Till hjälp har vi olika konkursmodeller som har utvecklats sedan början av 1960-talet och fram till idag. Syfte: Att undersöka tre internationella konkursmodeller för att se om dessa kan tillämpas på svenska företag samt jämföra träffsäkerheten från vår studie med konkursmodellernas originalstudier. Metod: Undersökningen är baserad på en kvantitativ forskningsstrategi med en deduktiv ansats. Urvalet grundas på företag som gick i konkurs år 2014. Till detta kommer också en kontrollgrupp bestående av lika stor andel friska företag att undersökas. Det slumpmässiga urvalet kom att bestå av 30 konkursföretag samt 30 friska företag från tillverknings- och industribranschen. Teori: I denna studie undersöks tre konkursmodeller; Altman, Fulmer och Springate. Dessa modeller och tidigare forskning presenteras utförligare i teoriavsnittet. Dessutom beskrivs under teoriavsnittet några nyckeltal som är relevanta vid konkursprediktion. Resultat och slutsats: Modellerna är inte tillämpbara på svenska företag då resultaten från vår studie inte visar tillräcklig träffsäkerhet och därför måste betecknas som otillförlitliga. / Background: Each year many companies go bankrupt, and this is associated with significant costs in the short term. Creditors, owners, investors, management, employees, and society are those most affected by bankruptcy. To estimate a company's financial health, it is important to be able to predict the risk of bankruptcy. To help, we have different bankruptcy prediction models that have been developed from the 1960s until today (2015). Purpose: To examine three international bankruptcy prediction models to see if they are applicable to Swedish companies, and to compare the accuracy found in our study with each model's original study. Method: The study was based on a quantitative research strategy with a deductive research approach. The selection was based on companies that went bankrupt in 2014, together with a control group of healthy companies. The random sample consisted of 30 bankrupt companies and 30 healthy companies from the manufacturing and industrial sectors. Theory: In this study three bankruptcy prediction models are examined: Altman, Fulmer, and Springate. These models, along with previous research on bankruptcy prediction, are described further in the theory section. In addition, some financial ratios relevant to bankruptcy prediction are described. Result and conclusion: The models are not applicable to Swedish companies. The results of this study did not show sufficient accuracy, and the models must therefore be regarded as unreliable.
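Alongside the Altman Z-score illustrated earlier in this listing, the Springate model tested here is usually quoted as S = 1.03A + 3.07B + 0.66C + 0.40D with a failure cutoff of 0.862. The sketch below uses that commonly reproduced form as an assumption; the financial figures are invented.

```python
def springate_s(wc, ebit, ebt, sales, ta, cl):
    """Springate S-score in its commonly reproduced form.

    wc: working capital, ebit: earnings before interest and taxes,
    ebt: earnings before taxes, ta: total assets, cl: current liabilities.
    A firm is usually classified as likely to fail when S < 0.862.
    """
    a = wc / ta
    b = ebit / ta
    c = ebt / cl
    d = sales / ta
    return 1.03 * a + 3.07 * b + 0.66 * c + 0.40 * d

# Invented figures (kSEK), purely illustrative
s = springate_s(wc=800, ebit=450, ebt=380, sales=9_000, ta=7_500, cl=2_600)
print(s, "likely failure" if s < 0.862 else "likely healthy")
```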
129

An analysis of a dust storm impacting Operation Iraqi Freedom, 25-27 March 2003

Anderson, John W. 12 1900 (has links)
Approved for public release; distribution is unlimited. / On day five of combat operations during Operation IRAQI FREEDOM, advances by coalition forces were nearly halted by a dust storm initiated by the passage of a synoptically driven cold front. This storm impacted ground and air operations across the entire Area of Responsibility and delayed an impending ground attack on the Iraqi capital. Military meteorologists were able to assist military planners in mitigating at least some of the effects of this storm. This thesis examines the synoptic conditions leading to the severe dust storm, evaluates the numerical weather prediction model performance in predicting the event, and reviews metrics pertaining to the overall impacts on the Operation IRAQI FREEDOM combined air campaign. In general, the numerical model guidance correctly predicted the location and onset of the dust storms on 25 March 2003. As a result of this forecast guidance, mission planners were able to front-load Air Tasking Orders with extra sorties prior to the onset of the dust storm, and were able to make changes to planned weapons loads, favoring GPS-guided munitions. / Captain, United States Air Force
130

Mensuração da biomassa e construção de modelos para construção de equações de biomassa / Biomass measurement and model selection for biomass equations

Vismara, Edgar de Souza 07 May 2009 (has links)
O interesse pela quantificação da biomassa florestal vem crescendo muito nos últimos anos, sendo este crescimento relacionado diretamente ao potencial que as florestas têm em acumular carbono atmosférico na sua biomassa. A biomassa florestal pode ser acessada diretamente, por meio de inventário, ou através de modelos empíricos de predição. A construção de modelos de predição de biomassa envolve a mensuração das variáveis e o ajuste e seleção de modelos estatísticos. A partir de uma amostra destrutiva de 200 indivíduos de dez essências florestais distintas advindos da região de Linhares, ES, foram construídos modelos de predição empíricos de biomassa aérea visando futuro uso em projetos de reflorestamento. O processo de construção dos modelos consistiu de uma análise das técnicas de obtenção dos dados e de ajuste dos modelos, bem como de uma análise dos processos de seleção destes a partir do critério de Informação de Akaike (AIC). No processo de obtenção dos dados foram testadas a técnica volumétrica e a técnica gravimétrica, a partir da coleta de cinco discos de madeira por árvore, em posições distintas no lenho. Na técnica gravimétrica, estudou-se diferentes técnicas de composição do teor de umidade dos discos para determinação da biomassa, concluindo-se como a melhor a que utiliza a média aritmética dos discos da base, meio e topo. Na técnica volumétrica, estudou-se diferentes técnicas de composição da densidade do tronco com base nas densidades básicas dos discos, concluindo-se que em termos de densidade do tronco, a média aritmética das densidades básicas dos cinco discos se mostrou como melhor técnica. Entretanto, quando se multiplica a densidade do tronco pelo volume deste para obtenção da biomassa, a utilização da densidade básica do disco do meio se mostrou superior a todas as técnicas. A utilização de uma densidade básica média da espécie para determinação da biomassa, via técnica volumétrica, se apresentou como uma abordagem inferior a qualquer técnica que utiliza informação da densidade do tronco das árvores individualmente. Por fim, sete modelos de predição de biomassa aérea de árvores considerando seus diferentes compartimentos foram ajustados, a partir das funções de Spurr e Schumacher-Hall, com e sem a inclusão da altura como variável preditora. Destes modelos, quatro eram gaussianos e três eram lognormais. Estes mesmos sete modelos foram ajustados incluindo a medida de penetração como variável preditora, totalizando quatorze modelos testados. O modelo de Schumacher-Hall se mostrou, de maneira geral, superior ao modelo de Spurr. A altura só se mostrou efetiva na explicação da biomassa das árvores quando em conjunto com a medida de penetração. Os modelos selecionados foram do grupo que incluíram a medida de penetração no lenho como variável preditora e, exceto o modelo de predição da biomassa de folhas, todos se mostraram adequados para aplicação na predição da biomassa aérea em áreas de reflorestamento. / Forest biomass measurement implies a destructive procedure, thus forest inventories and biomass surveys apply indirect procedures for the determination of biomass of the different components of the forest (wood, branches, leaves, roots, etc.). The usual approach consists of taking a destructive sample for the measurement of tree attributes, and an empirical relationship is established between the biomass and other attributes that can be directly measured on standing trees, e.g., stem diameter and tree height.
The biomass determination of felled trees can be achieved by two techniques: the gravimetric technique, which weighs the components in the field and takes a sample for the determination of water content in the laboratory; and the volumetric technique, which determines the volume of the component in the field and takes a sample for the determination of the wood specific gravity (wood basic density) in the laboratory. The gravimetric technique applies to all components of the trees, while the volumetric technique is usually restricted to the stem and large branches. In this study, these two techniques are compared in a sample of 200 trees of 10 different species from the region of Linhares, ES. In each tree, 5 cross-sections of the stem were taken to investigate the best procedure for the determination of water content in the gravimetric technique and for the determination of the wood specific gravity in the volumetric technique. Also, the Akaike Information Criterion (AIC) was used to compare different statistical models for the prediction of tree biomass. For the stem water content determination, the best procedure was the arithmetic mean of the water content from the cross-sections at the base, middle, and top of the stem. In the determination of wood specific gravity, the best procedure was the arithmetic mean of all five cross-section discs of the stem; however, for the determination of the biomass, i.e., the product of stem volume and wood specific gravity, the best procedure was the use of the middle stem cross-section disc wood specific gravity. The use of an average wood specific gravity by species showed worse results than any procedure that used information on wood specific gravity at the individual tree level. Seven models, as variations of the Spurr and Schumacher-Hall volume equation models, were tested for the different tree components: wood (stem and large branches), small branches, leaves, and total biomass. In general, Schumacher-Hall models were better than Spurr-based models, and models that included only diameter (DBH) information performed better than models with diameter and height measurements. When a measure of penetration in the wood, as a surrogate of wood density, was added, the models with the three variables (diameter, height, and penetration) became the best models.
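As a minimal illustration of the model family and selection criterion described above, the sketch below fits a Schumacher-Hall type biomass equation, B = β0·D^β1·H^β2, in its log-linear form and compares it with a diameter-only variant via AIC. The data are invented, not the Linhares sample.

```python
import numpy as np
import statsmodels.api as sm

# Invented tree data: DBH (cm), height (m), aboveground biomass (kg)
rng = np.random.default_rng(7)
dbh = rng.uniform(5, 40, 120)
height = 1.3 + 1.1 * dbh**0.7 * rng.lognormal(0, 0.05, 120)
biomass = 0.06 * dbh**2.2 * height**0.8 * rng.lognormal(0, 0.15, 120)

ln_b = np.log(biomass)

# Schumacher-Hall in log-linear form: ln B = b0 + b1 ln D + b2 ln H
X_dh = sm.add_constant(np.column_stack([np.log(dbh), np.log(height)]))
fit_dh = sm.OLS(ln_b, X_dh).fit()

# Diameter-only variant: ln B = b0 + b1 ln D
X_d = sm.add_constant(np.log(dbh))
fit_d = sm.OLS(ln_b, X_d).fit()

# Lower AIC indicates the preferred model
print("D+H model AIC:", round(fit_dh.aic, 1))
print("D-only model AIC:", round(fit_d.aic, 1))
```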
