301 |
Konkursprognostisering : En tillämpning av tre internationella modeller
Malm, Hanna, Rodriguez, Edith January 2015 (has links)
Bakgrund: Varje år går många företag i konkurs och detta innebär stora kostnader på kort sikt. Kreditgivare, ägare, investerare, borgenärer, företagsledning, anställda samt samhället är de som i störst utsträckning drabbas av detta. För att kunna bedöma ett företags ekonomiska hälsa är det därför en viktig del att kunna prognostisera risken för en konkurs. Till hjälp har vi olika konkursmodeller som har utvecklats sedan början av 1960-talet och fram till idag. Syfte: Att undersöka tre internationella konkursmodeller för att se om dessa kan tillämpas på svenska företag samt jämföra träffsäkerheten från vår studie med konkursmodellernas originalstudier. Metod: Undersökningen är baserad på en kvantitativ forskningsstrategi med en deduktiv ansats. Urvalet grundas på företag som gick i konkurs år 2014. Till detta kommer också en kontrollgrupp bestående av lika stor andel friska företag att undersökas. Det slumpmässiga urvalet kom att bestå av 30 konkursföretag samt 30 friska företag från tillverknings- och industribranschen. Teori: I denna studie undersöks tre konkursmodeller: Altman, Fulmer och Springate. Dessa modeller och tidigare forskning presenteras utförligare i teoriavsnittet. Dessutom beskrivs under teoriavsnittet några nyckeltal som är relevanta vid konkursprediktion. Resultat och slutsats: Modellerna är inte tillämpbara på svenska företag då resultaten från vår studie inte visar tillräcklig träffsäkerhet och därför måste betecknas som otillförlitliga. / Background: Each year many companies go bankrupt, and this entails significant costs in the short term. Creditors, owners, investors, management, employees and society are those most affected by a bankruptcy. To be able to assess a company’s financial health, it is therefore important to be able to predict the risk of bankruptcy. To this end there are various bankruptcy prediction models, developed from the early 1960s until today (2015).
Purpose: To examine three international bankruptcy prediction models to see if they are applicable to Swedish businesses, and to compare the accuracy found in our study with that of each model’s original study. Method: The study was based on a quantitative research strategy with a deductive approach. The selection was based on companies that went bankrupt in 2014. In addition, a control group consisting of an equal number of healthy companies was examined. The random sample consisted of 30 bankrupt companies and 30 healthy companies from the manufacturing and industrial sectors. Theory: In this study three bankruptcy prediction models are examined: Altman, Fulmer and Springate. These models, together with previous research on bankruptcy prediction, are further described in the theory section. In addition, some financial ratios that are relevant to bankruptcy prediction are described. Result and conclusion: The models are not applicable to Swedish companies; the results of this study did not show sufficient accuracy, and the models must therefore be regarded as unreliable.
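The Altman model examined above combines five accounting ratios into a single Z-score. As a rough illustration (the coefficients and cut-offs are those of Altman's original 1968 model for listed manufacturing firms; the sample ratios below are invented), a minimal sketch:

```python
# Altman's (1968) Z-score for listed manufacturing firms.
# X1 = working capital / total assets, X2 = retained earnings / total assets,
# X3 = EBIT / total assets, X4 = market value of equity / total liabilities,
# X5 = sales / total assets.

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def classify(z):
    # Cut-offs from the original study: distress below 1.81, safe above 2.99.
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey zone"

# Invented ratios for a hypothetical healthy firm.
acme_ratios = dict(wc_ta=0.25, re_ta=0.30, ebit_ta=0.15, mve_tl=1.10, sales_ta=1.40)
z_score = altman_z(**acme_ratios)   # 3.275 -> "safe"
```

The Fulmer and Springate models follow the same pattern, a weighted sum of financial ratios compared against a cut-off, with different ratios and weights.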
|
302 |
The identification and application of common principal components
Pepler, Pieter Theo 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: When estimating the covariance matrices of two or more populations,
the covariance matrices are often assumed to be either equal or completely
unrelated. The common principal components (CPC) model provides an
alternative which is situated between these two extreme assumptions: The
assumption is made that the population covariance matrices share the same
set of eigenvectors, but have different sets of eigenvalues.
An important question in the application of the CPC model is to determine
whether it is appropriate for the data under consideration. Flury (1988)
proposed two methods, based on likelihood estimation, to address this question.
However, the assumption of multivariate normality is untenable for
many real data sets, making the application of these parametric methods
questionable. A number of non-parametric methods, based on bootstrap
replications of eigenvectors, are proposed to select an appropriate common
eigenvector model for two population covariance matrices. Using simulation
experiments, it is shown that the proposed selection methods outperform the
existing parametric selection methods.
If appropriate, the CPC model can provide covariance matrix estimators
that are less biased than when assuming equality of the covariance matrices,
and whose elements have smaller standard errors than those of
the ordinary unbiased covariance matrix estimators. A regularised covariance
matrix estimator under the CPC model is proposed, and Monte Carlo simulation
results show that it provides more accurate estimates of the population
covariance matrices than the competing covariance matrix estimators.
Covariance matrix estimation forms an integral part of many multivariate
statistical methods. Applications of the CPC model in discriminant analysis,
biplots and regression analysis are investigated. It is shown that, in cases
where the CPC model is appropriate, CPC discriminant analysis provides significantly smaller misclassification error rates than both ordinary quadratic
discriminant analysis and linear discriminant analysis. A framework for the
comparison of different types of biplots for data with distinct groups is developed,
and CPC biplots constructed from common eigenvectors are compared
to other types of principal component biplots using this framework.
A subset of data from the Vermont Oxford Network (VON), of infants admitted to participating neonatal intensive care units in South Africa and
Namibia during 2009, is analysed using the CPC model. It is shown that
the proposed non-parametric methodology offers an improvement over the
known parametric methods in the analysis of this data set which originated
from a non-normally distributed multivariate population.
CPC regression is compared to principal component regression and partial least squares regression in the fitting of models to predict neonatal mortality
and length of stay for infants in the VON data set. The fitted regression
models, using readily available day-of-admission data, can be used by medical
staff and hospital administrators to counsel parents and improve the
allocation of medical care resources. Predicted values from these models can
also be used in benchmarking exercises to assess the performance of neonatal
intensive care units in the Southern African context, as part of larger quality
improvement programmes. / AFRIKAANSE OPSOMMING: Wanneer die kovariansiematrikse van twee of meer populasies beraam
word, word dikwels aanvaar dat die kovariansiematrikse of gelyk, of heeltemal
onverwant is. Die gemeenskaplike hoofkomponente (GHK) model verskaf
'n alternatief wat tussen hierdie twee ekstreme aannames geleë is: Die
aanname word gemaak dat die populasie kovariansiematrikse dieselfde versameling
eievektore deel, maar verskillende versamelings eiewaardes het.
'n Belangrike vraag in die toepassing van die GHK model is om te bepaal
of dit geskik is vir die data wat beskou word. Flury (1988) het twee metodes,
gebaseer op aanneemlikheidsberaming, voorgestel om hierdie vraag aan te
spreek. Die aanname van meerveranderlike normaliteit is egter ongeldig vir
baie werklike datastelle, wat die toepassing van hierdie metodes bevraagteken.
'n Aantal nie-parametriese metodes, gebaseer op skoenlus-herhalings van
eievektore, word voorgestel om 'n geskikte gemeenskaplike eievektor model
te kies vir twee populasie kovariansiematrikse. Met die gebruik van simulasie
eksperimente word aangetoon dat die voorgestelde seleksiemetodes beter vaar
as die bestaande parametriese seleksiemetodes.
Indien toepaslik, kan die GHK model kovariansiematriks beramers verskaf
wat minder sydig is as wanneer aanvaar word dat die kovariansiematrikse
gelyk is, en waarvan die elemente kleiner standaardfoute het as die elemente
van die gewone onsydige kovariansiematriks beramers. 'n Geregulariseerde
kovariansiematriks beramer onder die GHK model word voorgestel, en Monte
Carlo simulasie resultate toon dat dit meer akkurate beramings van die populasie
kovariansiematrikse verskaf as ander mededingende kovariansiematriks
beramers.
Kovariansiematriks beraming vorm 'n integrale deel van baie meerveranderlike
statistiese metodes. Toepassings van die GHK model in diskriminantanalise,
bi-stippings en regressie-analise word ondersoek. Daar word
aangetoon dat, in gevalle waar die GHK model toepaslik is, GHK diskriminantanalise
betekenisvol kleiner misklassifikasie foutkoerse lewer as beide
gewone kwadratiese diskriminantanalise en lineêre diskriminantanalise. 'n
Raamwerk vir die vergelyking van verskillende tipes bi-stippings vir data
met verskeie groepe word ontwikkel, en word gebruik om GHK bi-stippings
gekonstrueer vanaf gemeenskaplike eievektore met ander tipe hoofkomponent
bi-stippings te vergelyk. 'n Deelversameling van data vanaf die Vermont Oxford Network (VON),
van babas opgeneem in deelnemende neonatale intensiewe sorg eenhede in
Suid-Afrika en Namibië gedurende 2009, word met behulp van die GHK
model ontleed. Daar word getoon dat die voorgestelde nie-parametriese
metodiek 'n verbetering op die bekende parametriese metodes bied in die ontleding van hierdie datastel wat afkomstig is uit 'n nie-normaal verdeelde
meerveranderlike populasie.
GHK regressie word vergelyk met hoofkomponent regressie en parsiële
kleinste kwadrate regressie in die passing van modelle om neonatale mortaliteit
en lengte van verblyf te voorspel vir babas in die VON datastel. Die
gepasde regressiemodelle, wat maklik bekombare dag-van-toelating data gebruik,
kan deur mediese personeel en hospitaaladministrateurs gebruik word
om ouers te adviseer en die toewysing van mediese sorg hulpbronne te verbeter.
Voorspelde waardes vanaf hierdie modelle kan ook gebruik word in
normwaarde oefeninge om die prestasie van neonatale intensiewe sorg eenhede
in die Suider-Afrikaanse konteks, as deel van groter gehalteverbeteringprogramme,
te evalueer.
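The CPC assumption described above, shared eigenvectors but different eigenvalues, can be checked numerically. The sketch below only illustrates the model's defining property on synthetic matrices; it is not Flury's maximum likelihood estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build two covariance matrices that satisfy the CPC model exactly:
# a shared orthogonal eigenvector matrix Q, but different eigenvalues.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
S1 = Q @ np.diag([5.0, 3.0, 2.0, 1.0]) @ Q.T
S2 = Q @ np.diag([1.0, 4.0, 2.5, 0.5]) @ Q.T

# Under CPC the eigenvectors of S1 must also diagonalise S2, so the
# off-diagonal mass of B.T @ S2 @ B is (numerically) zero.
_, B = np.linalg.eigh(S1)
D = B.T @ S2 @ B
off_diag = np.abs(D - np.diag(np.diag(D))).max()
```

For real data the off-diagonal mass is never exactly zero, which is why the thesis proposes bootstrap-based selection methods to judge whether a common eigenvector model is adequate.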
|
303 |
授信風險分析方法對企業財務危機預測能力之研究--以logit模型驗證 / A study of the ability of credit risk analysis methods to predict corporate financial distress, validated with a logit model
吳樂山 Unknown Date (has links)
Credit risk analysis is the key to loan quality. Whether for syndicated loans, corporate lending or consumer loans, every application must pass through a credit analysis process to assess the credit risk before a lending decision is made. Corporate loans in particular usually involve large amounts, so a rigorous review process is required to analyse whether the borrower's stated purpose is reasonable and the source of repayment is secure. This in turn requires an understanding of the borrower's financial position, production and sales, industry outlook, research and innovation, business model, and the professional competence and managerial ability of its executives, in order to analyse the components of risk.
Traditional credit risk analysis methods and theories, such as the five Ps of credit, industry analysis and financial analysis, have been practised for many years and remain the most widely used in domestic commercial banks. With the development of statistics and quantitative tools, however, a variety of credit risk measurement models have been constructed, and leading international banks have invested heavily in quantitatively oriented risk management departments to build quantified credit risk indicators. While consumer finance already uses credit scoring as the basis for approval decisions, the forthcoming implementation of Basel II has in recent years also pushed banks to develop quantitative models for internal ratings-based (IRB) approaches to corporate lending. Quantitative analysis and expert analysis have, however, not yet been combined in domestic banks. This thesis examines the main credit analysis tools and uses companies that were delisted or moved to full-delivery trading status during ROC years 89–92 (2000–2003) as the sample for the default-rate analysis.
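A logit model of the kind used in this thesis can be sketched on synthetic data (the two "financial ratio" predictors and the distress flags below are invented; fitting is plain Newton-Raphson maximum likelihood, not the thesis's actual specification):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: each row is a firm with two financial ratios,
# and a 0/1 distress flag drawn from a known logit relation.
n = 400
X = rng.normal(size=(n, 2))
true_beta = np.array([1.5, -2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))

# Fit the logit model by Newton-Raphson on the log-likelihood.
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    grad = X.T @ (y - p)                         # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])    # Fisher information
    beta += np.linalg.solve(hess, grad)

p = 1.0 / (1.0 + np.exp(-(X @ beta)))
in_sample_accuracy = ((p > 0.5) == y).mean()
```

The fitted coefficients recover the signs of the true relation, which is the kind of evidence a default-rate validation looks for.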
|
304 |
兩種正則化方法用於假設檢定與判別分析時之比較 / A comparison between two regularization methods for discriminant analysis and hypothesis testing
李登曜, Li, Deng-Yao Unknown Date (has links)
在統計學上,高維度常造成許多分析上的問題,如進行多變量迴歸的假設檢定時,當樣本個數小於樣本維度時,其樣本共變異數矩陣之反矩陣不存在,使得檢定無法進行,本文研究動機即為在進行兩群多維常態母體的平均數檢定時,所遇到的高維度問題,並引發在分類上的研究,試圖尋找解決方法。本文研究目的為在兩種不同的正則化方法中,比較何者在檢定與分類上表現較佳。本文研究方法為以 Warton 與 Friedman 的正則化方法來分別進行檢定與分類上的分析,根據其檢定力與分類錯誤的表現來判斷何者較佳。由分析結果可知,兩種正則化方法並沒有絕對的優劣,須視母體各項假設而定。 / High dimensionality causes many problems in statistical analysis. For instance, consider the testing of hypotheses about multivariate regression models. Suppose that the dimension of the multivariate response is larger than the number of observations; then the sample covariance matrix is not invertible. Since the inverse of the sample covariance matrix is often needed when computing the usual likelihood ratio test statistic (under normality), the matrix singularity makes it difficult to implement the test. The singularity of the sample covariance matrix is also a problem in classification when the linear discriminant analysis (LDA) or the quadratic discriminant analysis (QDA) is used.
Different regularization methods have been proposed to deal with the singularity of the sample covariance matrix for different purposes. Warton (2008) proposed a regularization procedure for testing, and Friedman (1989) proposed a regularization procedure for classification. Is it true that Warton's regularization works better for testing and Friedman's regularization works better for classification? To answer this question, some simulation studies are conducted and the results are presented in this thesis.
It is found that neither regularization method is superior to the other.
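Friedman's (1989) regularisation, one of the two methods compared here, addresses the singularity problem directly: the class covariance is shrunk toward the pooled estimate and then toward a scaled identity. A minimal sketch (the toy data and the two tuning parameters are illustrative only, not the thesis's settings):

```python
import numpy as np

def friedman_shrink(S_k, S_pooled, lam, gamma):
    # Friedman-style regularised class covariance: shrink toward the
    # pooled estimate (lam), then toward a scaled identity (gamma)
    # so the result is guaranteed to be invertible.
    S = (1 - lam) * S_k + lam * S_pooled
    p = S.shape[0]
    return (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)

# A rank-1 (hence singular) 3x3 "sample covariance" becomes
# positive definite, and therefore invertible, after shrinkage.
x = np.array([[1.0, 2.0, 3.0]])
S_k = x.T @ x
S_reg = friedman_shrink(S_k, np.eye(3), lam=0.5, gamma=0.1)
smallest_eigenvalue = np.linalg.eigvalsh(S_reg).min()
```

Warton's (2008) proposal shrinks the correlation matrix in a similar spirit; the thesis's finding is that neither dominates across population assumptions.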
|
305 |
Le décodage des expressions faciales émotionnelles à travers différentes bandes de fréquences spatiales et ses interactions avec l’anxiété
Harel, Yann 08 1900 (has links)
Le décodage des expressions faciales émotionnelles (EFE) est une fonction clé du système visuel humain puisqu’il est à la base de la communication non-verbale sur laquelle reposent les interactions sociales. De nombreuses études suggèrent un traitement différentiel des attributs diagnostiques du visage au sein des basses et des hautes fréquences spatiales (FS), respectivement sous-tendu par les voies magno- et parvocellulaires. En outre, des conditions telles que l’anxiété sociale sont susceptibles d’affecter ce traitement et d’entrainer une modulation des potentiels reliés aux évènements (PRE). Cette étude explore la possibilité de prédire le niveau d’anxiété social des individus à partir des corrélats électrophysiologiques du décodage d’EFE dans différentes bandes de FS. À cette fin, les PRE de 26 participants (âge moyen = 23.7 ± 4.7) ont été enregistrés lors de la présentation visuelle d’expressions neutres, de joie ou de colère filtrées pour ne retenir que les basses, moyennes ou hautes FS. L’anxiété sociale a été évaluée par l’administration préalable du questionnaire LSAS. Les latences et pics d’amplitude de la P100, N170, du complexe N2b/P3a et de la P3b ont été analysés statistiquement et utilisés pour entrainer différents algorithmes de classification. L’amplitude de la P100 était reliée au contenu en FS. La N170 a montré un effet des EFE. Le complexe N2b/P3a était plus ample pour les EFE et plus précoce pour les hautes FS. La P3b était moins ample pour les visages neutres, qui étaient aussi plus souvent omis. L’analyse discriminante linéaire a montré une précision de décodage d’en moyenne 56.11% au sein des attributs significatifs. La nature de ces attributs et leur sensibilité à l’anxiété sociale sera discutée. / The decoding of emotional facial expressions (EFE) is a key function of the human visual system since it lays at the basis of non-verbal communication that allows social interactions. 
Numerous studies suggest that the processing of diagnostic facial features may take place differently for low and high spatial frequencies (SF), respectively in the magno- and parvocellular pathways. Moreover, conditions such as social anxiety are thought to influence this processing and the associated event-related potentials (ERP). This study explores the feasibility of predicting social anxiety levels using electrophysiological correlates of EFE processing across various SF bands. To this end, ERPs from 26 participants (mean age = 23.7 ± 4.7 years) were recorded during visual presentation of neutral, angry and happy facial expressions, filtered to retain only low, medium or high SF. Social anxiety was previously assessed using the LSAS questionnaire. Peak latencies and amplitudes of the P100, N170, N2b/P3a complex and P3b components were statistically analyzed and used to feed supervised machine learning algorithms. P100 amplitude was linked to SF content. The N170 was affected by EFE. The N2b/P3a complex was larger for EFE and earlier for high SF. The P3b was lower for neutral faces, which were also more often omitted. Linear discriminant analysis showed a mean decoding accuracy of 56.11% across significant features. The nature of these features and their sensitivity to social anxiety will be discussed.
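The decoding step can be sketched with scikit-learn's LDA. The features below are synthetic stand-ins for the real peak amplitudes and latencies (26 "participants", two hypothetical groups separated by a mean shift); the point is only the shape of the pipeline, not the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# 26 "participants" x 4 ERP-like features; group 1 is shifted by 1.0
# on every feature so the classes are separable but noisy.
n, p = 26, 4
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, p)) + 1.0 * y[:, None]

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
```

With samples this small, cross-validated accuracy (rather than training accuracy) is the honest figure to report, which is presumably why the study's 56.11% is only modestly above chance.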
|
306 |
Dietary patterns associated with diet quality among First Nations women living on reserves in British Columbia
Mutoni, Sandrine 05 1900 (has links)
Les Indigènes canadiens vivent une rapide transition nutritionnelle marquée par une consommation accrue des produits commercialisés au détriment des aliments traditionnels. Ce mémoire cherche à identifier les patrons alimentaires associés à une meilleure alimentation des femmes autochtones vivant dans les réserves en Colombie Britannique. L’échantillon (n=493) a été sélectionné de l’étude ‘First Nations Food, Nutrition, and Environment Study’. L’étude a utilisé des rappels alimentaires de 24 heures. Pour identifier les patrons alimentaires, un indice de qualité alimentaire (QA) basé sur 10 éléments nutritionnels (fibre alimentaire, gras totaux/saturés, folate, magnésium, calcium, fer, vitamines A, C, D) a permis de classifier les sujets en trois groupes (tertiles). Ces groupes ont été comparés sur leur consommation de 25 groupes alimentaires (GAs) en employant des tests statistiques non-paramétriques (Kruskal-Wallis et ANCOVA). Une analyse discriminante (AD) a confirmé les GAs associés à la QA.
La QA des sujets était globalement faible car aucun rappel n’a rencontré les consommations recommandées pour tous les 10 éléments nutritionnels. L'AD a confirmé que les GAs associés de façon significative à la QA étaient ‘légumes et produits végétaux’, ‘fruits’, ‘aliments traditionnels’, ‘produits laitiers faibles en gras’, ‘soupes et bouillons’, et ‘autres viandes commercialisées’ (coefficients standardisés= 0,324; 0,295; 0,292; 0,282; 0,157; -0.189 respectivement). Le pourcentage de classifications correctes était 83.8%.
Nos résultats appuient la promotion des choix alimentaires recommandés par le « Guide Alimentaire Canadien- Premières Nations, Inuits, et Métis ». Une consommation accrue de légumes, fruits, produits laitiers faibles en gras, et aliments traditionnels caractérise les meilleurs patrons alimentaires. / Indigenous Canadians are going through a rapid nutrition transition marked by an increased consumption of market foods and a decreased intake of traditional products. The aim of this research is to identify dietary patterns associated with a better diet quality among Indigenous female adults living on reserve in British Columbia. The sample (n=493) was selected from the First Nations Food, Nutrition, and Environment Study. The study used 24-hour food recalls. To identify dietary patterns, individuals were classified in three groups (tertiles) according to points obtained on a dietary score (based on Dietary Reference Intakes for dietary fiber, total fat, saturated fat, folate, magnesium, calcium, iron, vitamins A, C, D). The tertiles were compared for their consumption of 25 food groups (FGs) using statistical non-parametric tests (i.e. Kruskal-Wallis and ANCOVA tests). A discriminant analysis was used to confirm the FGs significantly associated with diet quality.
Generally, subjects had poor diet quality since no food recall met the recommended intakes for all selected nutritional elements. The discriminant analysis confirmed that the FGs significantly associated with diet quality were “vegetables and vegetable products”, “fruits”, “traditional foods”, “low-fat dairy products”, “soups and broth”, and “other market meat” (standardized discriminant function coefficient= 0.324, 0.295, 0.292, 0.282, 0.157, -0.189 respectively). The percentage of correct classifications was 83.8%.
In conclusion, our findings support the promotion of dietary choices according to the “Eating well with the Canadian Food Guide – First Nations, Inuit, and Métis”. It is greater use of vegetables, fruits, low-fat dairy products, and traditional foods that characterizes better dietary patterns.
|
307 |
Towards on-line domain-independent big data learning : novel theories and applications
Malik, Zeeshan January 2015 (has links)
Feature extraction is an extremely important pre-processing step for pattern recognition and machine learning problems. This thesis highlights how one can best extract features from the data in an exhaustively online and purely adaptive manner. The solution to this problem is given for both labeled and unlabeled datasets, by presenting a number of novel on-line learning approaches. Specifically, the differential equation method for solving the generalized eigenvalue problem is used to derive a number of novel machine learning and feature extraction algorithms. The incremental eigen-solution method is used to derive a novel incremental extension of linear discriminant analysis (LDA). Further, the proposed incremental version is combined with extreme learning machine (ELM), in which the ELM is used as a preprocessor before learning. In this first key contribution, the dynamic random expansion characteristic of ELM is combined with the proposed incremental LDA technique, and shown to offer a significant improvement in maximizing the discrimination between points in two different classes, while minimizing the distance within each class, in comparison with other standard state-of-the-art incremental and batch techniques. In the second contribution, the differential equation method for solving the generalized eigenvalue problem is used to derive a novel state-of-the-art purely incremental version of the slow feature analysis (SFA) algorithm, termed the generalized eigenvalue based slow feature analysis (GENEIGSFA) technique. Further, the time series expansions of echo state networks (ESN) and radial basis functions (RBF) are used as a pre-processor before learning. In addition, the higher order derivatives are used as a smoothing constraint in the output signal.
Finally, an online extension of the generalized eigenvalue problem, derived from James Stone’s criterion, is tested, evaluated and compared with the standard batch version of the slow feature analysis technique, to demonstrate its comparative effectiveness. In the third contribution, light-weight extensions of the statistical technique known as canonical correlation analysis (CCA) for both twinned and multiple data streams, are derived by using the same existing method of solving the generalized eigenvalue problem. Further the proposed method is enhanced by maximizing the covariance between data streams while simultaneously maximizing the rate of change of variances within each data stream. A recurrent set of connections used by ESN are used as a pre-processor between the inputs and the canonical projections in order to capture shared temporal information in two or more data streams. A solution to the problem of identifying a low dimensional manifold on a high dimensional dataspace is then presented in an incremental and adaptive manner. Finally, an online locally optimized extension of Laplacian Eigenmaps is derived termed the generalized incremental laplacian eigenmaps technique (GENILE). Apart from exploiting the benefit of the incremental nature of the proposed manifold based dimensionality reduction technique, most of the time the projections produced by this method are shown to produce a better classification accuracy in comparison with standard batch versions of these techniques - on both artificial and real datasets.
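The generalized eigenvalue problem A w = λ B w is the common core of the LDA, SFA and CCA variants derived above. The batch solution that the incremental methods approximate can be sketched with SciPy (the symmetric positive definite matrices below are synthetic):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)

# Two synthetic symmetric positive definite matrices standing in for,
# e.g., between-class and within-class scatter in LDA.
M = rng.normal(size=(5, 5))
A = M @ M.T + np.eye(5)
N = rng.normal(size=(5, 5))
B = N @ N.T + np.eye(5)

# Solve A w = lambda B w; eigh returns B-orthonormal eigenvectors.
vals, vecs = eigh(A, B)
residual = np.abs(A @ vecs - (B @ vecs) * vals).max()
```

The incremental algorithms in the thesis avoid forming and decomposing A and B in batch; this sketch only shows the stationary problem they converge toward.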
|
308 |
Classification of Carpiodes Using Fourier Descriptors: A Content Based Image Retrieval Approach
Trahan, Patrick 06 August 2009 (has links)
Taxonomic classification has always been important to the study of any biological system. At the current rate of classification, many biological species will go unclassified and be lost forever. The current state of computer technology makes image storage and retrieval possible on a global level. As a result, computer-aided taxonomy is now possible. Content-based image retrieval techniques utilize visual features of the image for classification. By utilizing image content and computer technology, the gap between taxonomic classification and species destruction is shrinking. This content-based study utilizes the Fourier Descriptors of fifteen known landmark features on three Carpiodes species: C. carpio, C. velifer, and C. cyprinus. Classification analysis involves both unsupervised and supervised machine learning algorithms. Fourier Descriptors of the fifteen known landmarks provide strong classification power on image data. Feature reduction analysis indicates that feature reduction is possible, which proves useful for increasing the generalization power of classification.
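Fourier descriptors of a closed contour can be sketched in a few lines: treat the boundary points as complex numbers, take the FFT, and normalise so the descriptors are invariant to translation, scale and rotation/starting point. The circle test below is only illustrative; the thesis applies the same idea to fish landmark outlines:

```python
import numpy as np

def fourier_descriptors(points, k=8):
    # points: (n, 2) array tracing a closed contour.
    # Drop c0 (translation invariance), divide by |c1| (scale),
    # keep magnitudes only (rotation / starting-point invariance).
    z = points[:, 0] + 1j * points[:, 1]
    c = np.fft.fft(z)
    mags = np.abs(c)
    return mags[1:k + 1] / mags[1]

# A circle sampled at 64 points: all energy sits in the first harmonic,
# so its descriptor vector is (1, 0, 0, ...).
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
fd = fourier_descriptors(circle)
```

Because of the normalisation, a shifted and rescaled copy of the same shape yields the same descriptor vector, which is what makes these features usable for retrieval.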
|
309 |
Excavating the Digital Landscape : GIS analyses of social relations in central Sweden in the 1st millennium AD
Löwenborg, Daniel January 2010 (has links)
This thesis presents a number of GIS-based landscape analyses that together aim to explore aspects of the social development in Iron Age Västmanland, central Sweden. From a perspective where nature and culture are seen as integrated in the landscape, differences in the relations to the physical landscape are interpreted as reflecting social organisation. Thus, hydrological modelling of watersheds is used for understanding the development of territories and regions that are recognisable in the layout of the medieval hundare districts. Statistical modelling of burial grounds, together with variables describing their situation in the landscape, is used to calculate an estimated chronology for sites that have not yet been excavated. This information is used to analyse how differences in landscape setting can tell of different trends in claims to land and property rights. An extensive renegotiation of property rights is suggested to have taken place after the climatic catastrophe of AD 536 and the years that followed, which is interpreted as having caused a substantial population decline in parts of Scandinavia. The social development after this includes an increasingly stratified social hierarchy in the Late Iron Age, which is reflected in the construction of grave monuments. New GIS methods for analysing how to interpret the perception of different locations in the landscape, in terms of local topography and soil, are discussed in relation to this. How to make the best use of large datasets of archaeological information in combination with other sources of geographical information is a central theme. Geographically Weighted Regression is used to predict the representativity of the registry of graves for the whole landscape. It is suggested that the increasing availability of archaeological information in digital format, together with new analytical techniques, has the potential to introduce fruitful new research perspectives.
This will make it increasingly rewarding to work with the large amount of data produced from rescue archaeology, and it is important that this information is managed in a structured manner. / Appendices see http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-111310
|
310 |
Análise hiperespectral de folhas de Brachiaria brizantha cv. Marandú submetidas a doses crescentes de nitrogênio / Hyperspectral analysis of Brachiaria brizantha cv. Marandú leaves under contrasting nitrogen levels
Takushi, Mitsuhiko Reinaldo Hashioka 14 February 2019 (has links)
O sensoriamento remoto é uma estratégia que pode ajudar no monitoramento da qualidade das pastagens. Objetivou-se com esse estudo analisar a resposta espectral das folhas de Brachiaria brizantha cv. Marandú, adubada com doses crescentes de ureia, para diferenciar e predizer teores foliares de nitrogênio (TFN). Os tratamentos foram distribuídos em blocos ao acaso (DBC), composto por quatro blocos e quatro tratamentos, totalizando 16 parcelas. Foram utilizadas doses crescentes de adubação com ureia: 0, 25, 50, 75 kg de N/ha/corte. Ao longo do experimento foram realizadas 7 coletas, sendo coletadas 8 folhas por parcela. Essas folhas foram submetidas à análise hiperespectral e posterior análise química do teor de nitrogênio. Ao analisar a resposta espectral das folhas, observou-se diferenças estatísticas entre os tratamentos na região do visível em todas as coletas, com ênfase na região de 550 nm (verde). Por meio de análise discriminante linear (LDA) realizada para cada coleta, os centróides gerados por todos os tratamentos apresentaram diferenças significativas, com exceção do LD1 nas coletas 6 e 7 que não apresentou distinção entre os tratamentos de 50 e 75 kg de N/ha/corte, e LD2 na coleta 5 que não apresentou distinção entre os tratamentos de 0 e 50 kg de N/ha/corte. As equações de regressão multivariada obtidas pelo método de quadrados mínimos parciais (PLSR), geraram valores razoáveis a bons de R2 (0,53 a 0,83) na predição dos TFN, onde os comprimentos de onda com maior peso nessas regressões estão na região do red edge (715 a 720 nm). Por fim, ao testar a performance de alguns Índices de Vegetação da literatura, as coletas 4, 6 e 7 apresentaram bons coeficientes de determinação (R2) que variaram de 0,65 a 0,73; uma característica em comum nos índices que melhor estimaram os TFN é a presença de comprimentos de ondas que fazem parte da região do red edge. / Remote sensing is a set of techniques that can help to monitor pasture quality. 
The objective of this study is to analyze the spectral response of Brachiaria brizantha cv. Marandú leaves, under contrasting nitrogen levels, to differentiate and predict leaf nitrogen content. The treatments were set in a Randomized Block Design, composed of four blocks and four treatments, totaling 16 plots. Increasing doses of urea fertilization were used: 0, 25, 50, 75 kg N/ha/mowing. During the experiment, 7 data collections were performed, and 8 leaves per plot were extracted for each data collection. These leaves were submitted to hyperspectral data extraction and subsequent chemical analysis to quantify the nitrogen content. When analyzing the spectral pattern of the leaves, statistical differences among samples with different nitrogen levels were noticeable in the visible range of the spectrum in all the collections, with emphasis on the 550 nm region (green). Through linear discriminant analysis (LDA), performed for each collection, the centroids generated by the samples of each nitrogen level presented significant differences, except for LD1 in collections 6 and 7, which did not present a distinction between treatments of 50 and 75 kg of N/ha/mowing, and LD2 in collection 5, which did not distinguish between treatments of 0 and 50 kg of N/ha/mowing. The partial least squares regression (PLSR) method generated reasonable to good values of R2 (0.53 to 0.83) for the prediction of leaf nitrogen content, where the wavelengths with the highest coefficients in these models lie in the red edge region of the spectrum (715 to 720 nm). Finally, when testing the performance of some Vegetation Indexes from the literature, collections 4, 6 and 7 presented good determination coefficients (R2) ranging from 0.65 to 0.73; a common feature of the indexes that best estimate the nitrogen content is the presence of wavelengths from the red edge region of the spectrum.
|