41

Localização de danos em estruturas isotrópicas com a utilização de aprendizado de máquina / Localization of damages in isotropic structures with the use of machine learning

Oliveira, Daniela Cabral de [UNESP] 28 June 2017 (has links)
This work introduces a new Structural Health Monitoring (SHM) methodology that uses unsupervised machine learning algorithms to detect and localize damage. The approach was tested on an isotropic material (an aluminum plate); the experimental data were provided by Rosa (2016), whose database is comprehensive and covers measurements in a variety of situations. Piezoelectric transducers bonded to the 500 x 500 x 2 mm aluminum plate act simultaneously as sensors and actuators. To prepare the data, only the first packet of each signal was analyzed, i.e., the time interval equal to the duration of the excitation force; within this window there is no interference from signals reflected at the edges of the structure. Signals were acquired first in the undamaged condition (baseline) and then in several damage conditions. To assess how much the damage affects each actuator-sensor path, the following metrics were implemented: maximum peak, root-mean-square deviation (RMSD), correlation between signals, and the H2 and H∞ norms between baseline and damaged signals.
After computing the metrics for the various damage situations, the unsupervised K-Means algorithm was implemented in MATLAB and also tested in the Weka toolbox. K-Means, however, requires the number of clusters to be specified in advance, which hinders its use in real applications. An unsupervised machine learning algorithm based on affinity propagation was therefore implemented, in which the number of clusters is determined from the similarity matrix. Affinity propagation was run on every metric, separately for each damage case.
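The pipeline the abstract describes — a per-path damage-sensitivity metric computed against a baseline, followed by unsupervised clustering — can be sketched as below. This is a minimal illustration with synthetic signals, not the author's code; the number of paths, signal length, and noise model are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def rmsd(baseline, damaged):
    """Root-mean-square deviation between a baseline and a damaged signal."""
    return np.sqrt(np.sum((damaged - baseline) ** 2) / np.sum(baseline ** 2))

# Hypothetical data: 8 actuator-sensor paths, 200 samples each (first packet only).
rng = np.random.default_rng(0)
baseline = rng.standard_normal((8, 200))
damaged = baseline.copy()
damaged[2:4] += 0.8 * rng.standard_normal((2, 200))  # paths 2 and 3 cross the damage

# One metric value per path; K-Means separates "affected" from "unaffected" paths.
features = np.array([[rmsd(b, d)] for b, d in zip(baseline, damaged)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
# Paths falling in the high-deviation cluster are flagged as near the damage.
```

In the thesis the same clustering step is applied to each metric (peak, RMSD, correlation, H2, H∞) separately; only RMSD is shown here.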
42

Sistema computacional de medidas de colorações humanas para exame médico de sudorese / Human coloring measures computer system for medical sweat test

Rodrigues, Lucas Cerqueira, 1988- 27 August 2018 (has links)
Advisor: Marco Antonio Garcia de Carvalho. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Tecnologia. Previous issue date: 2015. / In medical research, the sweat test is used to highlight the regions of the body where a patient sweats, which helps the physician identify possible lesions of the sympathetic nervous system. Studies of this test point to the lack of an automatic process for identifying these body regions. In this project, the Kinect® was used to help address the problem: the device can scan 3D objects and ships with a library for systems development. This work builds a computer system that provides a semi-automatic solution for analyzing digital images from sweat tests. The system classifies the regions of the body where the patient sweats from a 3D scan made with the Kinect® and generates a report with consolidated information, so that the physician can reach a diagnosis easily, quickly, and accurately. The project began in 2013 at the IMAGELab laboratory of FT/UNICAMP in Limeira/SP, with the support of a team from the USP Clinical Hospital in Ribeirão Preto/SP that studies the iodine-starch sweat test. The contribution of this work was the construction of the application, which uses the K-Means image segmentation algorithm to segment the regions on the patient's surface, together with the development of the system that integrates the Kinect®. The application was validated through experiments with real patients. / Master's degree in Technology and Innovation.
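The K-Means segmentation step described above can be sketched as follows. The tiny synthetic "scan" (a dark stained patch on a light background), its intensities, and its region sizes are invented stand-ins for a real Kinect® capture; this is not the thesis software.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-region scan: dark (sweating, iodine-starch stained) vs light skin.
rng = np.random.default_rng(1)
img = np.full((40, 40, 3), 220.0)          # light background
img[10:30, 10:30] = 60.0                   # dark stained patch (20 x 20 pixels)
img += rng.normal(0, 5, img.shape)         # sensor noise

# Cluster pixels by color; k=2 separates stained from unstained regions.
pixels = img.reshape(-1, 3)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = km.labels_.reshape(40, 40)

# The cluster with the darker centroid corresponds to the sweating region.
dark = int(np.argmin(km.cluster_centers_.sum(axis=1)))
sweat_mask = labels == dark
```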
43

Improving character recognition by thresholding natural images / Förbättra optisk teckeninläsning genom att segmentera naturliga bilder

Granlund, Oskar, Böhrnsen, Kai January 2017 (has links)
Current state-of-the-art optical character recognition (OCR) algorithms can extract text from images under predefined conditions. OCR is highly reliable for interpreting machine-written text with minimal distortion, but images taken in natural scenes remain challenging. In recent years, interest in improving recognition rates in natural images has grown as more powerful handheld devices have come into use. The main problems in natural-image recognition are distortions such as uneven illumination, font textures, and complex backgrounds. Several preprocessing approaches that separate text from its background have recently been investigated. In our study, we assess the improvement achieved by two of these preprocessing methods, k-means and Otsu thresholding, by comparing their results on an OCR algorithm. The study showed that preprocessing made some improvement in particular cases but, overall, yielded worse accuracy than the unaltered images.
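The two preprocessing methods the study compares can be sketched in plain NumPy. This is an illustrative reimplementation, not the authors' code, and the synthetic "natural scene" (dark text on a bright, noisy background) is invented.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:t] * centers[:t]).sum() / w0
        m1 = (p[t:] * centers[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def kmeans_threshold(gray, iters=20):
    """1-D 2-means on pixel intensities; threshold is the centroid midpoint."""
    c = np.array([gray.min(), gray.max()], dtype=float)
    flat = gray.reshape(-1)
    for _ in range(iters):
        lab = np.abs(flat[:, None] - c).argmin(axis=1)
        for j in (0, 1):
            if (lab == j).any():
                c[j] = flat[lab == j].mean()
    return c.mean()

# Hypothetical scene: dark text (~40) on a bright background (~200).
rng = np.random.default_rng(2)
img = np.clip(rng.normal(200, 10, (64, 64)), 0, 255)
img[20:28, 8:56] = np.clip(rng.normal(40, 10, (8, 48)), 0, 255)

t_otsu = otsu_threshold(img)
t_km = kmeans_threshold(img)
binary = img < t_otsu   # True where (candidate) text pixels are
```

Either threshold binarizes the image before it is handed to the OCR engine; the study's finding is that this helps only in special cases.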
44

Thresholded K-means Algorithm for Image Segmentation

Girish, Deeptha S. January 2016 (has links)
No description available.
45

設計與實作一個針對遊戲論壇的中文文章整合系統 / Design and Implementation of a Chinese Document Integration System for Game Forums

黃重鈞, Huang, Chung Chun Unknown Date (has links)
With today's network infrastructure, forum users can share information quickly and with few barriers, so information is no longer obtained only through the news. This very property causes an explosion in volume: even with search engines, users must spend considerable effort collecting, filtering, and processing a given topic, often beyond what a person can handle. In this study, we design a tool that automatically integrates and summarizes the articles of each topic in a Chinese game forum, using the League of Legends discussion board of the Bahamut gaming forum as a case study, with the goal of giving the user a comprehensive yet concise description of a game character. We analyze the characteristics of the forum, drawing on forum-mining work and English news summarization systems, and examine the difficulties of mining forum text. Our method has three phases. 1. Preprocessing: unlike news articles, forum posts rarely yield keywords directly from nouns and verbs, so TF-IDF is used to select representative terms, which become the dimensions of a sentence vector space. 2. Clustering: the K-Means algorithm groups similar sentences into the same cluster. 3. Sentence selection: based on the clustering result, the sentences that best represent the document set are chosen according to their keyword content and TF-IDF scores. The experiments on data collected from the forum surfaced useful related information; the thesis closes with possible improvements, in the hope of developing better ways to organize forum articles in the future.
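The three phases can be sketched with scikit-learn as below. The short English sentences stand in for segmented Chinese forum posts, and the corpus, cluster count, and selection rule (closest-to-centroid) are invented simplifications of the thesis procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical forum sentences (stand-ins for segmented Chinese posts).
sentences = [
    "the champion scales well into the late game",
    "late game scaling makes this champion strong",
    "build armor items against heavy physical damage",
    "against physical damage comps buy armor items",
]

vec = TfidfVectorizer()
X = vec.fit_transform(sentences)                              # phase 1: TF-IDF space
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)   # phase 2: cluster

# Phase 3: from each cluster, keep the sentence closest to the centroid.
summary = []
for c in range(2):
    idx = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
    summary.append(sentences[idx[dists.argmin()]])
```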
46

RBF-sítě s dynamickou architekturou / RBF-networks with a dynamic architecture

Jakubík, Miroslav January 2011 (has links)
In this master's thesis I review several methods for clustering input data. Two well-known clustering algorithms, K-means and Fuzzy C-means (FCM), are described, along with several methods for estimating the optimal number of clusters. I then describe Kohonen maps and two Kohonen-map models with dynamically changing structure, namely the growing grid and the growing neural gas. Finally, I describe a relatively new model of radial basis function neural networks and present several learning algorithms for it. The thesis closes with clustering experiments on real data describing international trade among the states of the world.
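Of the two clustering algorithms the thesis surveys, Fuzzy C-means is the less commonly illustrated; a minimal NumPy sketch of its alternating update (soft memberships U, centroids V) follows. The 2-D two-blob data and all parameter values are invented for the example.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-means: returns memberships U (n x c) and centroids V (c x d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows of U sum to 1
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]                  # weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        inv = d ** (-p)                                            # u_ij ∝ d_ij^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V

# Two well-separated synthetic blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
U, V = fuzzy_c_means(X, c=2)
hard = U.argmax(axis=1)   # harden memberships for comparison with K-means
```

Unlike K-means, each point keeps a graded membership in every cluster; hardening via `argmax` recovers a K-means-style partition.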
47

Quantification vectorielle en grande dimension : vitesses de convergence et sélection de variables / High-dimensional vector quantization : convergence rates and variable selection

Levrard, Clément 30 September 2014 (has links)
This thesis first studies the distortion of the quantizer built from an n-sample of a probability distribution over a vector space via the k-means algorithm. More precisely, it aims to give oracle inequalities on the difference between the distortion of the k-means quantizer and the minimum distortion achievable by a k-point quantizer, describing precisely the influence of the natural parameters of the quantization problem: the support of the distribution to be quantized, the number of images k, the dimension of the underlying Euclidean space, and the sample size n. After a brief summary of previous work on the topic, it is shown that, in the continuous-density case, the conditions previously stated for the excess distortion to decrease quickly with the sample size are equivalent to a technical condition resembling the conditions required in supervised classification to obtain fast convergence rates. It is then proved that, under this technical condition, a convergence rate of order 1/n can be reached in expectation. Next, an easily interpreted margin condition is introduced and shown to be sufficient for the technical condition above; several classical distributions satisfy it, such as Gaussian mixtures. Under this margin condition, a precise description of the excess distortion is given via a bound in expectation: the sample size enters through a 1/n factor, the number of images k enters through geometric quantities associated with the distribution, and, surprisingly, the dimension of the underlying space seems to play no role.
This last point allows the results to extend to the framework of Hilbert spaces, which is well suited to curve quantization. In practice, however, effective high-dimensional quantization often requires a variable-reduction step, which motivates the second part of the thesis: a variable selection procedure associated with quantization. More precisely, a Lasso-type procedure adapted to the vector quantization framework is studied, in which the Lasso penalty applies to the set of image points of the quantizer in order to obtain sparse image points. If the margin condition introduced above is satisfied, several theoretical guarantees are established for the resulting quantizer, called the Lasso k-means quantizer: its image points are close to those of a naturally sparse quantizer that trades off quantization error against the support size of the image points, and its excess distortion is of order 1/n^(1/2) in the sample size, with the dependency on the other parameters of the problem given explicitly. These theoretical predictions are illustrated by numerical simulations that broadly confirm the expected properties of such a sparse quantizer, while nonetheless highlighting some drawbacks of its practical implementation.
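For reference, the standard definitions behind the abstract's "distortion" and "excess distortion" can be written as below, with c = (c_1, ..., c_k) a k-point codebook, X drawn from the distribution P, and ĉ_n the empirical (k-means) minimizer. These are textbook formulations consistent with, but not copied from, the thesis.

```latex
% Distortion of a codebook c and optimal k-point distortion:
\[
  R(c) \;=\; \mathbb{E}\,\min_{j=1,\dots,k} \lVert X - c_j \rVert^2,
  \qquad
  R^{*} \;=\; \inf_{c}\, R(c).
\]
% Excess distortion of the empirical k-means quantizer \hat{c}_n,
% with the fast rate obtained under the margin condition:
\[
  \ell(\hat{c}_n) \;=\; R(\hat{c}_n) - R^{*},
  \qquad
  \mathbb{E}\,\ell(\hat{c}_n) \;=\; O(1/n).
\]
```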
48

Algorithmes et méthodes pour le diagnostic ex-situ et in-situ de systèmes piles à combustible haute température de type oxyde solide / Ex-situ and in-situ diagnostic algorithms and methods for solid oxide fuel cell systems

Wang, Kun 21 December 2012 (has links)
The EU project "GENIUS" investigates generic diagnosis methodologies for different solid oxide fuel cell (SOFC) systems. The Ph.D. study presented in this thesis was part of this project; it develops a diagnostic tool for SOFC system fault detection and identification based on validated diagnostic algorithms, using the SOFC stack itself as a sensor. In this context, three algorithms were developed, based respectively on k-means clustering, the wavelet transform, and a Bayesian network. The first serves for ex-situ diagnosis: it classifies polarization measurements of the stack in order to identify the significant response variables that indicate the stack's state of health, using the silhouette index as a measure of classification quality to determine the optimal number of classes to retain from the studied database. The second algorithm enables online fault detection: the wavelet transform decomposes the SOFC voltage signals in order to find feature variables discriminative enough to distinguish normal from abnormal operating conditions. Because the stack is used as a sensor, its reliability must be verified beforehand, so the feature variables must also be indicative of the stack's state of health. When the stack is found to be operated improperly, the actual operating parameters must be estimated in order to identify the system fault; a Bayesian network serving as a meta-model of the stack was developed to perform this estimation. Finally, databases originating from various SOFC systems were used to validate the three algorithms and assess their generality.
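The silhouette-based model selection described for the first algorithm can be sketched as follows; the "polarization features" here are invented synthetic data standing in for the stack measurements, and the candidate range of k is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical polarization features: three distinct operating states of the stack.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(mu, 0.2, (30, 2)) for mu in (0.0, 2.0, 4.0)])

# Choose the number of classes by maximizing the silhouette, as the abstract describes.
scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
best_k = max(scores, key=scores.get)
```

A silhouette near 1 means points sit well inside their own cluster; scanning k and keeping the maximizer gives the number of patterns to retain.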
49

RBF-sítě s dynamickou architekturou / RBF-networks with a dynamic architecture

Jakubík, Miroslav January 2012 (has links)
In this master's thesis I review several methods for data clustering. Two well-known clustering algorithms, K-means and Fuzzy C-means (FCM), are described, along with several methods for estimating the optimal number of clusters. I then describe Kohonen maps and two Kohonen-map models with dynamically changing structure, namely the growing grid and the growing neural gas. Finally, I describe a relatively new model of radial basis function neural networks and present several learning algorithms for it: RAN, RANEKF, MRAN, EMRAN, and GAP. The thesis closes with clustering experiments on real data describing international trade among the states of the world.
50

Utilização de métodos de interpolação e agrupamento para definição de unidades de manejo em agricultura de precisão / Interpolator method and clustering to definition of management zones on precision agriculture

Schenatto, Kelyn 04 February 2014 (has links)
Despite the benefits offered by precision agriculture (PA) technology, the need for dense sampling grids and sophisticated equipment for soil and plant management makes it financially unfeasible in many cases, especially for small producers. To make PA viable, the definition of management zones (MZ) divides the field into sub-regions with similar physico-chemical characteristics, within which the producer can work conventionally (without site-specific input application) while still distinguishing them from the other sub-regions of the field. PA concepts are thus retained, but some procedures are adapted to the producer's reality, with no need to replace traditionally used machinery. To this end, physical and chemical attributes are usually correlated with crop yield and, through statistical and geostatistical methods, the attributes selected give rise to thematic maps that are then used to define the MZ. Traditional interpolation methods (inverse distance, ID; inverse squared distance, ISD; and kriging, KRI) are commonly used in the thematic-map step, and it is important to assess whether the quality of the generated thematic maps influences the MZ delineation process, since otherwise the use of robust interpolators such as KRI may not be justified. This study therefore evaluated the three interpolation methods (ID, ISD, and KRI) for generating the thematic maps used to delineate MZ with the K-Means and Fuzzy C-Means clustering methods, in two experimental areas (9.9 ha and 15.5 ha), using data from four seasons (three soybean crops and one corn crop). KRI and ID produced similar MZ. Agreement between the maps decreased as the number of classes increased, and more strongly so with Fuzzy C-Means. The K-Means and Fuzzy C-Means algorithms performed similarly when dividing the field into two MZ. The best interpolation method was KRI, followed by ID, which justifies choosing the more robust interpolator (KRI) for generating MZ.
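The ID and ISD interpolators named in the abstract share one formula, differing only in the distance power (1 vs. 2). A minimal sketch with invented sample points follows; kriging is omitted since it needs a fitted variogram model.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2):
    """Inverse distance weighting: power=1 gives ID, power=2 gives ISD."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None], axis=2)
    exact = d < 1e-12                          # query coincides with a sample point
    w = 1.0 / np.maximum(d, 1e-12) ** power    # weights fall off with distance
    z = (w * z_known).sum(axis=1) / w.sum(axis=1)
    hit_row, hit_col = np.where(exact)
    z[hit_row] = z_known[hit_col]              # honor sample values exactly
    return z

# Hypothetical sparse soil-attribute samples at the corners of a field (meters).
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
vals = np.array([1.0, 3.0, 3.0, 5.0])
grid = idw(pts, vals, np.array([[5.0, 5.0], [0.0, 0.0]]), power=2)
```

The interpolated surface (evaluated on a dense grid) is what the clustering step then partitions into management zones.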
