
Proposta de implementação em FPGA de máquina de vetores de suporte (SVM) utilizando otimização sequencial mínima (SMO) [FPGA implementation proposal of a support vector machine (SVM) using sequential minimal optimization (SMO)]

Noronha, Daniel Holanda (20 November 2017)
The importance of Field-Programmable Gate Arrays (FPGAs) as compute accelerators has increased dramatically over the last few years. Companies such as Amazon, IBM, and Microsoft have added FPGAs to their data centers, aiming in particular to accelerate the algorithms behind their search engines. At the core of those applications are machine learning algorithms such as the Support Vector Machine (SVM). For FPGAs to thrive in this role, however, their resources must be used effectively. The goal of this project is a parallel hardware implementation of both the feed-forward (inference) phase of a Support Vector Machine and its training phase. Inference is implemented with the polynomial kernel in a fully parallel fashion, trading extra area for maximum throughput, and the same hardware computes both classification and regression.
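For reference, the generic form of the SVM decision function with a polynomial kernel is sketched below; the implementation evaluates a weighted kernel sum of this shape in parallel, with classification taking the sign of the result and regression taking the value itself. The symbols (support vectors x_i, multipliers α_i, labels y_i, bias b, kernel parameters c and d) are the standard textbook ones, not values taken from the dissertation.

```latex
f(\mathbf{x}) = \sum_{i \in \mathrm{SV}} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b,
\qquad
K(\mathbf{x}_i, \mathbf{x}) = \bigl(\mathbf{x}_i^{\top}\mathbf{x} + c\bigr)^{d},
\qquad
\hat{y}_{\mathrm{class}} = \operatorname{sign}\bigl(f(\mathbf{x})\bigr),
\quad
\hat{y}_{\mathrm{reg}} = f(\mathbf{x}).
```

(For ε-SVR the coefficients become differences of paired multipliers rather than α_i y_i, but the datapath is the same weighted kernel sum, which is why one hardware structure can serve both tasks.)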
Training is carried out with Sequential Minimal Optimization (SMO), which solves the SVM's complex convex optimization problem through a sequence of simple steps. The SMO implementation is also highly parallel and employs acceleration techniques such as an error cache. In addition, the Hardware-Friendly Kernel (HFK) is used to reduce the area occupied by each kernel unit, so that more kernel units fit on a chip of the same size, thereby speeding up training. After the parallel hardware implementation, the SVM is validated by simulation, and analyses of the temporal performance of the proposed structure, as well as of its FPGA area usage, are carried out.
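Since the abstract names the specific ingredients of the trainer (SMO pair updates, an error cache, and the Hardware-Friendly Kernel), a minimal software-only sketch of one SMO pair update is given below to make that structure concrete. This is an illustration, not the thesis's hardware design: the function names and parameters are invented for the example, and the HFK definition 2^(-γ‖x−y‖₁) is the one commonly used in the FPGA/SVM literature and is assumed here.

```python
import numpy as np

def poly_kernel(x, y, c=1.0, d=2):
    """Polynomial kernel (x.y + c)^d, the kernel family used for inference."""
    return (np.dot(x, y) + c) ** d

def hardware_friendly_kernel(x, y, gamma=0.5):
    """Hardware-friendly kernel 2^(-gamma * ||x - y||_1), as commonly defined in the
    FPGA/SVM literature; the thesis's exact variant may differ."""
    return 2.0 ** (-gamma * np.sum(np.abs(x - y)))

def smo_step(i, j, X, y, alpha, b, E, C, kernel):
    """One SMO step: jointly optimize the pair (alpha[i], alpha[j]) and refresh the
    error cache E. Returns the updated bias; alpha and E are modified in place."""
    if i == j:
        return b
    Kii, Kjj, Kij = kernel(X[i], X[i]), kernel(X[j], X[j]), kernel(X[i], X[j])
    eta = Kii + Kjj - 2.0 * Kij            # curvature along the pair direction
    if eta <= 0:
        return b                           # skip degenerate pairs in this sketch
    # Box constraints keep the pair on its equality-constraint line and in [0, C].
    if y[i] == y[j]:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    else:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    if L >= H:
        return b
    aj_new = np.clip(alpha[j] + y[j] * (E[i] - E[j]) / eta, L, H)
    ai_new = alpha[i] + y[i] * y[j] * (alpha[j] - aj_new)
    # Standard Platt bias update.
    b1 = b - E[i] - y[i] * (ai_new - alpha[i]) * Kii - y[j] * (aj_new - alpha[j]) * Kij
    b2 = b - E[j] - y[i] * (ai_new - alpha[i]) * Kij - y[j] * (aj_new - alpha[j]) * Kjj
    b_new = b1 if 0 < ai_new < C else (b2 if 0 < aj_new < C else 0.5 * (b1 + b2))
    # Error cache: refresh all cached errors incrementally instead of recomputing the
    # full decision function per sample (the acceleration technique cited in the abstract).
    K_i = np.array([kernel(X[i], x) for x in X])
    K_j = np.array([kernel(X[j], x) for x in X])
    E += y[i] * (ai_new - alpha[i]) * K_i + y[j] * (aj_new - alpha[j]) * K_j + (b_new - b)
    alpha[i], alpha[j] = ai_new, aj_new
    return b_new
```

In a hardware realization it is the kernel evaluations in the error-cache refresh that get replicated, so the smaller each kernel unit is, the more of them fit on a chip of a given size; that is the area argument the abstract makes for the HFK.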
