
Feature extraction via dependence structure optimization / Požymių išskyrimas optimizuojant priklausomumo struktūrą

In many important real-world applications the initial representation of the data is inconvenient,
or even prohibitive, for further analysis. For example, high-dimensional, massive, structured,
incomplete, and noisy data sets are common in image analysis, text analysis, and computational
genetics. Therefore, feature extraction, i.e. the revelation of informative
features in raw data, is one of the fundamental machine learning problems.
Efficient feature extraction helps to understand the data and the process that generates it,
and to reduce the costs of future measurements and data analysis. Representing structured
data as a compact set of informative numeric features allows applying well-studied
machine learning techniques instead of developing new ones.
The dissertation focuses on supervised and semi-supervised feature extraction methods
that optimize the dependence structure of features. The dependence is measured using
the kernel estimator of the Hilbert-Schmidt norm of the covariance operator (the HSIC measure).
Two dependence structures are investigated: in the first case we seek features that
maximize the dependence on the dependent variable, and in the second we additionally
minimize the mutual dependence of the features. Linear and kernel formulations of
HBFE and HSCA are provided. Using the Laplacian regularization framework, we construct
semi-supervised variants of HBFE and HSCA.
The suggested algorithms were investigated experimentally using conventional and multilabel
classification data... [to full text] / Many practically important machine learning problems require the ability to handle high-dimensional, structured, nonlinear data. Image and text analysis, the analysis of social and business networks, and various bioinformatics problems are examples of such tasks. Feature extraction is therefore often the first step of data analysis, and the quality of the final result depends on it. The object of study of this dissertation is feature extraction algorithms based on the notion of dependence. The dependence investigated in the work is defined by the kernel estimator of the Hilbert-Schmidt norm of the covariance operator (the HSIC measure). The proposed HBFE and HSCA algorithms, which are based on this estimator, can work with data of arbitrary structure, are formulated in terms of eigenvectors (which allows standard packages to be used for the optimization), and are applicable not only to supervised but also to semi-supervised learning samples. In the latter case the HBFE and HSCA modifications rely on Laplacian regularization. Experiments with classification and multilabel classification data show that the proposed algorithms improve classification performance compared with PCA or LDA.

Identifier: oai:union.ndltd.org:LABT_ETD/oai:elaba.lt:LT-eLABa-0001:E.02~2012~D_20121001_093645-66010
Date: 01 October 2012
Creators: Daniušis, Povilas
Contributors: Vaitkus, Pranas, Žilinskas, Antanas, Navakauskas, Dalius, Vaicekauskas, Rimantas, Bastys, Algirdas, Simutis, Rimvydas, Bikelis, Algimantas Jonas, Baronas, Romas, Vilnius University
Publisher: Lithuanian Academic Libraries Network (LABT), Vilnius University
Source Sets: Lithuanian ETD submission system
Language: English
Detected Language: English
Type: Doctoral thesis
Format: application/pdf
Source: http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20121001_093645-66010
Rights: Unrestricted
