About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Aggregated Learning: An Information Theoretic Framework to Learning with Neural Networks

Soflaei Shahrbabak, Masoumeh 04 November 2020 (has links)
Deep learning techniques have achieved profound success in many challenging real-world applications, including image recognition, speech recognition, and machine translation. This success has increased the demand for developing deep neural networks and more effective learning approaches. The aim of this thesis is to consider the problem of learning a neural network classifier and to propose a novel approach to solving this problem under the Information Bottleneck (IB) principle. Based on the IB principle, we associate with the classification problem a representation learning problem, which we call "IB learning". A careful investigation shows that there is an unconventional quantization problem closely related to IB learning. We formulate this problem and call it "IB quantization". We show that IB learning is, in fact, equivalent to the IB quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a vector quantization approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted with some variational techniques, results in a novel learning framework that we call "Aggregated Learning (AgrLearn)" for classification with neural network models. In this framework, several objects are jointly classified by a single neural network; that is, AgrLearn can simultaneously optimize against multiple data samples, which differs from standard neural networks. Within this framework, two variants are introduced: "deterministic AgrLearn (dAgrLearn)" and "probabilistic AgrLearn (pAgrLearn)". We verify the effectiveness of the framework through extensive experiments on standard image recognition tasks. We also show its performance on a real-world natural language processing (NLP) task, sentiment analysis, and compare its effectiveness with other available frameworks for the IB learning problem.
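For reference, the IB principle this abstract invokes is usually stated as the following variational problem (the standard Tishby-style formulation; the notation and the trade-off parameter β are conventional, not taken from the thesis itself):

\[
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y), \qquad \beta > 0,
\]

where X is the input, Y the label, and T the learned representation, with T depending on Y only through X. Larger β favors preserving label-relevant information I(T;Y); smaller β favors compressing the input, i.e., reducing I(X;T).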
2

On Non-Convex Splitting Methods For Markovian Information Theoretic Representation Learning

Teng Hui Huang (12463926) 27 April 2022 (has links)
In this work, we study a class of Markovian information-theoretic optimization problems motivated by recent interest in adopting mutual information as a performance metric, which has shown evident success in representation learning, feature extraction, and clustering problems. In particular, we focus on the information bottleneck (IB) and privacy funnel (PF) methods and their recent multi-view and multi-source generalizations, which have gained attention because performance improves significantly with multi-view, multi-source data. Nonetheless, the generalized problems challenge existing IB and PF solvers in terms of complexity and their ability to tackle large-scale data.

To address this, we study both the IB and PF under a unified framework and propose solving it through splitting methods, including renowned algorithms such as the alternating direction method of multipliers (ADMM), Peaceman-Rachford splitting (PRS), and Douglas-Rachford splitting (DRS) as special cases. Our convergence analysis and locally linear rate-of-convergence results give rise to new splitting-method-based IB and PF solvers that can be easily generalized to multi-view IB and multi-source PF. We implement the proposed methods with gradient descent and empirically evaluate the new solvers on both synthetic and real-world datasets. Our numerical results demonstrate improved performance over the state-of-the-art approach with a significant reduction in complexity. Furthermore, we consider the practical scenario where there is a distribution mismatch between the training and testing data-generating processes under a known bounded-divergence constraint. In analyzing the generalization error, we develop new techniques inspired by the input-output mutual information approach and tighten the existing generalization error bounds.
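For context on the splitting methods listed above, the generic two-block ADMM iteration in scaled form (a standard textbook statement, not the thesis's specific non-convex IB/PF instantiation) for minimizing f(x) + g(z) subject to Ax + Bz = c is:

\[
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\,\bigl\lVert Ax + Bz^{k} - c + u^{k} \bigr\rVert^{2},\\
z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\,\bigl\lVert Ax^{k+1} + Bz - c + u^{k} \bigr\rVert^{2},\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{aligned}
\]

where ρ > 0 is the penalty parameter and u the scaled dual variable. PRS and DRS are related operator-splitting schemes applied to the same two-block structure, which is why the unified framework can recover all three as special cases.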
3

Implementation and verification of the Information Bottleneck interpretation of deep neural networks

Liu, Feiyang January 2018 (has links)
Although deep neural networks (DNNs) have made remarkable achievements in various fields, there is still no matching practical theory able to explain DNNs' performance. Tishby (2015) proposed a new insight for analyzing DNNs via the Information Bottleneck (IB) method. By visualizing how much relevant information each layer contains about the input and output, he claimed that DNN training is composed of a fitting phase and a compression phase. The fitting phase is when DNNs learn information about both input and output, and prediction accuracy rises during this process. Afterwards comes the compression phase, when information about the output is preserved while unrelated information about the input is discarded in the hidden layers. This is a tradeoff between network complexity (complicated DNNs lose less information about the input) and prediction accuracy, which is the same goal as the IB method. In this thesis, we verify this IB interpretation, first by reimplementing Tishby's work, where the hidden-layer distribution is approximated by a histogram (binning). Additionally, we introduce various mutual information estimation methods, such as kernel density estimators. Based on the simulation results, we conclude that there exists an optimal bound on the mutual information between the hidden layers and the input and output, but that the compression mainly occurs when the activation function is "double saturated", like the hyperbolic tangent function. Furthermore, we extend the work to a simulated wireless model where the data set is generated by a wireless system simulator. The results reveal that the IB interpretation holds, but binning is not a correct tool for approximating hidden-layer distributions. The findings of this thesis reflect the information variations in each layer during training, which might contribute to selecting transmission parameter configurations in each frame in wireless communication systems.
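A minimal sketch of the binning estimator this abstract refers to (an illustrative reconstruction under common conventions, not the thesis's code; the function names, bin count, and toy data are assumptions):

```python
import numpy as np

def discretize(activations, n_bins=30):
    """Map continuous hidden-layer activations to per-unit bin indices."""
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    return np.digitize(activations, edges[1:-1])  # shape (n_samples, n_units)

def mutual_information(x_ids, t_binned):
    """Estimate I(X; T) in bits: each binned activation row is treated as one
    discrete symbol, and joint frequencies are counted against labels x_ids."""
    symbols = {}
    t_ids = [symbols.setdefault(tuple(row), len(symbols)) for row in t_binned]
    joint = np.zeros((int(x_ids.max()) + 1, len(symbols)))
    for x, t in zip(x_ids, t_ids):
        joint[x, t] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # p(x)
    pt = joint.sum(axis=0, keepdims=True)  # p(t)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ pt)[nz])).sum())

# Toy example: MI between 10 input classes and a 5-unit tanh layer.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
hidden = np.tanh(rng.normal(size=(1000, 5)) + 0.1 * labels[:, None])
print(mutual_information(labels, discretize(hidden)))
```

The estimator's sensitivity to the bin count and to saturating activations is precisely why the thesis finds binning unreliable for approximating hidden-layer distributions.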
4

Learning competitive ensemble of information-constrained primitives

Sodhani, Shagun 07 1900 (has links)
No description available.
