The present thesis addresses several machine learning problems involving generative and predictive models for sequential data. All the models considered have in common that they can be defined in terms of finite-state machines. In one line of work we study algorithms for learning the probabilistic analog of Deterministic Finite Automata (DFA). This provides a fairly expressive generative model for sequences with very interesting algorithmic properties. State-merging algorithms for learning these models can be interpreted as a divisive clustering scheme where the "dependency graph" between clusters is not necessarily a tree. We characterize these algorithms in terms of statistical queries and use this characterization to prove a lower bound with an explicit dependency on the distinguishability of the target machine. In a more realistic setting, we give an adaptive state-merging algorithm satisfying the stringent algorithmic constraints of the data streams computing paradigm. Our algorithms come with strict PAC learning guarantees. At the heart of state-merging algorithms lies a statistical test for distribution similarity. In the streaming version this is replaced with a bootstrap-based test which yields faster convergence in many situations. We also study a wider class of models for which the state-merging paradigm also yields PAC learning algorithms. Applications of this method are given to continuous-time Markovian models and stochastic transducers on pairs of aligned sequences. The main tools used to obtain these results include a variety of concentration inequalities and sketching algorithms.
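To make the role of the similarity test concrete, below is a minimal sketch of a Hoeffding-style compatibility check between the empirical next-symbol distributions of two candidate states, in the spirit of classical state-merging algorithms for probabilistic automata. The function names, the particular test, and the confidence parameter are illustrative assumptions, not the exact test analyzed in the thesis (which, in the streaming setting, is replaced by a bootstrap-based test).

```python
import math

def hoeffding_compatible(n1, c1, n2, c2, alpha=0.05):
    """Return True if the empirical frequencies c1/n1 and c2/n2 could plausibly
    come from the same underlying probability, using a Hoeffding-style bound."""
    if n1 == 0 or n2 == 0:
        return True  # too little data to distinguish the two states
    slack = math.sqrt(0.5 * math.log(2.0 / alpha)) * (1.0 / math.sqrt(n1) + 1.0 / math.sqrt(n2))
    return abs(c1 / n1 - c2 / n2) < slack

def states_compatible(counts1, counts2, alpha=0.05):
    """Compare the empirical next-symbol distributions of two candidate states.
    counts1, counts2 map each symbol (including a stop marker) to its observed count."""
    n1, n2 = sum(counts1.values()), sum(counts2.values())
    for symbol in set(counts1) | set(counts2):
        if not hoeffding_compatible(n1, counts1.get(symbol, 0),
                                    n2, counts2.get(symbol, 0), alpha):
            return False  # states look distinguishable: do not merge
    return True  # no significant difference found: merging is allowed

# Example: two states with similar outgoing statistics are judged compatible.
print(states_compatible({'a': 40, 'b': 10, '$': 50}, {'a': 38, 'b': 12, '$': 48}))
```

A state-merging learner repeatedly applies a test of this kind: when the test fails, the two candidate states are kept distinct; when it passes for all observed symbols, the states are merged and their statistics pooled.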
In another line of work we contribute to the rapidly growing body of spectral learning algorithms. The main virtues of this type of algorithm include the possibility of proving finite-sample error bounds in the realizable case and enormous savings in computing time over iterative methods like Expectation-Maximization. In this thesis we give the first application of this method to learning conditional distributions over pairs of aligned sequences defined by probabilistic finite-state transducers. We also prove that the method can learn the whole class of probabilistic automata, thus extending the class of models previously known to be learnable with this approach. In the last two chapters we present work combining spectral learning with methods from convex optimization and matrix completion. Respectively, these yield an alternative interpretation of spectral learning and an extension to cases with missing data. In the latter case we use a novel joint stability analysis of matrix completion and spectral learning to prove the first generalization bound for this type of algorithm that holds in the non-realizable case. Work in this area has been motivated by connections between spectral learning, classic automata theory, and statistical learning; tools from all three areas have been used.
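As an illustration of the spectral approach, the sketch below shows the standard recovery of a weighted automaton from an empirical Hankel matrix via a truncated SVD. It assumes the Hankel matrix H, the per-symbol shifted matrices H_sigma, and the prefix/suffix probability vectors have already been estimated from a sample; the names spectral_wfa and score, and the use of NumPy, are assumptions made for this example rather than a description of the thesis's implementation.

```python
import numpy as np

def spectral_wfa(H, H_sigma, h_prefix, h_suffix, rank):
    """Recover a weighted automaton from empirical Hankel statistics.

    H        : |P| x |S| Hankel matrix of empirical string probabilities
    H_sigma  : dict mapping each symbol s to its shifted Hankel matrix H_s
    h_prefix : length-|P| vector of probabilities of prefixes followed by the empty suffix
    h_suffix : length-|S| vector of probabilities of the empty prefix followed by each suffix
    rank     : number of states of the recovered automaton
    """
    # Truncated SVD provides the low-rank factorization H ~ U diag(d) V^T.
    U, d, Vt = np.linalg.svd(H, full_matrices=False)
    U, d, V = U[:, :rank], d[:rank], Vt[:rank, :].T

    pinv = np.diag(1.0 / d) @ U.T                        # pseudo-inverse of the left factor
    alpha0 = h_suffix @ V                                # initial weight vector
    alpha_inf = pinv @ h_prefix                          # termination weight vector
    A = {s: pinv @ Hs @ V for s, Hs in H_sigma.items()}  # one transition operator per symbol
    return alpha0, A, alpha_inf

def score(alpha0, A, alpha_inf, string):
    """Value assigned to a string: alpha0^T A[x1] ... A[xt] alpha_inf."""
    v = alpha0
    for symbol in string:
        v = v @ A[symbol]
    return float(v @ alpha_inf)
```

The truncated SVD plays the role of the low-rank factorization of the Hankel matrix, and the chosen rank determines the number of states; when Hankel entries are missing, this factorization step is where matrix-completion techniques enter the picture.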
Identifier | oai:union.ndltd.org:TDX_UPC/oai:www.tdx.cat:10803/129070
Date | 12 July 2013
Creators | Balle Pigem, Borja de
Contributors | Gavaldà Mestre, Ricard; Castro, Jorge (Castro Rabal); Universitat Politècnica de Catalunya. Departament de Llenguatges i Sistemes Informàtics
Publisher | Universitat Politècnica de Catalunya
Source Sets | Universitat Politècnica de Catalunya
Language | English
Detected Language | English
Type | info:eu-repo/semantics/doctoralThesis, info:eu-repo/semantics/publishedVersion
Format | 175 p., application/pdf
Source | TDX (Tesis Doctorals en Xarxa)
Rights | Access to the contents of this thesis is subject to acceptance of the terms of use established by the following Creative Commons license: http://creativecommons.org/licenses/by-nc-sa/3.0/es/, info:eu-repo/semantics/openAccess