The overarching goal of this thesis is to develop the representational frameworks, inference algorithms, and learning methods needed to accurately model domains that exhibit complex, non-local dependency structure. The thesis has three parts. In the first, we develop a toolbox of high-order potentials (HOPs) for defining interactions and constraints that would be inefficient or otherwise difficult to express within the standard graphical modeling framework. For each potential, we develop associated algorithms so that the interaction can be used efficiently in a variety of settings. We further show that this HOP toolbox is useful not only for defining models but also for defining loss functions.
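As an illustrative sketch (not code from the thesis itself), consider one classic high-order potential: a cardinality potential, which scores a set of binary variables only through how many are switched on. Although it couples all n variables at once, its structure admits exact MAP inference in O(n log n) by sorting; the function name `map_with_cardinality` and its signature are assumptions for this example.

```python
import numpy as np

def map_with_cardinality(unary, card_score):
    """MAP assignment under unary scores plus a cardinality HOP.

    unary[i]:      score gained by setting y_i = 1 (vs 0)
    card_score[k]: score for having exactly k variables on (length n + 1)
    """
    n = len(unary)
    unary = np.asarray(unary, dtype=float)
    # For a fixed count k, the best choice is the k largest unary gains.
    order = np.argsort(unary)[::-1]
    prefix = np.concatenate([[0.0], np.cumsum(unary[order])])
    # Combine the best-unary total for each k with the cardinality term.
    totals = prefix + np.asarray(card_score, dtype=float)
    k_star = int(np.argmax(totals))
    y = np.zeros(n, dtype=int)
    y[order[:k_star]] = 1
    return y, float(totals[k_star])
```

The key point is that the naive factor over all n variables has 2^n entries, while exploiting the cardinality structure reduces MAP inference to a sort and a scan.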
In the second part, we examine the similarities and differences between special-purpose and general-purpose inference algorithms, with the aim of learning from the special-purpose algorithms in order to build better general-purpose ones. Specifically, we show how to cast a popular special-purpose algorithm (graph cuts) in terms of the degrees of freedom available to a popular general-purpose algorithm (max-product belief propagation). We then use the lessons learned to build a better general-purpose algorithm.
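For readers unfamiliar with the general-purpose side of this comparison, the sketch below shows max-product belief propagation on a chain, where it reduces to the Viterbi recursion and is exact; this is a standard textbook instance, not the thesis's algorithm, and the name `chain_map` is an assumption for this example.

```python
import numpy as np

def chain_map(unary, pairwise):
    """Exact MAP on a chain MRF via max-product message passing.

    unary:    (T, K) array, unary[t, k] = score of state k at position t
    pairwise: (K, K) array, pairwise[j, k] = score of transition j -> k
    """
    T, K = unary.shape
    msg = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    msg[0] = unary[0]
    for t in range(1, T):
        # scores[j, k]: best score ending in state j at t-1, then j -> k.
        scores = msg[t - 1][:, None] + pairwise
        back[t] = np.argmax(scores, axis=0)
        msg[t] = unary[t] + np.max(scores, axis=0)
    # Backtrack from the best final state to recover the MAP assignment.
    y = np.zeros(T, dtype=int)
    y[-1] = int(np.argmax(msg[-1]))
    for t in range(T - 1, 0, -1):
        y[t - 1] = back[t, y[t]]
    return y
```

On loopy graphs these same messages are no longer exact, which is precisely where the contrast with specialized algorithms such as graph cuts becomes interesting.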
Finally, we develop a class of models that allows the discrete optimization algorithms studied in the earlier parts (as well as other discrete optimization algorithms) to serve as the centerpiece of probabilistic models. This lets us build probabilistic models with fast, exact inference procedures in domains where the standard probabilistic formulation would be intractable.
Date: 13 August 2013
Contributors: Zemel, Richard S.
Source Sets: University of Toronto