Volume 6 - Issue 4
A two-stage learning framework of relational Markov networks
Abstract
Relational Markov networks (RMNs), which extend Markov random fields (or Markov networks) to the relational setting, are a joint probabilistic modeling framework for an entire collection of related entities. RMNs handle relational data effectively by integrating information from the content attributes of individual entities as well as the links between them. When learning RMNs, optimization algorithms such as conjugate gradient are combined with approximate probabilistic inference to estimate the parameters. However, the computational cost of learning an RMN is very high because the approximate inference algorithm must be invoked repeatedly throughout the training process, so the training time is often unacceptable when the scale of the RMN is large. In this paper, we propose a two-stage learning framework for RMNs. We divide the cliques of an RMN into two categories: evidence cliques and compatibility cliques. In the first stage we estimate the parameters of the evidence cliques in the flat setting, without taking the link attributes into account; in the second stage we estimate the parameters of both the evidence and compatibility cliques, starting from the results of the first stage. Experimental results on a bibliographic collective classification task show that the two-stage learning framework significantly increases the convergence speed of the optimization algorithms and saves much training time when learning an RMN.
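The two-stage idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a toy pairwise model with a single shared compatibility weight `c`, exact normalization by enumeration (feasible only for a handful of nodes), and finite-difference gradient ascent in place of conjugate gradient with approximate inference. The function names (`fit_evidence_stage`, `fit_joint_stage`, `joint_loglik`) and all hyperparameters are illustrative assumptions.

```python
import numpy as np
from itertools import product

def fit_evidence_stage(X, y, lr=0.5, iters=300):
    """Stage 1: estimate evidence-clique weights as a flat logistic
    model over node features, ignoring the link structure entirely."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # per-node label probabilities
        w += lr * X.T @ (y - p) / len(y)       # logistic-regression gradient step
    return w

def joint_loglik(X, y, edges, w, c):
    """Exact log-likelihood of a tiny pairwise model: per-node evidence
    scores X[i] @ w plus one shared compatibility weight c rewarding
    same-label edges, normalized by enumerating all assignments."""
    def score(lab):
        s = sum(lab[i] * float(X[i] @ w) for i in range(len(lab)))
        return s + c * sum(lab[i] == lab[j] for i, j in edges)
    logZ = np.log(sum(np.exp(score(a))
                      for a in product((0, 1), repeat=len(y))))
    return score(tuple(y)) - logZ

def fit_joint_stage(X, y, edges, w0, lr=0.1, iters=150, eps=1e-4):
    """Stage 2: refine all parameters (evidence weights w and the
    compatibility weight c) jointly, warm-started at the stage-1 w0.
    Finite-difference gradients stand in for the real inference step."""
    theta = np.append(w0, 0.0)                 # last entry is c
    def ll(t):
        return joint_loglik(X, y, edges, t[:-1], t[-1])
    for _ in range(iters):
        g = np.zeros_like(theta)
        for k in range(len(theta)):
            d = np.zeros_like(theta); d[k] = eps
            g[k] = (ll(theta + d) - ll(theta - d)) / (2 * eps)
        theta += lr * g
    return theta[:-1], theta[-1]
```

Warm-starting the joint optimization at the stage-1 weights is what the framework exploits: the evidence parameters begin near a good flat-model solution, so far fewer (expensive) inference-backed gradient steps are needed in stage 2 than when starting from zero.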
Paper Details
PaperID: 77956953423
Authors: Wan, H., Lin, Y., Wu, Z., Huang, H.
Volume: Volume 6
Issue: Issue 4
Keywords: Approximate probabilistic inference, Maximum a posteriori, Optimization, Relational Markov networks
Year: 2010
Month: April
Pages: 1027 - 1035