LEMUR searches the space of possible theories using a Monte Carlo Tree Search (MCTS) algorithm.
It is an approximate search method based on a one-player game approach: it casts the problem of learning the structure of a probabilistic logic program as a multi-armed bandit problem, relying on the Monte Carlo tree search UCT algorithm [Kocsis and Szepesvari, 2006], which combines the precision of tree search with the generality of random sampling. In particular, LEMUR is applied to learning the structure of Logic Programs with Annotated Disjunctions (LPADs). For an overview of Monte Carlo Tree Search algorithms we refer the reader to [Browne et al, 2012].
When applying the UCT algorithm, we treat each clause to be added to the theory as a bandit problem, where each legal clause revision is an arm with unknown reward. We start from the empty theory and iteratively add clauses to it. Each node of the search tree is a clause, and its children are its revisions, selected according to the UCT formula. The reward of a clause/arm is computed by learning the parameters of the current theory plus the clause with EMBLEM and using the resulting log-likelihood (LL). The iterative process stops when adding a new clause does not improve the LL. Clause revisions are performed by a downward refinement operator following a language bias specified in terms of mode declarations [Muggleton, 1995].
The algorithm is available in the git repository https://bitbucket.org/ebellodi/yap-lemur, which is a user clone of the Yap 6.2.2 repository. To download it, run
git clone https://bitbucket.org/ebellodi/yap-lemur
Other SRL algorithms
The system has been tested on various real-world datasets and has shown good classification performance with respect to other state-of-the-art statistical relational learning approaches. In particular, the experiments compared LEMUR to:
- SLIPCASE, SLIPCOVER (for Probabilistic Logic Programs)
- BUSL, LSM, ALEPH++ExactL1, MLN-BC, MLN-BT (for Markov Logic Networks)
- RDN-B (for Relational Dependency Networks)
- SLS and RRR (for structure learning by randomized search techniques)
on seven datasets, whose references can be found in the corresponding paper. The knowledge bases, language bias files (when needed), and commands for executing the algorithms are available at https://bitbucket.org/ebellodi/lemur and can be downloaded with
git clone https://bitbucket.org/ebellodi/lemur
[Kocsis and Szepesvari, 2006] Kocsis L, Szepesvári C (2006) Bandit based Monte-Carlo planning. In: Proceedings of the 17th European Conference on Machine Learning, Springer-Verlag, pp 282-293.
[Browne et al, 2012] Browne C, Powley EJ, Whitehouse D, Lucas SM, Cowling PI, Rohlfshagen P, Tavener S, Perez D, Samothrakis S, Colton S (2012) A survey of Monte Carlo Tree Search methods. IEEE Transactions on Computational Intelligence and AI in Games 4(1):1-43.
[Muggleton, 1995] Muggleton S (1995) Inverse entailment and Progol. New Generation Computing 13(3-4):245-286.