Matrix Factorization (MF) models have seen tremendous success in relation extraction tasks. However, they continue to perform much worse than Tensor Factorization (TF) models on the task of Knowledge Base Inference (KBI). We further analyze the application of MF to KBI, starting by proposing a new evaluation protocol for MF models that makes their comparison with TF models fair. Under this protocol, MF performance drops steeply. We attribute the drop to the high out-of-vocabulary (OOV) rate of entity pairs in the test folds of commonly used datasets, and propose three extensions to alleviate it. Our best model shows a dramatic increase in performance across all datasets and remains robust to diverse dataset characteristics (arxiv/1706.00637).
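
The MF-versus-TF contrast and the OOV issue the abstract refers to can be illustrated with a toy sketch: MF assigns one embedding per (subject, object) entity pair, so a pair never seen in training cannot be scored at all, whereas TF (shown here in a DistMult-style form) composes per-entity embeddings and can score any pair of known entities. This is an illustrative assumption-laden sketch, not the paper's implementation; all names, dimensions, and the choice of DistMult as the TF example are assumptions.

```python
# Minimal sketch (not the paper's code) contrasting MF and TF scoring for KBI.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy vocabularies (illustrative only).
entities = ["paris", "france", "tokyo", "japan"]
relations = ["capital_of", "located_in"]
entity_pairs = [("paris", "france"), ("tokyo", "japan")]

# --- Matrix Factorization: one embedding per (subject, object) pair ---
pair_emb = {ep: rng.normal(size=dim) for ep in entity_pairs}
rel_emb_mf = {r: rng.normal(size=dim) for r in relations}

def mf_score(e1, e2, r):
    """Score fact (e1, r, e2) as a dot product between the entity-pair
    embedding and the relation embedding. Pairs unseen in training are
    out-of-vocabulary and cannot be scored, which is the issue the
    abstract attributes the performance drop to."""
    ep = (e1, e2)
    if ep not in pair_emb:
        return None  # OOV entity pair
    return float(pair_emb[ep] @ rel_emb_mf[r])

# --- Tensor Factorization: one embedding per entity (DistMult-style) ---
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb_tf = {r: rng.normal(size=dim) for r in relations}

def tf_score(e1, e2, r):
    """DistMult-style score: sum_k e1_k * r_k * e2_k. Defined for any pair
    of entities seen in training, even if the pair itself is new."""
    return float(np.sum(ent_emb[e1] * rel_emb_tf[r] * ent_emb[e2]))

print(mf_score("paris", "france", "capital_of"))  # observed pair: scored
print(mf_score("paris", "japan", "capital_of"))   # None: OOV entity pair
print(tf_score("paris", "japan", "capital_of"))   # still scored by TF
```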