Statistical Relational Models, such as Markov Logic Networks (MLNs), combine the relational structure and the uncertainty exhibited by real-world domains in a single framework. Inference (and hence learning) in these models is significantly harder than in the propositional setting because instances are not assumed to be i.i.d.: the ground network grows rapidly with the domain size, and the state space is exponential in the number of ground atoms. Our work addresses these challenges by proposing methods that exploit symmetries in the network to obtain solutions that scale to large domain sizes. Specifically, we propose scalable approaches for MAP inference, for marginal inference in the presence of constraints, and for parameter learning when the test data is large.
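As a rough illustration of this blow-up (an assumed example, not one drawn from this work): grounding the classic MLN rule $\mathit{Smokes}(x) \land \mathit{Friends}(x,y) \Rightarrow \mathit{Smokes}(y)$ over a domain of $n$ constants yields

\[
\underbrace{n}_{\mathit{Smokes}} + \underbrace{n^2}_{\mathit{Friends}} \ \text{ground atoms}, \qquad n^2 \ \text{ground clauses}, \qquad 2^{\,n + n^2} \ \text{joint states},
\]

so even a single two-variable formula makes exact propositional inference infeasible for modest $n$, which is what motivates symmetry-exploiting methods that avoid enumerating the ground state space.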