Simultaneous Localization and Mapping (SLAM) from a monocular video camera is a well-researched area. Nowadays, many people use head-mounted cameras. Videos captured by such cameras are called egocentric videos, because the scene they capture is similar to what the wearer sees. Reliably estimating camera pose and 3D geometry from egocentric videos remains a challenge because of the unconstrained, "in the wild" nature of such videos and the large parallax they exhibit. Moreover, during forward motion the scene changes very little in proportion to the distance traversed. Current SLAM techniques give inferior results in such cases. We propose a novel batch-mode, structure-from-motion based technique for robust SLAM in such scenarios. This talk will focus on the problems with current SLAM techniques and will give a snapshot of our method along with some of our results.