Robotics, ML, Perception
I am an M.S. Robotics student at Arizona State University. I do research in machine learning with a focus on 3D vision and robotics. My master's thesis is on learning-based visual odometry. My research is driven by a broader interest in shape understanding at both the structural and semantic levels.
M.S., Robotics | Arizona State University |
B.Tech., Electronics and Communication | SRM University |
I am very happy to share that we submitted our latest research to ICRA 2025!
A Self-Supervised Approach for Learning-Based Visual Localization. It is a novel use of a contrastive loss
function for finding the SE(3) pose of an object.
Concepts: Epipolar geometry, Siamese networks, Contrastive loss, Optical flow.
View Code
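As a rough illustration of the idea only (not the submitted method; the encoder architecture, margin, and similarity labels below are placeholder assumptions), a Siamese encoder trained with a pairwise contrastive loss looks like this:

```python
# Minimal sketch: a shared (Siamese) encoder with a pairwise contrastive loss,
# where image pairs with a similar relative SE(3) pose are pulled together and
# dissimilar pairs pushed apart. Names and values here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseEncoder(nn.Module):
    """Shared CNN backbone that maps an image to a normalized embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, embed_dim)

    def forward(self, x):
        feat = self.backbone(x).flatten(1)
        return F.normalize(self.head(feat), dim=1)

def contrastive_loss(z1, z2, is_similar, margin=1.0):
    """Classic pairwise contrastive loss over embedding distances."""
    d = F.pairwise_distance(z1, z2)
    loss_pos = is_similar * d.pow(2)
    loss_neg = (1 - is_similar) * F.relu(margin - d).pow(2)
    return (loss_pos + loss_neg).mean()

# Toy usage: two views per sample and a binary "similar relative pose" label.
encoder = PoseEncoder()
img_a, img_b = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
loss = contrastive_loss(encoder(img_a), encoder(img_b), labels)
loss.backward()
```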
MS Thesis - Visual Odometry using Contrastive Learning: designing a supervised contrastive regression loss for visual pose estimation. Translational accuracy: 0.76 cm; rotational accuracy: 1e-6 radians on the KITTI dataset. Advisor: Dr. Yezhou Yang
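For context, translational and rotational errors of the kind quoted above are conventionally measured between estimated and ground-truth SE(3) poses as below; this is a generic sketch, not the thesis evaluation code.

```python
# Generic pose-error computation between an estimated and a ground-truth
# SE(3) pose, both given as 4x4 homogeneous matrices.
import numpy as np

def pose_errors(T_est, T_gt):
    """Return (translation error in metres, rotation error in radians)."""
    # Relative transform between estimate and ground truth.
    T_err = np.linalg.inv(T_gt) @ T_est
    t_err = np.linalg.norm(T_err[:3, 3])
    # Geodesic rotation distance from the trace of the relative rotation.
    cos_angle = np.clip((np.trace(T_err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.arccos(cos_angle)
    return t_err, r_err

# Toy example: identity ground truth vs. a slightly offset estimate.
T_gt = np.eye(4)
T_est = np.eye(4)
T_est[:3, 3] = [0.005, 0.0, 0.003]   # 5 mm / 3 mm offset
print(pose_errors(T_est, T_gt))
```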
Developed an RGB-D camera-based, highly accurate (<0.02 m error) box-dimensioning system for a mid-size warehouse and applied convex optimization techniques to maximize trucking efficiency by optimizing cargo volume and delivery-address arrangement, saving up to 5 trips, equivalent to approximately $1000 per day.
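A minimal sketch of one common way to dimension a box from a segmented RGB-D point cloud (illustrative only; this is not the deployed system and does not reproduce its accuracy):

```python
# Fit a PCA-aligned bounding box to the points on the box surface and read
# off its extents as the estimated dimensions.
import numpy as np

def box_dimensions(points):
    """points: (N, 3) array of 3D points on the box surface (metres)."""
    centered = points - points.mean(axis=0)
    # Principal axes of the point cloud give the box orientation.
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ axes.T
    extents = aligned.max(axis=0) - aligned.min(axis=0)
    return np.sort(extents)[::-1]   # length, width, height

# Toy example: points sampled inside a 0.60 x 0.40 x 0.30 m box.
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0, 0], [0.60, 0.40, 0.30], size=(5000, 3))
print(box_dimensions(pts))   # approximately [0.60, 0.40, 0.30]
```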
Performed semantic segmentation of infrastructure objects in 3D point cloud data collected via vehicle-mounted LiDAR, achieving 92% IoU using a PointNet architecture trained on the Waymo dataset to track 6D object poses.
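The IoU figure above is the standard intersection-over-union metric; a minimal, generic version of the computation is sketched below (not the project code):

```python
# Mean per-class IoU between predicted and ground-truth point labels.
import numpy as np

def mean_iou(pred, gt, num_classes):
    """pred, gt: integer label arrays of shape (N,)."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 1, 1, 2, 2, 2])
gt   = np.array([0, 1, 2, 2, 2, 2])
print(mean_iou(pred, gt, num_classes=3))
```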
• Contributed to the warehouse drone automation team by deploying various perception and mapping algorithms.
• Integrated a Particle Filter, an EKF, and YOLO semantic segmentation into the PX4 autopilot.
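As an illustration of the filtering side of that work, a generic EKF measurement-update step looks like this (not the actual PX4 integration; the state layout and noise values are assumptions):

```python
# One Kalman measurement update: fuse a position measurement into a
# [x, y, vx, vy] state estimate.
import numpy as np

def ekf_update(x, P, z, H, R):
    """Update state x and covariance P with measurement z."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.array([0.0, 0.0, 1.0, 0.5])           # position and velocity
P = np.eye(4) * 0.1
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.]])  # only position is observed
R = np.eye(2) * 0.05
z = np.array([0.12, -0.03])                  # noisy position fix
x, P = ekf_update(x, P, z, H, R)
print(x)
```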
Creating a 3D point map of a rocky mountain using ORB-SLAM while autonomously navigating and landing on a moving rover using optical flow. (Concepts: ORB-SLAM, ROS, Gazebo simulation, Optical flow) View Code
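A small, hedged sketch of Lucas-Kanade optical flow with OpenCV, of the kind used to estimate a target's apparent motion between frames (synthetic images below, not the project code):

```python
# Track corner features between two frames and average their displacement.
import cv2
import numpy as np

# Synthetic frames: a bright square that moves 3 px right, 2 px down.
prev = np.zeros((240, 320), np.uint8)
cv2.rectangle(prev, (100, 100), (140, 140), 255, -1)
curr = np.zeros((240, 320), np.uint8)
cv2.rectangle(curr, (103, 102), (143, 142), 255, -1)

# Detect corners in the previous frame and track them into the current one.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01, minDistance=5)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

# Mean displacement of successfully tracked points ~ apparent target motion.
good = status.ravel() == 1
flow = (p1[good] - p0[good]).reshape(-1, 2)
print("mean flow (dx, dy):", flow.mean(axis=0))
```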
A Self-Supervised Approach for Learning-Based Visual Localization. It is a novel use of a contrastive loss function for finding the SE(3) pose of an object. View Code
See blog-post
I have written a detailed article about camera calibration and bundle adjustment. View Code
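As a companion to the article, the reprojection residual that both camera calibration and bundle adjustment minimize can be written with a simple pinhole model; the intrinsics and pose below are made-up numbers for illustration:

```python
# Pinhole projection and the per-point reprojection error.
import numpy as np

def project(points_3d, K, R, t):
    """Project world points into the image with intrinsics K and pose (R, t)."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T        # world -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                      # perspective divide

def reprojection_error(points_3d, observed_uv, K, R, t):
    return np.linalg.norm(project(points_3d, K, R, t) - observed_uv, axis=1)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])            # camera 2 m from origin
pts = np.array([[0.1, 0.0, 0.5], [-0.2, 0.1, 1.0]])
obs = project(pts, K, R, t) + 0.5                      # pretend noisy detections
print(reprojection_error(pts, obs, K, R, t))
```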
Implementation of low-pass and high-pass filters on images, and phase swapping of two images
See blog-post
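A minimal sketch of those frequency-domain operations, assuming an ideal circular mask and NumPy's FFT (not the original assignment code):

```python
# Ideal low/high-pass filtering and phase swapping in the Fourier domain.
import numpy as np

def fft_filter(img, cutoff, low_pass=True):
    """Ideal circular low-pass or high-pass filter."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= cutoff if low_pass else dist > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def swap_phase(img_a, img_b):
    """Keep the magnitude spectrum of img_a but use the phase of img_b."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    return np.real(np.fft.ifft2(np.abs(Fa) * np.exp(1j * np.angle(Fb))))

a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
print(fft_filter(a, cutoff=20).shape, swap_phase(a, b).shape)
```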
Generation of a Gaussian pyramid and a Laplacian pyramid as a basic encoder-decoder model
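A short sketch of the pyramid construction and reconstruction using OpenCV's pyrDown/pyrUp; treating the Laplacian residuals as the "decoder" corrections is the analogy intended above:

```python
# Build Gaussian and Laplacian pyramids, then reconstruct the original image.
import cv2
import numpy as np

def build_pyramids(img, levels=4):
    gaussian = [img.astype(np.float32)]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    laplacian = []
    for i in range(levels):
        up = cv2.pyrUp(gaussian[i + 1], dstsize=gaussian[i].shape[1::-1])
        laplacian.append(gaussian[i] - up)            # detail lost at level i
    return gaussian, laplacian

def reconstruct(gaussian, laplacian):
    img = gaussian[-1]
    for lap in reversed(laplacian):
        img = cv2.pyrUp(img, dstsize=lap.shape[1::-1]) + lap
    return img

img = np.random.rand(256, 256).astype(np.float32)
g, l = build_pyramids(img)
print(np.abs(reconstruct(g, l) - img).max())   # near-zero reconstruction error
```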
Implemented a Deep Convolutional Generative Adversarial Network (DCGAN) for image generation, training it with the min-max adversarial loss from Goodfellow's original GAN paper, showcasing proficiency in deep learning and computer vision.
View Code
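A hedged sketch of the adversarial objective itself (just the losses, not the full DCGAN training loop; the logits below are stand-ins for discriminator outputs):

```python
# Discriminator and (non-saturating) generator losses for GAN training.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def d_loss(d_real_logits, d_fake_logits):
    """Discriminator maximizes log D(x) + log(1 - D(G(z)))."""
    real = bce(d_real_logits, torch.ones_like(d_real_logits))
    fake = bce(d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake

def g_loss(d_fake_logits):
    """Non-saturating generator loss: maximize log D(G(z))."""
    return bce(d_fake_logits, torch.ones_like(d_fake_logits))

# Toy logits standing in for discriminator outputs on real/generated batches.
real_logits = torch.randn(8, 1)
fake_logits = torch.randn(8, 1)
print(d_loss(real_logits, fake_logits).item(), g_loss(fake_logits).item())
```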
Balancing an inverted pendulum by programming an LQR controller. Designed the controller by checking the observability matrix, applying Kalman filtering, and verifying the controllability of the designed system.
View Code
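A minimal sketch of the LQR gain computation with SciPy; the linearized cart-pole matrices below are illustrative numbers, not the project's exact model:

```python
# Controllability check and LQR gain for a linearized inverted pendulum.
import numpy as np
from scipy.linalg import solve_continuous_are

# State: x = [cart position, cart velocity, pole angle, pole angular rate].
A = np.array([[0, 1, 0, 0],
              [0, 0, -1.0, 0],
              [0, 0, 0, 1],
              [0, 0, 11.0, 0]])
B = np.array([[0], [1.0], [0], [-1.0]])
Q = np.diag([10.0, 1.0, 10.0, 1.0])   # state cost
R = np.array([[1.0]])                 # input cost

# Controllability check, then solve the Riccati equation for the LQR gain K.
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(4)])
assert np.linalg.matrix_rank(ctrb) == 4
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print("LQR gain K:", K)
```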
IROS 2018 https://ieeexplore.ieee.org/document/8594129 M. Harikrishnan Nair, T. Ghanshyam Singh, G. Chourasia, A. Das, A. Shrivastava and Z. S. Bhatt, “Flamen 7 DOF robotic Arm to Manipulate a Spanish Fan,” 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 4152-4157, DOI: 10.1109/IROS.2018.8594129.
INDIACom 2019 https://ieeexplore.ieee.org/abstract/document/8991313 G. Chourasia et al., "7-DOF Robotic Manipulator for Autonomous Segregation using Transfer Learning," 2019 6th International Conference on Computing for Sustainable Global Development (INDIACom), IEEE, 2019.