Research

Intelligent robotic systems, control theory, reinforcement learning, and machine learning.

3 Publications · 1 Citation · 1 h-index · 2 Years Active · 3 Conferences

Stats updated manually; live metrics on Google Scholar.

Research Statement

My research focuses on intelligent robotic systems that operate reliably in real-world environments. My primary MSc research is on electric wheelchair robots, addressing control challenges that arise from real-world mechanical constraints such as non-configurable caster wheel drift, using reinforcement learning with residual networks and sim-to-real transfer. I have also worked on shape-aware path planning algorithms for the Smorphi reconfigurable modular robot, and on augmented reality systems for immersive indoor robot navigation using computer vision and depth estimation. My research broadly spans mobile robotics, reinforcement learning, motion control, and visual perception. I am currently targeting submission to ICRA 2027.

Research Interests

Reinforcement Learning for Robot Control

Training RL policies with residual networks for mobile robot control, learning to compensate for mechanical imperfections such as caster wheel drift, with sim-to-real transfer on physical hardware.
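The residual-control idea above can be sketched in a few lines. This is my own minimal illustration, not the author's implementation: the trained network outputs only a small, bounded correction that is added to a nominal controller's command, so the robot starts from reasonable behaviour and the policy need only learn to cancel systematic errors such as caster-wheel drift. The function names, observation layout, and the tiny linear-plus-tanh "network" are all hypothetical stand-ins.

```python
import numpy as np

def base_controller(target_v, target_w):
    """Nominal differential-drive command: (linear, angular) velocity."""
    return np.array([target_v, target_w])

def residual_policy(obs, weights):
    """Tiny stand-in for a trained residual network: linear layer + tanh.

    The tanh bounds the correction, so an untrained or poorly trained
    residual cannot overpower the nominal controller.
    """
    return 0.2 * np.tanh(weights @ obs)

def control(obs, target_v, target_w, weights):
    # Final command = nominal command + learned residual correction.
    return base_controller(target_v, target_w) + residual_policy(obs, weights)

obs = np.array([0.05, -0.02, 0.10])   # e.g. drift-related measurements
weights = np.zeros((2, 3))            # zero weights -> no correction yet
cmd = control(obs, 0.5, 0.0, weights)
print(cmd)                            # [0.5 0. ]: pure base command
```

With zero weights the residual vanishes and the base controller acts alone, which is exactly why residual policies are attractive for sim-to-real transfer: the initial policy is already safe on hardware.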

Mobile & Reconfigurable Robotics

Control and path planning for mobile robotic platforms, including wheelchair robots with non-ideal caster wheel mechanics and shape-aware navigation for the Smorphi reconfigurable modular robot. Includes sim-to-real transfer on physical hardware.

Motion Control & Dynamic Modeling

Kinematic and dynamic modeling of wheeled robotic systems, trajectory planning, and control architectures for reliable autonomous navigation in unstructured environments.
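As a concrete example of the kind of kinematic model this work builds on, here is the standard unicycle model for a wheeled robot, integrated with forward Euler. This is a generic textbook sketch of my own, not the author's model; the state is pose (x, y, theta) and the inputs are linear velocity v and angular velocity w.

```python
import numpy as np

# Unicycle kinematics: x' = v*cos(theta), y' = v*sin(theta), theta' = w.
def step(pose, v, w, dt):
    """Advance the pose (x, y, theta) one time step with forward Euler."""
    x, y, theta = pose
    return np.array([
        x + v * np.cos(theta) * dt,
        y + v * np.sin(theta) * dt,
        theta + w * dt,
    ])

pose = np.array([0.0, 0.0, 0.0])
for _ in range(100):                  # drive straight at 0.5 m/s for 1 s
    pose = step(pose, v=0.5, w=0.0, dt=0.01)
print(pose)                           # approximately [0.5, 0.0, 0.0]
```

Real platforms deviate from this ideal model (caster drag, wheel slip, asymmetric friction), and that gap between nominal kinematics and hardware behaviour is precisely what learned compensation targets.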

Computer Vision & Augmented Reality

Visual perception for robotic systems including depth estimation, semantic segmentation, 3D mapping, and AR-based interfaces for immersive indoor robot monitoring and human-robot interaction.

Publications

Peer-reviewed papers, preprints, and works in progress


Authors: M. Pasan Sanjaya Fernando, A. G. Tharindu Gimras, J. M. Budshan P. Jayasundara, D. P. Chandima

Integrates computer vision, AR, and cloud-based communication to create a real-time 3D map of a robot's surroundings for an immersive indoor robot control experience, with 1 cm-accurate depth estimation using a Microsoft Kinect V2 and semantic segmentation.

Augmented Reality · Computer Vision · Robot Monitoring System · Cloud Integration · 3D Object Rendering

Presented · ICIPRoB 2026 — IEEE (Pending)

ICIPRoB 2026 Paper 1 — Best Poster Award (Full citation coming soon)

Authors: To be updated upon IEEE publication

🏆 Best Poster Award. Presented as a poster at the 4th International Conference on Image Processing and Robotics (ICIPRoB 2026), March 7–8, 2026, Sri Lanka. Full citation available upon IEEE publication.

Mobile Robots · Prototype Design · Electric Wheelchairs · Caster Wheel Dynamics · Wheel Configuration

Presented · ICIPRoB 2026 — IEEE (Pending)

Presented Paper 2 — ICIPRoB 2026 (Full citation coming soon)

Authors: To be updated upon IEEE publication

Presented as an oral presentation at the 4th International Conference on Image Processing and Robotics (ICIPRoB 2026), March 7–8, 2026, Sri Lanka. Full citation available upon IEEE publication.

Reconfigurable Robots · Path Planning · Shape-Aware Algorithms · Computer Vision

In Preparation · ICRA 2027 — IEEE (Pending)

Conference paper in preparation for the International Conference on Robotics and Automation (ICRA) 2027

Authors: To be updated

Currently preparing a paper for submission to the International Conference on Robotics and Automation (ICRA) 2027 on reinforcement learning for mobile robot control. Full citation will be available upon IEEE publication.

Mobile Robotics · Control Systems · Sim-to-Real · Reinforcement Learning · Residual Networks

Research Projects

Ongoing and completed research projects

Integrates computer vision, Augmented Reality (AR), and AWS cloud services to create a real-time 3D map of a robot's surroundings, enabling immersive indoor robot monitoring and control with 1 cm depth accuracy using Microsoft Kinect V2.

M. Pasan Sanjaya Fernando, A.G. Tharindu Gimras, J.M. Budshan P. Jayasundara, D.P. Chandima

Computer Vision · AR · AWS · Kinect V2 · Semantic Segmentation

Academic Services

Conference Reviewer/Editor:
ICARC 2025 — Reviewer (Program Book) · MERCon 2025 — Editorial Team (Letter of Appreciation)
Teaching Assistant:
Aug–Sep 2023 — Introduction to Gazebo (EE4214 Robotics and Control), University of Moratuwa
Sep 2023 — Research Paper Writing with Overleaf (EE Term 1, Intake 2024), University of Moratuwa
Jan–Mar 2023 — React.js and CSS, MIHA Institute

Collaborators & Labs

References

Available upon request

Prof. A.G.B.P. Jayasekara

Professor & Head, Dept. of Electrical Engineering

University of Moratuwa, Sri Lanka

jayasekara@uom.lk

Prof. Chinthaka Premachandra

Professor, Dept. of Electronic Engineering

Shibaura Institute of Technology, Tokyo, Japan

premachandra@sic.shibaura-it.ac.jp