Noushad Sojib
Robot Learning Engineer building robust policies from imperfect human demonstrations.
Focused on imitation learning, VLA models, and diffusion policies for real-world robotic systems.
M.Sc. in Computer Science from the University of New Hampshire (Cognitive Assistive Robotics Lab).
B.Sc. from SUST, where I founded a robotics club and built humanoid robots from scratch.
Currently seeking roles in robot learning and real-world robotic systems.
Email / GitHub / Google Scholar / LinkedIn
Robot Learning
Projects focused on learning robust robot behaviors from human demonstrations,
spanning imitation learning, VLA models, and diffusion policies.
Real-World Robot Learning with Diffusion Policies and VLA Models
Applying state-of-the-art robot learning frameworks—including Diffusion Policy,
π0.5, OpenVLA, and GR00T N1.5—to train manipulation
policies from human demonstrations. Focus on generalizing across tasks and environments with minimal
data, targeting real-world deployment on physical robot platforms.
Robot From Scratch
I started building robots from scratch out of curiosity—where I grew up, there were no humanoid platforms to learn from. This led me to found RoboSUST, where I led a team to design, build,
and deploy multiple robotic systems using low-cost hardware and self-developed control pipelines.
Ribo — 24 DOF Humanoid Robot
Designed and built a full humanoid robot capable of upper-body manipulation and human-interactive behaviors.
Role: Team Lead — hardware, software, and interaction interface
Key Contributions: Led hardware and software development of a 24 DOF humanoid platform. Implemented control for coordinated arm and hand motion. Designed user-facing interaction interface.
Lee — Biped Walking Robot
Built a biped robot focused on achieving stable walking with minimal hardware cost.
Role: Team Lead — mechanical design, gait control, and software
Key Contributions: Designed mechanical structure for balance and locomotion. Implemented basic gait generation and control. Optimized for low-cost components.
Kiddo
Interactive educational robot designed to engage children through programmable behaviors—built and validated in both simulation and physical hardware.
Role: Solo Designer & Developer
Hardware Design
Building robots from scratch taught me that good software needs good hardware.
Along the way, I designed several embedded devices—a few of which landed in peer-reviewed venues
and are now in active use on research robots.
3-Wheel Mouse
Three-wheeled input device that enables efficient, versatile non-visual computer interaction for blind users.
Role: Designer & Prototype Builder — published at ACM UIST 2024
Islam, Md Touhidul, et al. “Wheeler: A three-wheeled input device for usable, efficient, and versatile non-visual interaction.” ACM UIST 2024. Paper & Video
Charging Dock
Robust, low-cost autonomous charging dock for mobile robots—enabling continuous operation without human intervention.
Role: Designer & Prototype Builder — demonstrated live at IROS 2023, deployed on Hello Stretch
An extended version is actively used with the Hello Stretch robot. See example
Low-Cost Braille Display
Low-cost single-cell Braille display that makes digital Bangla text accessible to visually impaired readers.
Role: Designer & Prototype Builder — published at IEEE ICBSLP 2018
Sojib, Noushad, and M. Zafar Iqbal. “Single cell Bangla braille book reader for visually impaired people.” IEEE ICBSLP 2018. Paper
Research
How can robots learn reliably from imperfect human demonstrations?
I develop methods that enable robots to extract safe, generalizable behaviors from noisy or erroneous data,
bringing imitation learning closer to real-world deployment.
Self Supervised Detection of Incorrect Human Demonstrations: A Path Toward Safe Imitation Learning by Robots in the Wild
Noushad Sojib, Momotaz Begum
IROS 2024
Problem: Human demonstrations are often noisy and degrade policy learning.
Approach: Proposed BED, a self-supervised method to detect and filter incorrect demonstrations.
Result: Enables robust policy learning from real-world, imperfect data.
Validation: RoboSuite simulation + real Sawyer robot arm deployment.
Self-Supervised Visual Motor Skills via Neural Radiance Fields
Paul Gesel, Noushad Sojib, Momotaz Begum
IROS 2023
Problem: Learning visuomotor policies without labeled data.
Approach: Combined NeRF-based scene representation with keypoint correspondence for self-supervised learning.
Result: Learns manipulation skills directly from raw visual input with improved generalization.