I am a Senior Perception & Sensor Fusion Engineer with experience building end-to-end autonomous perception systems for real-world robotics and autonomous vehicle applications. My work spans multi-sensor fusion, edge AI deployment, robotics middleware, 3D vision, and distributed perception architectures.
Currently, I lead the Sensor Fusion & Perception Systems team, focusing on scalable autonomy across ground, marine, and robotic platforms. Areas of expertise:
- Autonomous Vehicle Perception
- Multi-Sensor Fusion (Camera, LiDAR, Radar, GPS/INS)
- Computer Vision & Deep Learning
- Distributed & Edge Perception
- ROS / DDS-based Robotics Software
- Real-Time AI Systems
- Agentic AI for Robotics Intelligence
Led perception & fusion stack development for an autonomous marine surveillance vehicle, including:
- Multi-camera + LiDAR + Radar fusion
- BEV fusion pipeline for 360° situational awareness (see the projection/BEV sketch after this list)
- Distributed perception nodes running on embedded NVIDIA platforms
- Real-time object detection & tracking (YOLOv12, DeepSORT, custom CV pipelines)
- Marine obstacle detection, path perception & autonomy integration
- System simulation using MATLAB + Gazebo
- Mission execution through ROS2 + DDS communication (a minimal rclpy node sketch also follows below)
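
For a flavour of the geometry behind the fusion pipeline above, here is a minimal, NumPy-only sketch of two core steps: projecting LiDAR points into a camera image for camera-LiDAR association, and rasterizing a point cloud into a BEV grid. The intrinsics `K`, the extrinsic `T_cam_lidar`, and the grid ranges/resolution are illustrative placeholders, not calibration values from the actual stack.

```python
import numpy as np

# Illustrative calibration only; real values come from camera/LiDAR calibration.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])   # 3x3 camera intrinsic matrix
T_cam_lidar = np.eye(4)                 # 4x4 LiDAR-to-camera extrinsic transform

def project_lidar_to_image(points_lidar):
    """Project Nx3 LiDAR points to pixel coords (uv), with depths and a validity mask."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # homogeneous Nx4
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                          # LiDAR -> camera frame
    in_front = pts_cam[:, 2] > 0.1                                      # keep points ahead of camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                                       # perspective divide
    return uv, pts_cam[in_front, 2], in_front

def lidar_to_bev(points_lidar, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), res=0.25):
    """Rasterize an Nx3 point cloud into a BEV grid encoding max height per cell."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    bev = np.full((nx, ny), -np.inf, dtype=np.float32)
    ix = ((points_lidar[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points_lidar[:, 1] - y_range[0]) / res).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.maximum.at(bev, (ix[valid], iy[valid]), points_lidar[valid, 2])  # max z per cell
    bev[np.isinf(bev)] = 0.0                                            # empty cells -> 0
    return bev

if __name__ == "__main__":
    pts = np.random.uniform([0, -25, -2], [50, 25, 3], size=(10000, 3))  # synthetic cloud
    uv, depth, mask = project_lidar_to_image(pts)
    grid = lidar_to_bev(pts)
    print(uv.shape, grid.shape)   # e.g. (M, 2) (200, 200)
```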
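
And a minimal rclpy sketch of the pub/sub pattern the distributed perception nodes follow over ROS2/DDS. The topic names and the `vision_msgs` message type are assumptions for illustration, not the project's actual interfaces; the detector call is stubbed out.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray  # requires the vision_msgs package

class PerceptionNode(Node):
    """Subscribes to a camera stream, runs a detector, publishes detections over DDS."""

    def __init__(self):
        super().__init__('perception_node')
        # Topic names are placeholders; queue depth 10 is a typical default QoS.
        self.sub = self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)
        self.pub = self.create_publisher(Detection2DArray, '/perception/detections', 10)

    def on_image(self, msg: Image):
        detections = Detection2DArray()
        detections.header = msg.header   # propagate the sensor timestamp and frame_id
        # A real node would run the detector/tracker here and fill detections.detections.
        self.pub.publish(detections)

def main():
    rclpy.init()
    node = PerceptionNode()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

Because nodes like this exchange data purely over DDS topics, the perception graph can be distributed across embedded NVIDIA platforms, as in the deployed stack.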
AI / Deep Learning
- PyTorch, TensorFlow, YOLOv12, BEV Fusion, CV Pipelines
Robotics & Middleware
- ROS2, DDS, Gazebo, NVIDIA Jetson, Edge AI
Programming
- C++, Python, Qt Framework
Simulation & Modeling
- MATLAB, Gazebo Sim, RViz, NVIDIA Isaac tools
System Architecture
- Distributed Perception Stack, Edge Deployment, Real-time Systems
- Product Leadership & Technical Roadmap Ownership
- Email: [email protected]
- LinkedIn: https://www.linkedin.com/in/nagarjunasagar
- GitHub: https://github.com/Nagarjunasagar