Roadmap
Robotics-Academy: New exercise on End-to-End Visual Control of an Autonomous Vehicle using Deep Learning.
Abstract
This project aims to develop a new deep learning exercise within Robotics-Academy, focusing on end-to-end visual control of autonomous vehicles. The exercise will allow users to upload trained PyTorch/TensorFlow models that process camera input from a simulated car or drone in Gazebo and output control commands that drive the vehicle based on the model's predictions. A web interface will enable model upload, real-time visualization, and performance monitoring. The exercise will include a demo model trained on synthetic data, a Gazebo simulation environment, and integration with ROS2. It will leverage existing infrastructure from previous deep learning-based Robotics-Academy exercises, such as Human Detection and Digit Classifier.
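To make the expected model interface more concrete, the sketch below shows, under stated assumptions, how a user-supplied network could map a single camera frame to a pair of velocity commands. The `TinyDrivingNet` network, the `predict_commands` helper, the 66x200 input size, and the (linear velocity, angular velocity) output convention are all illustrative assumptions rather than the exercise's final API; in the real exercise the frames would come from the simulated Gazebo camera instead of random data.

```python
# Minimal sketch (assumptions: the user's model is a small PyTorch CNN that maps
# one RGB camera frame to two continuous commands, linear and angular velocity;
# the 66x200 frame size loosely follows the classic PilotNet setup and is only
# illustrative, not a requirement of the exercise).
import numpy as np
import torch
import torch.nn as nn


class TinyDrivingNet(nn.Module):
    """Illustrative end-to-end network: camera frame in, (v, w) commands out."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48, 32), nn.ReLU(),
            nn.Linear(32, 2),   # -> [linear velocity, angular velocity]
        )

    def forward(self, x):
        return self.head(self.features(x))


def preprocess(frame: np.ndarray) -> torch.Tensor:
    """Convert an HxWx3 uint8 camera frame into a normalized 1x3xHxW tensor."""
    scaled = frame.astype(np.float32) / 255.0
    return torch.from_numpy(scaled).permute(2, 0, 1).unsqueeze(0)


def predict_commands(model: nn.Module, frame: np.ndarray) -> tuple[float, float]:
    """One inference step: frame -> (v, w), with gradients disabled."""
    model.eval()
    with torch.no_grad():
        v, w = model(preprocess(frame))[0].tolist()
    return v, w


if __name__ == "__main__":
    # Stand-in for a Gazebo camera frame; the exercise would supply real frames
    # from the simulated car's onboard camera instead of random pixels.
    dummy_frame = np.random.randint(0, 256, size=(66, 200, 3), dtype=np.uint8)
    model = TinyDrivingNet()
    v, w = predict_commands(model, dummy_frame)
    print(f"predicted linear velocity={v:.3f}, angular velocity={w:.3f}")
```

A two-value regression head keeps the example architecture-agnostic: users would remain free to train any PyTorch or TensorFlow model, as long as it maps camera frames to the control outputs the exercise expects.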
This section will be updated at the end of every GSoC week.
Community Bonding
During the community bonding period, the focus was on becoming familiar with the project requirements and the existing codebase. The team also explored various deep learning architectures suitable for end-to-end visual control of autonomous vehicles. Additionally, discussions were held with the mentors to finalize the project scope and set clear milestones for the coding phase.
- Community Bonding Period
  - Engage with the JdeRobot community.
  - Discuss requirements with mentors and refine the project plan.