Coding period week 11


Week 11

The main goal of this week is to explore more training strategies that could be optimal for the Formula 1 car. Right now, Behavior Studio in Noetic supports training with the laser in a simplified way, and training with images from the camera, which uses the error between the middle line and the car's direction.

Additionally, more documentation is being added to cover the configuration file and the options in Behavior Studio's GUI, how to add one's own agent, and how to train and test it.

RL settings

Some of the hyperparameters are set in the brain itself for now; additionally, the actions can be changed from the settings.py file.

# alpha: learning rate, gamma: discount factor, epsilon: exploration rate
qlearn = QLearn(actions=actions, alpha=0.2, gamma=0.9, epsilon=0.99)
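To make these concrete: alpha weights how strongly each new experience updates the value table, gamma discounts future reward, and epsilon controls how often a random action is taken. A minimal sketch of the tabular Q-learning step these parameters drive (not the actual QLearn implementation, whose internals may differ):

import random

# Sketch of a tabular Q-learning agent step; q is a dict keyed by (state, action).
def choose_action(q, state, actions, epsilon):
    # epsilon-greedy: explore with probability epsilon, otherwise exploit
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def learn(q, state, action, reward, next_state, actions, alpha, gamma):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

Typically epsilon starts near 1 (as with the 0.99 above) and is decayed over episodes, so the agent gradually shifts from exploring to exploiting what it has learned.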

Most of the agent and Gazebo settings are found in this file; right now there are 3 action sets:

  • Simple: 3 actions
  • Medium: 5 actions
  • Hard: 7 actions

More actions can be set up in this file, and the positions where the agent is randomly respawned when it crashes can also be modified here.
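For illustration, the definitions in settings.py could look something like the following; the names, velocity pairs, and poses here are hypothetical placeholders, not the actual values in the file:

# Hypothetical sketch: each action maps to a (linear, angular) velocity pair.
ACTIONS_SIMPLE = {0: (3, 0), 1: (2, 1), 2: (2, -1)}            # 3 actions
ACTIONS_MEDIUM = {0: (3, 0), 1: (2, 1), 2: (2, -1),
                  3: (1, 1.5), 4: (1, -1.5)}                   # 5 actions
ACTIONS_HARD = {0: (3, 0), 1: (2, 1), 2: (2, -1),
                3: (1.5, 1), 4: (1.5, -1),
                5: (1, 1.5), 6: (1, -1.5)}                     # 7 actions

# Hypothetical respawn poses (x, y, yaw) used when the agent crashes.
RESTART_POSES = [(0.0, 0.0, 0.0), (10.0, -2.0, 1.57), (25.0, 5.0, 3.14)]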

Trying out Deep Reinforcement Learning

At the moment the RL agent works in two settings. The first uses a set of laser beams that tells the agent how close it is to crashing, so it learns to avoid crashes.
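A minimal sketch of how the laser scan could be reduced to a discrete state for tabular Q-learning (the function name, sector count, and distance threshold are assumptions, not Behavior Studio's actual pre-processing):

def laser_to_state(ranges, n_sectors=5, danger_dist=1.0):
    # Split the laser scan into sectors and flag each one as
    # "close to an obstacle" (1) or "clear" (0).
    sector_len = len(ranges) // n_sectors
    state = tuple(
        int(min(ranges[i * sector_len:(i + 1) * sector_len]) < danger_dist)
        for i in range(n_sectors)
    )
    return state  # e.g. (0, 0, 1, 0, 0) -> obstacle straight ahead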

The second setting uses the camera: a pre-processing step gives the agent the distance between each central point of the image and the middle line that appears in the lane.
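Roughly, that pre-processing could be sketched like this, assuming the middle line has already been segmented into a binary mask (the function name and sampled rows are hypothetical):

import numpy as np

def centerline_errors(mask, rows=(250, 300, 350)):
    # mask: binary image where the detected middle line is non-zero.
    # For a few sampled rows, measure how far the line is from the
    # image's horizontal center; these errors form the state/reward.
    center_x = mask.shape[1] / 2.0
    errors = []
    for r in rows:
        xs = np.nonzero(mask[r])[0]
        if xs.size:
            errors.append(xs.mean() - center_x)
        else:
            errors.append(None)  # line not visible at this row
    return errors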

Work is now under way to make the agent learn from the raw image, after some preprocessing, as is usual in other environments such as Atari, where each state is represented by a stack of 4 scaled grayscale images.
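Something along the lines of the standard Atari pipeline, sketched here with OpenCV; the 84x84 size and stack of 4 frames are the conventional defaults, not necessarily the values that will be used for the Formula 1:

from collections import deque
import cv2
import numpy as np

class FrameStack:
    def __init__(self, k=4, size=(84, 84)):
        self.k, self.size = k, size
        self.frames = deque(maxlen=k)

    def preprocess(self, frame):
        # Grayscale + downscale, as in the usual Atari setup.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.resize(gray, self.size)

    def push(self, frame):
        f = self.preprocess(frame)
        while len(self.frames) < self.k:
            self.frames.append(f)  # pad with the first frame
        self.frames.append(f)
        return np.stack(self.frames, axis=-1)  # state shape: 84x84x4

Stacking consecutive frames gives the agent short-term motion information (speed, turning direction) that a single frame cannot convey.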

