Coding period week 10
Week 10
For this week, the gym-gazebo training can be launched directly from the Behavior Studio GUI. This task involved creating a proper brain class inside the folder structure, so that when the user sets the type f1rl in their YAML file, Behavior Studio directly searches for all the reinforcement learning brains available for the Formula 1 robot.
Here is a clip of the training process in the ROS Noetic Behavior Studio inside the container. The agent uses laser beams as perception and trains with Q-learning using TensorFlow.
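For reference, tabular Q-learning updates a table of state-action values after every step using the observed reward and the best value of the next state. Below is a minimal sketch of that update; the alpha/gamma values and the state/action names are illustrative, not the exact gym-gazebo implementation:

def update_q(q, state, action, reward, next_state, alpha=0.2, gamma=0.9):
    # Standard Q-learning update: move Q(s, a) towards the observed reward
    # plus the discounted value of the best action in the next state.
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy example with two discretized laser states and three actions:
q = {s: {'left': 0.0, 'straight': 0.0, 'right': 0.0} for s in ('s0', 's1')}
update_q(q, 's0', 'straight', reward=1.0, next_state='s1')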
Formula 1 and Reinforcement Learning
Behavior Studio uses a well-defined brain module, which the previous deep learning and OpenCV approaches already use. This week most of the reinforcement learning agents have been adapted to that brain module.
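As a rough idea of what such a brain looks like, here is a minimal sketch of an f1rl brain class. I am assuming an interface similar to the existing brains (a constructor receiving the robot's sensors and actuators plus an execute() method called on every iteration); the accessor and method names are illustrative, not the exact upstream API:

class Brain:
    def __init__(self, sensors, actuators, handler=None):
        # Illustrative accessors; the real sensor/actuator ids come from the
        # YAML configuration below (this agent uses the laser, not the camera).
        self.laser = sensors.get_laser('laser_0')
        self.motors = actuators.get_motor('motors_0')
        self.handler = handler

    def choose_action(self, laser_data):
        # Placeholder policy: drive straight. A trained agent would look up
        # the discretized laser state in its Q-table here.
        return 3.0, 0.0

    def execute(self):
        # One control step: read the laser, pick an action with the learned
        # policy and send the velocities to the motors.
        laser_data = self.laser.getLaserData()
        v, w = self.choose_action(laser_data)
        self.motors.sendV(v)
        self.motors.sendW(w)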
This is how one would start an agent from the configuration file; the type f1rl inside the Robot options is the new keyword that makes the currently available agents show up in Behavior Studio's GUI.
Behaviors:
    Robot:
        Sensors:
            Cameras:
                Camera_0:
                    Name: 'camera_0'
                    Topic: '/F1ROS/cameraL/image_raw'
        Actuators:
            Motors:
                Motors_0:
                    Name: 'motors_0'
                    Topic: '/F1ROS/cmd_vel'
                    MaxV: 3
                    MaxW: 0.3
        BrainPath: 'brains/f1rl/f1_follow_line_qlearn_laser.py'
        Type: 'f1rl'
    Simulation:
        World: /opt/jderobot/share/jderobot/gazebo/launch/f1_1_simplecircuit.launch
    Dataset:
        In: '/tmp/my_bag.bag'
        Out: ''
    Layout:
        Frame_0:
            Name: frame_0
            Geometry: [1, 1, 2, 2]
            Data: rgbimage
        Frame_1:
            Name: frame_1
            Geometry: [0, 1, 1, 1]
            Data: rgbimage
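Under the hood, listing the agents for the GUI can be as simple as scanning the brains/f1rl folder for Python modules. This is just a sketch of that idea; the paths and function name are mine, not necessarily how Behavior Studio implements it:

import glob
import os

def list_f1rl_brains(brains_root='brains/f1rl'):
    # Every .py file under brains/f1rl (except package files) is offered
    # as a selectable reinforcement learning brain.
    paths = glob.glob(os.path.join(brains_root, '*.py'))
    return sorted(os.path.splitext(os.path.basename(p))[0]
                  for p in paths if not p.endswith('__init__.py'))

print(list_f1rl_brains())  # e.g. ['f1_follow_line_qlearn_laser', ...]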
Issues
ROS Noetic
Recently I got this warning/error message in my logs, which disappeared when I upgraded the ROS Noetic packages to the latest version.
gazebo/pause_physics service call failed: transport error completing service call: receive_once[/gazebo/pause_physics]: DeserializationError cannot deserialize: unknown error handler name 'rosmsg'
transport error completing service call: receive_once[/gazebo/unpause_physics]: DeserializationError cannot deserialize: unknown error handler name 'rosmsg'
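For context, these messages come from the services gym-gazebo calls to pause and unpause the simulation between training steps. A minimal sketch of that call with rospy (Gazebo exposes both services with the std_srvs/Empty type):

import rospy
from std_srvs.srv import Empty

rospy.init_node('pause_physics_check', anonymous=True)
rospy.wait_for_service('/gazebo/pause_physics')
pause = rospy.ServiceProxy('/gazebo/pause_physics', Empty)
try:
    pause()
except rospy.ServiceException as e:
    # With the outdated packages this raised the transport error above.
    print('/gazebo/pause_physics service call failed: %s' % e)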
I am regularly updating each image on Docker Hub so that it always contains a working version of Behavior Studio!
CUDA version
Recently the NVIDIA GPU drivers on my machine were updated to CUDA 11, which broke GPU training; everything had previously been running and tested with CUDA 10.1. I am looking for a solution for running Behavior Studio with the newer drivers.
2020-08-08 16:23:41.302306: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:118] None of the MLIR optimization passes are enabled (registered 1)
Traceback (most recent call last):
File "f1_follow_line_camera_dqn.py", line 149, in <module>
qValues = deepQ.getQValues(observation)
File "/root/BehaviorStudio/gym-gazebo/agents/f1/dqn.py", line 103, in getQValues
predicted = self.model.predict(state)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 130, in _method_wrapper
return method(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 1610, in predict
tmp_batch_outputs = self.predict_function(iterator)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 796, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 862, in _call
return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1858, in _filtered_call
cancellation_manager=cancellation_manager)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1934, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 557, in call
ctx=ctx)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[node sequential/max_pooling2d/MaxPool (defined at /root/BehaviorStudio/gym-gazebo/agents/f1/dqn.py:103) ]] [Op:__inference_predict_function_241]
The problem was gone when I ran it again in a new Docker container with the drivers updated: one must use the most recent driver on the host machine in order to use older CUDA versions inside containers.
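The MaxPooling error above is a symptom of TensorFlow falling back to the CPU when it cannot initialize CUDA, since the channels-first layout the model uses is only supported on GPU. A quick sanity check for confirming the container actually sees the GPU:

import tensorflow as tf

# If this prints an empty list, TensorFlow could not initialize CUDA and
# every op will be placed on the CPU, triggering the NHWC error above.
print(tf.__version__)
print('GPUs visible:', tf.config.list_physical_devices('GPU'))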
Week Highlights
- Gym-gazebo reinforcement learning agents for the Formula 1 robot can now be launched directly from the Behavior Studio GUI through the new f1rl brain type.
Issues
- ROS Noetic deserialization errors in the Gazebo physics services, and GPU training broken by the CUDA 11 driver update (see above).
Pull Requests
- Adding support for RL #62. New changes and updates are being applied directly to the noetic-devel branch.