What was done
I worked on improving the Deep Learning Human Detection Exercise, which detects the presence of humans and draws rectangular bounding boxes around them. Besides the live and video inference features, the exercise also includes model benchmarking and model visualization. The user is expected to upload a Deep Learning model, in the ONNX format, that fits the required input and output specifications for inference.
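Because the uploaded model must match a fixed input specification, each frame has to be converted to the tensor layout the model expects before inference. The sketch below is a minimal, hypothetical preprocessing step (the exact resolution, value range, and channel order depend on the specific model the user uploads):

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 frame into a 1x3xHxW float32 tensor in [0, 1].

    This is an illustrative sketch; the real normalisation and layout
    depend on the uploaded model's input specification.
    """
    x = frame.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    x = np.transpose(x, (2, 0, 1))         # HWC -> CHW
    return x[np.newaxis, ...]              # add a batch dimension

# Example: a blank 300x300 frame, a common SSD input size
frame = np.zeros((300, 300, 3), dtype=np.uint8)
tensor = preprocess(frame)
print(tensor.shape)  # (1, 3, 300, 300)
```

The resulting tensor can then be fed directly to an ONNX Runtime inference session.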
The project also included compiling and testing an exercise guide for the user, covering everything from fine-tuning pre-built object detection models in different frameworks (PyTorch and TensorFlow) to their subsequent conversion to the ONNX format.
Below is a brief timeline of the work done. For more details, please refer to the corresponding week in the blog post.
|Community Bonding Period
|Worked with the exercise to get a good feel for the changes needed, and set up the environment for the Deep Learning exercise to run successfully.
|Studied the ONNX and NNEF formats for representing Machine Learning models, and converted different models to these formats.
|Week 2 and 3
|Researched different models used for Object Detection, and decided which one to use for the exercise.
|Converted the chosen model (SSDLite MobileNet) to the ONNX format, and updated the blog with all the work done so far.
|Implemented the Seaborn library in the exercise, producing clearer plots that are easier for the user to understand.
|Prepared for the Midsem Evaluation by documenting everything done so far.
|Week 7 and 8
|Studied the codebase more closely, as the work was now mainly frontend-related; added different graphs to give the user more insight. Started working on a list from which the user can switch between graphs manually.
|Week 9 and 10
|Week 9 and Week 10 were mainly used to update the exercise frontend and complete the switchable-graphs feature, compiling the results and documenting everything done so far for the End-Semester Evaluation.
- Allow users to choose between different metrics in benchmarking mode (Issue #1791)
- Use seaborn to improve benchmarking plot readability (Issue #1763)