Week 9 (Jul 29 - Aug 4)
Meeting with Mentors
- I started by showing them the UI enhancements I made and updating them on the progress I have made in reducing the loading time of the UI.
- David suggested renaming the save model button to “load model”. He also suggested adding a summary section towards the end of the page after inference with all the detection details, along with an option to download it as a JSON file.
- We also discussed their vision for the metrics tab. David mentioned that showing a progress bar is a must, as is visualizing the ground truth and inference for a sample from the dataset. He added that extending the supported metrics, such as a precision-recall curve and a range of IoU thresholds, is also a priority for this week, and that these should be shown in the metrics tab UI as well.
- Later, Sergio shared https://leaderboard.roboflow.com/ with us, which gives me an idea of which metrics we should show and how to present them in the UI.
To Do for This Week
- Develop the metrics tab
- Extend the metrics to support a precision-recall curve and a range of IoU thresholds
Progress
- I started by working on the pilot version of the metrics tab UI. My idea was that it is redundant for the user to enter the dataset and model details again in this tab if they have already provided them in the dataset and inference tabs, and the UI is also cleaner with fewer input boxes. So I decided to reuse the dataset and model objects created in the earlier tabs. If the user has not provided them yet, this tab shows a banner saying that the dataset details need to be provided in the dataset viewer tab. Once the dataset and model details are available, the user can access the “run evaluation” button. Clicking it starts the dataset processing and displays a progress bar with the number of samples processed so far. When the evaluation finishes, the evaluation summary is shown along with an option to download the result dataframe as a CSV. A rough sketch of this flow is shown below.
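Here is a minimal sketch of how the metrics tab could reuse objects kept in Streamlit's session state. The keys `dataset` and `model`, the `eval` call signature, and the returned dataframe are assumptions for illustration, not the project's exact identifiers.

```python
import streamlit as st

# Reuse the objects created in the earlier tabs instead of asking again.
# "dataset" and "model" are placeholder session-state keys.
dataset = st.session_state.get("dataset")
model = st.session_state.get("model")

if dataset is None or model is None:
    st.warning(
        "Please provide the dataset details in the Dataset Viewer tab "
        "and load a model in the Inference tab first."
    )
else:
    if st.button("Run evaluation"):
        progress_bar = st.progress(0)
        status = st.empty()

        def on_progress(done, total):
            progress_bar.progress(done / total)
            status.text(f"Processed {done}/{total} samples")

        # Assumed: eval accepts a progress callback and returns a dataframe.
        results_df = model.eval(dataset, progress_callback=on_progress)

        st.subheader("Evaluation summary")
        st.dataframe(results_df)
        st.download_button(
            "Download results as CSV",
            results_df.to_csv(index=False),
            file_name="evaluation_results.csv",
        )
```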
- To support the progress bar in Streamlit, I had to alter the eval function in the TorchImageDetectionModel class. When the evaluation is initiated from the UI, we now pass an extra parameter called progress_callback (an optional callback function for progress updates in the Streamlit UI). When it is initiated from a Jupyter notebook, this parameter can be skipped, and in that case the usual tqdm progress bar is shown. A sketch of this is given below.
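This is roughly how the optional progress_callback could be wired in; the real eval signature and per-sample logic in TorchImageDetectionModel will differ, and `_evaluate_sample` is a hypothetical placeholder.

```python
from typing import Callable, Optional
from tqdm import tqdm

def eval(self, dataset, progress_callback: Optional[Callable[[int, int], None]] = None):
    """Run evaluation, reporting progress to the Streamlit UI or via tqdm."""
    total = len(dataset)

    # Fall back to the usual tqdm bar when no callback is supplied,
    # e.g. when calling from a Jupyter notebook.
    iterator = dataset if progress_callback else tqdm(dataset, total=total)

    for i, sample in enumerate(iterator, start=1):
        self._evaluate_sample(sample)  # placeholder for the per-sample evaluation
        if progress_callback:
            progress_callback(i, total)
```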
- I have also added a section in the inference tab where the user can see the detection results, along with a button to download the results as a JSON file (sketched below).
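A small sketch of the detections summary and JSON download in Streamlit; the structure of `detections` here is an assumption, not the project's actual output format.

```python
import json
import streamlit as st

# Assumed: `detections` is a list of dicts such as
# {"label": "car", "confidence": 0.91, "bbox": [x1, y1, x2, y2]}
st.subheader("Detection summary")
st.table(detections)

st.download_button(
    label="Download detections as JSON",
    data=json.dumps(detections, indent=2),
    file_name="detections.json",
    mime="application/json",
)
```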
- I was a little unsure about how to take the input for the range of IoU thresholds, so I asked the mentors to confirm. While I waited to hear back from them, I explored ways to make the evaluator tab better. Ideally, the details the user gives in the dataset and inference tabs should be preloaded when the evaluator tab is opened, and the user should still be able to change them if they want. I tried implementing this, but I ran into issues with carrying the files uploaded in the inference tab over to the evaluator tab. I will keep working on it to find a way to make it work; one possible approach is sketched below.
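One idea worth trying (an assumption on my part, not the solution adopted yet) is to stash the raw bytes of the uploaded file in session state from the inference tab, so the evaluator tab can rebuild a file-like object from them after Streamlit reruns the script.

```python
import io
import streamlit as st

# Inference tab: keep the raw bytes of the uploaded file so other tabs
# can reuse it across Streamlit reruns. Key names are placeholders.
uploaded = st.file_uploader("Upload model weights")
if uploaded is not None:
    st.session_state["weights_bytes"] = uploaded.getvalue()
    st.session_state["weights_name"] = uploaded.name

# Evaluator tab: rebuild a file-like object from the stored bytes and
# pre-fill the field, while still letting the user upload a new file.
if "weights_bytes" in st.session_state:
    weights_file = io.BytesIO(st.session_state["weights_bytes"])
    st.caption(f"Reusing previously uploaded file: {st.session_state['weights_name']}")
```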
What’s Next?
- Extend the evaluation metrics, display the plotted curves, and show the ground truth and inference for a sample in the metrics tab.
- Finish the UI prototype, test it, and fix any last-minute errors.