Week 11 (Aug 12 - Aug 22)

Meeting with Mentors

  • I started by telling them about my idea: the user should be able to upload the details in the evaluator tab only when the data isn’t already provided in the dataset and model tabs. This avoids having multiple places where the user has to give the same details. The mentors agreed with this approach.
  • Later I showed them the UI; David said the UI is clean. He also suggested adding a downloadable file with the precision-recall curve points.
  • I also discussed the issue I found when the batch size is greater than one.
  • Then I asked them about the documentation guidelines. David suggested that I go through the existing documentation and start adding the detection-related information as well. Sergio suggested that I make a demo video showing the UI and the steps to use the tool.

To Do for This Week

  • Adding a collapsible section in the UI for entering input details
  • Fixing the issue with batch size > 1
  • Adding an evaluation step to show live results
  • Adding the precision-recall curve points as a downloadable file

Progress

  • First, I worked on updating the UI to improve model/dataset selection. I wanted a single place where the user provides the dataset and model details, rather than multiple or different places, with a single pipeline for initializing the dataset and model objects that can be used across all the tabs. A persistent input section common to all three tabs seemed like the perfect solution, so I discussed this idea with the mentors and started exploring the UI options.
  • I created a sidebar drawer in the UI where the user can provide the dataset and model details, moved the tabs into a single horizontal row on the main page for a cleaner layout, and removed the input sections from the dataset and inference tabs. Until the user provides the inputs, the tabs show a warning banner asking for the necessary details. Once the details are given, the dataset tab shows the image grid along with a search box, so the user can also pick an image from a drop-down. The inference tab (provided the model details were given) shows an image uploader where the user can upload an image and run inference on it. In the evaluator tab, once both the dataset and model details are given, the Run Evaluation button is enabled and clicking it runs the evaluation. (See the first sketch after this list.)
  • Then I worked on showing live results when the user sets an evaluation step in the model config file. I added a new parameter to the model config called “evaluation_step”, and the results in the UI are now refreshed once every evaluation_step steps. I also added a parameter called metrics_callback, a hook through which the evaluator reports intermediate metrics back to the Streamlit UI so the values on screen get updated. (See the second sketch after this list.)
  • I also added the precision-recall curve points as a downloadable CSV file. (See the third sketch after this list.)
  • Later I spent some time on the issue with batch size > 1. The issue appears only in Streamlit; the regular sample notebook produces results for batch size > 1 just fine. This is most likely due to multiprocessing and pickling limitations in Streamlit. I tried a couple of workarounds, but none of them worked. For now I have modified the logic so that when the evaluation is called from the Streamlit UI, num_workers in the DataLoader is set to 0; otherwise num_workers is set to the batch size. With this the results are shown correctly, but to bring the processing time in the UI down I still need to investigate further and find a proper fix. (See the last sketch below.)
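Below is a minimal sketch of the new layout described above, assuming standard Streamlit widgets; the field labels and tab contents are hypothetical placeholders, not the tool’s actual inputs.

```python
import streamlit as st

# Sketch of the reworked layout: a sidebar drawer collects the dataset/model
# details once, and all three tabs react to what has been provided.
with st.sidebar:
    st.header("Inputs")
    dataset_dir = st.text_input("Dataset directory")       # placeholder field
    annotations_path = st.text_input("Annotations file")   # placeholder field
    model_path = st.text_input("Model checkpoint")         # placeholder field

dataset_ready = bool(dataset_dir and annotations_path)
model_ready = bool(model_path)

dataset_tab, inference_tab, evaluator_tab = st.tabs(
    ["Dataset", "Inference", "Evaluator"]
)

with dataset_tab:
    if not dataset_ready:
        st.warning("Please provide the dataset details in the sidebar.")
    else:
        st.write("Image grid and search drop-down go here.")

with inference_tab:
    if not model_ready:
        st.warning("Please provide the model details in the sidebar.")
    else:
        uploaded = st.file_uploader("Upload an image to run inference on")

with evaluator_tab:
    # Enabled only when both the dataset and model details are given.
    if st.button("Run evaluation", disabled=not (dataset_ready and model_ready)):
        st.write("Running evaluation...")
```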
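And here is a rough sketch of how the live-results flow works, with run_inference_and_accumulate and compute_partial_metrics standing in as hypothetical helpers for the tool’s actual evaluation code:

```python
import streamlit as st

# Every `evaluation_step` batches, the evaluator calls metrics_callback with
# the metrics computed so far; the Streamlit side redraws a placeholder.
def evaluate(dataloader, model, evaluation_step, metrics_callback=None):
    for step, batch in enumerate(dataloader, start=1):
        run_inference_and_accumulate(model, batch)        # hypothetical helper
        if metrics_callback and step % evaluation_step == 0:
            metrics_callback(compute_partial_metrics())   # hypothetical helper

placeholder = st.empty()  # container that gets overwritten on each update

def update_ui(metrics):
    placeholder.json(metrics)

# evaluate(dataloader, model, evaluation_step=10, metrics_callback=update_ui)
```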
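The CSV download uses Streamlit’s built-in download button; a sketch with placeholder precision/recall values:

```python
import csv
import io

import streamlit as st

# Placeholder curve points; in the tool these come from the evaluator.
precisions = [1.0, 0.9, 0.75]
recalls = [0.1, 0.5, 0.8]

def pr_curve_csv(precisions, recalls):
    """Serialize the precision-recall curve points as CSV text."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["recall", "precision"])
    writer.writerows(zip(recalls, precisions))
    return buffer.getvalue()

st.download_button(
    label="Download precision-recall points (CSV)",
    data=pr_curve_csv(precisions, recalls),
    file_name="precision_recall_points.csv",
    mime="text/csv",
)
```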
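Finally, the batch-size workaround boils down to something like the following, where from_streamlit is a hypothetical flag indicating how the evaluation was launched:

```python
from torch.utils.data import DataLoader

def make_dataloader(dataset, batch_size, from_streamlit=False):
    # Streamlit's script runner does not play well with DataLoader worker
    # processes (multiprocessing/pickling limitations), so workers are
    # disabled when the evaluation is driven from the UI.
    num_workers = 0 if from_streamlit else batch_size
    return DataLoader(dataset, batch_size=batch_size, num_workers=num_workers)
```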

What’s Next?

  • Fix the multiprocessing issue in the Streamlit UI
  • Documentation


