First post of the 5-Minute Papers series.

About the series:

This blog post, (hopefully) marking the start of a series, will be a walkthrough of the ideas shared in the Bag of Tricks for Image Classification with Convolutional Neural Networks paper, along with a few thoughts of my own.

The paper discusses a number of tricks and analyses their individual as well as combined contributions when training several recent CNN models.

Context:

While we keep seeing pushes in state-of-the-art accuracy for image classification models, and deep learning networks have even surpassed human-level accuracy at this task, these breakthroughs have not come solely from the architectures themselves, nor simply from the networks getting “deeper”.

Many of the improvements come from small “tricks” that usually go unpublished or are not prominently highlighted.

The authors’ goal is to share these tricks, along with extensive experiments evaluating each one.

These tricks have two aims: making training faster and more efficient, and improving the final accuracy of the model.

Tricks

The authors discuss a few tricks; let’s go over them:

- Efficient training: large-batch training made stable through a linearly scaled learning rate, learning-rate warmup, zero γ initialization for the last BatchNorm layer in each residual block, and no weight decay on biases, plus low-precision (FP16) training for speed.
- Model tweaks: small modifications to the vanilla ResNet architecture (the ResNet-B, ResNet-C, and ResNet-D variants).
- Training refinements: cosine learning-rate decay, label smoothing, knowledge distillation, and mixup data augmentation.
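To make a couple of the training refinements concrete, here is a minimal PyTorch-style sketch of label smoothing and mixup. The model, optimizer, and hyper-parameter values (smoothing of 0.1, mixup α of 0.2) are illustrative assumptions rather than the paper’s exact setup, and the built-in label_smoothing argument of CrossEntropyLoss needs a reasonably recent PyTorch release.

```python
import torch
import torch.nn as nn
import numpy as np

def mixup_batch(images, labels, alpha=0.2):
    """Blend each image with a randomly shuffled partner (mixup)."""
    lam = np.random.beta(alpha, alpha)          # mixing coefficient
    index = torch.randperm(images.size(0))      # shuffled partner indices
    mixed_images = lam * images + (1.0 - lam) * images[index]
    return mixed_images, labels, labels[index], lam

# Label smoothing: soften the one-hot targets so the model is less over-confident.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

def train_step(model, optimizer, images, labels):
    mixed, y_a, y_b, lam = mixup_batch(images, labels)
    outputs = model(mixed)
    # The mixup loss is the same convex combination applied to the two targets.
    loss = lam * criterion(outputs, y_a) + (1.0 - lam) * criterion(outputs, y_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice you would call train_step inside the usual loop over a data loader; the rest of the training setup (model, optimizer, learning-rate schedule) stays unchanged, which is part of what makes these refinements cheap to try.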

Results:

The results are most promising on ImageNet with the ResNet-50 model, which shows the largest improvement among the evaluated architectures.

There is an extensive study of how helpful or “hurtful” each of these individual tricks is for the task.

Finally, there is a comparison of using the improved models for transfer learning on two tasks: object detection and semantic segmentation.

Conclusion & Personal thoughts:

This is one of the first papers to embrace the little tricks that are usually kept out of the spotlight or not given much importance in publications.

The paper does a deep-dive into all of the ideas, showing extensive comparisons.

As a little exercise, I believe it would be a good experiment to try a few of these tricks on a personal target task and document the improvements.

I’d also love to read more about such tricks in future papers, or see a detailed section dedicated to such approaches.

If you found this interesting and would like to be a part of My Learning Path, you can find me on Twitter here.

If you’re interested in reading about Deep Learning and Computer Vision news, you can check out my newsletter here.

If you’re interested in reading some of the best advice from Machine Learning Heroes: Practitioners, Researchers, and Kagglers, please click here.