3 Kinds Of Indicators For Improving Your ML System

You should never guess how to improve your classifier.

I’ve seen spectacular improvements and accuracy jumps in Neural Networks achieved by accident. Someone looked at the Net and thought ‘well, why don’t we swap the LSTM for a GRU?’, and that single change lifted the system from prototype to production-ready component. Nice, but wait. Is guessing, or running thousands of experiments, really what you want to spend your career on?

Although the story shows flukes happen, IMO you should never guess what your system needs. There’s a much nicer way to improve. It feels like being a detective, or a diagnostician, and it makes your work much more meaningful and deliberate. And it’s not that hard!

In my opinion, there are 3 kinds of insights into Networks (and other classifiers), and using all of them is what gives you the progress you want and spares you the shame of a month wasted on a project.

  1. Print all the metrics you can figure out.
    I’ve shown many times on this website that printing metrics is essential. If you optimize for accuracy, print accuracy, not just the loss. Never skip the training set metrics either. Print everything often – don’t wait for the end of an epoch (or, even worse, the end of training). Professionals can read far more from these numbers than beginners imagine. Should you use more hidden neurons or more hidden layers? The metrics answer that. You can find more in the Stories. A sketch of what this looks like in practice follows the list below.

  2. Do extensive error analysis.
    Some engineers look into the output of the system (or even into the data) once a month. I’ve seen horrible errors so many times (like a missing character in ASR training data…) simply because somebody never looked at what they were working on. Beyond catching bugs, a handmade classification of the error types often shows you where to gain a lot easily. I’ve done it many times, and each time it gives you quite a different view of the next steps, or of the system as a whole. A minimal workflow for this is sketched after the list.

  3. Print everything that’s in your classifier.
    When you’re wondering what could work better, check it first. Print your weight histograms, weight movement, trees, activation values, activations for some concrete examples, costs. Observe the input right before the network, and the output right after, for a single example. Log it. Observe the optimizer’s parameters and speed, and the distribution of the outputs. Make sure you can investigate every part of your network. A sketch of this kind of instrumentation is below.
    All this lets you find not only bugs but also the weaknesses of the chosen architecture.
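
For the first point, here is a minimal sketch of frequent metric printing, assuming a PyTorch setup; the names `model`, `train_loader`, `val_loader`, `optimizer` and `log_every` are placeholders, not something from this post:

```python
# A minimal sketch of frequent metric printing (assumed PyTorch setup;
# `model`, `train_loader`, `val_loader`, `optimizer` are placeholders).
import torch
import torch.nn.functional as F

def evaluate(model, loader, device="cpu"):
    """Accuracy on a held-out loader, printed alongside training metrics."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            preds = model(x.to(device)).argmax(dim=1)
            correct += (preds == y.to(device)).sum().item()
            total += y.size(0)
    model.train()
    return correct / max(total, 1)

def train_one_epoch(model, train_loader, val_loader, optimizer,
                    log_every=100, device="cpu"):
    running_loss, correct, total = 0.0, 0, 0
    for step, (x, y) in enumerate(train_loader, start=1):
        x, y = x.to(device), y.to(device)
        logits = model(x)
        loss = F.cross_entropy(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        correct += (logits.argmax(dim=1) == y).sum().item()
        total += y.size(0)

        # Print train loss AND accuracy every `log_every` batches,
        # not just the loss once per epoch.
        if step % log_every == 0:
            print(f"step {step}: "
                  f"train loss {running_loss / log_every:.4f}, "
                  f"train acc {correct / total:.4f}, "
                  f"val acc {evaluate(model, val_loader, device):.4f}")
            running_loss, correct, total = 0.0, 0, 0
```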
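
For the error analysis point, one lightweight way to do the handmade classification of error types is to dump every error to a file, tag a sample by hand, and count the tags. This is only a sketch under those assumptions; `examples`, `gold_labels` and `predictions` are placeholder names:

```python
# A sketch of lightweight error analysis. Dump misclassified examples,
# tag each one by hand with an error type, then count the tags to see
# where the easy wins are. All argument names are placeholders.
import csv
from collections import Counter

def dump_errors(examples, gold_labels, predictions, path="errors.csv"):
    """Write all misclassified examples to a CSV for manual inspection."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["example", "gold", "predicted", "error_type"])
        for ex, gold, pred in zip(examples, gold_labels, predictions):
            if gold != pred:
                # Leave `error_type` empty; you fill it in by hand
                # (e.g. "noisy label", "truncated input", "rare class").
                writer.writerow([ex, gold, pred, ""])

def summarize_error_types(path="errors.csv"):
    """After hand-tagging, count which error types dominate."""
    with open(path, newline="", encoding="utf-8") as f:
        tags = [row["error_type"] for row in csv.DictReader(f)
                if row["error_type"]]
    for error_type, count in Counter(tags).most_common():
        print(f"{error_type}: {count}")
```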
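
And for the third point, here is one way to peek inside the network: log weight statistics, how much each weight moved since the last check, and activation statistics for a single concrete example. Again a sketch, assuming a PyTorch model; all names are placeholders:

```python
# A sketch of inspecting a network's internals (assumed PyTorch model).
# Logs weight stats, weight movement since the previous call, and
# activation stats captured with forward hooks for one input example.
import torch

def register_activation_hooks(model, store):
    """Store mean/std of every leaf module's output during a forward pass."""
    handles = []
    for name, module in model.named_modules():
        if len(list(module.children())) == 0:  # leaf modules only
            def hook(mod, inp, out, name=name):
                if isinstance(out, torch.Tensor):
                    store[name] = (out.mean().item(), out.std().item())
            handles.append(module.register_forward_hook(hook))
    return handles

def log_internals(model, x, prev_weights=None):
    """Print weight stats, weight movement and activations for one input."""
    activations = {}
    handles = register_activation_hooks(model, activations)
    with torch.no_grad():
        output = model(x)
    for h in handles:
        h.remove()

    # Input right before the network, output right after, for one example.
    print("input:", x.flatten()[:5], "... output:", output.flatten()[:5])

    new_weights = {}
    for name, p in model.named_parameters():
        new_weights[name] = p.detach().clone()
        moved = ((p - prev_weights[name]).abs().mean().item()
                 if prev_weights else float("nan"))
        print(f"{name}: mean {p.mean().item():.4f}, "
              f"std {p.std().item():.4f}, moved {moved:.6f}")
    for name, (mean, std) in activations.items():
        print(f"activation {name}: mean {mean:.4f}, std {std:.4f}")
    return new_weights  # pass back as `prev_weights` on the next call
```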


You can find more ideas on Andrej Karpathy’s blog.

That’s enough for a good start, and it’s essential. If Sanity Checks are your 13th month of work, these insights are your extra year.

