This is another of my blog posts based on the notes I took while watching Andrew Ng’s deep learning course.
According to Andrew Ng, the critical hyperparameters (in the order of their importance) are:
- learning rate
- momentum (if you are using gradient descent with momentum), mini-batch size, and the number of hidden units in layers
- the number of layers and learning rate decay
Looking at this list, we may notice that Andrew Ng considers a hyperparameter more significant when it has a stronger influence on the weights produced during training.
Andrew Ng recommends abandoning grid search and replacing it with a random search of hyperparameters. This advice is based on the observation that it is better to try a more diverse set of values than to keep an important parameter (for example, the learning rate) fixed for a while in order to tweak less crucial parameters (like the mini-batch size or the number of layers).
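To make the difference concrete, here is a minimal sketch (my own illustration, not code from the course) comparing a grid search with a random search over the learning rate and the mini-batch size:

import numpy as np

np.random.seed(42)

# Grid search: only three distinct learning rates are ever tried,
# no matter how many mini-batch sizes we pair them with.
grid = [(lr, batch) for lr in [0.1, 0.01, 0.001] for batch in [32, 64, 128]]

# Random search: every trial draws a fresh learning rate, so the same
# nine trials explore nine different values of the most important hyperparameter.
random_trials = [(10 ** (-4 * np.random.rand()), int(np.random.choice([32, 64, 128])))
                 for _ in range(9)]

print("grid:  ", grid)
print("random:", random_trials)

With nine trials, the grid tests only three learning rates, while the random search tests nine.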
Coarse to fine hyperparameter search
This approach consists of two steps. First, we look for the best hyperparameters using random search. Then we repeat the random search, but limit the range to values close to the best hyperparameters from the previous step.
We do it because it lets us quickly find hyperparameters that are good enough and then continue tweaking them.
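As a rough sketch of both steps (the train_and_evaluate function below is a toy stand-in for training a real model, not something from the course; the log-scale sampling it uses is explained in the next section):

import numpy as np

def train_and_evaluate(learning_rate):
    # Toy stand-in: pretends a learning rate of 0.01 gives the best
    # validation score; a real implementation would train the network.
    return -abs(np.log10(learning_rate) + 2)

# Step 1: coarse random search over the whole range 10^-4 .. 10^0.
coarse = [10 ** (-4 * np.random.rand()) for _ in range(20)]
best_lr = max(coarse, key=train_and_evaluate)

# Step 2: fine random search in a narrower range around the best value
# (here roughly half an order of magnitude on each side).
low, high = np.log10(best_lr) - 0.5, np.log10(best_lr) + 0.5
fine = [10 ** np.random.uniform(low, high) for _ in range(20)]
best_lr = max(fine, key=train_and_evaluate)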
Use log scale for searching hyperparameters
The final piece of advice from Andrew Ng is to use a logarithmic scale when randomly choosing hyperparameters. We do it because sampling uniformly on a linear scale would concentrate most of the values near the upper end of the range; sampling the exponent uniformly instead spreads the values evenly across all orders of magnitude.
For example, if we want values between 1 and 0.0001, we can randomly generate the exponent and use a constant base:
import numpy as np

n = -4 * np.random.rand(4)  # four random exponents between -4 and 0
parameter = 10**n           # four values between 0.0001 and 1