GitHub / fitushar / Cyclical-Learning-Rates-for-Training-Neural-Networks-With-Unbalanced-Data-Sets
The learning rate is one of the most important hyper-parameters to tune when training convolutional neural networks. In this paper, cyclical learning rates (CLR), a powerful technique for selecting a range of learning rates for a neural network, were implemented on data sets with two different degrees of skewness. Rather than using a single fixed value, CLR cycles the learning rate between a lower and an upper bound during training. CLR policies are computationally simple and avoid the expense of fine-tuning a fixed learning rate. The results clearly show that varying the learning rate during training yields far better results than a fixed value, with a similar or even smaller number of epochs.
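The cycling between a lower and upper bound described above can be sketched with the standard "triangular" CLR policy from Smith's original paper; the function below is an illustrative sketch, not the repository's actual code, and the parameter names (`step_size`, `base_lr`, `max_lr`) are assumptions:

```python
import math

def triangular_clr(iteration, step_size, base_lr, max_lr):
    """Triangular cyclical learning rate.

    The learning rate rises linearly from base_lr to max_lr and back,
    completing one full cycle every 2 * step_size iterations.
    """
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# Example: with step_size=2000, the rate peaks at iteration 2000
# and returns to base_lr at iteration 4000.
lr = triangular_clr(2000, step_size=2000, base_lr=1e-4, max_lr=1e-2)
```

In practice this function would be called once per training iteration (e.g. inside a Keras `LearningRateScheduler` or a custom callback) to set the optimizer's learning rate before each batch.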
Stars: 3
Forks: 0
Open issues: 0
License: None
Language: Jupyter Notebook
Size: 2.56 MB
Created at: almost 7 years ago
Updated at: over 2 years ago
Pushed at: over 6 years ago
Topics: clr, clr-policies, cnn, cyclical-learning-rates, epochs, expense, hyperparameters, learning-rate, learning-rates, neural-network, range, skewness-degrees