Apply L2 regularization to TensorFlow variables
Basic steps to apply L2 regularization to TensorFlow variables:
- First, call tf.contrib.layers.l2_regularizer() to get an L2 regularization function with regularization strength lamda.
- Next, define a TensorFlow variable with the keyword argument regularizer; the variable's L2 penalty term is then added to the tf.GraphKeys.REGULARIZATION_LOSSES collection.
- Finally, get the regularization loss terms from the collection, sum them, and add the total to the training loss, as in the code below.
import tensorflow as tf

# L2 regularization function with strength lamda
regularizer = tf.contrib.layers.l2_regularizer(lamda)
# regularizer= adds this variable's L2 penalty term to the
# tf.GraphKeys.REGULARIZATION_LOSSES collection
W1 = tf.get_variable('W1', [3, 3, 3, 32],
                     initializer=tf.contrib.layers.xavier_initializer(),
                     regularizer=regularizer)
# sum the collected penalty terms and add them to the training loss
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_term = tf.add_n(reg_losses)
loss += reg_term
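As a quick sanity check, here is a minimal, self-contained sketch (illustrative only; the variable name W_check and the value lamda = 0.1 are arbitrary choices, not from the original snippet) that creates one regularized variable and confirms the collected penalty equals lamda * sum(W**2) / 2, which is what tf.contrib.layers.l2_regularizer computes via tf.nn.l2_loss:

import numpy as np
import tensorflow as tf

lamda = 0.1  # example regularization strength
regularizer = tf.contrib.layers.l2_regularizer(lamda)
W = tf.get_variable('W_check', [4, 4],
                    initializer=tf.contrib.layers.xavier_initializer(),
                    regularizer=regularizer)
reg_term = tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    w_val, reg_val = sess.run([W, reg_term])
    # l2_regularizer(scale) computes scale * sum(w**2) / 2
    print(reg_val, lamda * np.sum(w_val ** 2) / 2.0)

In TensorFlow 1.x, tf.losses.get_regularization_loss() should return the same total of the REGULARIZATION_LOSSES collection in a single call.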
Per the TensorFlow 1.4 release notes, CUDA 8 and cuDNN 6 are currently supported.
CUDA 9 and cuDNN 7 will be supported in the next release.
Download CUDA 8
Download cuDNN 6
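To confirm that the installed build actually sees CUDA and a GPU, a small check like the following can be used (a sketch; the printed values in the comments are only examples and depend on the local setup):

import tensorflow as tf

print(tf.__version__)                # e.g. '1.4.0'
print(tf.test.is_built_with_cuda())  # True for a GPU-enabled build
print(tf.test.gpu_device_name())     # e.g. '/device:GPU:0' when CUDA 8 / cuDNN 6 are installed correctly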