{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "FhGuhbZ6M5tl" }, "outputs": [], "source": [ "##### Copyright 2022 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "AwOEIRJC6Une", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "EIdT9iu_Z4Rb" }, "source": [ "# 使用 Core API 构建优化器" ] }, { "cell_type": "markdown", "metadata": { "id": "bBIlTPscrIT9" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
在 TensorFlow.org 上查看 在 Google Colab 中运行 在 GitHub 上查看源代码 下载笔记本
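{ "cell_type": "markdown", "metadata": {}, "source": [ "Written as an update rule, using the same notation as the momentum section later in this notebook ($lr$ for the learning rate and $L^\\prime(x)$ for the gradient of the loss with respect to the variable $x$), one step of plain gradient descent is:\n", "\n", "$$x^{[t]} = x^{[t-1]} - lr \\cdot L^\\prime(x^{[t-1]})$$\n", "\n", "The `GradientDescent` class implemented below applies exactly this rule to each variable." ] },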
" ] }, { "cell_type": "markdown", "metadata": { "id": "SjAxxRpBzVYg" }, "source": [ "## 简介\n", "\n", "本笔记本介绍使用 [TensorFlow Core 低级 API](https://tensorflow.google.cn/guide/core) 创建自定义优化器的过程。访问 [Core API 概述](https://tensorflow.google.cn/guide/core)以详细了解 TensorFlow Core 及其预期用例。\n", "\n", "[Keras 优化器](https://tensorflow.google.cn/api_docs/python/tf/keras/optimizers)模块是一种推荐用于许多一般训练用途的优化工具包。它包含各种预构建的优化器,以及用于自定义的子类化功能。Keras 优化器还兼容使用 Core API 构建的自定义层、模型和训练循环。这些预构建和可自定义的优化器适用于大多数用例,但借助 Core API,您将可以完全控制优化过程。例如,锐度感知最小化 (SAM) 等技术需要模型与优化器耦合,这并不符合机器学习优化器的传统定义。本指南将逐步介绍使用 Core API 从头开始构建自定义优化器的过程,使您具备完全控制优化器的结构、实现和行为的能力。" ] }, { "cell_type": "markdown", "metadata": { "id": "nBmqYyodNRd_" }, "source": [ "## 优化器概述\n", "\n", "优化器是一种用于针对模型可训练参数最小化损失函数的算法。最直接的优化技术为梯度下降,它会通过朝损失函数的最陡下降方向前进一步来迭代更新模型的参数。它的步长与梯度的大小成正比,当梯度过大或过小时都会出现问题。还有许多其他基于梯度的优化器,例如 Adam、Adagrad 和 RMSprop,它们利用梯度的各种数学属性来提高内存效率和加快收敛速度。" ] }, { "cell_type": "markdown", "metadata": { "id": "nchsZfwEVtVs" }, "source": [ "## 安装" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "d9idwpXCltUl", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import matplotlib\n", "from matplotlib import pyplot as plt\n", "# Preset Matplotlib figure sizes.\n", "matplotlib.rcParams['figure.figsize'] = [9, 6]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9xQKvCJ85kCQ", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import tensorflow as tf\n", "print(tf.__version__)\n", "# set random seed for reproducible results \n", "tf.random.set_seed(22)" ] }, { "cell_type": "markdown", "metadata": { "id": "0UmF5aU3MnwX" }, "source": [ "## 梯度下降\n", "\n", "基本优化器类应具有初始化方法以及用于基于一列梯度更新一列变量的函数。我们首先实现基本的梯度下降优化器,通过减去按学习率缩放的梯度来更新每个变量。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MWjmUmeOQFFN", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class GradientDescent(tf.Module):\n", "\n", " def __init__(self, learning_rate=1e-3):\n", " # Initialize parameters\n", " self.learning_rate = learning_rate\n", " self.title = f\"Gradient descent optimizer: learning rate={self.learning_rate}\"\n", "\n", " def apply_gradients(self, grads, vars):\n", " # Update variables\n", " for grad, var in zip(grads, vars):\n", " var.assign_sub(self.learning_rate*grad)" ] }, { "cell_type": "markdown", "metadata": { "id": "ZSekgBHDRzmp" }, "source": [ "要测试此优化器,请创建一个样本损失函数以针对单个变量 $x$ 进行最小化。计算它的梯度函数并对其最小化参数值求解:\n", "\n", "$$L = 2x^4 + 3x^3 + 2$$\n", "\n", "$$\\frac{dL}{dx} = 8x^3 + 9x^2$$\n", "\n", "$\\frac{dL}{dx}$ 在 $x = 0$ 时为 0,这是一个鞍点;在 $x = - \\frac{9}{8}$ 时为全局最小值。因此,损失函数在 $x^\\star = - \\frac{9}{8}$ 时得到优化。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VCtJaUo6Ry8V", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "x_vals = tf.linspace(-2, 2, 201)\n", "x_vals = tf.cast(x_vals, tf.float32)\n", "\n", "def loss(x):\n", " return 2*(x**4) + 3*(x**3) + 2\n", "\n", "def grad(f, x):\n", " with tf.GradientTape() as tape:\n", " tape.watch(x)\n", " result = f(x)\n", " return tape.gradient(result, x)\n", "\n", "plt.plot(x_vals, loss(x_vals), c='k', label = \"Loss function\")\n", "plt.plot(x_vals, grad(loss, x_vals), c='tab:blue', label = \"Gradient function\")\n", "plt.plot(0, loss(0), marker=\"o\", c='g', label = \"Inflection point\")\n", "plt.plot(-9/8, loss(-9/8), marker=\"o\", c='r', label = \"Global minimum\")\n", "plt.legend()\n", "plt.ylim(0,5)\n", "plt.xlabel(\"x\")\n", "plt.ylabel(\"loss\")\n", "plt.title(\"Sample loss function and 
gradient\");" ] }, { "cell_type": "markdown", "metadata": { "id": "fLlIBJ9yuwhE" }, "source": [ "编写一个函数来测试具有单个变量损失函数的优化器的收敛性。假设当更新参数在时间步骤 $t$ 的值与其原先在时间步骤 $t-1$ 的值相同时,即已实现收敛。在一定数量的迭代后终止测试,并在过程中跟踪任何梯度爆炸情况。为了对优化算法提出真正的挑战,以不良方式初始化参数。在上面的示例中,$x = 2$ 是一个不错的选择,因为它涉及到较陡梯度并且还会导致出现拐点。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SLQTc41ouv0F", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def convergence_test(optimizer, loss_fn, grad_fn=grad, init_val=2., max_iters=2000):\n", " # Function for optimizer convergence test\n", " print(optimizer.title)\n", " print(\"-------------------------------\")\n", " # Initializing variables and structures\n", " x_star = tf.Variable(init_val)\n", " param_path = []\n", " converged = False\n", "\n", " for iter in range(1, max_iters + 1):\n", " x_grad = grad_fn(loss_fn, x_star)\n", "\n", " # Case for exploding gradient\n", " if tf.math.is_nan(x_grad):\n", " print(f\"Gradient exploded at iteration {iter}\\n\")\n", " return []\n", "\n", " # Updating the variable and storing its old-version\n", " x_old = x_star.numpy()\n", " optimizer.apply_gradients([x_grad], [x_star])\n", " param_path.append(x_star.numpy())\n", "\n", " # Checking for convergence\n", " if x_star == x_old:\n", " print(f\"Converged in {iter} iterations\\n\")\n", " converged = True\n", " break\n", " \n", " # Print early termination message\n", " if not converged:\n", " print(f\"Exceeded maximum of {max_iters} iterations. Test terminated.\\n\")\n", " return param_path" ] }, { "cell_type": "markdown", "metadata": { "id": "vK-7_TsmyAgI" }, "source": [ "针对以下学习率测试梯度下降优化器的收敛性:1e-3、1e-2、1e-1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lWRn8c91mqB0", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "param_map_gd = {}\n", "learning_rates = [1e-3, 1e-2, 1e-1]\n", "for learning_rate in learning_rates:\n", " param_map_gd[learning_rate] = (convergence_test(\n", " GradientDescent(learning_rate=learning_rate), loss_fn=loss))" ] }, { "cell_type": "markdown", "metadata": { "id": "TydrGHF5y6iI" }, "source": [ "在损失函数的等高线图上呈现参数的路径。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "piffzGHI_u5G", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def viz_paths(param_map, x_vals, loss_fn, title, max_iters=2000):\n", " # Creating a controur plot of the loss function\n", " t_vals = tf.range(1., max_iters + 100.)\n", " t_grid, x_grid = tf.meshgrid(t_vals, x_vals)\n", " loss_grid = tf.math.log(loss_fn(x_grid))\n", " plt.pcolormesh(t_vals, x_vals, loss_grid, vmin=0, shading='nearest')\n", " colors = ['r', 'w', 'c']\n", " # Plotting the parameter paths over the contour plot\n", " for i, learning_rate in enumerate(param_map):\n", " param_path = param_map[learning_rate]\n", " if len(param_path) > 0:\n", " x_star = param_path[-1]\n", " plt.plot(t_vals[:len(param_path)], param_path, c=colors[i])\n", " plt.plot(len(param_path), x_star, marker='o', c=colors[i], \n", " label = f\"x*: learning rate={learning_rate}\")\n", " plt.xlabel(\"Iterations\")\n", " plt.ylabel(\"Parameter value\")\n", " plt.legend()\n", " plt.title(f\"{title} parameter paths\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Ssyj2sO4BcNY", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "viz_paths(param_map_gd, x_vals, loss, \"Gradient descent\")" ] }, { "cell_type": "markdown", "metadata": { "id": "MmM-5eDLFnmC" }, "source": [ 
"当使用较小的学习率时,梯度下降似乎停滞在拐点处。提高学习率即加大步长,因此可以在停滞区域周围加快移动;然而,当损失函数十分陡峭时,这会带来在早期迭代中发生梯度爆炸的风险。" ] }, { "cell_type": "markdown", "metadata": { "id": "m5CDeXN8S1SF" }, "source": [ "## 具有动量的梯度下降\n", "\n", "具有动量的梯度下降不仅使用梯度来更新变量,而且还涉及到基于变量先前更新的变量位置变化。动量参数决定了时间步骤 $t-1$ 的更新对于时间步骤 $t$ 的更新所含影响的程度。累积动量有助于使变量能够相比基本梯度下降更快地通过停滞区域。动量更新规则如下:\n", "\n", "$$\\Delta_x^{[t]} = lr \\cdot L^\\prime(x^{[t-1]}) + p \\cdot \\Delta_x^{[t-1]}$$\n", "\n", "$$x^{[t]} = x^{[t-1]} - \\Delta_x^{[t]}$$\n", "\n", "在哪里\n", "\n", "- $x$:经优化的变量\n", "- $\\Delta_x$:$x$ 的变化\n", "- $lr$:学习率\n", "- $L^\\prime(x)$:损失函数相对于 x 的梯度\n", "- $p$:动量参数" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rOBY8Tz4S0dX", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class Momentum(tf.Module):\n", "\n", " def __init__(self, learning_rate=1e-3, momentum=0.7):\n", " # Initialize parameters\n", " self.learning_rate = learning_rate\n", " self.momentum = momentum\n", " self.change = 0.\n", " self.title = f\"Gradient descent optimizer: learning rate={self.learning_rate}\"\n", "\n", " def apply_gradients(self, grads, vars):\n", " # Update variables \n", " for grad, var in zip(grads, vars):\n", " curr_change = self.learning_rate*grad + self.momentum*self.change\n", " var.assign_sub(curr_change)\n", " self.change = curr_change" ] }, { "cell_type": "markdown", "metadata": { "id": "t_nDu38gW6Fu" }, "source": [ "针对以下学习率测试动量优化器的收敛性:1e-3、1e-2、1e-1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tA6oQL-sW2xg", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "param_map_mtm = {}\n", "learning_rates = [1e-3, 1e-2, 1e-1]\n", "for learning_rate in learning_rates:\n", " param_map_mtm[learning_rate] = (convergence_test(\n", " Momentum(learning_rate=learning_rate),\n", " loss_fn=loss, grad_fn=grad))" ] }, { "cell_type": "markdown", "metadata": { "id": "wz_LV0EPYE6k" }, "source": [ "在损失函数的等高线图上呈现参数的路径。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qbW1eEKaX3T9", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "viz_paths(param_map_mtm, x_vals, loss, \"Momentum\")" ] }, { "cell_type": "markdown", "metadata": { "id": "4bEFnhPRTBXh" }, "source": [ "## 自适应矩估计 (Adam)\n", "\n", "自适应矩估计 (Adam) 算法是一种高效且高度泛化的优化技术,它利用了两种关键的梯度下降算法:动量和均方根传播 (RMSP)。动量会通过使用一阶矩(梯度之和)和衰减参数来帮助加速梯度下降。RMSP 与之类似,差别是会利用二阶矩(梯度平方和)。\n", "\n", "Adam 算法将一阶矩与二阶矩相结合,因此可提供泛化能力更高的更新规则。变量 $x$ 的符号可通过计算 $\\frac{x}{\\sqrt{x^2}}$ 来确定。Adam 优化器基于这一事实来计算实际上是平滑符号的更新步骤。优化器并不计算 $\\frac{x}{\\sqrt{x^2}}$,而是针对每个变量更新计算 $x$(一阶矩)和 $x^2$(二阶矩)的平滑版本。\n" ] }, { "cell_type": "markdown", "metadata": { "id": "WjgyqRiZ7XhA" }, "source": [ "**Adam 算法**\n", "\n", "$\\beta_1 \\gets 0.9 ; \\triangleright \\text{literature value}$\n", "\n", "$\\beta_2 \\gets 0.999 ; \\triangleright \\text{literature value}$\n", "\n", "$lr \\gets \\text{1e-3} ; \\triangleright \\text{configurable learning rate}$\n", "\n", "$\\epsilon \\gets \\text{1e-7} ; \\triangleright \\text{prevents divide by 0 error}$\n", "\n", "$V_{dv} \\gets \\vec {\\underset{n\\times1}{0}} ;\\triangleright \\text{stores momentum updates for each variable}$\n", "\n", "$S_{dv} \\gets \\vec {\\underset{n\\times1}{0}} ; \\triangleright \\text{stores RMSP updates for each variable}$\n", "\n", "$t \\gets 1$\n", "\n", "$\\text{On iteration } t:$\n", "\n", "$;;;; \\text{For} (\\frac{dL}{dv}, v) \\text{ in gradient variable pairs}:$\n", "\n", "$;;;;;;;; V_{dv_i} = \\beta_1V_{dv_i} + (1 - \\beta_1)\\frac{dL}{dv} ; \\triangleright \\text{momentum update}$\n", "\n", 
"$;;;;;;;; S_{dv_i} = \\beta_2V_{dv_i} + (1 - \\beta_2)(\\frac{dL}{dv})^2 ; \\triangleright \\text{RMSP update}$\n", "\n", "$;;;;;;;; v_{dv}^{bc} = \\frac{V_{dv_i}}{(1-\\beta_1)^t} ; \\triangleright \\text{momentum bias correction}$\n", "\n", "$;;;;;;;; s_{dv}^{bc} = \\frac{S_{dv_i}}{(1-\\beta_2)^t} ; \\triangleright \\text{RMSP bias correction}$\n", "\n", "$;;;;;;;; v = v - lr\\frac{v_{dv}^{bc}}{\\sqrt{s_{dv}^{bc}} + \\epsilon} ; \\triangleright \\text{parameter update}$\n", "\n", "$;;;;;;;; t = t + 1$\n", "\n", "**算法结束**\n", "\n", "假设 $V_{dv}$ 和 $S_{dv}$ 初始化为 0,且 $\\beta_1$ 和 $\\beta_2$ 接近于 1,则动量和 RMSP 更新会自然偏向 0;因此,变量可以受益于偏差校正。偏差校正还有助于控制权重在接近全局最小值时的振荡。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hm5vffRJRsEc", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class Adam(tf.Module):\n", " \n", " def __init__(self, learning_rate=1e-3, beta_1=0.9, beta_2=0.999, ep=1e-7):\n", " # Initialize the Adam parameters\n", " self.beta_1 = beta_1\n", " self.beta_2 = beta_2\n", " self.learning_rate = learning_rate\n", " self.ep = ep\n", " self.t = 1.\n", " self.v_dvar, self.s_dvar = [], []\n", " self.title = f\"Adam: learning rate={self.learning_rate}\"\n", " self.built = False\n", "\n", " def apply_gradients(self, grads, vars):\n", " # Set up moment and RMSprop slots for each variable on the first call\n", " if not self.built:\n", " for var in vars:\n", " v = tf.Variable(tf.zeros(shape=var.shape))\n", " s = tf.Variable(tf.zeros(shape=var.shape))\n", " self.v_dvar.append(v)\n", " self.s_dvar.append(s)\n", " self.built = True\n", " # Perform Adam updates\n", " for i, (d_var, var) in enumerate(zip(grads, vars)):\n", " # Moment calculation\n", " self.v_dvar[i] = self.beta_1*self.v_dvar[i] + (1-self.beta_1)*d_var\n", " # RMSprop calculation\n", " self.s_dvar[i] = self.beta_2*self.s_dvar[i] + (1-self.beta_2)*tf.square(d_var)\n", " # Bias correction\n", " v_dvar_bc = self.v_dvar[i]/(1-(self.beta_1**self.t))\n", " s_dvar_bc = self.s_dvar[i]/(1-(self.beta_2**self.t))\n", " # Update model variables\n", " var.assign_sub(self.learning_rate*(v_dvar_bc/(tf.sqrt(s_dvar_bc) + self.ep)))\n", " # Increment the iteration counter\n", " self.t += 1." 
] }, { "cell_type": "markdown", "metadata": { "id": "UWN4Qus7flUO" }, "source": [ "使用与梯度下降示例相同的学习率测试 Adam 优化器的性能。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GXHCxtemFBpR", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "param_map_adam = {}\n", "learning_rates = [1e-3, 1e-2, 1e-1]\n", "for learning_rate in learning_rates:\n", " param_map_adam[learning_rate] = (convergence_test(\n", " Adam(learning_rate=learning_rate), loss_fn=loss))" ] }, { "cell_type": "markdown", "metadata": { "id": "jgpUcs_xXEjX" }, "source": [ "在损失函数的等高线图上呈现参数的路径。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ctvOUmlzFK8s", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "viz_paths(param_map_adam, x_vals, loss, \"Adam\")" ] }, { "cell_type": "markdown", "metadata": { "id": "_oGScF8zJcY4" }, "source": [ "在此特定示例中,当使用较小的学习率时,Adam 优化器与传统梯度下降相比收敛速度较慢。但当学习率较大时,该算法成功地越过了停滞区域并收敛到全局最小值。由于 Adam 在遇到较大梯度时可动态缩放学习率,不再存在梯度爆炸问题。" ] }, { "cell_type": "markdown", "metadata": { "id": "VFLfEH4ManbW" }, "source": [ "## 结论\n", "\n", "此笔记本介绍了使用 [TensorFlow Core API](https://tensorflow.google.cn/guide/core) 编写和比较优化器的基础知识。尽管像 Adam 这样的预构建优化器具备可泛化性,但它们可能并非总是每个模型或数据集的最佳选择。对优化过程进行精细控制有助于简化机器学习训练工作流并提高整体性能。有关自定义优化器的更多示例,请参阅以下文档:\n", "\n", "- [多层感知器](https://tensorflow.google.cn/guide/core/mlp_core)教程和[分布式训练]()中使用了此 Adam 优化器\n", "- [Model Garden](https://blog.tensorflow.org/2020/03/introducing-model-garden-for-tensorflow-2.html) 中包含多种使用 Core API 编写的[自定义优化器](https://github.com/tensorflow/models/tree/master/official/modeling/optimization)。\n" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "optimizers_core.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }