{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "b518b04cbfe0" }, "outputs": [], "source": [ "##### Copyright 2020 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "906e07f6e562", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "d201e826ab29" }, "source": [ "# 编写自己的回调函数" ] }, { "cell_type": "markdown", "metadata": { "id": "71699af85d2d" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
在 TensorFlow.org 上查看 在 Google Colab 中运行 在 GitHub 上查看源代码 下载笔记本
" ] }, { "cell_type": "markdown", "metadata": { "id": "d75eb2e25f36" }, "source": [ "## 简介\n", "\n", "回调是一种可以在训练、评估或推断过程中自定义 Keras 模型行为的强大工具。示例包括使用 TensorBoard 来呈现训练进度和结果的 `tf.keras.callbacks.TensorBoard`,以及用来在训练期间定期保存模型的 `tf.keras.callbacks.ModelCheckpoint`。\n", "\n", "在本指南中,您将了解什么是 Keras 回调函数,它可以做什么,以及如何构建自己的回调函数。我们提供了一些简单回调函数应用的演示,以帮助您入门。" ] }, { "cell_type": "markdown", "metadata": { "id": "b3600ee25c8e" }, "source": [ "## 设置" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4dadb6688663", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import tensorflow as tf\n", "from tensorflow import keras" ] }, { "cell_type": "markdown", "metadata": { "id": "42676f705fc8" }, "source": [ "## Keras 回调函数概述\n", "\n", "所有回调函数都将 `keras.callbacks.Callback` 类作为子类,并重写在训练、测试和预测的各个阶段调用的一组方法。回调函数对于在训练期间了解模型的内部状态和统计信息十分有用。\n", "\n", "您可以将回调函数的列表(作为关键字参数 `callbacks`)传递给以下模型方法:\n", "\n", "- `keras.Model.fit()`\n", "- `keras.Model.evaluate()`\n", "- `keras.Model.predict()`" ] }, { "cell_type": "markdown", "metadata": { "id": "46945bdf5056" }, "source": [ "## 回调函数方法概述\n", "\n", "### 全局方法\n", "\n", "#### `on_(train|test|predict)_begin(self, logs=None)`\n", "\n", "在 `fit`/`evaluate`/`predict` 开始时调用。\n", "\n", "#### `on_(train|test|predict)_end(self, logs=None)`\n", "\n", "在 `fit`/`evaluate`/`predict` 结束时调用。\n", "\n", "### Batch-level methods for training/testing/predicting\n", "\n", "#### `on_(train|test|predict)_batch_begin(self, batch, logs=None)`\n", "\n", "正好在训练/测试/预测期间处理批次之前调用。\n", "\n", "#### `on_(train|test|predict)_batch_end(self, batch, logs=None)`\n", "\n", "在训练/测试/预测批次结束时调用。在此方法中,`logs` 是包含指标结果的字典。\n", "\n", "### 周期级方法(仅训练)\n", "\n", "#### `on_epoch_begin(self, epoch, logs=None)`\n", "\n", "在训练期间周期开始时调用。\n", "\n", "#### `on_epoch_end(self, epoch, logs=None)`\n", "\n", "在训练期间周期开始时调用。" ] }, { "cell_type": "markdown", "metadata": { "id": "82f2370418a1" }, "source": [ "## 基本示例\n", "\n", "让我们来看一个具体的例子。首先,导入 Tensorflow 并定义一个简单的序列式 Keras 模型:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7350ea602e50", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Define the Keras model to add callbacks to\n", "def get_model():\n", " model = keras.Sequential()\n", " model.add(keras.layers.Dense(1, input_dim=784))\n", " model.compile(\n", " optimizer=keras.optimizers.RMSprop(learning_rate=0.1),\n", " loss=\"mean_squared_error\",\n", " metrics=[\"mean_absolute_error\"],\n", " )\n", " return model\n" ] }, { "cell_type": "markdown", "metadata": { "id": "044db5f2dc6f" }, "source": [ "然后,从 Keras 数据集 API 加载 MNIST 数据进行训练和测试:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "f8826736a184", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Load example MNIST data and pre-process it\n", "(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\n", "x_train = x_train.reshape(-1, 784).astype(\"float32\") / 255.0\n", "x_test = x_test.reshape(-1, 784).astype(\"float32\") / 255.0\n", "\n", "# Limit the data to 1000 samples\n", "x_train = x_train[:1000]\n", "y_train = y_train[:1000]\n", "x_test = x_test[:1000]\n", "y_test = y_test[:1000]" ] }, { "cell_type": "markdown", "metadata": { "id": "b9acd50b2215" }, "source": [ "接下来,定义一个简单的自定义回调函数来记录以下内容:\n", "\n", "- `fit`/`evaluate`/`predict` 开始和结束的时间\n", "- 每个周期开始和结束的时间\n", "- 每个训练批次开始和结束的时间\n", "- 每个评估(测试)批次开始和结束的时间\n", "- 每次推断(预测)批次开始和结束的时间" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "cc9888d28e79", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class CustomCallback(keras.callbacks.Callback):\n", " def on_train_begin(self, logs=None):\n", " keys = list(logs.keys())\n", " print(\"Starting training; got log keys: {}\".format(keys))\n", "\n", " def on_train_end(self, logs=None):\n", " keys = list(logs.keys())\n", " print(\"Stop training; got log keys: {}\".format(keys))\n", "\n", " def on_epoch_begin(self, epoch, logs=None):\n", " keys = list(logs.keys())\n", " print(\"Start epoch {} of training; got log keys: {}\".format(epoch, keys))\n", "\n", " def on_epoch_end(self, epoch, logs=None):\n", " keys = list(logs.keys())\n", " print(\"End epoch {} of training; got log keys: {}\".format(epoch, keys))\n", "\n", " def on_test_begin(self, logs=None):\n", " keys = list(logs.keys())\n", " print(\"Start testing; got log keys: {}\".format(keys))\n", "\n", " def on_test_end(self, logs=None):\n", " keys = list(logs.keys())\n", " print(\"Stop testing; got log keys: {}\".format(keys))\n", "\n", " def on_predict_begin(self, logs=None):\n", " keys = list(logs.keys())\n", " print(\"Start predicting; got log keys: {}\".format(keys))\n", "\n", " def on_predict_end(self, logs=None):\n", " keys = list(logs.keys())\n", " print(\"Stop predicting; got log keys: {}\".format(keys))\n", "\n", " def on_train_batch_begin(self, batch, logs=None):\n", " keys = list(logs.keys())\n", " print(\"...Training: start of batch {}; got log keys: {}\".format(batch, keys))\n", "\n", " def on_train_batch_end(self, batch, logs=None):\n", " keys = list(logs.keys())\n", " print(\"...Training: end of batch {}; got log keys: {}\".format(batch, keys))\n", "\n", " def on_test_batch_begin(self, batch, logs=None):\n", " keys = list(logs.keys())\n", " print(\"...Evaluating: start of batch {}; got log keys: {}\".format(batch, keys))\n", "\n", " def on_test_batch_end(self, batch, logs=None):\n", " keys = list(logs.keys())\n", " print(\"...Evaluating: end of batch {}; got log keys: {}\".format(batch, keys))\n", "\n", " def on_predict_batch_begin(self, batch, logs=None):\n", " keys = list(logs.keys())\n", " print(\"...Predicting: start of batch {}; got log keys: {}\".format(batch, keys))\n", "\n", " def on_predict_batch_end(self, batch, logs=None):\n", " keys = list(logs.keys())\n", " print(\"...Predicting: end of batch {}; got log keys: {}\".format(batch, keys))\n" ] }, { "cell_type": "markdown", "metadata": { "id": "8184bd3a76c2" }, "source": [ "我们来试一下:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "75f7aa1edac6", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "model = get_model()\n", "model.fit(\n", " x_train,\n", " y_train,\n", " batch_size=128,\n", " epochs=1,\n", " verbose=0,\n", " validation_split=0.5,\n", " callbacks=[CustomCallback()],\n", ")\n", "\n", "res = model.evaluate(\n", " x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]\n", ")\n", "\n", "res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()])" ] }, { "cell_type": "markdown", "metadata": { "id": "02113b8677eb" }, "source": [ "### `logs` 字典的用法\n", "\n", "`logs` 字典包含损失值,以及批次或周期结束时的所有指标。示例包括损失和平均绝对误差。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "629bc145eb84", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class LossAndErrorPrintingCallback(keras.callbacks.Callback):\n", " def on_train_batch_end(self, batch, logs=None):\n", " print(\n", " \"Up to batch {}, the average loss is {:7.2f}.\".format(batch, logs[\"loss\"])\n", " )\n", "\n", " def on_test_batch_end(self, batch, logs=None):\n", " print(\n", " \"Up to batch {}, the average loss is {:7.2f}.\".format(batch, logs[\"loss\"])\n", " )\n", "\n", " def on_epoch_end(self, epoch, logs=None):\n", " print(\n", " \"The average loss for epoch {} is {:7.2f} \"\n", " \"and mean absolute error is {:7.2f}.\".format(\n", " epoch, logs[\"loss\"], logs[\"mean_absolute_error\"]\n", " )\n", " )\n", "\n", "\n", "model = get_model()\n", "model.fit(\n", " x_train,\n", " y_train,\n", " batch_size=128,\n", " epochs=2,\n", " verbose=0,\n", " callbacks=[LossAndErrorPrintingCallback()],\n", ")\n", "\n", "res = model.evaluate(\n", " x_test,\n", " y_test,\n", " batch_size=128,\n", " verbose=0,\n", " callbacks=[LossAndErrorPrintingCallback()],\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "742d62e5394a" }, "source": [ "## `self.model` 属性的用法\n", "\n", "除了在调用其中一种方法时接收日志信息外,回调还可以访问与当前一轮训练/评估/推断有关的模型:`self.model`。\n", "\n", "以下是您可以在回调函数中使用 `self.model` 进行的一些操作:\n", "\n", "- 设置 `self.model.stop_training = True` 以立即中断训练。\n", "- 转变优化器(可作为 `self.model.optimizer`)的超参数,例如 `self.model.optimizer.learning_rate`。\n", "- 定期保存模型。\n", "- 在每个周期结束时,在少量测试样本上记录 `model.predict()` 的输出,以用作训练期间的健全性检查。\n", "- 在每个周期结束时提取中间特征的可视化,随时间推移监视模型当前的学习内容。\n", "- 其他\n", "\n", "下面我们通过几个示例来看看它是如何工作的。" ] }, { "cell_type": "markdown", "metadata": { "id": "7eb29d3ed752" }, "source": [ "## Keras 回调函数应用示例" ] }, { "cell_type": "markdown", "metadata": { "id": "2d1d29d99fa5" }, "source": [ "### 在达到最小损失时尽早停止\n", "\n", "第一个示例展示了如何通过设置 `self.model.stop_training`(布尔)属性来创建能够在达到最小损失时停止训练的 `Callback`。您还可以提供参数 `patience` 来指定在达到局部最小值后应该等待多少个周期然后停止。\n", "\n", "`tf.keras.callbacks.EarlyStopping` 提供了一种更完整、更通用的实现。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5d2acd79cecd", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import numpy as np\n", "\n", "\n", "class EarlyStoppingAtMinLoss(keras.callbacks.Callback):\n", " \"\"\"Stop training when the loss is at its min, i.e. the loss stops decreasing.\n", "\n", " Arguments:\n", " patience: Number of epochs to wait after min has been hit. After this\n", " number of no improvement, training stops.\n", " \"\"\"\n", "\n", " def __init__(self, patience=0):\n", " super(EarlyStoppingAtMinLoss, self).__init__()\n", " self.patience = patience\n", " # best_weights to store the weights at which the minimum loss occurs.\n", " self.best_weights = None\n", "\n", " def on_train_begin(self, logs=None):\n", " # The number of epoch it has waited when loss is no longer minimum.\n", " self.wait = 0\n", " # The epoch the training stops at.\n", " self.stopped_epoch = 0\n", " # Initialize the best as infinity.\n", " self.best = np.Inf\n", "\n", " def on_epoch_end(self, epoch, logs=None):\n", " current = logs.get(\"loss\")\n", " if np.less(current, self.best):\n", " self.best = current\n", " self.wait = 0\n", " # Record the best weights if current results is better (less).\n", " self.best_weights = self.model.get_weights()\n", " else:\n", " self.wait += 1\n", " if self.wait >= self.patience:\n", " self.stopped_epoch = epoch\n", " self.model.stop_training = True\n", " print(\"Restoring model weights from the end of the best epoch.\")\n", " self.model.set_weights(self.best_weights)\n", "\n", " def on_train_end(self, logs=None):\n", " if self.stopped_epoch > 0:\n", " print(\"Epoch %05d: early stopping\" % (self.stopped_epoch + 1))\n", "\n", "\n", "model = get_model()\n", "model.fit(\n", " x_train,\n", " y_train,\n", " batch_size=64,\n", " steps_per_epoch=5,\n", " epochs=30,\n", " verbose=0,\n", " callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "939ecfbe0383" }, "source": [ "### 学习率规划\n", "\n", "在此示例中,我们展示了如何在学习过程中使用自定义回调来动态更改优化器的学习率。\n", "\n", "有关更通用的实现,请查看 `callbacks.LearningRateScheduler`。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "71c752b248c0", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class CustomLearningRateScheduler(keras.callbacks.Callback):\n", " \"\"\"Learning rate scheduler which sets the learning rate according to schedule.\n", "\n", " Arguments:\n", " schedule: a function that takes an epoch index\n", " (integer, indexed from 0) and current learning rate\n", " as inputs and returns a new learning rate as output (float).\n", " \"\"\"\n", "\n", " def __init__(self, schedule):\n", " super(CustomLearningRateScheduler, self).__init__()\n", " self.schedule = schedule\n", "\n", " def on_epoch_begin(self, epoch, logs=None):\n", " if not hasattr(self.model.optimizer, \"lr\"):\n", " raise ValueError('Optimizer must have a \"lr\" attribute.')\n", " # Get the current learning rate from model's optimizer.\n", " lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))\n", " # Call schedule function to get the scheduled learning rate.\n", " scheduled_lr = self.schedule(epoch, lr)\n", " # Set the value back to the optimizer before this epoch starts\n", " tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)\n", " print(\"\\nEpoch %05d: Learning rate is %6.4f.\" % (epoch, scheduled_lr))\n", "\n", "\n", "LR_SCHEDULE = [\n", " # (epoch to start, learning rate) tuples\n", " (3, 0.05),\n", " (6, 0.01),\n", " (9, 0.005),\n", " (12, 0.001),\n", "]\n", "\n", "\n", "def lr_schedule(epoch, lr):\n", " \"\"\"Helper function to retrieve the scheduled learning rate based on epoch.\"\"\"\n", " if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:\n", " return lr\n", " for i in range(len(LR_SCHEDULE)):\n", " if epoch == LR_SCHEDULE[i][0]:\n", " return LR_SCHEDULE[i][1]\n", " return lr\n", "\n", "\n", "model = get_model()\n", "model.fit(\n", " x_train,\n", " y_train,\n", " batch_size=64,\n", " steps_per_epoch=5,\n", " epochs=15,\n", " verbose=0,\n", " callbacks=[\n", " LossAndErrorPrintingCallback(),\n", " CustomLearningRateScheduler(lr_schedule),\n", " ],\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "c9be225b57f1" }, "source": [ "### 内置 Keras 回调函数\n", "\n", "请务必阅读 [API 文档](https://tensorflow.google.cn/api_docs/python/tf/keras/callbacks/)查看现有的 Keras 回调函数。应用包括记录到 CSV、保存模型、在 TensorBoard 中可视化指标等等!" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "custom_callback.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }