{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # 设置日志级别为ERROR,以减少警告信息\n", "# 禁用 Gemini 的底层库(gRPC 和 Abseil)在初始化日志警告\n", "os.environ[\"GRPC_VERBOSITY\"] = \"ERROR\"\n", "os.environ[\"GLOG_minloglevel\"] = \"3\" # 0: INFO, 1: WARNING, 2: ERROR, 3: FATAL\n", "os.environ[\"GLOG_minloglevel\"] = \"true\"\n", "import logging\n", "import tensorflow as tf\n", "tf.get_logger().setLevel(logging.ERROR)\n", "tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)\n", "!export TF_FORCE_GPU_ALLOW_GROWTH=true\n", "from pathlib import Path\n", "\n", "temp_dir = Path(\".temp\")\n", "temp_dir.mkdir(parents=True, exist_ok=True)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "fTFj8ft5dlbS" }, "outputs": [], "source": [ "##### Copyright 2018 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "cellView": "form", "id": "lzyBOpYMdp3F" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "cellView": "form", "id": "m_x4KfSJ7Vt7" }, "outputs": [], "source": [ "#@title MIT License\n", "#\n", "# Copyright (c) 2017 François Chollet\n", "#\n", "# Permission is hereby granted, free of charge, to any person obtaining a\n", "# copy of this software and associated documentation files (the \"Software\"),\n", "# to deal in the Software without restriction, including without limitation\n", "# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n", "# and/or sell copies of the Software, and to permit persons to whom the\n", "# Software is furnished to do so, subject to the following conditions:\n", "#\n", "# The above copyright notice and this permission notice shall be included in\n", "# all copies or substantial portions of the Software.\n", "#\n", "# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n", "# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n", "# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n", "# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n", "# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n", "# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n", "# DEALINGS IN THE SOFTWARE." ] }, { "cell_type": "markdown", "metadata": { "id": "C9HmC2T4ld5B" }, "source": [ "# 过拟合与欠拟合" ] }, { "cell_type": "markdown", "metadata": { "id": "kRTxFhXAlnl1" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "19rPukKZsPG6" }, "source": [ "与往常一样,本示例中的代码将使用 `tf.keras` API,您可以通过 TensorFlow [Keras 指南](https://tensorflow.google.cn/guide/keras)了解详细信息。\n", "\n", "在之前的两个示例([文本分类](text_classification_with_hub.ipynb)和[预测燃油效率](regression.ipynb))中,可以看到模型在验证数据上的准确率会在经过多个周期的训练后达到峰值,然后便停滞不前或开始下降。\n", "\n", "换句话说,模型会对训练数据*过拟合*。学习如何处理过拟合很重要。尽管模型通常可以在*训练集*上达到很高的准确率,但我们真正想要的是开发出能很好地泛化到*测试集*(或之前未见过的数据)的模型。\n", "\n", "过拟合的反面是*欠拟合*。当在测试数据上仍有改进空间时就会发生欠拟合。出现这种情况的原因有很多:模型不够强大、过度正则化,或者只是训练时间不够长。这种情况意味着网络尚未学习训练数据中的相关模式。\n", "\n", "但如果训练时间过长,模型则会开始过拟合,并从训练数据中学习无法泛化到测试数据的模式。我们需要找到平衡点。了解如何训练合适数量的周期(将在下文进行探讨)是一项十分有用的技能。\n", "\n", "要防止过拟合,最好的解决方案是使用更完整的训练数据。数据集应该涵盖模型要处理的所有输入。其他数据可能只有在涉及新的值得关注的情况时才有用。\n", "\n", "在更完整的数据上训练的模型自然能更好地进行泛化。如果没有更完整的数据,则第二好的解决方案是使用正则化之类的技术。这些技术限制了模型可以存储的信息的数量和类型。如果网络只能记住少量的模式,则优化过程将迫使其关注最突出的模式,这些模式将有机会获得更好地泛化。\n", "\n", "在此笔记本中,您将探索几种常见的正则化技术,并使用这些技术改进分类模型。" ] }, { "cell_type": "markdown", "metadata": { "id": "WL8UoOTmGGsL" }, "source": [ "## 设置" ] }, { "cell_type": "markdown", "metadata": { "id": "9FklhSI0Gg9R" }, "source": [ "在开始之前,请导入必要的软件包:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "5pZ8A2liqvgk" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2.17.0\n" ] } ], "source": [ "import tensorflow as tf\n", "\n", "from tensorflow.keras import layers\n", "from tensorflow.keras import regularizers\n", "\n", "print(tf.__version__)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "QnAtAjqRYVXe" }, "outputs": [], "source": [ "# !pip install git+https://github.com/tensorflow/docs\n", "\n", "import tensorflow_docs as tfdocs\n", "import tensorflow_docs.modeling\n", "import tensorflow_docs.plots" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "-pnOU-ctX27Q" }, "outputs": [], "source": [ "from IPython import display\n", "from matplotlib import pyplot as plt\n", "\n", "import numpy as np\n", "\n", "import pathlib\n", "import shutil\n", "import tempfile\n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "jj6I4dvTtbUe" }, "outputs": [], "source": [ "logdir = pathlib.Path(tempfile.mkdtemp(dir=temp_dir))/\"tensorboard_logs\"\n", "shutil.rmtree(logdir, ignore_errors=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "1cweoTiruj8O" }, "source": [ "## 希格斯数据集\n", "\n", "本教程的目的不是粒子物理学,因此无需关注数据集的细节。它包含 11,000,000 个样本,每个样本有 28 个特征和一个二元类标签。" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "YPjAvwb-6dFd" }, "outputs": [], "source": [ "gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "AkiyUdaWIrww" }, "outputs": [], "source": [ "FEATURES = 28" ] }, { "cell_type": "markdown", "metadata": { "id": "SFggl9gYKKRJ" }, "source": [ "`tf.data.experimental.CsvDataset` 类可用于直接从 Gzip 文件读取 CSV 记录,而无需中间的解压步骤。" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "id": "QHz4sLVQEVIU" }, "outputs": [], "source": [ "ds = tf.data.experimental.CsvDataset(gz, [float(),]*(FEATURES+1), compression_type=\"GZIP\")" ] }, { "cell_type": "markdown", "metadata": { "id": "HzahEELTKlSV" }, "source": [ "CSV 读取器类会为每条记录返回一个标量列表。下面的函数会将此标量列表重新打包为 (feature_vector, label) 对。" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "id": "zPD6ICDlF6Wf" }, "outputs": [], "source": [ "def pack_row(*row):\n", " label = row[0]\n", " features = tf.stack(row[1:], 1)\n", " return features, label" ] }, { "cell_type": "markdown", 
"metadata": { "id": "4oa8tLuwLsbO" }, "source": [ "TensorFlow 在运算大批次数据时效率最高。\n", "\n", "因此,不要单独重新打包每一行,而是创建一个新的 `tf.data.Dataset`,该数据集会接收以 10,000 个样本为单位的批次,将 `pack_row` 函数应用于每个批次,然后将批次重新拆分为单个记录:" ] }, { "cell_type": "code", "execution_count": 62, "metadata": { "id": "-w-VHTwwGVoZ" }, "outputs": [], "source": [ "packed_ds = ds.batch(10000).map(pack_row).unbatch()" ] }, { "cell_type": "markdown", "metadata": { "id": "lUbxc5bxNSXV" }, "source": [ "检查这个新的 `packed_ds` 中的一些记录。\n", "\n", "虽然特征没有完全归一化,但对本教程而言已经足够了。" ] }, { "cell_type": "code", "execution_count": 64, "metadata": { "id": "TfcXuv33Fvka" }, "outputs": [ { "ename": "InvalidArgumentError", "evalue": "{{function_node __wrapped__IteratorGetNext_output_types_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Expect 29 fields but have 13 in record [Op:IteratorGetNext] name: ", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mInvalidArgumentError\u001b[0m Traceback (most recent call last)", "Cell \u001b[0;32mIn[64], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m features, label \u001b[38;5;129;01min\u001b[39;00m packed_ds\u001b[38;5;241m.\u001b[39mbatch(\u001b[38;5;241m100\u001b[39m)\u001b[38;5;241m.\u001b[39mtake(\u001b[38;5;241m1\u001b[39m):\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28mprint\u001b[39m(features[\u001b[38;5;241m0\u001b[39m])\n\u001b[1;32m 3\u001b[0m plt\u001b[38;5;241m.\u001b[39mhist(features\u001b[38;5;241m.\u001b[39mnumpy()\u001b[38;5;241m.\u001b[39mflatten(), bins \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m101\u001b[39m)\n", "File \u001b[0;32m/media/pc/data/lxw/envs/anaconda3x/envs/xxx/lib/python3.12/site-packages/tensorflow/python/data/ops/iterator_ops.py:826\u001b[0m, in \u001b[0;36mOwnedIterator.__next__\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 824\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__next__\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m 825\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 826\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_next_internal()\n\u001b[1;32m 827\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m errors\u001b[38;5;241m.\u001b[39mOutOfRangeError:\n\u001b[1;32m 828\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mStopIteration\u001b[39;00m\n", "File \u001b[0;32m/media/pc/data/lxw/envs/anaconda3x/envs/xxx/lib/python3.12/site-packages/tensorflow/python/data/ops/iterator_ops.py:776\u001b[0m, in \u001b[0;36mOwnedIterator._next_internal\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 773\u001b[0m \u001b[38;5;66;03m# TODO(b/77291417): This runs in sync mode as iterators use an error status\u001b[39;00m\n\u001b[1;32m 774\u001b[0m \u001b[38;5;66;03m# to communicate that there is no more data to iterate over.\u001b[39;00m\n\u001b[1;32m 775\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m context\u001b[38;5;241m.\u001b[39mexecution_mode(context\u001b[38;5;241m.\u001b[39mSYNC):\n\u001b[0;32m--> 776\u001b[0m ret \u001b[38;5;241m=\u001b[39m gen_dataset_ops\u001b[38;5;241m.\u001b[39miterator_get_next(\n\u001b[1;32m 777\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_iterator_resource,\n\u001b[1;32m 778\u001b[0m output_types\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_flat_output_types,\n\u001b[1;32m 779\u001b[0m 
output_shapes\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_flat_output_shapes)\n\u001b[1;32m 781\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 782\u001b[0m \u001b[38;5;66;03m# Fast path for the case `self._structure` is not a nested structure.\u001b[39;00m\n\u001b[1;32m 783\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_element_spec\u001b[38;5;241m.\u001b[39m_from_compatible_tensor_list(ret) \u001b[38;5;66;03m# pylint: disable=protected-access\u001b[39;00m\n", "File \u001b[0;32m/media/pc/data/lxw/envs/anaconda3x/envs/xxx/lib/python3.12/site-packages/tensorflow/python/ops/gen_dataset_ops.py:3086\u001b[0m, in \u001b[0;36miterator_get_next\u001b[0;34m(iterator, output_types, output_shapes, name)\u001b[0m\n\u001b[1;32m 3084\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m _result\n\u001b[1;32m 3085\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m _core\u001b[38;5;241m.\u001b[39m_NotOkStatusException \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[0;32m-> 3086\u001b[0m _ops\u001b[38;5;241m.\u001b[39mraise_from_not_ok_status(e, name)\n\u001b[1;32m 3087\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m _core\u001b[38;5;241m.\u001b[39m_FallbackException:\n\u001b[1;32m 3088\u001b[0m \u001b[38;5;28;01mpass\u001b[39;00m\n", "File \u001b[0;32m/media/pc/data/lxw/envs/anaconda3x/envs/xxx/lib/python3.12/site-packages/tensorflow/python/framework/ops.py:5983\u001b[0m, in \u001b[0;36mraise_from_not_ok_status\u001b[0;34m(e, name)\u001b[0m\n\u001b[1;32m 5981\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mraise_from_not_ok_status\u001b[39m(e, name) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m NoReturn:\n\u001b[1;32m 5982\u001b[0m e\u001b[38;5;241m.\u001b[39mmessage \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m (\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m name: \u001b[39m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;241m+\u001b[39m \u001b[38;5;28mstr\u001b[39m(name \u001b[38;5;28;01mif\u001b[39;00m name \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m))\n\u001b[0;32m-> 5983\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m core\u001b[38;5;241m.\u001b[39m_status_to_exception(e) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m\n", "\u001b[0;31mInvalidArgumentError\u001b[0m: {{function_node __wrapped__IteratorGetNext_output_types_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Expect 29 fields but have 13 in record [Op:IteratorGetNext] name: " ] } ], "source": [ "for features,label in packed_ds.batch(1000).take(1):\n", " print(features[0])\n", " plt.hist(features.numpy().flatten(), bins = 101)" ] }, { "cell_type": "markdown", "metadata": { "id": "ICKZRY7gN-QM" }, "source": [ "为了缩短本教程的篇幅,我们只使用前 1,000 个样本进行验证,再用接下来的 10,000 个样本进行训练:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hmk49OqZIFZP" }, "outputs": [], "source": [ "N_VALIDATION = int(1e3)\n", "N_TRAIN = int(1e4)\n", "BUFFER_SIZE = int(1e4)\n", "BATCH_SIZE = 500\n", "STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE" ] }, { "cell_type": "markdown", "metadata": { "id": "FP3M9DmvON32" }, "source": [ "`Dataset.skip` 和 `Dataset.take` 方法能够使这项操作变得容易。\n", "\n", "同时,使用 `Dataset.cache` 方法来确保加载器无需每个周期都需要从文件重新读取数据。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "H8H_ZzpBOOk-" }, "outputs": [], "source": [ "validate_ds = 
packed_ds.take(N_VALIDATION).cache()\n", "train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9zAOqk2_Px7K" }, "outputs": [], "source": [ "train_ds" ] }, { "cell_type": "markdown", "metadata": { "id": "6PMliHoVO3OL" }, "source": [ "这些数据集会返回单个样本。使用 `Dataset.batch` 方法创建适当大小的批次进行训练。在创建批次之前,还要记得在训练集上使用 `Dataset.shuffle` 和 `Dataset.repeat`。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Y7I4J355O223" }, "outputs": [], "source": [ "validate_ds = validate_ds.batch(BATCH_SIZE)\n", "train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)" ] }, { "cell_type": "markdown", "metadata": { "id": "lglk41MwvU5o" }, "source": [ "## 演示过拟合\n", "\n", "要避免过拟合,最简单的方法是从小模型开始。小模型是指具有少量可学习参数(由层数和每层的单元数决定)的模型。在深度学习中,模型中可学习参数的数量通常被称为模型的“容量”。\n", "\n", "凭直觉来看,模型的参数越多,“记忆容量”就越大,于是就能够轻松学习训练样本与其目标之间的字典式完美映射(这种映射没有任何泛化能力),但这在对以前未曾见过的数据进行预测时毫无用处。\n", "\n", "请务必牢记:深度学习模型往往擅长拟合训练数据,但真正的挑战是泛化而非拟合。\n", "\n", "另一方面,如果网络的记忆资源有限,便无法轻松学习映射。为了使损失最小化,它必须学习具有更强预测能力的压缩表示。同时,如果模型太小,则很难与训练数据拟合。我们需要找到“容量过剩”和“容量不足”之间的平衡点。\n", "\n", "遗憾的是,没有什么神奇的公式可以确定模型的正确大小或架构(层数或每层的正确大小)。您必须用一系列不同的架构进行试验。\n", "\n", "要找到合适的模型大小,最好先使用相对较少的层和参数,然后增加层的大小或添加新层,直到看到返回的验证损失逐渐减小。\n", "\n", "先从仅使用密集连接层 (`tf.keras.layers.Dense`) 作为基线的简单模型开始,然后创建更大的模型并进行对比。" ] }, { "cell_type": "markdown", "metadata": { "id": "_ReKHdC2EgVu" }, "source": [ "### 训练过程" ] }, { "cell_type": "markdown", "metadata": { "id": "pNzkSkkXSP5l" }, "source": [ "如果在训练期间逐渐减小学习率,许多模型的训练效果会更好。请使用 `tf.keras.optimizers.schedules` 随着时间的推移减小学习率:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LwQp-ERhAD6F" }, "outputs": [], "source": [ "lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(\n", " 0.001,\n", " decay_steps=STEPS_PER_EPOCH*1000,\n", " decay_rate=1,\n", " staircase=False)\n", "\n", "def get_optimizer():\n", " return tf.keras.optimizers.Adam(lr_schedule)" ] }, { "cell_type": "markdown", "metadata": { "id": "kANLx6OYTQ8B" }, "source": [ "上述代码设置了一个 `tf.keras.optimizers.schedules.InverseTimeDecay`,用于在 1,000 个周期时将学习率根据双曲线的形状降至基础速率的 1/2,在 2,000 个周期时将至 1/3,依此类推。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HIo_yPjEAFgn" }, "outputs": [], "source": [ "step = np.linspace(0,100000)\n", "lr = lr_schedule(step)\n", "plt.figure(figsize = (8,6))\n", "plt.plot(step/STEPS_PER_EPOCH, lr)\n", "plt.ylim([0,max(plt.ylim())])\n", "plt.xlabel('Epoch')\n", "_ = plt.ylabel('Learning Rate')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "ya7x7gr9UjU0" }, "source": [ "本教程中的每个模型都将使用相同的训练配置。因此,从回调列表开始,以可重用的方式对其进行设置。\n", "\n", "本教程的训练会运行许多个较短周期。为了降低日志记录噪声,请使用 `tfdocs.EpochDots`,它仅会为每个周期打印一个 `.`,并每隔 100 个周期打印一整套指标。\n", "\n", "然后,添加 `tf.keras.callbacks.EarlyStopping` 以避免冗长和不必要的训练时间。请注意,设置此回调是为了监视 `val_binary_crossentropy`,而不是 `val_loss`。这个区别在后面会很重要。\n", "\n", "使用 `callbacks.TensorBoard` 为训练生成 TensorBoard 日志。\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vSv8rfw_T85n" }, "outputs": [], "source": [ "def get_callbacks(name):\n", " return [\n", " tfdocs.modeling.EpochDots(),\n", " tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),\n", " tf.keras.callbacks.TensorBoard(logdir/name),\n", " ]" ] }, { "cell_type": "markdown", "metadata": { "id": "VhctzKhBWVDD" }, "source": [ "类似地,每个模型将使用相同的 `Model.compile` 和 `Model.fit` 设置:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xRCGwU3YH5sT" }, "outputs": [], "source": [ "def compile_and_fit(model, name, 
optimizer=None, max_epochs=10000):\n", " if optimizer is None:\n", " optimizer = get_optimizer()\n", " model.compile(optimizer=optimizer,\n", " loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n", " metrics=[\n", " tf.keras.metrics.BinaryCrossentropy(\n", " from_logits=True, name='binary_crossentropy'),\n", " 'accuracy'])\n", "\n", " model.summary()\n", "\n", " history = model.fit(\n", " train_ds,\n", " steps_per_epoch = STEPS_PER_EPOCH,\n", " epochs=max_epochs,\n", " validation_data=validate_ds,\n", " callbacks=get_callbacks(name),\n", " verbose=0)\n", " return history" ] }, { "cell_type": "markdown", "metadata": { "id": "mxBeiLUiWHJV" }, "source": [ "### 微模型" ] }, { "cell_type": "markdown", "metadata": { "id": "a6JDv12scLTI" }, "source": [ "从训练下面的模型开始:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EZh-QFjKHb70" }, "outputs": [], "source": [ "tiny_model = tf.keras.Sequential([\n", " layers.Dense(16, activation='elu', input_shape=(FEATURES,)),\n", " layers.Dense(1)\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "X72IUdWYipIS" }, "outputs": [], "source": [ "size_histories = {}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bdOcJtPGHhJ5" }, "outputs": [], "source": [ "size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')" ] }, { "cell_type": "markdown", "metadata": { "id": "rS_QGT6icwdI" }, "source": [ "现在查看一下模型的表现:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dkEvb2x5XsjE" }, "outputs": [], "source": [ "plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)\n", "plotter.plot(size_histories)\n", "plt.ylim([0.5, 0.7])" ] }, { "cell_type": "markdown", "metadata": { "id": "LGxGzh_FWOJ8" }, "source": [ "### 小模型" ] }, { "cell_type": "markdown", "metadata": { "id": "YjMb6E72f2pN" }, "source": [ "要检查能否超过小模型的表现,需要逐步训练一些较大的模型。\n", "\n", "请尝试两个隐藏层,其中每层包含 16 个单元:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QKgdXPx9usBa" }, "outputs": [], "source": [ "small_model = tf.keras.Sequential([\n", " # `input_shape` is only required here so that `.summary` works.\n", " layers.Dense(16, activation='elu', input_shape=(FEATURES,)),\n", " layers.Dense(16, activation='elu'),\n", " layers.Dense(1)\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LqG3MXF5xSjR" }, "outputs": [], "source": [ "size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')" ] }, { "cell_type": "markdown", "metadata": { "id": "L-DGRBbGxI6G" }, "source": [ "### 中等模型" ] }, { "cell_type": "markdown", "metadata": { "id": "SrfoVQheYSO5" }, "source": [ "现在尝试三个隐藏层,其中每层包含 64 个单元:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jksi-XtaxDAh" }, "outputs": [], "source": [ "medium_model = tf.keras.Sequential([\n", " layers.Dense(64, activation='elu', input_shape=(FEATURES,)),\n", " layers.Dense(64, activation='elu'),\n", " layers.Dense(64, activation='elu'),\n", " layers.Dense(1)\n", "])" ] }, { "cell_type": "markdown", "metadata": { "id": "jbngCZliYdma" }, "source": [ "然后使用相同的数据训练该模型:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Ofn1AwDhx-Fe" }, "outputs": [], "source": [ "size_histories['Medium'] = compile_and_fit(medium_model, \"sizes/Medium\")" ] }, { "cell_type": "markdown", "metadata": { "id": "vIPuf23FFaVn" }, "source": [ "### 大模型\n", "\n", "作为练习,您可以创建一个更大的模型,检查它开始过拟合的速度。接下来,为这个基准添加一个具有更大容量的网络,其容量远远超出解决问题的需要:" ] }, { "cell_type": "code", 
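"execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 补充示意(非教程原文):在构建更大的模型之前,可以先用 Model.count_params()\n", "# 对比前面几个模型的可学习参数数量,直观感受“容量”的差异。\n", "for name, m in [('Tiny', tiny_model), ('Small', small_model), ('Medium', medium_model)]:\n", "  print(f'{name:>6}: {m.count_params():,} 个参数')" ] }, { "cell_type": "code",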
"execution_count": null, "metadata": { "id": "ghQwwqwqvQM9" }, "outputs": [], "source": [ "large_model = tf.keras.Sequential([\n", " layers.Dense(512, activation='elu', input_shape=(FEATURES,)),\n", " layers.Dense(512, activation='elu'),\n", " layers.Dense(512, activation='elu'),\n", " layers.Dense(512, activation='elu'),\n", " layers.Dense(1)\n", "])" ] }, { "cell_type": "markdown", "metadata": { "id": "D-d-i5DaYmr7" }, "source": [ "同样地,使用相同的数据训练该模型:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "U1A99dhqvepf" }, "outputs": [], "source": [ "size_histories['large'] = compile_and_fit(large_model, \"sizes/large\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Fy3CMUZpzH3d" }, "source": [ "### 绘制训练和验证损失" ] }, { "cell_type": "markdown", "metadata": { "id": "HSlo1F4xHuuM" }, "source": [ "实线表示训练损失,虚线表示验证损失(请记住:验证损失越低表示模型越好)。" ] }, { "cell_type": "markdown", "metadata": { "id": "OLhL1AszdLfM" }, "source": [ "虽然构建的模型越大,其能力越强,但如果不对这种能力进行限制,它很容易对训练集过拟合。\n", "\n", "在此示例中,通常只有 `\"Tiny\"` 模型能完全避免过拟合,而其他较大的模型都更快地过拟合数据。对于 `\"large\"` 模型来说,过拟合的情况尤为严重,您必须将绘图切换为对数尺度才能真正弄清楚所发生的情况。\n", "\n", "如果您绘制出验证指标并将其与训练指标进行对比的话,就会很明显。\n", "\n", "- 有细微差别是正常的。\n", "- 如果两个指标都朝同一方向移动,说明一切正常。\n", "- 如果验证指标开始停滞,而训练指标继续提升,则可能即将出现过拟合。\n", "- 如果验证指标的方向错误,则模型显然已经过拟合。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0XmKDtOWzOpk" }, "outputs": [], "source": [ "plotter.plot(size_histories)\n", "a = plt.xscale('log')\n", "plt.xlim([5, max(plt.xlim())])\n", "plt.ylim([0.5, 0.7])\n", "plt.xlabel(\"Epochs [Log Scale]\")" ] }, { "cell_type": "markdown", "metadata": { "id": "UekcaQdmZxnW" }, "source": [ "注:上面运行的所有训练都使用了 `callbacks.EarlyStopping`,会在发现模型没有进展后终止训练。" ] }, { "cell_type": "markdown", "metadata": { "id": "DEQNKadHA0M3" }, "source": [ "### 在 TensorBoard 中查看\n", "\n", "上述模型都会在训练期间写入 TensorBoard 日志。\n", "\n", "在笔记本中打开嵌入式 TensorBoard 查看器(抱歉,这不会在 tensorflow.org 上显示):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6oa1lkJddZ-m" }, "outputs": [], "source": [ "# Load the TensorBoard notebook extension\n", "%load_ext tensorboard\n", "\n", "# Open an embedded TensorBoard viewer\n", "%tensorboard --logdir {logdir}/sizes" ] }, { "cell_type": "markdown", "metadata": { "id": "HkIIzE5rBBY_" }, "source": [ "您可以在 [TensorBoard.dev](https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97) 上查看此笔记本[先前运行的结果](https://tensorboard.dev/)。" ] }, { "cell_type": "markdown", "metadata": { "id": "ASdv7nsgEFhx" }, "source": [ "## 防止过拟合的策略" ] }, { "cell_type": "markdown", "metadata": { "id": "YN512ksslaxJ" }, "source": [ "在开始学习本部分内容之前,请先复制上述 `\"Tiny\"` 模型的训练日志,用作比较基线。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "40k1eBtnQzNo" }, "outputs": [], "source": [ "shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)\n", "shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vFWMeFo7jLpN" }, "outputs": [], "source": [ "regularizer_histories = {}\n", "regularizer_histories['Tiny'] = size_histories['Tiny']" ] }, { "cell_type": "markdown", "metadata": { "id": "4rHoVWcswFLa" }, "source": [ "### 添加权重正则化\n" ] }, { "cell_type": "markdown", "metadata": { "id": "kRxWepNawbBK" }, "source": [ "您可能对奥卡姆剃刀法则很熟悉:对某件事给出两种解释,正确的解释往往是“最简单”的那个,即做出最少假设的那个解释。这也适用于神经网络学习的模型:给定一些训练数据和一个网络架构,有多组权重值(多个模型)可以解释数据,而简单模型比复杂模型更不容易过拟合。\n", "\n", 
"在此上下文中,“简单模型”是指参数值的分布具有更少的熵的模型(或者是具有更少参数的模型,如上文中所见)。因此,缓解过拟合的一种常用方式是限制网络的复杂性,方法是强制网络的权重值只取较小值,这样会使权重值的分布更加“规则”。这被称为“权重正则化”,通过向网络的损失函数添加一个与较大权重相关的成本来实现。这种成本有两种方式:\n", "\n", "- [L1 正则化](https://developers.google.com/machine-learning/glossary/#L1_regularization),其中添加的成本与权重系数的绝对值(即权重的“L1 范数”)成正比。\n", "\n", "- [L2 正则化](https://developers.google.com/machine-learning/glossary/#L2_regularization),其中添加的成本与权重系数值的平方(即权重的“L2 范数”)成正比。L2 正则化在神经网络中也被称为权重衰减。不要因为名称不同而感到困惑:从数学角度来讲,权重衰减与 L2 正则化完全相同。\n", "\n", "L1 正则化会促使权重向零靠近,鼓励稀疏模型。L2 正则化会惩罚权重参数而不使其稀疏化,因为对于较小权重,惩罚会趋近于零。这也是 L2 更为常见的一个原因。\n", "\n", "在 `tf.keras` 中,添加权重正则化的方式是将权重正则化器实例作为关键字参数传递给层。添加 L2 权重正则化:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HFGmcwduwVyQ" }, "outputs": [], "source": [ "l2_model = tf.keras.Sequential([\n", " layers.Dense(512, activation='elu',\n", " kernel_regularizer=regularizers.l2(0.001),\n", " input_shape=(FEATURES,)),\n", " layers.Dense(512, activation='elu',\n", " kernel_regularizer=regularizers.l2(0.001)),\n", " layers.Dense(512, activation='elu',\n", " kernel_regularizer=regularizers.l2(0.001)),\n", " layers.Dense(512, activation='elu',\n", " kernel_regularizer=regularizers.l2(0.001)),\n", " layers.Dense(1)\n", "])\n", "\n", "regularizer_histories['l2'] = compile_and_fit(l2_model, \"regularizers/l2\")" ] }, { "cell_type": "markdown", "metadata": { "id": "bUUHoXb7w-_C" }, "source": [ "`l2(0.001)` 表示层的权重矩阵中的每个系数都会将 `0.001 * weight_coefficient_value**2` 添加到网络的总**损失**中。\n", "\n", "这就是为什么我们要直接监视 `binary_crossentropy`,因为它没有混入此正则化组件。\n", "\n", "因此,带有 `L2` 正则化惩罚的相同 `\"Large\"` 模型表现得更好:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7wkfLyxBZdh_" }, "outputs": [], "source": [ "plotter.plot(regularizer_histories)\n", "plt.ylim([0.5, 0.7])" ] }, { "cell_type": "markdown", "metadata": { "id": "Kx1YHMsVxWjP" }, "source": [ "如上图所示,`\"L2\"` 正则化模型现在比 `\"Tiny\"` 模型更具竞争力。`\"L2\"` 模型也比它所基于的 `\"Large\"` 模型(具有相同数量的参数)更不容易过拟合。" ] }, { "cell_type": "markdown", "metadata": { "id": "JheBk6f8jMQ7" }, "source": [ "#### 更多信息\n", "\n", "关于此类正则化,有两个重要的注意事项:\n", "\n", "1. 如果您正在编写自己的训练循环,请务必询问模型的正则化损失。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "apDHQNybjaML" }, "outputs": [], "source": [ "result = l2_model(features)\n", "regularization_loss=tf.add_n(l2_model.losses)" ] }, { "cell_type": "markdown", "metadata": { "id": "MLhG6fMSjE-J" }, "source": [ "1. 
此实现的工作方式是在模型的损失中添加权重惩罚,然后应用标准的优化程序。\n", "\n", "还有一种方式,也就是仅在原始损失上运行优化器,然后在应用计算步骤的同时,优化器也应用一些权重衰减。此“解耦权重衰减”在 `tf.keras.optimizers.Ftrl` 和 `tfa.optimizers.AdamW` 等优化器中使用。" ] }, { "cell_type": "markdown", "metadata": { "id": "HmnBNOOVxiG8" }, "source": [ "### 添加随机失活\n", "\n", "随机失活是一种最有效、最常用的神经网络正则化技术,由 Hinton 和他在多伦多大学的学生共同开发。\n", "\n", "随机失活的直观解释是,由于网络中的单个节点不能依赖其他节点的输出,所以每个节点必须输出对自己有用的特征。\n", "\n", "应用于层时,随机失活会在训练期间对该层的多个输出特征进行随机“失活”(即设置为零)。例如,在训练期间,给定的层通常会为给定的输入样本返回一个 `[0.2, 0.5, 1.3, 0.8, 1.1]` 向量;应用随机失活后,该向量会有一些随机分布的零条目,例如 `[0, 0.5, 1.3, 0, 1.1]`。\n", "\n", "“随机失活率”是指被清零的特征的比率;它通常设置为 0.2 到 0.5 之间。在测试时,没有单元会被随机失活,而是根据一个等于随机失活率的系数将层的输出值按比例缩小,进而实现平衡(因为会有更多单元在训练时被激活)。\n", "\n", "在 Keras 中,您可以通过 `tf.keras.layers.Dropout` 层在网络中引入随机失活,该层将应用于前一层的输出。\n", "\n", "向网络中添加两个随机失活层,检查它们在减少过拟合方面的表现:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OFEYvtrHxSWS" }, "outputs": [], "source": [ "dropout_model = tf.keras.Sequential([\n", " layers.Dense(512, activation='elu', input_shape=(FEATURES,)),\n", " layers.Dropout(0.5),\n", " layers.Dense(512, activation='elu'),\n", " layers.Dropout(0.5),\n", " layers.Dense(512, activation='elu'),\n", " layers.Dropout(0.5),\n", " layers.Dense(512, activation='elu'),\n", " layers.Dropout(0.5),\n", " layers.Dense(1)\n", "])\n", "\n", "regularizer_histories['dropout'] = compile_and_fit(dropout_model, \"regularizers/dropout\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SPZqwVchx5xp" }, "outputs": [], "source": [ "plotter.plot(regularizer_histories)\n", "plt.ylim([0.5, 0.7])" ] }, { "cell_type": "markdown", "metadata": { "id": "4zlHr4iaI1U6" }, "source": [ "从上面的绘图中可以清楚地看到,这两种正则化方法都改善了 `\"Large\"` 模型的行为,但依然没有超过 `\"Tiny\"` 基线。\n", "\n", "接下来,将两者合起来试一试,看看效果是否更好。" ] }, { "cell_type": "markdown", "metadata": { "id": "u7qMg_7Nwy5t" }, "source": [ "### L2 + 随机失活的结合" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7zfs_qQIw1cz" }, "outputs": [], "source": [ "combined_model = tf.keras.Sequential([\n", " layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),\n", " activation='elu', input_shape=(FEATURES,)),\n", " layers.Dropout(0.5),\n", " layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),\n", " activation='elu'),\n", " layers.Dropout(0.5),\n", " layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),\n", " activation='elu'),\n", " layers.Dropout(0.5),\n", " layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),\n", " activation='elu'),\n", " layers.Dropout(0.5),\n", " layers.Dense(1)\n", "])\n", "\n", "regularizer_histories['combined'] = compile_and_fit(combined_model, \"regularizers/combined\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qDqBBxfI0Yd8" }, "outputs": [], "source": [ "plotter.plot(regularizer_histories)\n", "plt.ylim([0.5, 0.7])" ] }, { "cell_type": "markdown", "metadata": { "id": "tE0OoNCQNTJv" }, "source": [ "这个使用 `\"Combined\"` 正则化的模型显然是目前为止最好的模型。" ] }, { "cell_type": "markdown", "metadata": { "id": "-dw23T03FEO1" }, "source": [ "### 在 TensorBoard 中查看\n", "\n", "这些模型也记录了 TensorBoard 日志。\n", "\n", "要打开嵌入式查看器,请运行以下代码单元(抱歉,这不会在 tensorflow.org 上显示):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Op4vLqVWBK_y" }, "outputs": [], "source": [ "%tensorboard --logdir {logdir}/regularizers" ] }, { "cell_type": "markdown", "metadata": { "id": "_rx5b294BXBd" }, "source": [ "您可以在 [TensorBoard.dev](https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97) 
上查看此笔记本[先前运行的结果](https://tensorboard.dev/)。" ] }, { "cell_type": "markdown", "metadata": { "id": "uXJxtwBWIhjG" }, "source": [ "## 结论" ] }, { "cell_type": "markdown", "metadata": { "id": "gjfnkEeQyAFG" }, "source": [ "回顾一下,以下是在神经网络中防止过拟合的最常见方式:\n", "\n", "- 获得更多训练数据。\n", "- 降低网络容量。\n", "- 添加权重正则化。\n", "- 添加随机失活\n", "\n", "本指南没有涵盖的两个重要方法是:\n", "\n", "- [数据增强](../images/data_augmentation.ipynb)\n", "- 批次归一化 (`tf.keras.layers.BatchNormalization`)\n", "\n", "请记住,单独使用每种方法也会有效,但结合使用通常效果更好。" ] } ], "metadata": { "accelerator": "GPU", "colab": { "name": "overfit_and_underfit.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "xxx", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }