{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "Tce3stUlHN0L" }, "outputs": [], "source": [ "##### Copyright 2020 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "tuOe1ymfHZPu" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # Suppress TensorFlow C++ INFO/WARNING/ERROR logs (3 = FATAL only)\n", "# Silence initialization warnings from the underlying gRPC and Abseil libraries\n", "os.environ[\"GRPC_VERBOSITY\"] = \"ERROR\"\n", "os.environ[\"GLOG_minloglevel\"] = \"3\" # 0: INFO, 1: WARNING, 2: ERROR, 3: FATAL\n", "# Allocate GPU memory on demand instead of reserving it all up front.\n", "# Set via os.environ so it affects this kernel; `!export` only changes a subshell.\n", "os.environ[\"TF_FORCE_GPU_ALLOW_GROWTH\"] = \"true\"\n", "import logging\n", "import tensorflow as tf\n", "tf.get_logger().setLevel(logging.ERROR)\n", "tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)\n", "from pathlib import Path\n", "\n", "temp_dir = Path(\".temp\")\n", "temp_dir.mkdir(parents=True, exist_ok=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "qFdPvlXBOdUN" }, "source": [ "# Introduction to the Keras Tuner" ] }, { "cell_type": "markdown", "metadata": { "id": "MfBg1C5NB3X0" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
View on TensorFlow.org View in Google Colab View source on GitHub Download notebook
" ] }, { "cell_type": "markdown", "metadata": { "id": "xHxb-dlhMIzW" }, "source": [ "## Overview\n", "\n", "The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for your machine learning (ML) application is called *hyperparameter tuning* or *hypertuning*.\n", "\n", "Hyperparameters are the variables that govern the training process and the topology of an ML model. These variables remain constant over the training process and directly impact the performance of your ML program. Hyperparameters are of two types:\n", "\n", "1. **Model hyperparameters**, which influence model selection, such as the number and width of hidden layers\n", "2. **Algorithm hyperparameters**, which influence the speed and quality of the learning algorithm, such as the learning rate for stochastic gradient descent (SGD) and the number of nearest neighbors for a k-nearest neighbors (KNN) classifier\n", "\n", "In this tutorial, you will use the Keras Tuner to perform hypertuning for an image classification application." ] }, { "cell_type": "markdown", "metadata": { "id": "MUXex9ctTuDB" }, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "IqR2PQG4ZaZ0" }, "outputs": [], "source": [ "import tensorflow as tf\n", "from tensorflow import keras" ] }, { "cell_type": "markdown", "metadata": { "id": "g83Lwsy-Aq2_" }, "source": [ "Install and import the Keras Tuner." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "hpMLpbt9jcO6" }, "outputs": [], "source": [ "!pip install -q -U keras-tuner" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "id": "_leAIdFKAxAD" }, "outputs": [], "source": [ "import keras_tuner as kt" ] }, { "cell_type": "markdown", "metadata": { "id": "ReV_UXOgCZvx" }, "source": [ "## Download and prepare the dataset\n", "\n", "In this tutorial, you will use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist)." ] }, { "cell_type": "markdown", "metadata": { "id": "HljH_ENLEdHa" }, "source": [ "Load the data." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "OHlHs9Wj_PUM" }, "outputs": [], "source": [ "(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "bLVhXs3xrUD0" }, "outputs": [], "source": [ "# Normalize pixel values between 0 and 1\n", "img_train = img_train.astype('float32') / 255.0\n", "img_test = img_test.astype('float32') / 255.0" ] }, { "cell_type": "markdown", "metadata": { "id": "K5YEL2H2Ax3e" }, "source": [ "## Define the model\n", "\n", 
"When you build a model for hypertuning, you also define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a *hypermodel*.\n", "\n", "You can define a hypermodel through two approaches:\n", "\n", "- By using a model builder function\n", "- By subclassing the `HyperModel` class of the Keras Tuner API\n", "\n", "You can also use two pre-defined HyperModel classes, [HyperXception](https://keras.io/api/keras_tuner/hypermodels/hyper_xception/) and [HyperResNet](https://keras.io/api/keras_tuner/hypermodels/hyper_resnet/), for computer vision applications.\n", "\n", "In this tutorial, you use a model builder function to define the image classification model. The model builder function returns a compiled model and uses the hyperparameters you define inline to hypertune the model." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "ZQKodC-jtsva" }, "outputs": [], "source": [ "def model_builder(hp):\n", "  model = keras.Sequential()\n", "  model.add(keras.layers.Flatten())\n", "\n", "  # Tune the number of units in the first Dense layer\n", "  # Choose an optimal value between 32-512\n", "  hp_units = hp.Int('units', min_value=32, max_value=512, step=32)\n", "  model.add(keras.layers.Dense(units=hp_units, activation='relu'))\n", "  model.add(keras.layers.Dense(10))\n", "\n", "  # Tune the learning rate for the optimizer\n", "  # Choose an optimal value from 0.01, 0.001, or 0.0001\n", "  hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])\n", "\n", "  model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),\n", "                loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n", "                metrics=['accuracy'])\n", "\n", "  return model" ] }, { "cell_type": "markdown", "metadata": { "id": "0J1VYw4q3x0b" }, "source": [ "## Instantiate the tuner and perform hypertuning\n", "\n", "Instantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available: `RandomSearch`, `Hyperband`, `BayesianOptimization`, and `Sklearn`. In this tutorial, you use the [Hyperband](https://arxiv.org/pdf/1603.06560.pdf) tuner.\n", "\n", "To instantiate the Hyperband tuner, you must specify the hypermodel, the `objective` to optimize, and the maximum number of epochs to train (`max_epochs`)." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "oichQFly6Y46" }, "outputs": [], "source": [ "tuner = kt.Hyperband(model_builder,\n", "                     objective='val_accuracy',\n", "                     max_epochs=10,\n", "                     factor=3,\n", "                     directory=temp_dir/'my_dir',\n", "                     project_name='intro_to_kt')" ] }, { 
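Hyperband sizes its tournament brackets directly from the two arguments just passed to the tuner: it runs 1 + log<sub>factor</sub>(max_epochs) rounds, rounded up to the nearest integer. A minimal arithmetic check of that formula, using the `factor=3` and `max_epochs=10` values above (plain Python, not part of the Keras Tuner API):

```python
import math

# Tuner arguments from the cell above
factor = 3
max_epochs = 10

# Rounds per bracket: 1 + log_factor(max_epochs), rounded up
num_rounds = math.ceil(1 + math.log(max_epochs, factor))
print(num_rounds)  # 4
```

So with these settings the tuner runs 4 rounds per bracket, keeping only the best-performing models from each round.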
"cell_type": "markdown", "metadata": { "id": "VaIhhdKf9VtI" }, "source": [ "The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. This is done using a sports-championship-style bracket: the algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of the models to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log<sub>`factor`</sub>(`max_epochs`) and rounding it up to the nearest integer." ] }, { "cell_type": "markdown", "metadata": { "id": "cwhBdXx0Ekj8" }, "source": [ "Create a callback to stop training early after reaching a certain value for the validation loss." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "WT9IkS9NEjLc" }, "outputs": [], "source": [ "stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)" ] }, { "cell_type": "markdown", "metadata": { "id": "UKghEo15Tduy" }, "source": [ "Run the hyperparameter search. The arguments for the search method are the same as those used for `tf.keras.Model.fit`, in addition to the callback above." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "id": "dSBQcTHF9cKt" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Trial 30 Complete [00h 00m 40s]\n", "val_accuracy: 0.8653333187103271\n", "\n", "Best val_accuracy So Far: 0.8915833234786987\n", "Total elapsed time: 00h 09m 06s\n", "\n", "The hyperparameter search is complete. The optimal number of units in the first densely-connected\n", "layer is 480 and the optimal learning rate for the optimizer\n", "is 0.001.\n", "\n" ] } ], "source": [ "tuner.search(img_train, label_train, epochs=50, validation_split=0.2, callbacks=[stop_early])\n", "\n", "# Get the optimal hyperparameters\n", "best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]\n", "\n", "print(f\"\"\"\n", "The hyperparameter search is complete. 
The optimal number of units in the first densely-connected\n", "layer is {best_hps.get('units')} and the optimal learning rate for the optimizer\n", "is {best_hps.get('learning_rate')}.\n", "\"\"\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Lak_ylf88xBv" }, "source": [ "## Train the model\n", "\n", "Find the optimal number of epochs to train the model with, using the hyperparameters obtained from the search." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "id": "McO82AXOuxXh" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 3ms/step - accuracy: 0.7844 - loss: 0.6154 - val_accuracy: 0.8453 - val_loss: 0.4136\n", "Epoch 2/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.8609 - loss: 0.3789 - val_accuracy: 0.8729 - val_loss: 0.3500\n", "Epoch 3/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.8794 - loss: 0.3273 - val_accuracy: 0.8728 - val_loss: 0.3514\n", "Epoch 4/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.8872 - loss: 0.3048 - val_accuracy: 0.8827 - val_loss: 0.3235\n", "Epoch 5/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.8939 - loss: 0.2821 - val_accuracy: 0.8809 - val_loss: 0.3286\n", "Epoch 6/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 3ms/step - accuracy: 0.9030 - loss: 0.2598 - val_accuracy: 0.8760 - val_loss: 0.3422\n", "Epoch 7/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9061 - loss: 0.2545 - val_accuracy: 0.8867 - val_loss: 0.3147\n", "Epoch 8/50\n", 
"\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9094 - loss: 0.2420 - val_accuracy: 0.8893 - val_loss: 0.3141\n", "Epoch 9/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9122 - loss: 0.2288 - val_accuracy: 0.8780 - val_loss: 0.3494\n", "Epoch 10/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9165 - loss: 0.2218 - val_accuracy: 0.8899 - val_loss: 0.3222\n", "Epoch 11/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9201 - loss: 0.2132 - val_accuracy: 0.8898 - val_loss: 0.3182\n", "Epoch 12/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9235 - loss: 0.2020 - val_accuracy: 0.8924 - val_loss: 0.3073\n", "Epoch 13/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9247 - loss: 0.1993 - val_accuracy: 0.8898 - val_loss: 0.3306\n", "Epoch 14/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9291 - loss: 0.1879 - val_accuracy: 0.8894 - val_loss: 0.3475\n", "Epoch 15/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9332 - loss: 0.1792 - val_accuracy: 0.8904 - val_loss: 0.3315\n", "Epoch 16/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9332 - loss: 0.1797 - val_accuracy: 0.8928 - val_loss: 0.3508\n", "Epoch 17/50\n", "\u001b[1m1500/1500\u001b[0m 
\u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9358 - loss: 0.1720 - val_accuracy: 0.8895 - val_loss: 0.3469\n", "Epoch 18/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9390 - loss: 0.1633 - val_accuracy: 0.8905 - val_loss: 0.3497\n", "Epoch 19/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9394 - loss: 0.1599 - val_accuracy: 0.8880 - val_loss: 0.3483\n", "Epoch 20/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9436 - loss: 0.1496 - val_accuracy: 0.8885 - val_loss: 0.3815\n", "Epoch 21/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9437 - loss: 0.1494 - val_accuracy: 0.8947 - val_loss: 0.3472\n", "Epoch 22/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9457 - loss: 0.1457 - val_accuracy: 0.8883 - val_loss: 0.3934\n", "Epoch 23/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9466 - loss: 0.1400 - val_accuracy: 0.8943 - val_loss: 0.3530\n", "Epoch 24/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9487 - loss: 0.1352 - val_accuracy: 0.8901 - val_loss: 0.3810\n", "Epoch 25/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9512 - loss: 0.1296 - val_accuracy: 0.8928 - val_loss: 0.3857\n", "Epoch 26/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m 
\u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9515 - loss: 0.1283 - val_accuracy: 0.8956 - val_loss: 0.3769\n", "Epoch 27/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9552 - loss: 0.1199 - val_accuracy: 0.8915 - val_loss: 0.3791\n", "Epoch 28/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9553 - loss: 0.1173 - val_accuracy: 0.8946 - val_loss: 0.3851\n", "Epoch 29/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9537 - loss: 0.1231 - val_accuracy: 0.8941 - val_loss: 0.4013\n", "Epoch 30/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9576 - loss: 0.1152 - val_accuracy: 0.8942 - val_loss: 0.4023\n", "Epoch 31/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9605 - loss: 0.1080 - val_accuracy: 0.8950 - val_loss: 0.4240\n", "Epoch 32/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9596 - loss: 0.1069 - val_accuracy: 0.8918 - val_loss: 0.4392\n", "Epoch 33/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9597 - loss: 0.1055 - val_accuracy: 0.8918 - val_loss: 0.4087\n", "Epoch 34/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 3ms/step - accuracy: 0.9605 - loss: 0.1034 - val_accuracy: 0.8744 - val_loss: 0.5429\n", "Epoch 35/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9623 - loss: 0.1027 
- val_accuracy: 0.8938 - val_loss: 0.4789\n", "Epoch 36/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9618 - loss: 0.0986 - val_accuracy: 0.8942 - val_loss: 0.4359\n", "Epoch 37/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9657 - loss: 0.0934 - val_accuracy: 0.8948 - val_loss: 0.4627\n", "Epoch 38/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9629 - loss: 0.0959 - val_accuracy: 0.8932 - val_loss: 0.4700\n", "Epoch 39/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9639 - loss: 0.0953 - val_accuracy: 0.8938 - val_loss: 0.4628\n", "Epoch 40/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9677 - loss: 0.0863 - val_accuracy: 0.8943 - val_loss: 0.4916\n", "Epoch 41/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9695 - loss: 0.0828 - val_accuracy: 0.8948 - val_loss: 0.4793\n", "Epoch 42/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9702 - loss: 0.0814 - val_accuracy: 0.8954 - val_loss: 0.4605\n", "Epoch 43/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9699 - loss: 0.0805 - val_accuracy: 0.8959 - val_loss: 0.4710\n", "Epoch 44/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9714 - loss: 0.0769 - val_accuracy: 0.8903 - val_loss: 0.4923\n", "Epoch 45/50\n", 
"\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9699 - loss: 0.0777 - val_accuracy: 0.8861 - val_loss: 0.5372\n", "Epoch 46/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9692 - loss: 0.0820 - val_accuracy: 0.8960 - val_loss: 0.5028\n", "Epoch 47/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9710 - loss: 0.0748 - val_accuracy: 0.8913 - val_loss: 0.5318\n", "Epoch 48/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9717 - loss: 0.0758 - val_accuracy: 0.8902 - val_loss: 0.5180\n", "Epoch 49/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9728 - loss: 0.0718 - val_accuracy: 0.8907 - val_loss: 0.5315\n", "Epoch 50/50\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9744 - loss: 0.0671 - val_accuracy: 0.8940 - val_loss: 0.5327\n", "Best epoch: 46\n" ] } ], "source": [ "# Build the model with the optimal hyperparameters and train it on the data for 50 epochs\n", "model = tuner.hypermodel.build(best_hps)\n", "history = model.fit(img_train, label_train, epochs=50, validation_split=0.2)\n", "\n", "val_acc_per_epoch = history.history['val_accuracy']\n", "best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1\n", "print('Best epoch: %d' % (best_epoch,))" ] }, { "cell_type": "markdown", "metadata": { "id": "uOTSirSTI3Gp" }, "source": [ "Re-instantiate the hypermodel and train it with the optimal number of epochs from above." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "id": "NoiPUEHmMhCe" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/46\n", 
"\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 3ms/step - accuracy: 0.7809 - loss: 0.6163 - val_accuracy: 0.8484 - val_loss: 0.4384\n", "Epoch 2/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.8677 - loss: 0.3708 - val_accuracy: 0.8735 - val_loss: 0.3508\n", "Epoch 3/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.8762 - loss: 0.3344 - val_accuracy: 0.8788 - val_loss: 0.3403\n", "Epoch 4/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.8859 - loss: 0.3046 - val_accuracy: 0.8776 - val_loss: 0.3412\n", "Epoch 5/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.8953 - loss: 0.2834 - val_accuracy: 0.8808 - val_loss: 0.3291\n", "Epoch 6/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.8992 - loss: 0.2724 - val_accuracy: 0.8758 - val_loss: 0.3492\n", "Epoch 7/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9066 - loss: 0.2501 - val_accuracy: 0.8848 - val_loss: 0.3247\n", "Epoch 8/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9102 - loss: 0.2437 - val_accuracy: 0.8839 - val_loss: 0.3256\n", "Epoch 9/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9117 - loss: 0.2329 - val_accuracy: 0.8885 - val_loss: 0.3255\n", "Epoch 10/46\n", "\u001b[1m1500/1500\u001b[0m 
\u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9188 - loss: 0.2199 - val_accuracy: 0.8921 - val_loss: 0.3090\n", "Epoch 11/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9197 - loss: 0.2120 - val_accuracy: 0.8873 - val_loss: 0.3218\n", "Epoch 12/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9239 - loss: 0.2056 - val_accuracy: 0.8924 - val_loss: 0.3150\n", "Epoch 13/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 3ms/step - accuracy: 0.9273 - loss: 0.1980 - val_accuracy: 0.8956 - val_loss: 0.3158\n", "Epoch 14/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9293 - loss: 0.1931 - val_accuracy: 0.8917 - val_loss: 0.3301\n", "Epoch 15/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9323 - loss: 0.1786 - val_accuracy: 0.8976 - val_loss: 0.3212\n", "Epoch 16/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 3ms/step - accuracy: 0.9353 - loss: 0.1746 - val_accuracy: 0.8898 - val_loss: 0.3441\n", "Epoch 17/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9347 - loss: 0.1709 - val_accuracy: 0.8947 - val_loss: 0.3288\n", "Epoch 18/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9386 - loss: 0.1641 - val_accuracy: 0.8929 - val_loss: 0.3327\n", "Epoch 19/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m 
\u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9424 - loss: 0.1533 - val_accuracy: 0.8949 - val_loss: 0.3435\n", "Epoch 20/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9421 - loss: 0.1555 - val_accuracy: 0.8934 - val_loss: 0.3547\n", "Epoch 21/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9448 - loss: 0.1469 - val_accuracy: 0.8953 - val_loss: 0.3412\n", "Epoch 22/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9452 - loss: 0.1449 - val_accuracy: 0.8937 - val_loss: 0.3451\n", "Epoch 23/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9477 - loss: 0.1407 - val_accuracy: 0.8945 - val_loss: 0.3516\n", "Epoch 24/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9472 - loss: 0.1392 - val_accuracy: 0.8927 - val_loss: 0.3789\n", "Epoch 25/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9506 - loss: 0.1318 - val_accuracy: 0.8898 - val_loss: 0.3972\n", "Epoch 26/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9517 - loss: 0.1303 - val_accuracy: 0.8828 - val_loss: 0.4100\n", "Epoch 27/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9533 - loss: 0.1262 - val_accuracy: 0.8924 - val_loss: 0.3892\n", "Epoch 28/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9553 - loss: 0.1200 
- val_accuracy: 0.8952 - val_loss: 0.3770\n", "Epoch 29/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9571 - loss: 0.1156 - val_accuracy: 0.8959 - val_loss: 0.4118\n", "Epoch 30/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9559 - loss: 0.1157 - val_accuracy: 0.8954 - val_loss: 0.3937\n", "Epoch 31/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9579 - loss: 0.1112 - val_accuracy: 0.8932 - val_loss: 0.4362\n", "Epoch 32/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9582 - loss: 0.1114 - val_accuracy: 0.8940 - val_loss: 0.4085\n", "Epoch 33/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9594 - loss: 0.1066 - val_accuracy: 0.8959 - val_loss: 0.4060\n", "Epoch 34/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 3ms/step - accuracy: 0.9607 - loss: 0.1041 - val_accuracy: 0.8966 - val_loss: 0.4171\n", "Epoch 35/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9609 - loss: 0.1024 - val_accuracy: 0.8908 - val_loss: 0.4622\n", "Epoch 36/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9613 - loss: 0.1018 - val_accuracy: 0.8754 - val_loss: 0.4987\n", "Epoch 37/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9633 - loss: 0.0964 - val_accuracy: 0.8890 - val_loss: 0.4426\n", "Epoch 38/46\n", 
"\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9644 - loss: 0.0931 - val_accuracy: 0.8934 - val_loss: 0.4596\n", "Epoch 39/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9656 - loss: 0.0923 - val_accuracy: 0.8958 - val_loss: 0.4529\n", "Epoch 40/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9687 - loss: 0.0821 - val_accuracy: 0.8915 - val_loss: 0.4697\n", "Epoch 41/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 2ms/step - accuracy: 0.9685 - loss: 0.0824 - val_accuracy: 0.8928 - val_loss: 0.4751\n", "Epoch 42/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9662 - loss: 0.0861 - val_accuracy: 0.8842 - val_loss: 0.5023\n", "Epoch 43/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9690 - loss: 0.0812 - val_accuracy: 0.8896 - val_loss: 0.4934\n", "Epoch 44/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9684 - loss: 0.0836 - val_accuracy: 0.8953 - val_loss: 0.4849\n", "Epoch 45/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9703 - loss: 0.0785 - val_accuracy: 0.8932 - val_loss: 0.5106\n", "Epoch 46/46\n", "\u001b[1m1500/1500\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 2ms/step - accuracy: 0.9717 - loss: 0.0782 - val_accuracy: 0.8932 - val_loss: 0.4991\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 14, "metadata": {}, 
"output_type": "execute_result" } ], "source": [ "hypermodel = tuner.hypermodel.build(best_hps)\n", "\n", "# Retrain the model\n", "hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2)" ] }, { "cell_type": "markdown", "metadata": { "id": "MqU5ZVAaag2v" }, "source": [ "To finish this tutorial, evaluate the hypermodel on the test data." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "id": "9E0BTp9Ealjb" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m2s\u001b[0m 6ms/step - accuracy: 0.8867 - loss: 0.5731\n", "[test loss, test accuracy]: [0.5481381416320801, 0.8877999782562256]\n" ] } ], "source": [ "eval_result = hypermodel.evaluate(img_test, label_test)\n", "print(\"[test loss, test accuracy]:\", eval_result)" ] }, { "cell_type": "markdown", "metadata": { "id": "EQRpPHZsz-eC" }, "source": [ "The `.temp/my_dir/intro_to_kt` directory contains detailed logs and checkpoints for every trial (model configuration) run during the hyperparameter search. If you re-run the hyperparameter search, the Keras Tuner uses the existing state from these logs to resume the search. To disable this behavior, pass an additional `overwrite=True` argument while instantiating the tuner." ] }, { "cell_type": "markdown", "metadata": { "id": "sKwLOzKpFGAj" }, "source": [ "## Summary\n", "\n", "In this tutorial, you learned how to use the Keras Tuner to tune hyperparameters for a model. To learn more about the Keras Tuner, check out these additional resources:\n", "\n", "- [Keras Tuner on the TensorFlow blog](https://blog.tensorflow.org/2020/01/hyperparameter-tuning-with-keras-tuner.html)\n", "- [Keras Tuner website](https://keras-team.github.io/keras-tuner/)\n", "\n", "Also check out the [HParams Dashboard](https://tensorflow.google.cn/tensorboard/hyperparameter_tuning_with_hparams) in TensorBoard to interactively tune your model hyperparameters." ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [ "Tce3stUlHN0L" ], "name": "keras_tuner.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": 
"ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }