{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "SB93Ge748VQs" }, "outputs": [], "source": [ "##### Copyright 2019 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "0sK8X2O9bTlz" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "HEYuO5NFwDK9" }, "source": [ "# 开始使用 TensorBoard\n", "\n", "\n", " \n", " \n", " \n", " \n", "
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # 设置日志级别为ERROR,以减少警告信息\n", "# 禁用 Gemini 的底层库(gRPC 和 Abseil)在初始化日志警告\n", "os.environ[\"GRPC_VERBOSITY\"] = \"ERROR\"\n", "os.environ[\"GLOG_minloglevel\"] = \"3\" # 0: INFO, 1: WARNING, 2: ERROR, 3: FATAL\n", "os.environ[\"GLOG_minloglevel\"] = \"true\"\n", "import logging\n", "import tensorflow as tf\n", "tf.get_logger().setLevel(logging.ERROR)\n", "tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)\n", "!export TF_FORCE_GPU_ALLOW_GROWTH=true\n", "from pathlib import Path\n", "\n", "temp_dir = Path(\".temp\")\n", "temp_dir.mkdir(parents=True, exist_ok=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "56V5oun18ZdZ" }, "source": [ "在机器学习中,要改进模型的某些参数,您通常需要对其进行衡量。TensorBoard 是用于提供机器学习工作流期间所需测量和呈现的工具。它使您能够跟踪实验指标(例如损失和准确率),呈现模型计算图,将嵌入向量投影到较低维度的空间等。\n", "\n", "本快速入门将展示如何快速使用 TensorBoard 。该网站上的其余指南提供了有关特定功能的更多详细信息,此处未包括其中的许多功能。 " ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "6B95Hb6YVgPZ" }, "outputs": [], "source": [ "# Load the TensorBoard notebook extension\n", "%load_ext tensorboard" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "_wqSAZExy6xV" }, "outputs": [], "source": [ "import tensorflow as tf\n", "import datetime" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "Ao7fJW1Pyiza" }, "outputs": [], "source": [ "# Clear any logs from previous runs\n", "!rm -rf {temp_dir}./logs/ " ] }, { "cell_type": "markdown", "metadata": { "id": "z5pr9vuHVgXY" }, "source": [ "在本例中使用 [MNIST](https://en.wikipedia.org/wiki/MNIST_database) 数据集。接下来编写一个函数对数据进行标准化,同时创建一个简单的Keras模型使图像分为10类。" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "j-DHsby18cot" }, "outputs": [], "source": [ "mnist = tf.keras.datasets.mnist\n", "\n", "(x_train, y_train),(x_test, y_test) = mnist.load_data()\n", "x_train, x_test = x_train / 255.0, x_test / 255.0\n", "\n", "def create_model():\n", " return tf.keras.models.Sequential([\n", " tf.keras.layers.Flatten(name='layers_flatten'),\n", " tf.keras.layers.Dense(512, activation='relu', name='layers_dense'),\n", " tf.keras.layers.Dropout(0.2, name='layers_dropout'),\n", " tf.keras.layers.Dense(10, activation='softmax', name='layers_dense_2')\n", " ])" ] }, { "cell_type": "markdown", "metadata": { "id": "XKUjdIoV87um" }, "source": [ "## 通过 Keras Model.fit() 使用 TensorBoard" ] }, { "cell_type": "markdown", "metadata": { "id": "8CL_lxdn8-Sv" }, "source": [ "当使用 Keras's [Model.fit()](https://tensorflow.google.cn/api_docs/python/tf/keras/models/Model#fit) 函数进行训练时, 添加 `tf.keras.callback.TensorBoard` 回调可确保创建和存储日志.另外,在每个时期启用 `histogram_freq=1` 的直方图计算功能(默认情况下处于关闭状态)\n", "\n", "将日志放在带有时间戳的子目录中,以便轻松选择不同的训练运行。" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "WAQThq539CEJ" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/5\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n", "I0000 00:00:1729772355.509210 3633479 service.cc:146] XLA service 0x7f8fa00077b0 initialized for platform CUDA (this does not guarantee that XLA will be used). 
Devices:\n", "I0000 00:00:1729772355.509244 3633479 service.cc:154] StreamExecutor device (0): NVIDIA GeForce RTX 3090, Compute Capability 8.6\n", "I0000 00:00:1729772355.509248 3633479 service.cc:154] StreamExecutor device (1): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5\n", "I0000 00:00:1729772357.575475 3633479 device_compiler.h:188] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m1875/1875\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m14s\u001b[0m 5ms/step - accuracy: 0.8949 - loss: 0.3614 - val_accuracy: 0.9697 - val_loss: 0.1054\n", "Epoch 2/5\n", "\u001b[1m1875/1875\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 3ms/step - accuracy: 0.9688 - loss: 0.1003 - val_accuracy: 0.9737 - val_loss: 0.0842\n", "Epoch 3/5\n", "\u001b[1m1875/1875\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 3ms/step - accuracy: 0.9780 - loss: 0.0698 - val_accuracy: 0.9773 - val_loss: 0.0686\n", "Epoch 4/5\n", "\u001b[1m1875/1875\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 3ms/step - accuracy: 0.9841 - loss: 0.0511 - val_accuracy: 0.9794 - val_loss: 0.0646\n", "Epoch 5/5\n", "\u001b[1m1875/1875\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 3ms/step - accuracy: 0.9876 - loss: 0.0395 - val_accuracy: 0.9793 - val_loss: 0.0674\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model = create_model()\n", "model.compile(optimizer='adam',\n", " loss='sparse_categorical_crossentropy',\n", " metrics=['accuracy'])\n", "\n", "log_dir = temp_dir/\"logs/fit\"/datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n", "tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)\n", "\n", "model.fit(x=x_train, \n", " y=y_train, \n", " epochs=5, \n", " validation_data=(x_test, y_test), \n", " callbacks=[tensorboard_callback])" ] }, { "cell_type": "markdown", "metadata": { "id": "asjGpmD09dRl" }, "source": [ "通过命令行 (command) 或在 notebook 体验中启动 TensorBoard ,这两个接口通常是相同的。 在 notebooks, 使用 `%tensorboard` 命令。 在命令行中, 运行不带“%”的相同命令。" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "A4UKgTLb9fKI" }, "outputs": [ { "data": { "text/html": [ "\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%tensorboard --logdir {temp_dir}/logs/fit" ] }, { "cell_type": "markdown", "metadata": { "id": "MCsoUNb6YhGc" }, "source": [ "" ] }, { "cell_type": "markdown", "metadata": { "id": "Gi4PaRm39of2" }, "source": [ "在此示例中创建的可视化效果的简要概述以及可以找到它们的信息中心(顶部导航栏中的标签页):\n", "\n", "- **标量**显示损失和指标在每个周期如何变化。您还可以使用它们跟踪训练速度、学习率和其他标量值。可以在 **Time Series** 或 **Scalars** 信息中心找到标量。\n", "- **计算图**可以帮助您呈现模型。在这种情况下,将显示层的 Keras 计算图,这可以帮助您确保正确构建。可以在 **Graphs** 信息中心找到计算图。\n", "- **直方图**和**分布**显示张量随时间的分布。这对于呈现权重和偏差并验证它们是否以预期的方式变化很有用。可以在 **Time Series** 或 **Histograms** 信息中心中找到直方图。可以在 **Distributions** 信息中心中找到分布。\n", "\n", "当您记录其他类型的数据时,会自动启用其他 TensorBoard 信息中心。 例如,使用 Keras TensorBoard 回调还可以记录图像和嵌入向量。您可以通过点击右上角的“inactive”下拉列表来查看 TensorBoard 中还有哪些其他信息中心。" ] }, { "cell_type": "markdown", "metadata": { "id": "nB718NOH95yG" }, "source": [ "## 通过其他方法使用 TensorBoard\n" ] }, { "cell_type": "markdown", "metadata": { "id": "IKNt0nWs-Ekt" }, "source": [ "用以下方法训练时,例如 
{ "cell_type": "markdown", "metadata": { "id": "nB718NOH95yG" }, "source": [ "## Using TensorBoard with other methods\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "IKNt0nWs-Ekt" }, "source": [ "When training with methods such as [`tf.GradientTape()`](https://tensorflow.google.cn/api_docs/python/tf/GradientTape), use `tf.summary` to log the required information.\n", "\n", "Use the same dataset as above, but convert it to `tf.data.Dataset` to take advantage of batching capabilities:" ] },
{ "cell_type": "code", "execution_count": 11, "metadata": { "id": "nnHx4DsMezy1" }, "outputs": [], "source": [ "train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n", "test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))\n", "\n", "train_dataset = train_dataset.shuffle(60000).batch(64)\n", "test_dataset = test_dataset.batch(64)" ] },
{ "cell_type": "markdown", "metadata": { "id": "SzpmTmJafJ10" }, "source": [ "The training code follows the [advanced quickstart](https://tensorflow.google.cn/tutorials/quickstart/advanced) tutorial, but shows how to log metrics to TensorBoard. Start by choosing a loss and an optimizer:" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": { "id": "H2Y5-aPbAANs" }, "outputs": [], "source": [ "loss_object = tf.keras.losses.SparseCategoricalCrossentropy()\n", "optimizer = tf.keras.optimizers.Adam()" ] },
{ "cell_type": "markdown", "metadata": { "id": "cKhIIDj9Hbfy" }, "source": [ "Create stateful metrics that can be used to accumulate values during training and logged at any point:" ] },
{ "cell_type": "code", "execution_count": 13, "metadata": { "id": "jD0tEWrgH0TL" }, "outputs": [], "source": [ "# Define our metrics\n", "train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)\n", "train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy')\n", "test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)\n", "test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')" ] },
{ "cell_type": "markdown", "metadata": { "id": "szw_KrgOg-OT" }, "source": [ "Define the training and test functions:" ] },
{ "cell_type": "code", "execution_count": 14, "metadata": { "id": "TTWcJO35IJgK" }, "outputs": [], "source": [ "def train_step(model, optimizer, x_train, y_train):\n", "  with tf.GradientTape() as tape:\n", "    predictions = model(x_train, training=True)\n", "    loss = loss_object(y_train, predictions)\n", "  grads = tape.gradient(loss, model.trainable_variables)\n", "  optimizer.apply_gradients(zip(grads, model.trainable_variables))\n", "\n", "  train_loss(loss)\n", "  train_accuracy(y_train, predictions)\n", "\n", "def test_step(model, x_test, y_test):\n", "  predictions = model(x_test)\n", "  loss = loss_object(y_test, predictions)\n", "\n", "  test_loss(loss)\n", "  test_accuracy(y_test, predictions)" ] },
{ "cell_type": "markdown", "metadata": { "id": "nucPZBKPJR3A" }, "source": [ "Set up summary writers to write the summaries to disk in a different logs directory:" ] },
{ "cell_type": "code", "execution_count": 18, "metadata": { "id": "3Qp-exmbWf4w" }, "outputs": [], "source": [ "current_time = datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n", "train_log_dir = temp_dir/'logs/gradient_tape' / current_time / 'train'\n", "test_log_dir = temp_dir/'logs/gradient_tape' / current_time / 'test'\n", "train_summary_writer = tf.summary.create_file_writer(str(train_log_dir))\n", "test_summary_writer = tf.summary.create_file_writer(str(test_log_dir))" ] },
{ "cell_type": "markdown", "metadata": { "id": "qgUJgDdKWUKF" }, "source": [ "Start training. Use `tf.summary.scalar()` to log metrics (loss and accuracy) during training/testing within the scope of the summary writers to write the summaries to disk. You have control over which metrics to log and how often to do it. Other `tf.summary` functions enable logging other types of data; a minimal sketch follows." ] },
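{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, a minimal sketch of writing a weight histogram with `tf.summary.histogram()`, which feeds the **Histograms** and **Distributions** dashboards. It assumes the `create_model()` function and `train_summary_writer` defined above; `demo_model` is a throwaway model used only so the example has weights to log." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch: log a weight histogram with tf.summary.histogram.\n", "# Assumes `create_model()` and `train_summary_writer` from the cells above;\n", "# `demo_model` is a throwaway model used only for this illustration.\n", "demo_model = create_model()\n", "demo_model.build(input_shape=(None, 28, 28))  # create the layer weights\n", "\n", "with train_summary_writer.as_default():\n", "  kernel = demo_model.get_layer('layers_dense').kernel\n", "  tf.summary.histogram('layers_dense/kernel', kernel.numpy(), step=0)" ] },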
"traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)", "Cell \u001b[0;32mIn[19], line 26\u001b[0m\n\u001b[1;32m 19\u001b[0m \u001b[38;5;28mprint\u001b[39m (template\u001b[38;5;241m.\u001b[39mformat(epoch\u001b[38;5;241m+\u001b[39m\u001b[38;5;241m1\u001b[39m,\n\u001b[1;32m 20\u001b[0m train_loss\u001b[38;5;241m.\u001b[39mresult(), \n\u001b[1;32m 21\u001b[0m train_accuracy\u001b[38;5;241m.\u001b[39mresult()\u001b[38;5;241m*\u001b[39m\u001b[38;5;241m100\u001b[39m,\n\u001b[1;32m 22\u001b[0m test_loss\u001b[38;5;241m.\u001b[39mresult(), \n\u001b[1;32m 23\u001b[0m test_accuracy\u001b[38;5;241m.\u001b[39mresult()\u001b[38;5;241m*\u001b[39m\u001b[38;5;241m100\u001b[39m))\n\u001b[1;32m 25\u001b[0m \u001b[38;5;66;03m# Reset metrics every epoch\u001b[39;00m\n\u001b[0;32m---> 26\u001b[0m train_loss\u001b[38;5;241m.\u001b[39mreset_states()\n\u001b[1;32m 27\u001b[0m test_loss\u001b[38;5;241m.\u001b[39mreset_states()\n\u001b[1;32m 28\u001b[0m train_accuracy\u001b[38;5;241m.\u001b[39mreset_states()\n", "\u001b[0;31mAttributeError\u001b[0m: 'Mean' object has no attribute 'reset_states'" ] } ], "source": [ "model = create_model() # reset our model\n", "\n", "EPOCHS = 5\n", "\n", "for epoch in range(EPOCHS):\n", " for (x_train, y_train) in train_dataset:\n", " train_step(model, optimizer, x_train, y_train)\n", " with train_summary_writer.as_default():\n", " tf.summary.scalar('loss', train_loss.result(), step=epoch)\n", " tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)\n", "\n", " for (x_test, y_test) in test_dataset:\n", " test_step(model, x_test, y_test)\n", " with test_summary_writer.as_default():\n", " tf.summary.scalar('loss', test_loss.result(), step=epoch)\n", " tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch)\n", " \n", " template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'\n", " print (template.format(epoch+1,\n", " train_loss.result(), \n", " train_accuracy.result()*100,\n", " test_loss.result(), \n", " test_accuracy.result()*100))\n", "\n", " # Reset metrics every epoch\n", " train_loss.reset_states()\n", " test_loss.reset_states()\n", " train_accuracy.reset_states()\n", " test_accuracy.reset_states()" ] }, { "cell_type": "markdown", "metadata": { "id": "JikosQ84fzcA" }, "source": [ "再次打开 TensorBoard,这次将其指向新的日志目录。 我们也可以启动 TensorBoard 来监视训练进度。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-Iue509kgOyE" }, "outputs": [], "source": [ "%tensorboard --logdir {temp_dir}/logs/gradient_tape" ] }, { "cell_type": "markdown", "metadata": { "id": "NVpnilhEgQXk" }, "source": [ "" ] }, { "cell_type": "markdown", "metadata": { "id": "ozbwXgPIkCKV" }, "source": [ "您现在已经了解了如何通过 Keras 回调和通过 `tf.summary` 使用 TensorBoard 来实现更多自定义场景。 " ] }, { "cell_type": "markdown", "metadata": { "id": "vsowjhkBdkbK" }, "source": [ "## TensorBoard.dev:托管并共享您的机器学习实验结果\n", "\n", "[TensorBoard.dev](https://tensorboard.dev) 是一项免费的公共服务,可让您上传您的 TensorBoard 日志并获得可在学术论文、博文、社交媒体等中与所有人共享的永久链接。这有助于实现更好的重现性和协作。\n", "\n", "要使用 TensorBoard.dev,请运行以下命令:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Q3nupQL24E5E" }, "outputs": [], "source": [ "!tensorboard dev upload \\\n", " --logdir {temp_dir}/logs/fit \\\n", " --name \"(optional) My latest experiment\" \\\n", " --description \"(optional) Simple comparison of several hyperparameters\" \\\n", " --one_shot" ] }, { "cell_type": "markdown", "metadata": 
{ "id": "lAgEh_Ow4EX6" }, "source": [ "请注意,此调用使用感叹号前缀 (`!`) 来调用 shell,而不是使用百分比前缀 (`%`) 来调用 colab 魔法。从命令行调用此命令时,不需要任何前缀。\n", "\n", "在[此处](https://tensorboard.dev/experiment/EDZb7XgKSBKo6Gznh3i8hg/#scalars)查看示例。\n", "\n", "要了解如何使用 TensorBoard.dev 的更多详细信息,请参阅 https://tensorboard.dev/#get-started" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "get_started.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "xxx", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }