{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "tDnwEv8FtJm7" }, "outputs": [], "source": [ "##### Copyright 2018 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "JlknJBWQtKkI" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # 设置日志级别为ERROR,以减少警告信息\n", "# 禁用 Gemini 的底层库(gRPC 和 Abseil)在初始化日志警告\n", "os.environ[\"GRPC_VERBOSITY\"] = \"ERROR\"\n", "os.environ[\"GLOG_minloglevel\"] = \"3\" # 0: INFO, 1: WARNING, 2: ERROR, 3: FATAL\n", "os.environ[\"GLOG_minloglevel\"] = \"true\"\n", "import logging\n", "import tensorflow as tf\n", "tf.get_logger().setLevel(logging.ERROR)\n", "tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)\n", "!export TF_FORCE_GPU_ALLOW_GROWTH=true\n", "from pathlib import Path\n", "\n", "temp_dir = Path(\".temp\")\n", "temp_dir.mkdir(parents=True, exist_ok=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "60RdWsg1tETW" }, "source": [ "# 自定义层" ] }, { "cell_type": "markdown", "metadata": { "id": "BcJg7Enms86w" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook
" ] }, { "cell_type": "markdown", "metadata": { "id": "UEu3q4jmpKVT" }, "source": [ "我们建议使用 `tf.keras` 作为构建神经网络的高级 API。也就是说,大多数 TensorFlow API 都支持 Eager Execution 模式。\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "Py0m-N6VgQFJ" }, "outputs": [], "source": [ "import tensorflow as tf" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "TluWFcB_2nP5" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU')]\n" ] } ], "source": [ "print(tf.config.list_physical_devices('GPU'))" ] }, { "cell_type": "markdown", "metadata": { "id": "zSFfVVjkrrsI" }, "source": [ "## 层:常用的实用运算集\n", "\n", "在大多数情况下,为机器学习模型编写代码时,您会希望在更高级别的抽象层上操作而非使用各个运算以及处理各个变量。\n", "\n", "通常机器学习模型可以表示为简单层的组合与堆叠,并且 TensorFlow 提供了许多常用层的集合,并使您可以方便地从头开始或采用现有层的结构自行编写特定于应用的层。\n", "\n", "TensorFlow 在 tf.keras 软件包中提供了完整的 [Keras](https://keras.io) API,Keras 层在构建您自己的模型时非常实用。\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "id": "8PyXlPl-4TzQ" }, "outputs": [], "source": [ "# In the tf.keras.layers package, layers are objects. To construct a layer,\n", "# simply construct the object. Most layers take as a first argument the number\n", "# of output dimensions / channels.\n", "layer = tf.keras.layers.Dense(100)" ] }, { "cell_type": "markdown", "metadata": { "id": "Fn69xxPO5Psr" }, "source": [ "[文档](https://tensorflow.google.cn/api_docs/python/tf/keras/layers)中提供了现有层的完整列表,其中包含 Dense(全连接层)、Conv2D、LSTM、BatchNormalization、Dropout 等各种层。" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "E3XKNknP5Mhb" }, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# To use a layer, simply call it.\n", "layer(tf.zeros([10, 5]))" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "Wt_Nsv-L5t2s" }, "outputs": [ { "data": { "text/plain": [ "[,\n", " ]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Layers have many useful methods. For example, you can inspect all variables\n", "# in a layer using `layer.variables` and trainable variables using\n", "# `layer.trainable_variables`. In this case a fully-connected layer\n", "# will have variables for weights and biases.\n", "layer.variables" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "6ilvKjz8_4MQ" }, "outputs": [ { "data": { "text/plain": [ "(,\n", " )" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# The variables are also accessible through nice accessors\n", "layer.kernel, layer.bias" ] }, { "cell_type": "markdown", "metadata": { "id": "O0kDbE54-5VS" }, "source": [ "## 实现自定义层\n", "\n", "自行实现层的最佳方式是扩展 `tf.keras.Layer` 类并实现:\n", "\n", "1. `__init__`:您可以在其中执行所有与输入无关的初始化\n", "2. `build`:您可以在其中获得输入张量的形状,并可以进行其余初始化\n", "3. 
{ "cell_type": "markdown", "metadata": { "id": "O0kDbE54-5VS" }, "source": [ "## Implementing custom layers\n", "\n", "The best way to implement your own layer is to extend the `tf.keras.layers.Layer` class and implement:\n", "\n", "1. `__init__`, where you can do all input-independent initialization\n", "2. `build`, where you know the shapes of the input tensors and can do the rest of the initialization\n", "3. `call`, where you do the forward computation\n", "\n", "Note that you don't have to wait until `build` is called to create your variables; you can also create them in `__init__`. However, the advantage of creating them in `build` is that it enables late variable creation based on the shape of the inputs the layer will operate on. On the other hand, creating variables in `__init__` means that the shapes required to create the variables must be specified explicitly." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "id": "5Byl3n1k5kIy" }, "outputs": [], "source": [ "class MyDenseLayer(tf.keras.layers.Layer):\n", "    def __init__(self, num_outputs):\n", "        super().__init__()\n", "        self.num_outputs = num_outputs\n", "\n", "    def build(self, input_shape):\n", "        self.kernel = self.add_weight(\n", "            shape=[int(input_shape[-1]), self.num_outputs],\n", "            name=\"kernel\"\n", "        )\n", "\n", "    def call(self, inputs):\n", "        return tf.matmul(inputs, self.kernel)\n", "\n", "layer = MyDenseLayer(10)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "id": "vrmBsYGOnuGO" }, "outputs": [], "source": [ "_ = layer(tf.zeros([10, 5]))  # Calling the layer builds it." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "id": "1bsLjiPfnvat" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['kernel']\n" ] } ], "source": [ "print([var.name for var in layer.trainable_variables])" ] }, { "cell_type": "markdown", "metadata": { "id": "tk8E2vY0-z4Z" }, "source": [ "Overall, code is easier to read and maintain if it uses standard layers whenever possible, since other readers will be familiar with their behavior. If you want to use a layer that is not present in `tf.keras.layers`, consider filing a [GitHub issue](http://github.com/tensorflow/tensorflow/issues/new) or, better yet, sending us a pull request!" ] }, { "cell_type": "markdown", "metadata": { "id": "Qhg4KlbKrs3G" }, "source": [ "## Models: composing layers\n", "\n", "Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a ResNet is a composition of convolutions, batch normalizations, and a shortcut. Layers can be nested inside other layers.\n", "\n", "Typically you inherit from `keras.Model` when you need the model methods it provides, such as `Model.fit`, `Model.evaluate`, and `Model.save` (see [Custom Keras layers and models](https://tensorflow.google.cn/guide/keras/custom_layers_and_models) for details).\n", "\n", "One other feature provided by `keras.Model` (instead of `keras.layers.Layer`) is that, in addition to tracking variables, a `keras.Model` also tracks its internal layers, making them easier to inspect.\n", "\n", "For example, here is a ResNet block:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "id": "N30DTXiRASlb" }, "outputs": [], "source": [ "class ResnetIdentityBlock(tf.keras.Model):\n", "    def __init__(self, kernel_size, filters):\n", "        super().__init__(name='')\n", "        filters1, filters2, filters3 = filters\n", "\n", "        self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))\n", "        self.bn2a = tf.keras.layers.BatchNormalization()\n", "\n", "        self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')\n", "        self.bn2b = tf.keras.layers.BatchNormalization()\n", "\n", "        self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))\n", "        self.bn2c = tf.keras.layers.BatchNormalization()\n", "\n", "    def call(self, input_tensor, training=False):\n", "        x = self.conv2a(input_tensor)\n", "        x = self.bn2a(x, training=training)\n", "        x = tf.nn.relu(x)\n", "\n", "        x = self.conv2b(x)\n", "        x = self.bn2b(x, training=training)\n", "        x = tf.nn.relu(x)\n", "\n", "        x = self.conv2c(x)\n", "        x = self.bn2c(x, training=training)\n", "\n", "        x += input_tensor\n", "        return tf.nn.relu(x)\n", "\n", "\n", "block = ResnetIdentityBlock(1, [1, 2, 3])" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "id": "7D8ZR5mqtokj" }, "outputs": [], "source": [ "_ = block(tf.zeros([1, 2, 3, 3]))" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "id": "MJ8rzFpdoE_m" }, "outputs": [ { "data": { "text/plain": [ "[<Conv2D name=conv2d, built=True>,\n", " <BatchNormalization name=batch_normalization, built=True>,\n", " <Conv2D name=conv2d_1, built=True>,\n", " <BatchNormalization name=batch_normalization_1, built=True>,\n", " <Conv2D name=conv2d_2, built=True>,\n", " <BatchNormalization name=batch_normalization_2, built=True>]" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "block.layers" ] },
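{ "cell_type": "markdown", "metadata": {}, "source": [ "Because the model tracks its internal layers, they can also be inspected programmatically. A minimal sketch added here for illustration (output not shown):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Walk the tracked sublayers and report how many variables each one owns.\n", "for sub_layer in block.layers:\n", "    print(sub_layer.name, len(sub_layer.variables))" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "id": 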
"dewldLuDvQRM" }, "outputs": [ { "data": { "text/plain": [ "18" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(block.variables)" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "id": "FrqIXeSetaYi" }, "outputs": [ { "data": { "text/html": [ "
Model: \"\"\n",
              "
\n" ], "text/plain": [ "\u001b[1mModel: \"\"\u001b[0m\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
              "┃ Layer (type)                     Output Shape                  Param # ┃\n",
              "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
              "│ conv2d (Conv2D)                 │ (1, 2, 3, 1)           │             4 │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ batch_normalization             │ (1, 2, 3, 1)           │             4 │\n",
              "│ (BatchNormalization)            │                        │               │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ conv2d_1 (Conv2D)               │ (1, 2, 3, 2)           │             4 │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ batch_normalization_1           │ (1, 2, 3, 2)           │             8 │\n",
              "│ (BatchNormalization)            │                        │               │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ conv2d_2 (Conv2D)               │ (1, 2, 3, 3)           │             9 │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ batch_normalization_2           │ (1, 2, 3, 3)           │            12 │\n",
              "│ (BatchNormalization)            │                        │               │\n",
              "└─────────────────────────────────┴────────────────────────┴───────────────┘\n",
              "
\n" ], "text/plain": [ "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n", "┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n", "│ conv2d (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m4\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m4\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_1 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m2\u001b[0m) │ \u001b[38;5;34m4\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_1 │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m2\u001b[0m) │ \u001b[38;5;34m8\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_2 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m3\u001b[0m) │ \u001b[38;5;34m9\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_2 │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m3\u001b[0m) │ \u001b[38;5;34m12\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "└─────────────────────────────────┴────────────────────────┴───────────────┘\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Total params: 41 (164.00 B)\n",
              "
\n" ], "text/plain": [ "\u001b[1m Total params: \u001b[0m\u001b[38;5;34m41\u001b[0m (164.00 B)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Trainable params: 29 (116.00 B)\n",
              "
\n" ], "text/plain": [ "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m29\u001b[0m (116.00 B)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Non-trainable params: 12 (48.00 B)\n",
              "
\n" ], "text/plain": [ "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m12\u001b[0m (48.00 B)\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "block.summary()" ] }, { "cell_type": "markdown", "metadata": { "id": "wYfucVw65PMj" }, "source": [ "但是,在很多时候,由多个层组合而成的模型只需要逐一地调用各层。为此,使用 `tf.keras.Sequential` 只需少量代码即可完成:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1),),\n", " tf.keras.layers.BatchNormalization(),\n", " tf.keras.layers.Conv2D(2, 1,\n", " padding='same'),\n", " tf.keras.layers.BatchNormalization(),\n", " tf.keras.layers.Conv2D(3, (1, 1)),\n", " tf.keras.layers.BatchNormalization()])\n", "my_seq(tf.zeros([1, 2, 3, 3]))" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "id": "tVAsbFITuScB" }, "outputs": [ { "data": { "text/html": [ "
Model: \"sequential\"\n",
              "
\n" ], "text/plain": [ "\u001b[1mModel: \"sequential\"\u001b[0m\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
              "┃ Layer (type)                     Output Shape                  Param # ┃\n",
              "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
              "│ conv2d_3 (Conv2D)               │ (1, 2, 3, 1)           │             4 │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ batch_normalization_3           │ (1, 2, 3, 1)           │             4 │\n",
              "│ (BatchNormalization)            │                        │               │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ conv2d_4 (Conv2D)               │ (1, 2, 3, 2)           │             4 │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ batch_normalization_4           │ (1, 2, 3, 2)           │             8 │\n",
              "│ (BatchNormalization)            │                        │               │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ conv2d_5 (Conv2D)               │ (1, 2, 3, 3)           │             9 │\n",
              "├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
              "│ batch_normalization_5           │ (1, 2, 3, 3)           │            12 │\n",
              "│ (BatchNormalization)            │                        │               │\n",
              "└─────────────────────────────────┴────────────────────────┴───────────────┘\n",
              "
\n" ], "text/plain": [ "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n", "┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n", "┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n", "│ conv2d_3 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m4\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_3 │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m4\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_4 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m2\u001b[0m) │ \u001b[38;5;34m4\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_4 │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m2\u001b[0m) │ \u001b[38;5;34m8\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ conv2d_5 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m3\u001b[0m) │ \u001b[38;5;34m9\u001b[0m │\n", "├─────────────────────────────────┼────────────────────────┼───────────────┤\n", "│ batch_normalization_5 │ (\u001b[38;5;34m1\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m3\u001b[0m, \u001b[38;5;34m3\u001b[0m) │ \u001b[38;5;34m12\u001b[0m │\n", "│ (\u001b[38;5;33mBatchNormalization\u001b[0m) │ │ │\n", "└─────────────────────────────────┴────────────────────────┴───────────────┘\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Total params: 41 (164.00 B)\n",
              "
\n" ], "text/plain": [ "\u001b[1m Total params: \u001b[0m\u001b[38;5;34m41\u001b[0m (164.00 B)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Trainable params: 29 (116.00 B)\n",
              "
\n" ], "text/plain": [ "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m29\u001b[0m (116.00 B)\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
 Non-trainable params: 12 (48.00 B)\n",
              "
\n" ], "text/plain": [ "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m12\u001b[0m (48.00 B)\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "my_seq.summary()" ] }, { "cell_type": "markdown", "metadata": { "id": "c5YwYcnuK-wc" }, "source": [ "# 后续步骤\n", "\n", "现在,您可以回到上一个笔记本,调整线性回归样本以使用结构更好的层和模型。" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "custom_layers.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "xxx", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }