{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "Tce3stUlHN0L" }, "outputs": [], "source": [ "##### Copyright 2019 The TensorFlow Authors.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "tuOe1ymfHZPu" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "MfBg1C5NB3X0" }, "source": [ "# 使用 DTensor 进行分布式训练\n" ] }, { "cell_type": "markdown", "metadata": { "id": "r6P32iYYV27b" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
View on TensorFlow.org\n", "在 Google Colab 中运行 在 GitHub 上查看源代码\n", " 下载笔记本
" ] }, { "cell_type": "markdown", "metadata": { "id": "kiF4jjX4O1mF" }, "source": [ "## 文本特征向量\n", "\n", "DTensor 为您提供了一种进行跨设备分布式模型训练的方式,可以帮助您提高效率、可靠性和可扩展性。有关 DTensor 概念的更多详细信息,请参阅 [DTensor 编程指南](https://tensorflow.google.cn/guide/dtensor_overview)。\n", "\n", "在本教程中,您将使用 DTensor 训练一个情感分析模型。此示例演示了三种分布式训练方案:\n", "\n", "- 数据并行训练,此方案会将训练样本分片(分区)至设备。\n", "- 模型并行训练,此方案会将模型变量分片至设备。\n", "- 空间并行训练,此方案会将输入数据的特征分片至设备。(也称为[空间分区](https://cloud.google.com/blog/products/ai-machine-learning/train-ml-models-on-large-images-and-3d-volumes-with-spatial-partitioning-on-cloud-tpus))\n", "\n", "本教程训练部分灵感源自于 [Kaggle 情感分析指南](https://www.kaggle.com/code/anasofiauzsoy/yelp-review-sentiment-analysis-tensorflow-tfds/notebook)笔记本。要了解完整的训练和评估工作流(未使用 DTensor),请参阅该笔记本。\n", "\n", "本教程将逐步完成以下步骤:\n", "\n", "- 首先从一些数据清理开始,旨在获得词例化语句及其情感极性的 `tf.data.Dataset`。\n", "\n", "- 接下来,使用自定义 Dense 层和 BatchNorm 层构建 MLP 模型。使用 `tf.Module` 跟踪推断变量。模型构造函数将使用额外的 `Layout` 参数来控制变量的分片。\n", "\n", "- 在训练方面,您会先将数据并行训练与 `tf.experimental.dtensor` 的检查点特征结合使用。然后继续进行模型并行训练和空间并行训练。\n", "\n", "- 最后一部分简要介绍了截至 TensorFlow 2.9 版本的 `tf.saved_model` 与 `tf.experimental.dtensor` 之间的交互。\n" ] }, { "cell_type": "markdown", "metadata": { "id": "YD80veeg7QtW" }, "source": [ "## 安装\n", "\n", "DTensor 是 TensorFlow 2.9.0 版本的一部分。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-RKXLJN-7Yyb" }, "outputs": [], "source": [ "!pip install --quiet --upgrade --pre tensorflow tensorflow-datasets" ] }, { "cell_type": "markdown", "metadata": { "id": "tcxP4_Zu7ciQ" }, "source": [ "接下来,导入 `tensorflow` 和 `tensorflow.experimental.dtensor`。然后,将 TensorFlow 配置为使用 8 个虚拟 CPU。\n", "\n", "尽管本示例使用了 CPU,但 DTensor 在 CPU、GPU 或 TPU 设备上的工作方式相同。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dXcB26oP7dUd" }, "outputs": [], "source": [ "import tempfile\n", "import numpy as np\n", "import tensorflow_datasets as tfds\n", "\n", "import tensorflow as tf\n", "\n", "from tensorflow.experimental import dtensor\n", "print('TensorFlow version:', tf.__version__)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "oHtO6MJLUXlz" }, "outputs": [], "source": [ "def configure_virtual_cpus(ncpu):\n", " phy_devices = tf.config.list_physical_devices('CPU')\n", " tf.config.set_logical_device_configuration(phy_devices[0], [\n", " tf.config.LogicalDeviceConfiguration(),\n", " ] * ncpu)\n", "\n", "configure_virtual_cpus(8)\n", "DEVICES = [f'CPU:{i}' for i in range(8)]\n", "\n", "tf.config.list_logical_devices('CPU')" ] }, { "cell_type": "markdown", "metadata": { "id": "omYd4jbF7j_I" }, "source": [ "## 下载数据集\n", "\n", "下载 IMDB 评论数据集以训练情感分析模型。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "fW4w4QlFVHhx" }, "outputs": [], "source": [ "train_data = tfds.load('imdb_reviews', split='train', shuffle_files=True, batch_size=64)\n", "train_data" ] }, { "cell_type": "markdown", "metadata": { "id": "ki3mpfi4aZH8" }, "source": [ "## 准备数据\n", "\n", "首先,将文本词例化。此处使用独热编码的扩展,即 `tf.keras.layers.TextVectorization` 的 `'tf_idf'` 模式。\n", "\n", "- 为保证速度,将词例数量限制为 1200。\n", "- 为使 `tf.Module` 保持简单,在训练之前运行 `TextVectorization` 作为预处理步骤。\n", "\n", "数据清理部分的最终结果是一个词例化文本为 `x`、标签为 `y` 的 `Dataset`。\n", "\n", "**注**:将运行 `TextVectorization` 作为预处理步骤**既非常规做法,也非推荐做法**,因为这种方式需要假定训练数据适合客户端内存,但实际情况并非总是如此。\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zNpxjku_57Lg" }, "outputs": [], "source": [ "text_vectorization = tf.keras.layers.TextVectorization(output_mode='tf_idf', max_tokens=1200, output_sequence_length=None)\n", "text_vectorization.adapt(data=train_data.map(lambda x: x['text']))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "q16bjngoVwQp" }, "outputs": [], "source": [ "def vectorize(features):\n", " return text_vectorization(features['text']), features['label']\n", "\n", "train_data_vec = train_data.map(vectorize)\n", "train_data_vec" ] }, { "cell_type": "markdown", "metadata": { "id": "atTqL9kE5wz4" }, "source": [ "## 使用 DTensor 构建神经网络\n", "\n", "现在,使用 `DTensor` 构建一个多层感知器 (MLP) 网络。网络将使用全连接 Dense 层和 BatchNorm 层。\n", "\n", "`DTensor` 会通过常规 TensorFlow 运算的单程序多数据 (SPMD) 扩展来扩展 TensorFlow,具体取决于其输入 `Tensor` 和变量的 `dtensor.Layout` 特性。\n", "\n", "`DTensor` 感知层的变量为 `dtensor.DVariable`,`DTensor` 感知层对象的构造函数除了接受常规层参数外,还会接受额外的 `Layout` 输入。\n", "\n", "注:截至 TensorFlow 2.9 版本,`tf.keras.layer.Dense` 和 `tf.keras.layer.BatchNormalization` 等 Keras 层接受 `dtensor.Layout` 参数。有关将 Keras 与 DTensor 结合使用的更多信息,请参阅 [DTensor Keras 集成教程](/tutorials/distribute/dtensor_keras_tutorial)。" ] }, { "cell_type": "markdown", "metadata": { "id": "PMCt-Gj3b3Jy" }, "source": [ "### Dense 层\n", "\n", "以下自定义 Dense 层定义了 2 个层变量:$W_{ij}$ 为权重变量,$b_i$ 为偏置项变量。\n", "\n", "$$\n", "y_j = \\sigma(\\sum_i x_i W_{ij} + b_j)\n", "$$\n" ] }, { "cell_type": "markdown", "metadata": { "id": "nYlFUJWNjl4N" }, "source": [ "### 布局推导\n", "\n", "此结果源自于以下观测值:\n", "\n", "- 对矩阵点积 $t_j = \\sum_i x_i W_{ij}$ 的运算数的首选 DTensor 分片方式为沿 $i$ 轴以相同的方式对 $\\mathbf{W}$ 和 $\\mathbf{x}$ 进行分片。\n", "\n", "- 对矩阵和 $t_j + b_j$ 的运算数的首选 DTensor 分片方式为沿 $j$ 轴以相同的方式对 $\\mathbf{t}$ 和 $\\mathbf{b}$ 进行分片。\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VpKblz7Yb16G" }, "outputs": [], "source": [ "class Dense(tf.Module):\n", "\n", " def __init__(self, input_size, output_size,\n", " init_seed, weight_layout, activation=None):\n", " super().__init__()\n", "\n", " random_normal_initializer = tf.function(tf.random.stateless_normal)\n", "\n", " self.weight = dtensor.DVariable(\n", " dtensor.call_with_layout(\n", " random_normal_initializer, weight_layout,\n", " shape=[input_size, output_size],\n", " seed=init_seed\n", " ))\n", " if activation is None:\n", " activation = lambda x:x\n", " self.activation = activation\n", " \n", " # bias is sharded the same way as the last axis of weight.\n", " bias_layout = weight_layout.delete([0])\n", "\n", " self.bias = dtensor.DVariable(\n", " dtensor.call_with_layout(tf.zeros, bias_layout, [output_size]))\n", "\n", " def __call__(self, x):\n", " y = tf.matmul(x, self.weight) + self.bias\n", " y = self.activation(y)\n", "\n", " return y" ] }, { "cell_type": "markdown", "metadata": { "id": "tfVY_vAKbxM0" }, "source": [ "### BatchNorm\n", "\n", "批归一化层有助于避免在训练时折叠模式。在此情况下,添加批归一化层有助于在训练模型时避免产生只生成零的模型。\n", "\n", "下面的自定义 `BatchNorm` 层的构造函数不会接受 `Layout` 参数。这是因为 `BatchNorm` 没有层变量。这仍适用于 DTensor,因为层的唯一输入“x”已经是代表全局批次的 DTensor。\n", "\n", "注:使用 DTensor 时,输入张量“x”始终代表全局批次。因此,`tf.nn.batch_normalization` 将应用于全局批次。这不同于使用 `tf.distribute.MirroredStrategy` 进行的训练,其中张量“x”仅代表批次的副本分片(局部批次)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "riBA9pfhlPFq" }, "outputs": [], "source": [ "class BatchNorm(tf.Module):\n", "\n", " def __init__(self):\n", " super().__init__()\n", "\n", " def __call__(self, x, training=True):\n", " if not training:\n", " # This branch is not used in the Tutorial.\n", " pass\n", " mean, variance = tf.nn.moments(x, axes=[0])\n", " return tf.nn.batch_normalization(x, mean, variance, 0.0, 1.0, 1e-5)" ] }, { "cell_type": "markdown", "metadata": { "id": "q4R4MPz5prh4" }, "source": [ "全功能批归一化层(例如 `tf.keras.layers.BatchNormalization`)将需要其变量的 Layout 参数。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "unFcP99zprJj" }, "outputs": [], "source": [ "def make_keras_bn(bn_layout):\n", " return tf.keras.layers.BatchNormalization(gamma_layout=bn_layout,\n", " beta_layout=bn_layout,\n", " moving_mean_layout=bn_layout,\n", " moving_variance_layout=bn_layout,\n", " fused=False)" ] }, { "cell_type": "markdown", "metadata": { "id": "v8Dj7AJ_lPs0" }, "source": [ "### 将层放置到一起\n", "\n", "接下来,使用上述构建块构建多层感知器 (MLP) 网络。下图显示了两个 `Dense` 层的输入 `x` 与权重矩阵之间的轴关系,未应用任何 DTensor 分片或复制。" ] }, { "cell_type": "markdown", "metadata": { "id": "udFGAO-NrZw6" }, "source": [ "\"The\n" ] }, { "cell_type": "markdown", "metadata": { "id": "8DCQ0aQ5rQtB" }, "source": [ "第一个 `Dense` 层的输出会传递到第二个 `Dense` 层的输入(在 `BatchNorm` 之后)。因此,第一个 `Dense` 层的输出 ($\\mathbf{W_1}$) 和第二个 `Dense` 层的输入 ($\\mathbf{W_2}$) 的首选 DTensor 分片方式为沿公共轴 $\\hat{j}$ 以相同方式对 $\\mathbf{W_1}$ 和 $\\mathbf{W_2}$ 进行分片\n", "\n", "$$\n", "\\mathsf{Layout}[{W_{1,ij}}; i, j] = \\left[\\hat{i}, \\hat{j}\\right] \\\\\n", "\\mathsf{Layout}[{W_{2,jk}}; j, k] = \\left[\\hat{j}, \\hat{k} \\right]\n", "$$\n", "\n", "尽管布局推导显示 2 种布局并非独立,但为了保证模型接口的简单性,`MLP` 将接受 2 个 `Layout` 参数,每个 Dense 层一个。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "junyS-965opl" }, "outputs": [], "source": [ "from typing import Tuple\n", "\n", "class MLP(tf.Module):\n", "\n", " def __init__(self, dense_layouts: Tuple[dtensor.Layout, dtensor.Layout]):\n", " super().__init__()\n", "\n", " self.dense1 = Dense(\n", " 1200, 48, (1, 2), dense_layouts[0], activation=tf.nn.relu)\n", " self.bn = BatchNorm()\n", " self.dense2 = Dense(48, 2, (3, 4), dense_layouts[1])\n", "\n", " def __call__(self, x):\n", " y = x\n", " y = self.dense1(y)\n", " y = self.bn(y)\n", " y = self.dense2(y)\n", " return y\n" ] }, { "cell_type": "markdown", "metadata": { "id": "9dgLmebHhr7h" }, "source": [ "在布局推导约束的正确性与 API 的简单性之间进行权衡是使用 DTensor 的 API 的一项常见设计要点。也可以捕获具有不同 API 的 `Layout` 之间的依赖关系。例如,`MLPStricter` 类会在构造函数中创建 `Layout` 对象。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wEZR7UlihsYX" }, "outputs": [], "source": [ "class MLPStricter(tf.Module):\n", "\n", " def __init__(self, mesh, input_mesh_dim, inner_mesh_dim1, output_mesh_dim):\n", " super().__init__()\n", "\n", " self.dense1 = Dense(\n", " 1200, 48, (1, 2), dtensor.Layout([input_mesh_dim, inner_mesh_dim1], mesh),\n", " activation=tf.nn.relu)\n", " self.bn = BatchNorm()\n", " self.dense2 = Dense(48, 2, (3, 4), dtensor.Layout([inner_mesh_dim1, output_mesh_dim], mesh))\n", "\n", "\n", " def __call__(self, x):\n", " y = x\n", " y = self.dense1(y)\n", " y = self.bn(y)\n", " y = self.dense2(y)\n", " return y" ] }, { "cell_type": "markdown", "metadata": { "id": "GcQi7D5mal2L" }, "source": [ "为确保模型能够正常运行,请使用完全复制的布局和完全复制的一批 `'x'` 输入来检查您的模型。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zOPuYeQwallh" }, "outputs": [], "source": [ "WORLD = dtensor.create_mesh([(\"world\", 8)], devices=DEVICES)\n", "\n", "model = MLP([dtensor.Layout.replicated(WORLD, rank=2),\n", " dtensor.Layout.replicated(WORLD, rank=2)])\n", "\n", "sample_x, sample_y = train_data_vec.take(1).get_single_element()\n", "sample_x = dtensor.copy_to_mesh(sample_x, dtensor.Layout.replicated(WORLD, rank=2))\n", "print(model(sample_x))" ] }, { "cell_type": "markdown", "metadata": { "id": "akrjDstEpDv9" }, "source": [ "## 将数据移动到设备\n", "\n", "通常,`tf.data` 迭代器(和其他数据获取方法)会产生由本地主机设备内存支持的张量对象。此数据必须传输到支持 DTensor 的张量分量的加速器设备内存。\n", "\n", "`dtensor.copy_to_mesh` 并不适合这种情况,由于 DTensor 以全局为视角,因此会将输入张量复制到所有设备上。有鉴于此,在本教程中,您将使用一个辅助函数 `repack_local_tensor` 以便于数据传输。此辅助函数会使用 `dtensor.pack` 将用于副本的全局批次的分片发送(并且仅发送)到支持副本的设备。\n", "\n", "此简化函数假定使用单客户端。在多客户端应用中确定拆分本地张量的正确方式以及在拆分的各个部分与本地设备之间建立映射可能会比较困难。\n", "\n", "我们计划提供同时支持单客户端和多客户端应用的额外 DTensor API 来简化 `tf.data` 集成。敬请期待。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3t5WvQR4Hvo4" }, "outputs": [], "source": [ "def repack_local_tensor(x, layout):\n", " \"\"\"Repacks a local Tensor-like to a DTensor with layout.\n", "\n", " This function assumes a single-client application.\n", " \"\"\"\n", " x = tf.convert_to_tensor(x)\n", " sharded_dims = []\n", "\n", " # For every sharded dimension, use tf.split to split the along the dimension.\n", " # The result is a nested list of split-tensors in queue[0].\n", " queue = [x]\n", " for axis, dim in enumerate(layout.sharding_specs):\n", " if dim == dtensor.UNSHARDED:\n", " continue\n", " num_splits = layout.shape[axis]\n", " queue = tf.nest.map_structure(lambda x: tf.split(x, num_splits, axis=axis), queue)\n", " sharded_dims.append(dim)\n", "\n", " # Now we can build the list of component tensors by looking up the location in\n", " # the nested list of split-tensors created in queue[0].\n", " components = []\n", " for locations in layout.mesh.local_device_locations():\n", " t = queue[0]\n", " for dim in sharded_dims:\n", " split_index = locations[dim] # Only valid on single-client mesh.\n", " t = t[split_index]\n", " components.append(t)\n", "\n", " return dtensor.pack(components, layout)" ] }, { "cell_type": "markdown", "metadata": { "id": "2KKCDcjG7zj2" }, "source": [ "## 数据并行训练\n", "\n", "在本部分中,您将使用数据并行训练来训练您的 MLP 模型。以下部分将演示模型并行训练和空间并行训练。\n", "\n", "数据并行训练是分布式机器学习的常用方案:\n", "\n", "- 分别在 N 台设备上复制模型变量。\n", "- 将全局批次拆分成 N 个按副本批次。\n", "- 每个按副本批次都在副本设备上进行训练。\n", "- 在对所有副本共同执行数据加权之前进行梯度归约。\n", "\n", "数据并行训练能够根据设备数量提供近乎线性的速度提升。" ] }, { "cell_type": "markdown", "metadata": { "id": "UMsLUyTGq3oL" }, "source": [ "### 创建数据并行网格\n", "\n", "典型的数据并行训练循环使用由单个 `batch` 维度组成的 DTensor `Mesh`,其中每个设备都将成为从全局批次接收分片的副本。\n", "\n", "\"Data\n", "\n", "复制的模型在副本上运行,因此模型变量是完全复制的(未分片)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "C0IyOlxmeu4I" }, "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"batch\", 8)], devices=DEVICES)\n", "\n", "model = MLP([dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh),\n", " dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh),])\n" ] }, { "cell_type": "markdown", "metadata": { "id": "OREKwBybo1gZ" }, "source": [ "### 将训练数据打包到 DTensor\n", "\n", "训练数据批次应打包到沿 `'batch'`(第一)轴分片的 DTensor 中,以便 DTensor 将训练数据均匀分布到 `'batch'` 网格维度。\n", "\n", "**注**:在 DTensor 中,`batch size` 始终是指全局批次大小。选择批次大小时,应使其能够被 `batch` 网格维数整除。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8xMYkTpGocY8" }, "outputs": [], "source": [ "def repack_batch(x, y, mesh):\n", " x = repack_local_tensor(x, layout=dtensor.Layout(['batch', dtensor.UNSHARDED], mesh))\n", " y = repack_local_tensor(y, layout=dtensor.Layout(['batch'], mesh))\n", " return x, y\n", "\n", "sample_x, sample_y = train_data_vec.take(1).get_single_element()\n", "sample_x, sample_y = repack_batch(sample_x, sample_y, mesh)\n", "\n", "print('x', sample_x[:, 0])\n", "print('y', sample_y)" ] }, { "cell_type": "markdown", "metadata": { "id": "uONSiqOIkFL1" }, "source": [ "### 训练步骤\n", "\n", "此示例使用的是随机梯度下降优化器和自定义训练循环 (CTL)。有关这些主题的更多信息,请参阅[自定义训练循环指南](https://tensorflow.google.cn/guide/keras/writing_a_training_loop_from_scratch)和[演练](https://tensorflow.google.cn/tutorials/customization/custom_training_walkthrough)。\n", "\n", "将 `train_step` 封装为 `tf.function` 以指示该函数体将作为 TensorFlow 计算图进行跟踪。`train_step` 的函数体由前向推断传递、反向梯度传递和变量更新组成。\n", "\n", "请注意,`train_step` 的函数体不包含任何特殊的 DTensor 注解。相反,`train_step` 仅含用于处理来自输入批次和模型全局视图的输入 `x` 和 `y` 的高级 TensorFlow 运算。训练步骤中剔除了所有 DTensor 注解(`Mesh`、`Layout`)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "BwUFzLGDtQT6" }, "outputs": [], "source": [ "# Refer to the CTL (custom training loop guide)\n", "@tf.function\n", "def train_step(model, x, y, learning_rate=tf.constant(1e-4)):\n", " with tf.GradientTape() as tape:\n", " logits = model(x)\n", " # tf.reduce_sum sums the batch sharded per-example loss to a replicated\n", " # global loss (scalar).\n", " loss = tf.reduce_sum(\n", " tf.nn.sparse_softmax_cross_entropy_with_logits(\n", " logits=logits, labels=y))\n", " parameters = model.trainable_variables\n", " gradients = tape.gradient(loss, parameters)\n", " for parameter, parameter_gradient in zip(parameters, gradients):\n", " parameter.assign_sub(learning_rate * parameter_gradient)\n", "\n", " # Define some metrics\n", " accuracy = 1.0 - tf.reduce_sum(tf.cast(tf.argmax(logits, axis=-1, output_type=tf.int64) != y, tf.float32)) / x.shape[0]\n", " loss_per_sample = loss / len(x)\n", " return {'loss': loss_per_sample, 'accuracy': accuracy}" ] }, { "cell_type": "markdown", "metadata": { "id": "0OYTu4j0evWT" }, "source": [ "### 检查点\n", "\n", "您可以使用开箱即用的 `tf.train.Checkpoint` 为 DTensor 模型设置检查点。保存和还原分片 DVariable 将执行高效的分片保存和还原。目前,使用 `tf.train.Checkpoint.save` 和 `tf.train.Checkpoint.restore` 时,所有 DVariable 必须在同一个主机网格上,DVariable 和常规变量不能一起保存。您可以在[本指南](../../guide/checkpoint.ipynb)中详细了解检查点设置。\n", "\n", "还原 DTensor 检查点时,变量的 `Layout` 可能与保存检查点时不同。也就是说,保存 DTensor 模型与布局和网格无关,只会影响分片保存的效率。您可以保存具有一种网格和布局的 DTensor 模型,并在不同的网格和布局上进行还原。本教程利用此功能继续“模型并行训练”和“空间并行训练”部分的训练。\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rsInFFJg7x9t" }, "outputs": [], "source": [ "CHECKPOINT_DIR = tempfile.mkdtemp()\n", "\n", "def start_checkpoint_manager(model):\n", " ckpt = tf.train.Checkpoint(root=model)\n", " manager = tf.train.CheckpointManager(ckpt, CHECKPOINT_DIR, max_to_keep=3)\n", "\n", " if manager.latest_checkpoint:\n", " print(\"Restoring a checkpoint\")\n", " ckpt.restore(manager.latest_checkpoint).assert_consumed()\n", " else:\n", " print(\"New training\")\n", " return manager\n" ] }, { "cell_type": "markdown", "metadata": { "id": "9r77ky5Jgp1j" }, "source": [ "### 训练循环\n", "\n", "对于数据并行训练方案,需要训练几个周期并报告进度。模型训练 3 个周期是不够的 – 50% 的准确率无异于随机猜测。\n", "\n", "启用检查点,以便稍后继续训练。在下一部分中,您将加载检查点并使用另一种并行方案进行训练。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UaLn-vGZgqbS" }, "outputs": [], "source": [ "num_epochs = 2\n", "manager = start_checkpoint_manager(model)\n", "\n", "for epoch in range(num_epochs):\n", " step = 0\n", " pbar = tf.keras.utils.Progbar(target=int(train_data_vec.cardinality()), stateful_metrics=[])\n", " metrics = {'epoch': epoch}\n", " for x,y in train_data_vec:\n", "\n", " x, y = repack_batch(x, y, mesh)\n", "\n", " metrics.update(train_step(model, x, y, 1e-2))\n", "\n", " pbar.update(step, values=metrics.items(), finalize=False)\n", " step += 1\n", " manager.save()\n", " pbar.update(step, values=metrics.items(), finalize=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "YRFJEhum7EGD" }, "source": [ "## 模型并行训练\n", "\n", "如果您切换到二维 `Mesh`,并沿第二个网格维度对模型变量进行分片,那么训练即变为模型并行训练。\n", "\n", "在模型并行训练中,每个模型副本都会跨越多个设备(本例中为 2 个):\n", "\n", "- 有 4 个模型副本,训练数据批次会分布至这 4 个副本。\n", "- 单个模型副本中的 2 个设备会接收复制的训练数据。\n", "\n", "\"Model\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5gZE9IT5Dzwl" }, "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"batch\", 4), (\"model\", 2)], devices=DEVICES)\n", "model = MLP([dtensor.Layout([dtensor.UNSHARDED, \"model\"], mesh), \n", " dtensor.Layout([\"model\", dtensor.UNSHARDED], mesh)])" ] }, { "cell_type": "markdown", "metadata": { "id": "Ihof3DkMFKnf" }, "source": [ "由于训练数据仍然沿批次维度进行分片,您可以重复使用与数据并行训练情况相同的 `repack_batch` 函数。DTensor 会自动将副本批次沿 `\"model\"` 网格维度复制到副本内的所有设备上。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dZf56ynbE_p1" }, "outputs": [], "source": [ "def repack_batch(x, y, mesh):\n", " x = repack_local_tensor(x, layout=dtensor.Layout(['batch', dtensor.UNSHARDED], mesh))\n", " y = repack_local_tensor(y, layout=dtensor.Layout(['batch'], mesh))\n", " return x, y" ] }, { "cell_type": "markdown", "metadata": { "id": "UW3OXdhNFfpv" }, "source": [ "接下来,运行训练循环。训练循环将重复使用与数据并行训练示例相同的检查点管理器,并且代码看起来完全相同。\n", "\n", "您可以在模型并行训练下继续训练经过数据并行训练的模型。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LLC0wgii7EgA" }, "outputs": [], "source": [ "num_epochs = 2\n", "manager = start_checkpoint_manager(model)\n", "\n", "for epoch in range(num_epochs):\n", " step = 0\n", " pbar = tf.keras.utils.Progbar(target=int(train_data_vec.cardinality()))\n", " metrics = {'epoch': epoch}\n", " for x,y in train_data_vec:\n", " x, y = repack_batch(x, y, mesh)\n", " metrics.update(train_step(model, x, y, 1e-2))\n", " pbar.update(step, values=metrics.items(), finalize=False)\n", " step += 1\n", " manager.save()\n", " pbar.update(step, values=metrics.items(), finalize=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "BZH-aMrVzi2L" }, "source": [ "## 空间并行训练" ] }, { "cell_type": "markdown", "metadata": { "id": "u-bK6IZ9GCS9" }, "source": [ "训练维数特别高的数据(例如非常大的图像或视频)时,可能会需要沿特征维度进行分片。这称为[空间分区](https://cloud.google.com/blog/products/ai-machine-learning/train-ml-models-on-large-images-and-3d-volumes-with-spatial-partitioning-on-cloud-tpus),它首次被引入到 TensorFlow 中,用于训练具有大量三维输入样本的模型。\n", "\n", "\"Spatial\n", "\n", "DTensor 也支持这种情况。您需要进行的唯一更改是创建一个包含 `feature` 维度的网格,并应用相应的 `Layout`。\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jpc9mqURGpmK" }, "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"batch\", 2), (\"feature\", 2), (\"model\", 2)], devices=DEVICES)\n", "model = MLP([dtensor.Layout([\"feature\", \"model\"], mesh), \n", " dtensor.Layout([\"model\", dtensor.UNSHARDED], mesh)])\n" ] }, { "cell_type": "markdown", "metadata": { "id": "i07Wrv-jHBc1" }, "source": [ "将输入张量打包到 DTensor 时,沿 `feature` 维度对输入数据进行分片。您可以使用略有不同的 repack 函数 `repack_batch_for_spt` 来执行此操作,其中 `spt` 代表空间并行训练。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "DWR8qF6BGtFL" }, "outputs": [], "source": [ "def repack_batch_for_spt(x, y, mesh):\n", " # Shard data on feature dimension, too\n", " x = repack_local_tensor(x, layout=dtensor.Layout([\"batch\", 'feature'], mesh))\n", " y = repack_local_tensor(y, layout=dtensor.Layout([\"batch\"], mesh))\n", " return x, y" ] }, { "cell_type": "markdown", "metadata": { "id": "Ygl9dqMUHTVN" }, "source": [ "空间并行训练也可以从使用其他并行训练方案创建的检查点继续。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "p3NnpHSKo-hx" }, "outputs": [], "source": [ "num_epochs = 2\n", "\n", "manager = start_checkpoint_manager(model)\n", "for epoch in range(num_epochs):\n", " step = 0\n", " metrics = {'epoch': epoch}\n", " pbar = tf.keras.utils.Progbar(target=int(train_data_vec.cardinality()))\n", "\n", " for x, y in train_data_vec:\n", " x, y = repack_batch_for_spt(x, y, mesh)\n", " metrics.update(train_step(model, x, y, 1e-2))\n", "\n", " pbar.update(step, values=metrics.items(), finalize=False)\n", " step += 1\n", " manager.save()\n", " pbar.update(step, values=metrics.items(), finalize=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "vp4L59CpJjYr" }, "source": [ "## SavedModel 和 DTensor\n", "\n", "DTensor 与 SavedModel 的集成仍在开发中。\n", "\n", "从 TensorFlow `2.11` 开始,`tf.saved_model` 可以保存分片和复制的 DTensor 模型,并且保存将在网格的不同设备上进行高效的分片保存。但是,保存模型后,所有 DTensor 注解都会丢失,保存的签名只能用于常规张量,不能用于 DTensor。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "49HfIq_SJZoj" }, "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"world\", 1)], devices=DEVICES[:1])\n", "mlp = MLP([dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh), \n", " dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)])\n", "\n", "manager = start_checkpoint_manager(mlp)\n", "\n", "model_for_saving = tf.keras.Sequential([\n", " text_vectorization,\n", " mlp\n", "])\n", "\n", "@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\n", "def run(inputs):\n", " return {'result': model_for_saving(inputs)}\n", "\n", "tf.saved_model.save(\n", " model_for_saving, \"/tmp/saved_model\",\n", " signatures=run)" ] }, { "cell_type": "markdown", "metadata": { "id": "h6Csim_VMGxQ" }, "source": [ "截至 TensorFlow 2.9.0 版本,您只能使用规则张量或完全复制的 DTensor(将转换为规则张量)调用加载的签名。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HG_ASSzR4IWW" }, "outputs": [], "source": [ "sample_batch = train_data.take(1).get_single_element()\n", "sample_batch" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qW8yKPrhKQ5b" }, "outputs": [], "source": [ "loaded = tf.saved_model.load(\"/tmp/saved_model\")\n", "\n", "run_sig = loaded.signatures[\"serving_default\"]\n", "result = run_sig(sample_batch['text'])['result']" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GahGbv0ZmkJb" }, "outputs": [], "source": [ "np.mean(tf.argmax(result, axis=-1) == sample_batch['label'])" ] }, { "cell_type": "markdown", "metadata": { "id": "Ks-Vs9qsH6jO" }, "source": [ "## 后续步骤\n", "\n", "本教程演示了使用 DTensor 构建和训练 MLP 情感分析模型的方式。\n", "\n", "通过 `Mesh` 和 `Layout` 原语,DTensor 可以将 TensorFlow `tf.function` 转换为适合各种训练方案的分布式程序。\n", "\n", "在现实世界的机器学习应用中,应当应用评估和交叉验证以避免产生过拟合模型。本教程中介绍的技术也可用于将并行性引入到评估当中。\n", "\n", "从头开始使用 `tf.Module` 构建模型涉及到大量工作,而重复使用现有的构建块(例如层和辅助函数)可以大大加快模型开发速度。截至 TensorFlow 2.9 版本,`tf.keras.layers` 下的所有 Keras 层都接受 DTensor 布局作为其参数,并可用于构建 DTensor 模型。您甚至可以直接对 DTensor 重复使用 Keras 模型,而无需修改模型实现。有关使用 DTensor Keras 的信息,请参阅 [DTensor Keras 集成教程](https://tensorflow.google.cn/tutorials/distribute/dtensor_keras_tutorial)。 " ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "dtensor_ml_tutorial.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "xxx", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 0 }