{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "b518b04cbfe0" }, "outputs": [], "source": [ "##### Copyright 2020 The TensorFlow Authors." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "906e07f6e562", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] },
{ "cell_type": "markdown", "metadata": { "id": "6ca65cda94c8" }, "source": [ "# Recurrent Neural Networks (RNN) with Keras" ] },
{ "cell_type": "markdown", "metadata": { "id": "6873211b02d4" }, "source": [ "## Introduction\n", "\n", "Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language.\n", "\n", "Schematically, a RNN layer uses a `for` loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.\n", "\n", "The Keras RNN API is designed with a focus on:\n", "\n", "- **Ease of use**: the built-in `keras.layers.RNN`, `keras.layers.LSTM` and `keras.layers.GRU` layers enable you to quickly build recurrent models without having to make difficult configuration choices.\n", "\n", "- **Ease of customization**: you can also define your own RNN cell layer (the inner part of the `for` loop) with custom behavior, and use it with the generic `keras.layers.RNN` layer (the `for` loop itself). This allows you to quickly prototype different research ideas in a flexible way, with minimal code." ] },
{ "cell_type": "markdown", "metadata": { "id": "b3600ee25c8e" }, "source": [ "## Setup" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "71c626bbac35", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import numpy as np\n", "import tensorflow as tf\n", "from tensorflow import keras\n", "from tensorflow.keras import layers" ] },
{ "cell_type": "markdown", "metadata": { "id": "4041a2e9b310" }, "source": [ "## Built-in RNN layers: a simple example" ] },
{ "cell_type": "markdown", "metadata": { "id": "98e0c38cf95d" }, "source": [ "There are three built-in RNN layers in Keras:\n", "\n", "1. `keras.layers.SimpleRNN`, a fully-connected RNN where the output of the previous timestep is fed to the next timestep.\n", "\n", "2. `keras.layers.GRU`, first proposed in [Cho et al., 2014](https://arxiv.org/abs/1406.1078).\n", "\n", "3. `keras.layers.LSTM`, first proposed in [Hochreiter & Schmidhuber, 1997](https://www.bioinf.jku.at/publications/older/2604.pdf).\n", "\n", "In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU.\n", "\n", "Here is a simple example of a `Sequential` model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using an `LSTM` layer." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "a5617759e54e", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "model = keras.Sequential()\n", "# Add an Embedding layer expecting input vocab of size 1000, and\n", "# output embedding dimension of size 64.\n", "model.add(layers.Embedding(input_dim=1000, output_dim=64))\n", "\n", "# Add an LSTM layer with 128 internal units.\n", "model.add(layers.LSTM(128))\n", "\n", "# Add a Dense layer with 10 units.\n", "model.add(layers.Dense(10))\n", "\n", "model.summary()" ] },
{ "cell_type": "markdown", "metadata": { "id": "cb8ef33660a0" }, "source": [ "Built-in RNNs support a number of useful features:\n", "\n", "- Recurrent dropout, via the `dropout` and `recurrent_dropout` arguments\n", "- The ability to process an input sequence in reverse, via the `go_backwards` argument\n", "- Loop unrolling (which can lead to a big speedup when processing short sequences on CPU), via the `unroll` argument\n", "- ...and more.\n", "\n", "For more information, see the [RNN API documentation](https://keras.io/api/layers/recurrent_layers/)." ] },
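{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sketch of a few of these arguments in combination (a minimal example; the values below are arbitrary choices for illustration, not tuned recommendations):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# A minimal sketch combining several of the built-in RNN options above.\n", "# Note that recurrent_dropout > 0 disables the CuDNN kernel discussed\n", "# later in this guide, so this variant runs on the generic kernel.\n", "demo_model = keras.Sequential(\n", "    [\n", "        layers.Embedding(input_dim=1000, output_dim=64),\n", "        layers.LSTM(\n", "            128,\n", "            dropout=0.2,  # dropout applied to the layer inputs\n", "            recurrent_dropout=0.2,  # dropout applied to the recurrent state\n", "            go_backwards=True,  # process each sequence in reverse\n", "        ),\n", "        layers.Dense(10),\n", "    ]\n", ")\n", "demo_model.summary()" ] },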
{ "cell_type": "markdown", "metadata": { "id": "43aa4e4f344d" }, "source": [ "## Outputs and states\n", "\n", "By default, the output of a RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is `(batch_size, units)`, where `units` corresponds to the `units` argument passed to the layer's constructor.\n", "\n", "A RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample), if you set `return_sequences=True`. The shape of this output is `(batch_size, timesteps, units)`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "c3294dec91e4", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "model = keras.Sequential()\n", "model.add(layers.Embedding(input_dim=1000, output_dim=64))\n", "\n", "# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)\n", "model.add(layers.GRU(256, return_sequences=True))\n", "\n", "# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)\n", "model.add(layers.SimpleRNN(128))\n", "\n", "model.add(layers.Dense(10))\n", "\n", "model.summary()" ] },
{ "cell_type": "markdown", "metadata": { "id": "266812a04bb2" }, "source": [ "In addition, a RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or [to initialize another RNN](https://arxiv.org/abs/1409.3215). This setting is commonly used in encoder-decoder sequence-to-sequence models, where the encoder's final state is used as the initial state of the decoder.\n", "\n", "To configure a RNN layer to return its internal state, set the `return_state` parameter to `True` when creating the layer. Note that `LSTM` has two state tensors, while `GRU` only has one.\n", "\n", "To configure the initial state of the layer, just call the layer with the additional keyword argument `initial_state`. Note that the shape of the state needs to match the unit size of the layer, as in the example below." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "ece412e6afbe", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "encoder_vocab = 1000\n", "decoder_vocab = 2000\n", "\n", "encoder_input = layers.Input(shape=(None,))\n", "encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(\n", "    encoder_input\n", ")\n", "\n", "# Return states in addition to output\n", "output, state_h, state_c = layers.LSTM(64, return_state=True, name=\"encoder\")(\n", "    encoder_embedded\n", ")\n", "encoder_state = [state_h, state_c]\n", "\n", "decoder_input = layers.Input(shape=(None,))\n", "decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(\n", "    decoder_input\n", ")\n", "\n", "# Pass the 2 states to a new LSTM layer, as initial state\n", "decoder_output = layers.LSTM(64, name=\"decoder\")(\n", "    decoder_embedded, initial_state=encoder_state\n", ")\n", "output = layers.Dense(10)(decoder_output)\n", "\n", "model = keras.Model([encoder_input, decoder_input], output)\n", "model.summary()" ] },
{ "cell_type": "markdown", "metadata": { "id": "e97a845a372a" }, "source": [ "## RNN layers and RNN cells\n", "\n", "In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, a RNN cell only processes a single timestep.\n", "\n", "The cell is the inside of the `for` loop of a RNN layer. Wrapping a cell inside a `keras.layers.RNN` layer gives you a layer capable of processing batches of sequences, e.g. `RNN(LSTMCell(10))`.\n", "\n", "Mathematically, `RNN(LSTMCell(10))` produces the same result as `LSTM(10)`. In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in a RNN layer. However, using the built-in `GRU` and `LSTM` layers enables the use of CuDNN, and you may see better performance.\n", "\n", "There are three built-in RNN cells, each of them corresponding to the matching RNN layer.\n", "\n", "- `keras.layers.SimpleRNNCell` corresponds to the `SimpleRNN` layer.\n", "\n", "- `keras.layers.GRUCell` corresponds to the `GRU` layer.\n", "\n", "- `keras.layers.LSTMCell` corresponds to the `LSTM` layer.\n", "\n", "The cell abstraction, together with the generic `keras.layers.RNN` class, makes it very easy to implement custom RNN architectures for your research." ] },
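{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the equivalence concrete, here is a minimal sketch (the model names are illustrative): the two models below compute the same function, but only the built-in `LSTM` layer is eligible for the CuDNN kernel covered later in this guide." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Layer-level API: may use the fused CuDNN kernel on GPU.\n", "layer_model = keras.Sequential(\n", "    [layers.Embedding(input_dim=1000, output_dim=64), layers.LSTM(128)]\n", ")\n", "\n", "# Cell-level API: the generic RNN wrapper drives the cell one timestep\n", "# at a time and always uses the generic (non-CuDNN) implementation.\n", "cell_model = keras.Sequential(\n", "    [\n", "        layers.Embedding(input_dim=1000, output_dim=64),\n", "        layers.RNN(layers.LSTMCell(128)),\n", "    ]\n", ")\n", "\n", "layer_model.summary()\n", "cell_model.summary()" ] },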
{ "cell_type": "markdown", "metadata": { "id": "60b3b721d500" }, "source": [ "## Cross-batch statefulness\n", "\n", "When processing very long sequences (possibly infinite), you may want to use the pattern of **cross-batch statefulness**.\n", "\n", "Normally, the internal state of a RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample.\n", "\n", "If you have very long sequences though, it is useful to break them into shorter sequences, and to feed these shorter sequences sequentially into a RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it's only seeing one sub-sequence at a time.\n", "\n", "You can do this by setting `stateful=True` in the constructor.\n", "\n", "If you have a sequence `s = [t0, t1, ... t1546, t1547]`, you would split it into e.g.\n", "\n", "```\n", "s1 = [t0, t1, ... t100]\n", "s2 = [t101, ... t201]\n", "...\n", "s16 = [t1501, ... t1547]\n", "```\n", "\n", "Then you would process it via:\n", "\n", "```python\n", "lstm_layer = layers.LSTM(64, stateful=True)\n", "for s in sub_sequences:\n", "  output = lstm_layer(s)\n", "```\n", "\n", "When you want to clear the state, you can use `layer.reset_states()`.\n", "\n", "> Note: In this setup, sample `i` in a given batch is assumed to be the continuation of sample `i` in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains `[sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100]`, the next batch should contain `[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200]`.\n", "\n", "Here is a complete example:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "19e72be49a42", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\n", "paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)\n", "paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n", "\n", "lstm_layer = layers.LSTM(64, stateful=True)\n", "output = lstm_layer(paragraph1)\n", "output = lstm_layer(paragraph2)\n", "output = lstm_layer(paragraph3)\n", "\n", "# reset_states() will reset the cached state to the original initial_state.\n", "# If no initial_state was provided, zero-states will be used by default.\n", "lstm_layer.reset_states()\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "ec7c316b19a1" }, "source": [ "### RNN State Reuse" ] },
{ "cell_type": "markdown", "metadata": { "id": "3cb7a8ac464a" }, "source": [ "The recorded states of the RNN layer are not included in `layer.weights()`. If you would like to reuse the state from a RNN layer, you can retrieve the states value via `layer.states` and use it as the initial state for a new layer through the Keras functional API, like `new_layer(inputs, initial_state=layer.states)`, or through model subclassing.\n", "\n", "Note also that a Sequential model cannot be used in this case, since it only supports layers with a single input and output; the extra initial-state input makes it impossible to use here." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "009c5b393adf", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\n", "paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)\n", "paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n", "\n", "lstm_layer = layers.LSTM(64, stateful=True)\n", "output = lstm_layer(paragraph1)\n", "output = lstm_layer(paragraph2)\n", "\n", "existing_state = lstm_layer.states\n", "\n", "new_lstm_layer = layers.LSTM(64)\n", "new_output = new_lstm_layer(paragraph3, initial_state=existing_state)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "66c1d7f1ccba" }, "source": [ "## Bidirectional RNNs\n", "\n", "For sequences other than time series (e.g. text), it is often the case that a RNN model can perform better if it not only processes a sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only the words that come before it.\n", "\n", "Keras provides an easy API for you to build such bidirectional RNNs: the `keras.layers.Bidirectional` wrapper." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "8cea1781a0c2", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "model = keras.Sequential()\n", "\n", "model.add(\n", "    layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))\n", ")\n", "model.add(layers.Bidirectional(layers.LSTM(32)))\n", "model.add(layers.Dense(10))\n", "\n", "model.summary()" ] },
{ "cell_type": "markdown", "metadata": { "id": "dab57c97a566" }, "source": [ "Under the hood, `Bidirectional` will copy the RNN layer passed in, and flip the `go_backwards` field of the newly copied layer, so that it will process the inputs in reverse order.\n", "\n", "By default, the output of a `Bidirectional` RNN is the concatenation of the forward layer output and the backward layer output. If you need a different merging behavior (e.g. summation instead of the default concatenation), change the `merge_mode` parameter in the `Bidirectional` wrapper constructor. For more details about `Bidirectional`, please check the [API docs](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/Bidirectional/)." ] },
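{ "cell_type": "markdown", "metadata": {}, "source": [ "As a sketch of `merge_mode` (using `\"sum\"` purely for illustration; the supported modes are `\"sum\"`, `\"mul\"`, `\"concat\"`, `\"ave\"` and `None`):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# With merge_mode=\"sum\", the forward and backward outputs are added\n", "# elementwise, so the first layer outputs 64 features instead of 128.\n", "summed_model = keras.Sequential(\n", "    [\n", "        layers.Bidirectional(\n", "            layers.LSTM(64, return_sequences=True),\n", "            merge_mode=\"sum\",\n", "            input_shape=(5, 10),\n", "        ),\n", "        layers.Bidirectional(layers.LSTM(32)),  # default merge_mode=\"concat\"\n", "        layers.Dense(10),\n", "    ]\n", ")\n", "summed_model.summary()" ] },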
{ "cell_type": "markdown", "metadata": { "id": "18a254dfaa73" }, "source": [ "## Performance optimization and CuDNN kernels\n", "\n", "In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior `keras.layers.CuDNNLSTM/CuDNNGRU` layers have been deprecated, and you can build your model without worrying about the hardware it will run on.\n", "\n", "Since the CuDNN kernel is built with certain assumptions, the layer **will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers**, e.g. by:\n", "\n", "- Changing the `activation` function from `tanh` to something else.\n", "- Changing the `recurrent_activation` function from `sigmoid` to something else.\n", "- Using `recurrent_dropout` > 0.\n", "- Setting `unroll` to True, which forces LSTM/GRU to decompose the inner `tf.while_loop` into an unrolled `for` loop.\n", "- Setting `use_bias` to False.\n", "- Using masking when the input data is not strictly right-padded (if the mask corresponds to strictly right-padded data, CuDNN can still be used; this is the most common case).\n", "\n", "For the detailed list of constraints, please see the documentation for the [LSTM](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/LSTM/) and [GRU](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/GRU/) layers." ] },
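{ "cell_type": "markdown", "metadata": {}, "source": [ "For instance (a minimal sketch with arbitrary sizes): the first layer below keeps all defaults and remains CuDNN-eligible on a GPU, while the second sets `recurrent_dropout` above zero and therefore silently falls back to the generic TensorFlow kernel." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Keeping all defaults: eligible for the fused CuDNN kernel on GPU.\n", "cudnn_ok = layers.LSTM(64)\n", "\n", "# recurrent_dropout > 0 breaks one of the CuDNN assumptions listed\n", "# above, so this layer falls back to the generic (slower) kernel.\n", "cudnn_fallback = layers.LSTM(64, recurrent_dropout=0.2)\n", "\n", "x = tf.random.normal((32, 10, 8))  # (batch, timesteps, features)\n", "print(cudnn_ok(x).shape, cudnn_fallback(x).shape)" ] },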
{ "cell_type": "markdown", "metadata": { "id": "fb8de09c4343" }, "source": [ "### Using CuDNN kernels when available\n", "\n", "Let's build a simple LSTM model to demonstrate the performance difference.\n", "\n", "We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "e88aab9e73c7", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "batch_size = 64\n", "# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).\n", "# Each input sequence will be of size (28, 28) (height is treated like time).\n", "input_dim = 28\n", "\n", "units = 64\n", "output_size = 10  # labels are from 0 to 9\n", "\n", "# Build the RNN model\n", "def build_model(allow_cudnn_kernel=True):\n", "    # CuDNN is only available at the layer level, and not at the cell level.\n", "    # This means `LSTM(units)` will use the CuDNN kernel,\n", "    # while RNN(LSTMCell(units)) will run on non-CuDNN kernel.\n", "    if allow_cudnn_kernel:\n", "        # The LSTM layer with default options uses CuDNN.\n", "        lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))\n", "    else:\n", "        # Wrapping a LSTMCell in a RNN layer will not use CuDNN.\n", "        lstm_layer = keras.layers.RNN(\n", "            keras.layers.LSTMCell(units), input_shape=(None, input_dim)\n", "        )\n", "    model = keras.models.Sequential(\n", "        [\n", "            lstm_layer,\n", "            keras.layers.BatchNormalization(),\n", "            keras.layers.Dense(output_size),\n", "        ]\n", "    )\n", "    return model\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "dcde82cb14d6" }, "source": [ "Let's load the MNIST dataset:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "98292f8e71a9", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "mnist = keras.datasets.mnist\n", "\n", "(x_train, y_train), (x_test, y_test) = mnist.load_data()\n", "x_train, x_test = x_train / 255.0, x_test / 255.0\n", "sample, sample_label = x_train[0], y_train[0]" ] },
{ "cell_type": "markdown", "metadata": { "id": "443e5458284f" }, "source": [ "Let's create a model instance and train it.\n", "\n", "We choose `sparse_categorical_crossentropy` as the loss function for the model. The output of the model has shape `[batch_size, 10]`. The target for the model is an integer vector, where each integer is in the range 0 to 9." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "f85b57b010e5", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "model = build_model(allow_cudnn_kernel=True)\n", "\n", "model.compile(\n", "    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n", "    optimizer=\"sgd\",\n", "    metrics=[\"accuracy\"],\n", ")\n", "\n", "\n", "model.fit(\n", "    x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1\n", ")" ] },
{ "cell_type": "markdown", "metadata": { "id": "99ea5495e375" }, "source": [ "Now, let's compare to a model that does not use the CuDNN kernel:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "d4bdff02e617", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "noncudnn_model = build_model(allow_cudnn_kernel=False)\n", "noncudnn_model.set_weights(model.get_weights())\n", "noncudnn_model.compile(\n", "    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n", "    optimizer=\"sgd\",\n", "    metrics=[\"accuracy\"],\n", ")\n", "noncudnn_model.fit(\n", "    x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1\n", ")" ] },
{ "cell_type": "markdown", "metadata": { "id": "90fc64fbd4ea" }, "source": [ "When running on a machine with an NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.\n", "\n", "The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The `tf.device` annotation below just forces the device placement. The model will run on CPU by default if no GPU is available.\n", "\n", "You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "7e33c62b6029", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "with tf.device(\"CPU:0\"):\n", "    cpu_model = build_model(allow_cudnn_kernel=True)\n", "    cpu_model.set_weights(model.get_weights())\n", "    result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)\n", "    print(\n", "        \"Predicted result is: %s, target result is: %s\" % (result.numpy(), sample_label)\n", "    )\n", "    plt.imshow(sample, cmap=plt.get_cmap(\"gray\"))" ] },
{ "cell_type": "markdown", "metadata": { "id": "2f940b73a2a6" }, "source": [ "## RNNs with list/dict inputs, or nested inputs\n", "\n", "Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be:\n", "\n", "`[batch, timestep, {\"video\": [height, width, channel], \"audio\": [frequency]}]`\n", "\n", "In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be:\n", "\n", "`[batch, timestep, {\"location\": [x, y], \"pressure\": [force]}]`\n", "\n", "The following code provides an example of how to build a custom RNN cell that accepts such structured inputs." ] },
{ "cell_type": "markdown", "metadata": { "id": "f78dc4c1c516" }, "source": [ "### Define a custom cell that supports nested input/output" ] },
{ "cell_type": "markdown", "metadata": { "id": "199faf57f0c5" }, "source": [ "See [Making new layers & models via subclassing](https://tensorflow.google.cn/guide/keras/custom_layers_and_models/) for details on writing your own layers." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "451cfd5f0cc4", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class NestedCell(keras.layers.Layer):\n", "    def __init__(self, unit_1, unit_2, unit_3, **kwargs):\n", "        self.unit_1 = unit_1\n", "        self.unit_2 = unit_2\n", "        self.unit_3 = unit_3\n", "        self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]\n", "        self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]\n", "        super(NestedCell, self).__init__(**kwargs)\n", "\n", "    def build(self, input_shapes):\n", "        # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]\n", "        i1 = input_shapes[0][1]\n", "        i2 = input_shapes[1][1]\n", "        i3 = input_shapes[1][2]\n", "\n", "        self.kernel_1 = self.add_weight(\n", "            shape=(i1, self.unit_1), initializer=\"uniform\", name=\"kernel_1\"\n", "        )\n", "        self.kernel_2_3 = self.add_weight(\n", "            shape=(i2, i3, self.unit_2, self.unit_3),\n", "            initializer=\"uniform\",\n", "            name=\"kernel_2_3\",\n", "        )\n", "\n", "    def call(self, inputs, states):\n", "        # inputs should be in [(batch, input_1), (batch, input_2, input_3)]\n", "        # state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]\n", "        input_1, input_2 = tf.nest.flatten(inputs)\n", "        s1, s2 = states\n", "\n", "        output_1 = tf.matmul(input_1, self.kernel_1)\n", "        output_2_3 = tf.einsum(\"bij,ijkl->bkl\", input_2, self.kernel_2_3)\n", "        state_1 = s1 + output_1\n", "        state_2_3 = s2 + output_2_3\n", "\n", "        output = (output_1, output_2_3)\n", "        new_states = (state_1, state_2_3)\n", "\n", "        return output, new_states\n", "\n", "    def get_config(self):\n", "        return {\"unit_1\": self.unit_1, \"unit_2\": self.unit_2, \"unit_3\": self.unit_3}\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "51355b4089d2" }, "source": [ "### Build a RNN model with nested input/output\n", "\n", "Let's build a Keras model that uses a `keras.layers.RNN` layer and the custom cell we just defined." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "b2eba7a248eb", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "unit_1 = 10\n", "unit_2 = 20\n", "unit_3 = 30\n", "\n", "i1 = 32\n", "i2 = 64\n", "i3 = 32\n", "batch_size = 64\n", "num_batches = 10\n", "timestep = 50\n", "\n", "cell = NestedCell(unit_1, unit_2, unit_3)\n", "rnn = keras.layers.RNN(cell)\n", "\n", "input_1 = keras.Input((None, i1))\n", "input_2 = keras.Input((None, i2, i3))\n", "\n", "outputs = rnn((input_1, input_2))\n", "\n", "model = keras.models.Model([input_1, input_2], outputs)\n", "\n", "model.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"accuracy\"])" ] },
{ "cell_type": "markdown", "metadata": { "id": "452a99c63b7c" }, "source": [ "### Train the model with randomly generated data\n", "\n", "Since there isn't a good candidate dataset for this model, we use random Numpy data for demonstration." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "3987993cb7be", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "input_1_data = np.random.random((batch_size * num_batches, timestep, i1))\n", "input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))\n", "target_1_data = np.random.random((batch_size * num_batches, unit_1))\n", "target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))\n", "input_data = [input_1_data, input_2_data]\n", "target_data = [target_1_data, target_2_data]\n", "\n", "model.fit(input_data, target_data, batch_size=batch_size)" ] },
{ "cell_type": "markdown", "metadata": { "id": "b51e87780b0f" }, "source": [ "With the Keras `keras.layers.RNN` layer, you are only expected to define the math logic for an individual step within the sequence, and the `keras.layers.RNN` layer will handle the sequence iteration for you. It's an incredibly powerful way to quickly prototype new kinds of RNNs (e.g. an LSTM variant).\n", "\n", "For more details, please visit the [API docs](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/RNN/)." ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "rnn.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }