{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "GE91qWZkm8ZQ" }, "outputs": [], "source": [ "##### Copyright 2019 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "YS3NA-i6nAFC", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "7SN5USFEIIK3" }, "source": [ "# 单词嵌入向量" ] }, { "cell_type": "markdown", "metadata": { "id": "Aojnnc7sXrab" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
在 TensorFlow.org 上查看 在 Google Colab 中运行 在 GitHub 上查看源代码下载笔记本
" ] }, { "cell_type": "markdown", "metadata": { "id": "Q6mJg1g3apaz" }, "source": [ "本教程将介绍单词嵌入向量。包含完整的代码,可在小型数据集上从头开始训练单词嵌入向量,并使用 [Embedding Projector](http://projector.tensorflow.org)(如下图所示)可视化这些嵌入向量。\n", "\n", "\"Screenshot\n", "\n", "## 用数字表示文本\n", "\n", "机器学习模型将向量(数字数组)作为输入。在处理文本时,我们必须先想出一种策略,将字符串转换为数字(或将文本“向量化”),然后再其馈入模型。在本部分中,我们将探究实现这一目标的三种策略。\n", "\n", "### 独热编码\n", "\n", "作为第一个想法,我们可以对词汇表中的每个单词进行“独热”编码。考虑这样一句话:“The cat sat on the mat”。这句话中的词汇(或唯一单词)是(cat、mat、on、sat、the)。为了表示每个单词,我们将创建一个长度等于词汇量的零向量,然后在与该单词对应的索引中放置一个 1。下图显示了这种方法。\n", "\n", "\"Diagram\n", "\n", "为了创建一个包含句子编码的向量,我们可以将每个单词的独热向量连接起来。\n", "\n", "要点:这种方法效率低下。一个独热编码向量十分稀疏(这意味着大多数索引为零)。假设我们的词汇表中有 10,000 个单词。为了对每个单词进行独热编码,我们将创建一个其中 99.99% 的元素都为零的向量。\n", "\n", "### 用一个唯一的数字编码每个单词\n", "\n", "我们可以尝试的第二种方法是使用唯一的数字来编码每个单词。继续上面的示例,我们可以将 1 分配给“cat”,将 2 分配给“mat”,依此类推。然后,我们可以将句子“The cat sat on the mat”编码为一个密集向量,例如 [5, 1, 4, 3, 5, 2]。这种方法是高效的。现在,我们有了一个密集向量(所有元素均已满),而不是稀疏向量。\n", "\n", "但是,这种方法有两个缺点:\n", "\n", "- 整数编码是任意的(它不会捕获单词之间的任何关系)。\n", "\n", "- 对于要解释的模型而言,整数编码颇具挑战。例如,线性分类器针对每个特征学习一个权重。由于任何两个单词的相似性与其编码的相似性之间都没有关系,因此这种特征权重组合没有意义。\n", "\n", "### 单词嵌入向量\n", "\n", "单词嵌入向量为我们提供了一种使用高效、密集表示的方法,其中相似的单词具有相似的编码。重要的是,我们不必手动指定此编码。嵌入向量是浮点值的密集向量(向量的长度是您指定的参数)。它们是可以训练的参数(模型在训练过程中学习的权重,与模型学习密集层权重的方法相同),无需手动为嵌入向量指定值。8 维的单词嵌入向量(对于小型数据集)比较常见,而在处理大型数据集时最多可达 1024 维。维度更高的嵌入向量可以捕获单词之间的细粒度关系,但需要更多的数据来学习。\n", "\n", "\"Diagram\n", "\n", "上面是一个单词嵌入向量的示意图。每个单词都表示为浮点值的 4 维向量。还可以将嵌入向量视为“查找表”。学习完这些权重后,我们可以通过在表中查找对应的密集向量来编码每个单词。" ] }, { "cell_type": "markdown", "metadata": { "id": "SZUQErGewZxE" }, "source": [ "## 设置" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SIXEk5ON5P7h", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import tensorflow as tf" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "RutaI-Tpev3T", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "from tensorflow import keras\n", "from tensorflow.keras import layers\n", "\n", "import tensorflow_datasets as tfds\n", "tfds.disable_progress_bar()" ] }, { "cell_type": "markdown", "metadata": { "id": "eqBazMiVQkj1" }, "source": [ "## 使用嵌入向量层\n", "\n", "Keras 让使用单词嵌入向量变得轻而易举。我们来看一下[嵌入向量](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/Embedding)层。\n", "\n", "可以将嵌入向量层理解为一个从整数索引(代表特定单词)映射到密集向量(其嵌入向量)的查找表。嵌入向量的维数(或宽度)是一个参数,您可以试验它的数值,以了解多少维度适合您的问题,这与您试验密集层中神经元数量的方式非常相似。\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-OjxLVrMvWUE", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "embedding_layer = layers.Embedding(1000, 5)" ] }, { "cell_type": "markdown", "metadata": { "id": "2dKKV1L2Rk7e" }, "source": [ "创建嵌入向量层时,嵌入向量的权重会随机初始化(就像其他任何层一样)。在训练过程中,通过反向传播来逐渐调整这些权重。训练后,学习到的单词嵌入向量将粗略地编码单词之间的相似性(因为它们是针对训练模型的特定问题而学习的)。\n", "\n", "如果将整数传递给嵌入向量层,结果会将每个整数替换为嵌入向量表中的向量:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0YUjPgP7w0PO", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "result = embedding_layer(tf.constant([1,2,3]))\n", "result.numpy()" ] }, { "cell_type": "markdown", "metadata": { "id": "O4PC4QzsxTGx" }, "source": [ "对于文本或序列问题,嵌入向量层采用整数组成的 2D 张量,其形状为 `(samples, sequence_length)`,其中每个条目都是一个整数序列。它可以嵌入可变长度的序列。您可以在形状为 `(32, 10)`(32 个长度为 10 的序列组成的批次)或 `(64, 15)`(64 个长度为 15 的序列组成的批次)的批次上方馈入嵌入向量层。\n", "\n", "返回的张量比输入多一个轴,嵌入向量沿新的最后一个轴对齐。向其传递 `(2, 3)` 输入批次,输出为 `(2, 3, N)`\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vwSYepRjyRGy", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))\n", "result.shape" ] }, { "cell_type": "markdown", "metadata": { "id": "WGQp2N92yOyB" }, "source": [ "当给定一个序列批次作为输入时,嵌入向量层将返回形状为 `(samples, sequence_length, embedding_dimensionality)` 的 3D 浮点张量。为了从可变长度的序列转换为固定表示,有多种标准方法。您可以先使用 RNN、注意力或池化层,然后再将其传递给密集层。本教程使用池化,因为它最简单。接下来,学习[使用 RNN 进行文本分类](text_classification_rnn.ipynb)教程是一个不错的选择。" ] }, { "cell_type": "markdown", "metadata": { "id": "aGicgV5qT0wh" }, "source": [ "## 从头开始学习嵌入向量" ] }, { "cell_type": "markdown", "metadata": { "id": "_Bh8B1TUT6mV" }, "source": [ "在本教程中,您将基于 IMDB 电影评论来训练情感分类器。在此过程中,模型将从头开始学习嵌入向量。我们将使用经过预处理的数据集。\n", "\n", "要从头开始加载文本数据集,请参阅[加载文本教程](../load_data/text.ipynb)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yg6tyxPtp1TE", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "(train_data, test_data), info = tfds.load(\n", " 'imdb_reviews/subwords8k', \n", " split = (tfds.Split.TRAIN, tfds.Split.TEST), \n", " with_info=True, as_supervised=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "jjnBsFXaLVPL" }, "source": [ "获取编码器 (`tfds.features.text.SubwordTextEncoder`),并快速浏览词汇表。\n", "\n", "词汇表中的“_”代表空格。请注意词汇表如何包含完整单词(以“_”结尾)以及可用于构建更大单词的部分单词:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MYrsTgxhLBfl", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "encoder = info.features['text'].encoder\n", "encoder.subwords[:20]" ] }, { "cell_type": "markdown", "metadata": { "id": "GwCTfSG63Qth" }, "source": [ "电影评论的长度可以不同。我们将使用 `padded_batch` 方法来标准化评论的长度。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LwSCxER_2Lef", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "train_batches = train_data.shuffle(1000).padded_batch(10)\n", "test_batches = test_data.shuffle(1000).padded_batch(10)" ] }, { "cell_type": "markdown", "metadata": { "id": "dF8ORMt2U9lj" }, "source": [ "导入时,评论的文本是整数编码的(每个整数代表词汇表中的特定单词或单词部分)。\n", "\n", "请注意尾随零,因为批次会填充为最长的示例。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Se-phCknsoan", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "train_batch, train_labels = next(iter(train_batches))\n", "train_batch.numpy()" ] }, { "cell_type": "markdown", "metadata": { "id": "zI9_wLIiWO8Z" }, "source": [ "### 创建一个简单模型\n", "\n", "我们将使用 [Keras 序列式 API](../../guide/keras) 定义模型。在这种情况下,它是一个“连续词袋”样式的模型。\n", "\n", "- 接下来,嵌入向量层将采用整数编码的词汇表,并查找每个单词索引的嵌入向量。在模型训练时会学习这些向量。向量会向输出数组添加维度。得到的维度为:`(batch, sequence, embedding)`。\n", "\n", "- 接下来,通过对序列维度求平均值,GlobalAveragePooling1D 层会返回每个样本的固定长度输出向量。这让模型能够以最简单的方式处理可变长度的输入。\n", "\n", "- 此固定长度输出向量通过一个包含 16 个隐藏单元的完全连接(密集)层进行流水线传输。\n", "\n", "- 最后一层与单个输出节点密集连接。利用 Sigmoid 激活函数,得出此值是 0 到 1 之间的浮点数,表示评论为正面的概率(或置信度)。\n", "\n", "小心:此模型不使用遮盖,而是使用零填充作为输入的一部分,因此填充长度可能会影响输出。要解决此问题,请参阅[遮盖和填充指南](../../guide/keras/masking_and_padding)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pHLcFtn5Wsqj", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "embedding_dim=16\n", "\n", "model = keras.Sequential([\n", " layers.Embedding(encoder.vocab_size, embedding_dim),\n", " layers.GlobalAveragePooling1D(),\n", " layers.Dense(16, activation='relu'),\n", " layers.Dense(1)\n", "])\n", "\n", "model.summary()" ] }, { "cell_type": "markdown", "metadata": { "id": "JjLNgKO7W2fe" }, "source": [ "### 编译和训练模型" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lCUgdP69Wzix", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "model.compile(optimizer='adam',\n", " loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n", " metrics=['accuracy'])\n", "\n", "history = model.fit(\n", " train_batches,\n", " epochs=10,\n", " validation_data=test_batches, validation_steps=20)" ] }, { "cell_type": "markdown", "metadata": { "id": "LQjpKVYTXU-1" }, "source": [ "通过这种方法,我们的模型可以达到约 88% 的验证准确率(请注意,该模型过度拟合,因此训练准确率要高得多)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0D3OTmOT1z1O", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "history_dict = history.history\n", "\n", "acc = history_dict['accuracy']\n", "val_acc = history_dict['val_accuracy']\n", "loss=history_dict['loss']\n", "val_loss=history_dict['val_loss']\n", "\n", "epochs = range(1, len(acc) + 1)\n", "\n", "plt.figure(figsize=(12,9))\n", "plt.plot(epochs, loss, 'bo', label='Training loss')\n", "plt.plot(epochs, val_loss, 'b', label='Validation loss')\n", "plt.title('Training and validation loss')\n", "plt.xlabel('Epochs')\n", "plt.ylabel('Loss')\n", "plt.legend()\n", "plt.show()\n", "\n", "plt.figure(figsize=(12,9))\n", "plt.plot(epochs, acc, 'bo', label='Training acc')\n", "plt.plot(epochs, val_acc, 'b', label='Validation acc')\n", "plt.title('Training and validation accuracy')\n", "plt.xlabel('Epochs')\n", "plt.ylabel('Accuracy')\n", "plt.legend(loc='lower right')\n", "plt.ylim((0.5,1))\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "id": "KCoA6qwqP836" }, "source": [ "## 检索学习的嵌入向量\n", "\n", "接下来,我们检索在训练期间学习的单词嵌入向量。这将是一个形状为 `(vocab_size, embedding-dimension)` 的矩阵。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "t8WwbsXCXtpa", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "e = model.layers[0]\n", "weights = e.get_weights()[0]\n", "print(weights.shape) # shape: (vocab_size, embedding_dim)" ] }, { "cell_type": "markdown", "metadata": { "id": "J8MiCA77X8B8" }, "source": [ "现在,我们将权重写入磁盘。要使用 [Embedding Projector](http://projector.tensorflow.org),我们将以制表符分隔的格式上传两个文件:一个向量文件(包含嵌入向量)和一个元数据文件(包含单词)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GsjempweP9Lq", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import io\n", "\n", "encoder = info.features['text'].encoder\n", "\n", "out_v = io.open('vecs.tsv', 'w', encoding='utf-8')\n", "out_m = io.open('meta.tsv', 'w', encoding='utf-8')\n", "\n", "for num, word in enumerate(encoder.subwords):\n", " vec = weights[num+1] # skip 0, it's padding.\n", " out_m.write(word + \"\\n\")\n", " out_v.write('\\t'.join([str(x) for x in vec]) + \"\\n\")\n", "out_v.close()\n", "out_m.close()" ] }, { "cell_type": "markdown", "metadata": { "id": "JQyMZWyxYjMr" }, "source": [ "如果您正在 [Colaboratory](https://colab.research.google.com) 中运行本教程,则可以使用以下代码段将这些文件下载到本地计算机上(或使用文件浏览器,*View -> Table of contents -> File browser*)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-gFbbMmvYvhp", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "try:\n", " from google.colab import files\n", "except ImportError:\n", " pass\n", "else:\n", " files.download('vecs.tsv')\n", " files.download('meta.tsv')" ] }, { "cell_type": "markdown", "metadata": { "id": "PXLfFA54Yz-o" }, "source": [ "## 可视化嵌入向量\n", "\n", "为了可视化嵌入向量,我们将它们上传到 Embedding Projector。\n", "\n", "打开 [Embedding Projector](http://projector.tensorflow.org/)(也可以在本地 TensorBoard 实例中运行)。\n", "\n", "- 点击“Load data”。\n", "\n", "- 上传我们在上面创建的两个文件:`vecs.tsv` 和 `meta.tsv`。\n", "\n", "现在将显示您已训练的嵌入向量。您可以搜索单词以查找其最邻近。例如,尝试搜索“beautiful”,您可能会看到“wonderful”等相邻单词。\n", "\n", "注:您的结果可能会略有不同,具体取决于训练嵌入向量层之前如何随机初始化权重。\n", "\n", "注:您可以试验性地使用更简单的模型来生成更多可解释的嵌入向量。尝试删除 `Dense(16)` 层,重新训练模型,然后再次可视化嵌入向量。\n", "\n", "\"Screenshot\n" ] }, { "cell_type": "markdown", "metadata": { "id": "iS_uMeMw3Xpj" }, "source": [ "## 后续步骤\n" ] }, { "cell_type": "markdown", "metadata": { "id": "BSgAZpwF5xF_" }, "source": [ "本教程向您展示了如何在小数据集上从头开始训练和可视化单词嵌入向量。\n", "\n", "- 要了解循环网络,请参阅 [Keras RNN 指南](../../guide/keras/rnn.ipynb)。\n", "\n", "- 要详细了解文本分类(包括整个工作流,以及如果您对何时使用嵌入向量还是独热编码感到好奇),我们建议您阅读这篇实用的文本分类[指南](https://developers.google.com/machine-learning/guides/text-classification/step-2-5)。" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "word_embeddings.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }