{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "Tce3stUlHN0L" }, "outputs": [], "source": [ "##### Copyright 2022 The TensorFlow Compression Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "tuOe1ymfHZPu", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "qFdPvlXBOdUN" }, "source": [ "# 学习的数据压缩" ] }, { "cell_type": "markdown", "metadata": { "id": "MfBg1C5NB3X0" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
"View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook"
" ] }, { "cell_type": "markdown", "metadata": { "id": "xHxb-dlhMIzW" }, "source": [ "## 概述\n", "\n", "此笔记本展示了如何使用神经网络和 [TensorFlow Compression](https://github.com/tensorflow/compression) 进行有损数据压缩。\n", "\n", "有损压缩涉及在**速率**、编码样本所需的预期比特数以及**失真**、样本重建中的预期误差之间进行权衡。\n", "\n", "下面的示例使用类似自动编码器的模型来压缩来自 MNIST 数据集的图像。这种方式基于[端到端优化图像压缩](https://arxiv.org/abs/1611.01704)这篇论文。\n", "\n", "有关学习的数据压缩的更多背景信息,请参阅面向熟悉经典数据压缩的读者的[这篇论文](https://arxiv.org/abs/2007.03034),或者面向机器学习受众的[这份调查](https://arxiv.org/abs/2202.06533)。\n" ] }, { "cell_type": "markdown", "metadata": { "id": "MUXex9ctTuDB" }, "source": [ "## 安装\n", "\n", "通过 `pip` 安装 Tensorflow Compression。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "K489KsEgxuLI", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "%%bash\n", "# Installs the latest version of TFC compatible with the installed TF version.\n", "\n", "read MAJOR MINOR <<< \"$(pip show tensorflow | perl -p -0777 -e 's/.*Version: (\\d+)\\.(\\d+).*/\\1 \\2/sg')\"\n", "pip install \"tensorflow-compression<$MAJOR.$(($MINOR+1))\"\n" ] }, { "cell_type": "markdown", "metadata": { "id": "WfVAmHCVxpTS" }, "source": [ "导入库依赖项。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IqR2PQG4ZaZ0", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import tensorflow as tf\n", "import tensorflow_compression as tfc\n", "import tensorflow_datasets as tfds\n" ] }, { "cell_type": "markdown", "metadata": { "id": "wsncKT2iymgQ" }, "source": [ "## 定义训练器模型\n", "\n", "由于该模型类似于自动编码器,并且我们需要在训练和推断期间执行一组不同的功函数,设置与分类器略有不同。\n", "\n", "训练模型由三个部分组成:\n", "\n", "- **分析**(或编码器)转换,将图像转换为隐空间,\n", "- **合成**(或解码器)转换,从隐空间转换回图像空间,以及\n", "- **先验**和熵模型,对隐空间的边际概率进行建模。\n", "\n", "首先,定义转换:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8yZESLgW-vp1", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def make_analysis_transform(latent_dims):\n", " \"\"\"Creates the analysis (encoder) transform.\"\"\"\n", " return tf.keras.Sequential([\n", " tf.keras.layers.Conv2D(\n", " 20, 5, use_bias=True, strides=2, padding=\"same\",\n", " activation=\"leaky_relu\", name=\"conv_1\"),\n", " tf.keras.layers.Conv2D(\n", " 50, 5, use_bias=True, strides=2, padding=\"same\",\n", " activation=\"leaky_relu\", name=\"conv_2\"),\n", " tf.keras.layers.Flatten(),\n", " tf.keras.layers.Dense(\n", " 500, use_bias=True, activation=\"leaky_relu\", name=\"fc_1\"),\n", " tf.keras.layers.Dense(\n", " latent_dims, use_bias=True, activation=None, name=\"fc_2\"),\n", " ], name=\"analysis_transform\")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2sHdYBzF2xcu", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def make_synthesis_transform():\n", " \"\"\"Creates the synthesis (decoder) transform.\"\"\"\n", " return tf.keras.Sequential([\n", " tf.keras.layers.Dense(\n", " 500, use_bias=True, activation=\"leaky_relu\", name=\"fc_1\"),\n", " tf.keras.layers.Dense(\n", " 2450, use_bias=True, activation=\"leaky_relu\", name=\"fc_2\"),\n", " tf.keras.layers.Reshape((7, 7, 50)),\n", " tf.keras.layers.Conv2DTranspose(\n", " 20, 5, use_bias=True, strides=2, padding=\"same\",\n", " activation=\"leaky_relu\", name=\"conv_1\"),\n", " tf.keras.layers.Conv2DTranspose(\n", " 1, 5, use_bias=True, strides=2, padding=\"same\",\n", " activation=\"leaky_relu\", name=\"conv_2\"),\n", " ], name=\"synthesis_transform\")\n" ] }, { "cell_type": "markdown", "metadata": { "id": "lYC8tHhkxTlK" }, 
"source": [ "训练器拥有两个转换的实例,以及先验的参数。\n", "\n", "它的 `call` 方法设置为计算如下参数:\n", "\n", "- **速率**,估计表示该批次数字所需的位数,以及\n", "- **失真**,原始数字的像素与其重建之间的平均绝对差。\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ROn2DbzsBirI", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class MNISTCompressionTrainer(tf.keras.Model):\n", " \"\"\"Model that trains a compressor/decompressor for MNIST.\"\"\"\n", "\n", " def __init__(self, latent_dims):\n", " super().__init__()\n", " self.analysis_transform = make_analysis_transform(latent_dims)\n", " self.synthesis_transform = make_synthesis_transform()\n", " self.prior_log_scales = tf.Variable(tf.zeros((latent_dims,)))\n", "\n", " @property\n", " def prior(self):\n", " return tfc.NoisyLogistic(loc=0., scale=tf.exp(self.prior_log_scales))\n", "\n", " def call(self, x, training):\n", " \"\"\"Computes rate and distortion losses.\"\"\"\n", " # Ensure inputs are floats in the range (0, 1).\n", " x = tf.cast(x, self.compute_dtype) / 255.\n", " x = tf.reshape(x, (-1, 28, 28, 1))\n", "\n", " # Compute latent space representation y, perturb it and model its entropy,\n", " # then compute the reconstructed pixel-level representation x_hat.\n", " y = self.analysis_transform(x)\n", " entropy_model = tfc.ContinuousBatchedEntropyModel(\n", " self.prior, coding_rank=1, compression=False)\n", " y_tilde, rate = entropy_model(y, training=training)\n", " x_tilde = self.synthesis_transform(y_tilde)\n", "\n", " # Average number of bits per MNIST digit.\n", " rate = tf.reduce_mean(rate)\n", "\n", " # Mean absolute difference across pixels.\n", " distortion = tf.reduce_mean(abs(x - x_tilde))\n", "\n", " return dict(rate=rate, distortion=distortion)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "vEXbp9RV3kRX" }, "source": [ "### 计算速率和失真\n", "\n", "我们使用训练集中的一张图像逐步完成此操作。加载 MNIST 数据集进行训练和验证:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7FV99WTrIBen", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "training_dataset, validation_dataset = tfds.load(\n", " \"mnist\",\n", " split=[\"train\", \"test\"],\n", " shuffle_files=True,\n", " as_supervised=True,\n", " with_info=False,\n", ")\n" ] }, { "cell_type": "markdown", "metadata": { "id": "SwKgNTg_QfjH" }, "source": [ "接着提取一张图像 $x$:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "O-BSdeHcPBBf", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "(x, _), = validation_dataset.take(1)\n", "\n", "plt.imshow(tf.squeeze(x))\n", "print(f\"Data type: {x.dtype}\")\n", "print(f\"Shape: {x.shape}\")\n" ] }, { "cell_type": "markdown", "metadata": { "id": "V8IvuFkrRJIa" }, "source": [ "要获得隐空间表示 $y$,我们需要将其转换为 `float32`,添加一个批次维度,并将其传递给分析转换。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jA0DOWq23lEq", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "x = tf.cast(x, tf.float32) / 255.\n", "x = tf.reshape(x, (-1, 28, 28, 1))\n", "y = make_analysis_transform(10)(x)\n", "\n", "print(\"y:\", y)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "rTojJQvZT8SX" }, "source": [ "隐空间将在测试时被量化。为了在训练期间以可微的方式对此进行建模,我们在区间 $(-.5, .5)$ 中添加均匀噪声,并将结果称为 $\\tilde y$。这与论文[端到端优化图像压缩](https://arxiv.org/abs/1611.01704)中使用的术语相同。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Spr3503OUOFQ", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "y_tilde = y + tf.random.uniform(y.shape, -.5, .5)\n", "\n", "print(\"y_tilde:\", y_tilde)\n" ] }, { "cell_type": 
"markdown", "metadata": { "id": "7hRN89R7SA3U" }, "source": [ "“先验”是一个概率密度,我们训练它来模拟噪声隐空间的边缘分布。例如,它可以是一组独立的[逻辑分布](https://en.wikipedia.org/wiki/Logistic_distribution),每个隐空间维度具有不同的尺度。`tfc.NoisyLogistic` 说明了隐空间具有加性噪声的事实。随着尺度接近零,逻辑分布接近狄拉克增量(尖峰),但添加的噪声导致“嘈杂”分布改为接近均匀分布。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2tmA1Bw7ReMY", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "prior = tfc.NoisyLogistic(loc=0., scale=tf.linspace(.01, 2., 10))\n", "\n", "_ = tf.linspace(-6., 6., 501)[:, None]\n", "plt.plot(_, prior.prob(_));\n" ] }, { "cell_type": "markdown", "metadata": { "id": "2NSWtBZmUvVY" }, "source": [ "在训练期间,`tfc.ContinuousBatchedEntropyModel` 会添加均匀噪声,并使用噪声和先验来计算速率的(可微分)上限(编码隐空间表示所需的平均位数)。此界限可以作为损失最小化。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hFuGlyJuThBC", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "entropy_model = tfc.ContinuousBatchedEntropyModel(\n", " prior, coding_rank=1, compression=False)\n", "y_tilde, rate = entropy_model(y, training=True)\n", "\n", "print(\"rate:\", rate)\n", "print(\"y_tilde:\", y_tilde)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "Cyr8DGgmWd32" }, "source": [ "最后,噪声隐空间通过合成转换向回传递以产生图像重建 $\\tilde x$。失真是原始图像与重建之间的误差。显然,使用未训练的转换时,重建不太有用。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "gtmI0xGEVym0", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "x_tilde = make_synthesis_transform()(y_tilde)\n", "\n", "# Mean absolute difference across pixels.\n", "distortion = tf.reduce_mean(abs(x - x_tilde))\n", "print(\"distortion:\", distortion)\n", "\n", "x_tilde = tf.saturate_cast(x_tilde[0] * 255, tf.uint8)\n", "plt.imshow(tf.squeeze(x_tilde))\n", "print(f\"Data type: {x_tilde.dtype}\")\n", "print(f\"Shape: {x_tilde.shape}\")\n" ] }, { "cell_type": "markdown", "metadata": { "id": "UVz3I7E8ecij" }, "source": [ "对于每个批次的数字,调用 `MNISTCompressionTrainer` 会产生该批次的平均速率和失真:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ICJnjj1LeB8L", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "(example_batch, _), = validation_dataset.batch(32).take(1)\n", "trainer = MNISTCompressionTrainer(10)\n", "example_output = trainer(example_batch)\n", "\n", "print(\"rate: \", example_output[\"rate\"])\n", "print(\"distortion: \", example_output[\"distortion\"])\n" ] }, { "cell_type": "markdown", "metadata": { "id": "lgdfRtmee5Mn" }, "source": [ "在下一部分中,我们建立模型来对这两个损失执行梯度下降。" ] }, { "cell_type": "markdown", "metadata": { "id": "fKGVwv5MAq6w" }, "source": [ "## 训练模型\n", "\n", "我们以优化速率–失真拉格朗日的方式编译训练器,即速率和失真的总和,其中一项由拉格朗日参数 $\\lambda$ 加权。\n", "\n", "此损失函数对模型的不同部分有着不同的影响:\n", "\n", "- 对分析转换进行训练以产生隐空间表示,该表示会在速率和失真之间实现所需的权衡。\n", "- 给定隐空间表示,训练合成转换以将失真最小化。\n", "- 训练先验参数以将给定隐空间表示的速率最小化。这与在最大似然意义上拟合隐空间的边缘分布的先验相同。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "k5mm1aDkcgAf", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def pass_through_loss(_, x):\n", " # Since rate and distortion are unsupervised, the loss doesn't need a target.\n", " return x\n", "\n", "def make_mnist_compression_trainer(lmbda, latent_dims=50):\n", " trainer = MNISTCompressionTrainer(latent_dims)\n", " trainer.compile(\n", " optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),\n", " # Just pass through rate and distortion as losses/metrics.\n", " loss=dict(rate=pass_through_loss, distortion=pass_through_loss),\n", " metrics=dict(rate=pass_through_loss, distortion=pass_through_loss),\n", " 
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "k5mm1aDkcgAf", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def pass_through_loss(_, x):\n", "  # Since rate and distortion are unsupervised, the loss doesn't need a target.\n", "  return x\n", "\n", "def make_mnist_compression_trainer(lmbda, latent_dims=50):\n", "  trainer = MNISTCompressionTrainer(latent_dims)\n", "  trainer.compile(\n", "      optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),\n", "      # Just pass through rate and distortion as losses/metrics.\n", "      loss=dict(rate=pass_through_loss, distortion=pass_through_loss),\n", "      metrics=dict(rate=pass_through_loss, distortion=pass_through_loss),\n", "      loss_weights=dict(rate=1., distortion=lmbda),\n", "  )\n", "  return trainer\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "DPwd4DTs3Mfr" }, "source": [ "Next, train the model. The human annotations are not necessary here, since we just want to compress the images, so we drop them using a `map` and instead add \"dummy\" targets for rate and distortion." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "QNBpCTgzAV7M", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def add_rd_targets(image, label):\n", "  # Training is unsupervised, so labels aren't necessary here. However, we\n", "  # need to add \"dummy\" targets for rate and distortion.\n", "  return image, dict(rate=0., distortion=0.)\n", "\n", "def train_mnist_model(lmbda):\n", "  trainer = make_mnist_compression_trainer(lmbda)\n", "  trainer.fit(\n", "      training_dataset.map(add_rd_targets).batch(128).prefetch(8),\n", "      epochs=15,\n", "      validation_data=validation_dataset.map(add_rd_targets).batch(128).cache(),\n", "      validation_freq=1,\n", "      verbose=1,\n", "  )\n", "  return trainer\n", "\n", "trainer = train_mnist_model(lmbda=2000)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "Td4xuttmCd7T" }, "source": [ "## Compress some MNIST images\n", "\n", "For compression and decompression at test time, we split the trained model in two parts:\n", "\n", "- The encoder side consists of the analysis transform and the entropy model.\n", "- The decoder side consists of the synthesis transform and the same entropy model.\n", "\n", "At test time, the latents will not have additive noise, but they will be quantized and then losslessly compressed, so we give them new names. We call them and the image reconstruction $\\hat y$ and $\\hat x$, respectively (following [End-to-end Optimized Image Compression](https://arxiv.org/abs/1611.01704))." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "sBRAPa5jksss", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class MNISTCompressor(tf.keras.Model):\n", "  \"\"\"Compresses MNIST images to strings.\"\"\"\n", "\n", "  def __init__(self, analysis_transform, entropy_model):\n", "    super().__init__()\n", "    self.analysis_transform = analysis_transform\n", "    self.entropy_model = entropy_model\n", "\n", "  def call(self, x):\n", "    # Ensure inputs are floats in the range (0, 1).\n", "    x = tf.cast(x, self.compute_dtype) / 255.\n", "    y = self.analysis_transform(x)\n", "    # Also return the exact information content of each digit.\n", "    _, bits = self.entropy_model(y, training=False)\n", "    return self.entropy_model.compress(y), bits\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "sSZ0X2xPnkN-", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class MNISTDecompressor(tf.keras.Model):\n", "  \"\"\"Decompresses MNIST images from strings.\"\"\"\n", "\n", "  def __init__(self, entropy_model, synthesis_transform):\n", "    super().__init__()\n", "    self.entropy_model = entropy_model\n", "    self.synthesis_transform = synthesis_transform\n", "\n", "  def call(self, string):\n", "    y_hat = self.entropy_model.decompress(string, ())\n", "    x_hat = self.synthesis_transform(y_hat)\n", "    # Scale and cast back to 8-bit integer.\n", "    return tf.saturate_cast(tf.round(x_hat * 255.), tf.uint8)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "GI7rxeOUDnaC" }, "source": [ "When instantiated with `compression=True`, the entropy model converts the learned prior into tables for a range coding algorithm. When calling `compress()`, this algorithm is invoked to convert the latent space vector into bit sequences. The length of each binary string approximates the information content of the latent (the negative log likelihood of the latent under the prior).\n", "\n", "The entropy model for compression and decompression must be the same instance, because the range coding tables need to be exactly identical on both sides. Otherwise, decoding errors can occur." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "Dnm_p7mbnigo", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def make_mnist_codec(trainer, **kwargs):\n", "  # The entropy model must be created with `compression=True` and the same\n", "  # instance must be shared between compressor and decompressor.\n", "  entropy_model = tfc.ContinuousBatchedEntropyModel(\n", "      trainer.prior, coding_rank=1, compression=True, **kwargs)\n", "  compressor = MNISTCompressor(trainer.analysis_transform, entropy_model)\n", "  decompressor = MNISTDecompressor(entropy_model, trainer.synthesis_transform)\n", "  return compressor, decompressor\n", "\n", "compressor, decompressor = make_mnist_codec(trainer)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "SYu5sVVH3YMv" }, "source": [ "Grab 16 images from the validation dataset. You can select a different subset by changing the argument to `skip`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "qAxArlU728K5", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "(originals, _), = validation_dataset.batch(16).skip(3).take(1)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "CHeN_ny929YS" }, "source": [ "Compress them to strings, and keep track of each of their information content in bits." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "smOk42gQ3IXv", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "strings, entropies = compressor(originals)\n", "\n", "print(f\"String representation of first digit in hexadecimal: 0x{strings[0].numpy().hex()}\")\n", "print(f\"Number of bits actually needed to represent it: {entropies[0]:0.2f}\")\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "5j9R4bTT3Qhl" }, "source": [ "Decompress the images back from the strings." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "yOP6pEqU3P0w", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "reconstructions = decompressor(strings)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "JWo0Q-vy23tt" }, "source": [ "Display each of the 16 original digits together with its compressed binary representation, and the reconstructed digit." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "jU5IqzZzeEpf", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "#@title\n", "\n", "def display_digits(originals, strings, entropies, reconstructions):\n", "  \"\"\"Visualizes 16 digits together with their reconstructions.\"\"\"\n", "  fig, axes = plt.subplots(4, 4, sharex=True, sharey=True, figsize=(12.5, 5))\n", "  axes = axes.ravel()\n", "  for i in range(len(axes)):\n", "    image = tf.concat([\n", "        tf.squeeze(originals[i]),\n", "        tf.zeros((28, 14), tf.uint8),\n", "        tf.squeeze(reconstructions[i]),\n", "    ], 1)\n", "    axes[i].imshow(image)\n", "    axes[i].text(\n", "        .5, .5, f\"→ 0x{strings[i].numpy().hex()} →\\n{entropies[i]:0.2f} bits\",\n", "        ha=\"center\", va=\"top\", color=\"white\", fontsize=\"small\",\n", "        transform=axes[i].transAxes)\n", "    axes[i].axis(\"off\")\n", "  plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "km9PqVEtPJPc", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "display_digits(originals, strings, entropies, reconstructions)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "EzlrIOiYOzJc" }, "source": [ "Note that the length of the encoded string differs from the information content of each digit.\n", "\n", "This is because the range coding process works with discretized probabilities and has a small amount of overhead. Therefore, especially for short strings, the correspondence is only approximate. However, range coding is **asymptotically optimal**: in the limit, the expected bit count will approach the cross entropy (the expected information content), for which the rate term in the trained model is an upper bound." ] },
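{ "cell_type": "markdown", "metadata": {}, "source": [ "To see this overhead directly, we can compare the length of each encoded string with the information content estimated by the entropy model (reusing the `strings` and `entropies` computed above):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Each encoded string is a whole number of bytes, and the range coder adds a\n", "# small amount of overhead on top of the information content.\n", "for string, entropy in zip(strings[:4], entropies[:4]):\n", "  print(f\"encoded: {8 * len(string.numpy()):3d} bits, \"\n", "        f\"information content: {entropy:0.2f} bits\")\n" ] },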
{ "cell_type": "markdown", "metadata": { "id": "78qIG8t8FvJW" }, "source": [ "## Rate–distortion trade-off\n", "\n", "Above, the model was trained for a specific trade-off (given by `lmbda=2000`) between the average number of bits used to represent each digit and the error incurred in the reconstruction.\n", "\n", "What happens when we repeat the experiment with different values?\n", "\n", "Let's start by reducing $\\lambda$ to 500." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "1iFcAD0WF78p", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def train_and_visualize_model(lmbda):\n", "  trainer = train_mnist_model(lmbda=lmbda)\n", "  compressor, decompressor = make_mnist_codec(trainer)\n", "  strings, entropies = compressor(originals)\n", "  reconstructions = decompressor(strings)\n", "  display_digits(originals, strings, entropies, reconstructions)\n", "\n", "train_and_visualize_model(lmbda=500)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "Uy5OkgJMObMc" }, "source": [ "The bit rate of our code goes down, and so does the fidelity of the digits. However, most of the digits remain recognizable.\n", "\n", "Let's reduce $\\lambda$ further." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "NQp9_9_5GcxH", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "train_and_visualize_model(lmbda=300)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "3ELLMAN1OwMQ" }, "source": [ "The strings begin to get much shorter now, on the order of one byte per digit. However, this comes at a cost: more and more digits become unrecognizable.\n", "\n", "This demonstrates that the model is agnostic to human perception of error; it only measures the absolute deviation in pixel values. To achieve a better perceived image quality, we would need to replace the pixel loss with a perceptual loss." ] },
{ "cell_type": "markdown", "metadata": { "id": "v9cWHtH0LP_r" }, "source": [ "## Use the decoder as a generative model\n", "\n", "If we feed the decoder random bits, this will effectively sample from the distribution that the model learned to represent digits.\n", "\n", "First, re-instantiate the compressor/decompressor without the sanity check that would detect if the input string isn't completely decoded." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "qnic8YsM0_ke", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "compressor, decompressor = make_mnist_codec(trainer, decode_sanity_check=False)\n" ] },
{ "cell_type": "markdown", "metadata": { "id": "86uc9_Is1eeo" }, "source": [ "Now, feed strings of sufficient length into the decompressor so that it can decode/sample digits from them." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "o4fP7BkqKCHY", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import os\n", "\n", "strings = tf.constant([os.urandom(8) for _ in range(16)])\n", "samples = decompressor(strings)\n", "\n", "fig, axes = plt.subplots(4, 4, sharex=True, sharey=True, figsize=(5, 5))\n", "axes = axes.ravel()\n", "for i in range(len(axes)):\n", "  axes[i].imshow(tf.squeeze(samples[i]))\n", "  axes[i].axis(\"off\")\n", "plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)\n" ] }
], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "data_compression.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }