{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "1SgrstLXNbG_" }, "outputs": [], "source": [ "##### Copyright 2019 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "k7gifg92NbG9", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "dCMqzy7BNbG9" }, "source": [ "# DeepDream" ] }, { "cell_type": "markdown", "metadata": { "id": "2yqCPS8SNbG8" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook" ] },
{ "cell_type": "markdown", "metadata": { "id": "XPDKhwPcNbG7" }, "source": [
"This tutorial contains a minimal implementation of DeepDream, as described in this [blog post](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html) by Alexander Mordvintsev.\n", "\n",
"DeepDream is an experiment that visualizes the patterns learned by a neural network. Similar to when a child watches clouds and tries to interpret random shapes, DeepDream over-interprets and enhances the patterns it sees in an image.\n", "\n",
"It does so by forwarding an image through the network, then calculating the gradient of the image with respect to the activations of a particular layer. The image is then modified to increase these activations, enhancing the patterns seen by the network, and resulting in a dream-like image. This process was dubbed \"Inceptionism\" (a reference to [InceptionNet](https://arxiv.org/pdf/1409.4842.pdf), and the [movie](https://en.wikipedia.org/wiki/Inception) Inception).\n", "\n",
"Let's demonstrate how you can make a neural network \"dream\" and enhance the surreal patterns it sees in an image.\n", "\n",
"![Dogception](https://tensorflow.google.cn/tutorials/generative/images/dogception.png)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "Sc5Yq_Rgxreb", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import tensorflow as tf" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "g_Qp173_NbG5", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import numpy as np\n", "\n", "import matplotlib as mpl\n", "\n", "import IPython.display as display\n", "import PIL.Image" ] },
{ "cell_type": "markdown", "metadata": { "id": "wgeIJg82NbG4" }, "source": [ "## Choose an image to dream-ify" ] },
{ "cell_type": "markdown", "metadata": { "id": "yt6zam_9NbG4" }, "source": [ "For this tutorial, let's use an image of a [labrador](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "0lclzk9sNbG2", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "url = 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg'" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "Y5BPgc8NNbG0", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Download an image and read it into a NumPy array.\n", "def download(url, max_dim=None):\n", "  name = url.split('/')[-1]\n", "  image_path = tf.keras.utils.get_file(name, origin=url)\n", "  img = PIL.Image.open(image_path)\n", "  if max_dim:\n", "    img.thumbnail((max_dim, max_dim))\n", "  return np.array(img)\n", "\n", "# Normalize an image\n", "def deprocess(img):\n", "  img = 255*(img + 1.0)/2.0\n", "  return tf.cast(img, tf.uint8)\n", "\n", "# Display an image\n", "def show(img):\n", "  display.display(PIL.Image.fromarray(np.array(img)))\n", "\n", "\n", "# Downsizing the image makes it easier to work with.\n", "original_img = download(url, max_dim=500)\n", "show(original_img)\n", "display.display(display.HTML('Image cc-by: Von.grzanka'))" ] },
{ "cell_type": "markdown", "metadata": { "id": "F4RBFfIWNbG0" }, "source": [ "## Prepare the feature extraction model" ] },
{ "cell_type": "markdown", "metadata": { "id": "cruNQmMDNbGz" }, "source": [ "Download and prepare a pre-trained image classification model. You will use [InceptionV3](https://keras.io/applications/#inceptionv3), which is similar to the model originally used in DeepDream. Note that any [pre-trained model](https://keras.io/applications/#models-for-image-classification-with-weights-trained-on-imagenet) will work, although you will have to adjust the layer names below if you change this." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "GlLi48GKNbGy", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "base_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')" ] },
{ "cell_type": "markdown", "metadata": { "id": "Bujb0jPNNbGx" }, "source": [ "The idea in DeepDream is to choose a layer (or layers) and maximize the \"loss\" in a way that the image increasingly \"excites\" the layers. The complexity of the features incorporated depends on the layers you choose: lower layers produce strokes or simple patterns, while deeper layers give sophisticated features in images, or even whole objects." ] },
{ "cell_type": "markdown", "metadata": { "id": "qOVmDO4LNbGv" }, "source": [ "The InceptionV3 architecture is quite large (for a graph of the model architecture see TensorFlow's [research repo](https://github.com/tensorflow/models/tree/master/research/inception)). For DeepDream, the layers of interest are those where the convolutions are concatenated. There are 11 of these layers in InceptionV3, named 'mixed0' through 'mixed10'. Using different layers will result in different dream-like images. Deeper layers respond to higher-level features (such as eyes and faces), while earlier layers respond to simpler features (such as edges, shapes, and textures). Feel free to experiment with the layers selected below, but keep in mind that deeper layers (those with a higher index) will take longer to train on since the gradient computation is deeper." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "08KB502ONbGt", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Maximize the activations of these layers\n", "names = ['mixed3', 'mixed5']\n", "layers = [base_model.get_layer(name).output for name in names]\n", "\n", "# Create the feature extraction model\n", "dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)" ] },
{ "cell_type": "markdown", "metadata": { "id": "sb7u31B4NbGt" }, "source": [ "## Calculate loss\n", "\n", "The loss is the sum of the activations in the chosen layers. The loss is normalized at each layer so the contribution from larger layers does not outweigh smaller layers. Normally, loss is a quantity you wish to minimize via gradient descent. In DeepDream, you will maximize this loss via gradient ascent." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "8MhfSweXXiuq", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def calc_loss(img, model):\n", "  # Pass forward the image through the model to retrieve the activations.\n", "  # Converts the image into a batch of size 1.\n", "  img_batch = tf.expand_dims(img, axis=0)\n", "  layer_activations = model(img_batch)\n", "  if len(layer_activations) == 1:\n", "    layer_activations = [layer_activations]\n", "\n", "  losses = []\n", "  for act in layer_activations:\n", "    loss = tf.math.reduce_mean(act)\n", "    losses.append(loss)\n", "\n", "  return tf.reduce_sum(losses)" ] },
{ "cell_type": "markdown", "metadata": { "id": "k4TCNsAUO9kI" }, "source": [ "## Gradient ascent\n", "\n", "Once you have calculated the loss for the chosen layers, all that is left is to calculate the gradients with respect to the image, and add them to the original image.\n", "\n", "Adding the gradients to the image enhances the patterns seen by the network. At each step, you will have created an image that increasingly excites the activations of certain layers in the network.\n", "\n", "The method that does this, below, is wrapped in a `tf.function` for performance. It uses an `input_signature` to ensure that the function is not retraced for different image sizes or `steps`/`step_size` values. See the [Concrete functions guide](../../guide/concrete_function.ipynb) for details." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "qRScWg_VNqvj", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class DeepDream(tf.Module):\n", "  def __init__(self, model):\n", "    self.model = model\n", "\n", "  @tf.function(\n", "      input_signature=(\n", "          tf.TensorSpec(shape=[None,None,3], dtype=tf.float32),\n", "          tf.TensorSpec(shape=[], dtype=tf.int32),\n", "          tf.TensorSpec(shape=[], dtype=tf.float32),)\n", "  )\n", "  def __call__(self, img, steps, step_size):\n", "    print(\"Tracing\")\n", "    loss = tf.constant(0.0)\n", "    for n in tf.range(steps):\n", "      with tf.GradientTape() as tape:\n", "        # This needs gradients relative to `img`\n", "        # `GradientTape` only watches `tf.Variable`s by default\n", "        tape.watch(img)\n", "        loss = calc_loss(img, self.model)\n", "\n", "      # Calculate the gradient of the loss with respect to the pixels of the input image.\n", "      gradients = tape.gradient(loss, img)\n", "\n", "      # Normalize the gradients.\n", "      gradients /= tf.math.reduce_std(gradients) + 1e-8\n", "\n", "      # In gradient ascent, the \"loss\" is maximized so that the input image increasingly \"excites\" the layers.\n", "      # You can update the image by directly adding the gradients (because they're the same shape!)\n", "      img = img + gradients*step_size\n", "      img = tf.clip_by_value(img, -1, 1)\n", "\n", "    return loss, img" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "yB9pTqn6xfuK", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "deepdream = DeepDream(dream_model)" ] },
{ "cell_type": "markdown", "metadata": { "id": "XLArRTVHZFAi" }, "source": [ "## Main loop" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "9vHEcy7dTysi", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def run_deep_dream_simple(img, steps=100, step_size=0.01):\n", "  # Convert from uint8 to the range expected by the model.\n", "  img = tf.keras.applications.inception_v3.preprocess_input(img)\n", "  img = tf.convert_to_tensor(img)\n", "  step_size = tf.convert_to_tensor(step_size)\n", "  steps_remaining = steps\n", "  step = 0\n", "  while steps_remaining:\n", "    if steps_remaining>100:\n", "      run_steps = tf.constant(100)\n", "    else:\n", "      run_steps = tf.constant(steps_remaining)\n", "    steps_remaining -= run_steps\n", "    step += run_steps\n", "\n", "    loss, img = deepdream(img, run_steps, tf.constant(step_size))\n", "\n", "    display.clear_output(wait=True)\n", "    show(deprocess(img))\n", "    print(\"Step {}, loss {}\".format(step, loss))\n", "\n", "\n", "  result = deprocess(img)\n", "  display.clear_output(wait=True)\n", "  show(result)\n", "\n", "  return result" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "tEfd00rr0j8Z", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "dream_img = run_deep_dream_simple(img=original_img,\n", "                                  steps=100, step_size=0.01)" ] },
{ "cell_type": "markdown", "metadata": { "id": "2PbfXEVFNbGp" }, "source": [ "## Taking it up an octave\n", "\n", "Pretty good, but there are a few issues with this first attempt:\n", "\n", "1. The output is noisy (this could be addressed with a `tf.image.total_variation` loss).\n", "2. The image is low resolution.\n", "3. The patterns appear as if they're all happening at the same granularity.\n", "\n", "One approach that addresses all these problems is applying gradient ascent at different scales. This will allow patterns generated at smaller scales to be incorporated into patterns at higher scales and filled in with additional detail.\n", "\n", "To do this you can perform the previous gradient ascent approach, then increase the size of the image (which is referred to as an octave), and repeat this process for multiple octaves.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "0eGDSdatLT-8", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import time\n", "start = time.time()\n", "\n", "OCTAVE_SCALE = 1.30\n", "\n", "img = tf.constant(np.array(original_img))\n", "base_shape = tf.shape(img)[:-1]\n", "float_base_shape = tf.cast(base_shape, tf.float32)\n", "\n", "for n in range(-2, 3):\n", "  new_shape = tf.cast(float_base_shape*(OCTAVE_SCALE**n), tf.int32)\n", "\n", "  img = tf.image.resize(img, new_shape).numpy()\n", "\n", "  img = run_deep_dream_simple(img=img, steps=50, step_size=0.01)\n", "\n", "display.clear_output(wait=True)\n", "img = tf.image.resize(img, base_shape)\n", "img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)\n", "show(img)\n", "\n", "end = time.time()\n", "end-start" ] },
{ "cell_type": "markdown", "metadata": { "id": "s9xqyeuwLZFy" }, "source": [ "## Optional: Scaling up with tiles\n", "\n", "One thing to consider is that as the image increases in size, so will the time and memory necessary to perform the gradient calculation. The octave implementation above will not work on very large images, or many octaves.\n", "\n", "To avoid this issue you can split the image into tiles and compute the gradient for each tile.\n", "\n", "Applying random shifts to the image before each tiled computation prevents tile seams from appearing.\n", "\n", "Start by implementing the random shift:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "oGgLHk7o80ac", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def random_roll(img, maxroll):\n", "  # Randomly shift the image to avoid tiled boundaries.\n", "  shift = tf.random.uniform(shape=[2], minval=-maxroll, maxval=maxroll, dtype=tf.int32)\n", "  img_rolled = tf.roll(img, shift=shift, axis=[0,1])\n", "  return shift, img_rolled" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "sKsiqWfA9H41", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "shift, img_rolled = random_roll(np.array(original_img), 512)\n", "show(img_rolled)" ] },
{ "cell_type": "markdown", "metadata": { "id": "tGIjA3UhhAt8" }, "source": [ "Here is a tiled equivalent of the `deepdream` function defined earlier:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "x__TZ0uqNbGm", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "class TiledGradients(tf.Module):\n", "  def __init__(self, model):\n", "    self.model = model\n", "\n", "  @tf.function(\n", "      input_signature=(\n", "          tf.TensorSpec(shape=[None,None,3], dtype=tf.float32),\n", "          tf.TensorSpec(shape=[2], dtype=tf.int32),\n", "          tf.TensorSpec(shape=[], dtype=tf.int32),)\n", "  )\n", "  def __call__(self, img, img_size, tile_size=512):\n", "    shift, img_rolled = random_roll(img, tile_size)\n", "\n", "    # Initialize the image gradients to zero.\n", "    gradients = tf.zeros_like(img_rolled)\n", "\n", "    # Skip the last tile, unless there's only one tile.\n", "    xs = tf.range(0, img_size[1], tile_size)[:-1]\n", "    if not tf.cast(len(xs), bool):\n", "      xs = tf.constant([0])\n", "    ys = tf.range(0, img_size[0], tile_size)[:-1]\n", "    if not tf.cast(len(ys), bool):\n", "      ys = tf.constant([0])\n", "\n", "    for x in xs:\n", "      for y in ys:\n", "        # Calculate the gradients for this tile.\n", "        with tf.GradientTape() as tape:\n", "          # This needs gradients relative to `img_rolled`.\n", "          # `GradientTape` only watches `tf.Variable`s by default.\n", "          tape.watch(img_rolled)\n", "\n", "          # Extract a tile out of the image.\n", "          img_tile = img_rolled[y:y+tile_size, x:x+tile_size]\n", "          loss = calc_loss(img_tile, self.model)\n", "\n", "        # Update the image gradients for this tile.\n", "        gradients = gradients + tape.gradient(loss, img_rolled)\n", "\n", "    # Undo the random shift applied to the image and its gradients.\n", "    gradients = tf.roll(gradients, shift=-shift, axis=[0,1])\n", "\n", "    # Normalize the gradients.\n", "    gradients /= tf.math.reduce_std(gradients) + 1e-8\n", "\n", "    return gradients" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "Vcq4GubA2e5J", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "get_tiled_gradients = TiledGradients(dream_model)" ] },
{ "cell_type": "markdown", "metadata": { "id": "hYnTTs_qiaND" }, "source": [ "Putting this together gives a scalable, octave-aware deepdream implementation:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "gA-15DM4NbGk", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def run_deep_dream_with_octaves(img, steps_per_octave=100, step_size=0.01,\n", "                                octaves=range(-2,3), octave_scale=1.3):\n", "  base_shape = tf.shape(img)\n", "  img = tf.keras.utils.img_to_array(img)\n", "  img = tf.keras.applications.inception_v3.preprocess_input(img)\n", "\n", "  initial_shape = img.shape[:-1]\n", "  img = tf.image.resize(img, initial_shape)\n", "  for octave in octaves:\n", "    # Scale the image based on the octave\n", "    new_size = tf.cast(tf.convert_to_tensor(base_shape[:-1]), tf.float32)*(octave_scale**octave)\n", "    new_size = tf.cast(new_size, tf.int32)\n", "    img = tf.image.resize(img, new_size)\n", "\n", "    for step in range(steps_per_octave):\n", "      gradients = get_tiled_gradients(img, new_size)\n", "      img = img + gradients*step_size\n", "      img = tf.clip_by_value(img, -1, 1)\n", "\n", "      if step % 10 == 0:\n", "        display.clear_output(wait=True)\n", "        show(deprocess(img))\n", "        print(\"Octave {}, Step {}\".format(octave, step))\n", "\n", "  result = deprocess(img)\n", "  return result" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "T7PbRLV74RrU", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)\n", "\n", "display.clear_output(wait=True)\n", "img = tf.image.resize(img, base_shape)\n", "img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)\n", "show(img)" ] },
{ "cell_type": "markdown", "metadata": { "id": "0Og0-qLwNbGg" }, "source": [ "Much better! Play around with the number of octaves, octave scale, and activated layers to change how your DeepDream-ed image looks.\n", "\n", "Readers might also be interested in [TensorFlow Lucid](https://github.com/tensorflow/lucid), which expands on the ideas introduced in this tutorial to visualize and interpret neural networks." ] }
], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "deepdream.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }