{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "id": "FhGuhbZ6M5tl" }, "outputs": [], "source": [ "##### Copyright 2022 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "AwOEIRJC6Une", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "EIdT9iu_Z4Rb" }, "source": [ "# 使用 Core API 进行矩阵逼近" ] }, { "cell_type": "markdown", "metadata": { "id": "bBIlTPscrIT9" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "qGw8TF2vtzru" }, "source": [ "## 简介\n", "\n", "此笔记本使用 [TensorFlow Core 低级 API](https://tensorflow.google.cn/guide/core) 展示了 TensorFlow 作为高性能科学计算平台的能力。访问 [Core API 概述](https://tensorflow.google.cn/guide/core)以详细了解 TensorFlow Core 及其预期用例。\n", "\n", "本教程探讨[奇异值分解](https://developers.google.com/machine-learning/recommendation/collaborative/matrix) (SVD) 技术及其在低秩逼近问题中的应用。SVD 用于分解实数或复数矩阵,并在数据科学中具有多种用例,例如图像压缩。本教程的图像来自 Google Brain 的 [Imagen](https://imagen.research.google/) 项目。 " ] }, { "cell_type": "markdown", "metadata": { "id": "5_FdwaovEkCC" }, "source": [ "> ![svd_intro](http://tensorflow.org/images/core/svd_intro.png)" ] }, { "cell_type": "markdown", "metadata": { "id": "nchsZfwEVtVs" }, "source": [ "## 安装" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "1rRo8oNqZ-Rj", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import matplotlib\n", "from matplotlib.image import imread\n", "from matplotlib import pyplot as plt\n", "import requests\n", "# Preset Matplotlib figure sizes.\n", "matplotlib.rcParams['figure.figsize'] = [16, 9]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9xQKvCJ85kCQ", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "import tensorflow as tf\n", "print(tf.__version__)" ] }, { "cell_type": "markdown", "metadata": { "id": "so_ewq3gAoEI" }, "source": [ "## SVD 基础知识\n", "\n", "矩阵 ${\\mathrm{A}}$ 的奇异值分解由以下因式分解确定:\n", "\n", "$${\\mathrm{A}} = {\\mathrm{U}} \\Sigma {\\mathrm{V}}^T$$\n", "\n", "其中\n", "\n", "- $\\underset{m \\times n}{\\mathrm{A}}$:输入矩阵,其中 $m \\geq n$\n", "- $\\underset{m \\times n}{\\mathrm{U}}$:正交矩阵,${\\mathrm{U}}^T{\\mathrm{U}} = {\\mathrm{I}}$,包含各个列 $u_i$,表示 ${\\mathrm{A}}$ 的左奇异向量\n", "- $\\underset{n \\times n}{\\Sigma}$:对角矩阵,包含各个对角条目 $\\sigma_i$,表示${\\mathrm{A}}$的奇异值\n", "- $\\underset{n \\times n}{{\\mathrm{V}}^T}$:正交矩阵,${\\mathrm{V}}^T{\\mathrm{V}} = {\\mathrm{I}}$,包含各个行 $v_i$,表示 
${\\mathrm{A}}$ 的右奇异向量\n", "\n", "当 $m < n$ 时,${\\mathrm{U}}$ 和 $\\Sigma$ 的维度均为 $(m \\times m)$,而 ${\\mathrm{V}}^T$ 的维度为 $(m \\times n)$。" ] }, { "cell_type": "markdown", "metadata": { "id": "enGGGXCQKNv8" }, "source": [ "> ![svd_full](http://tensorflow.org/images/core/svd_full.png)" ] }, { "cell_type": "markdown", "metadata": { "id": "NlP-cBdSKLtc" }, "source": [ "TensorFlow 的线性代数软件包具有一个函数 `tf.linalg.svd`,可用于计算一个或多个矩阵的奇异值分解。首先,定义一个简单的矩阵并计算其 SVD 因式分解。\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "C3QAcgyoeIpv", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "A = tf.random.uniform(shape=[40,30])\n", "# Compute the SVD factorization\n", "s, U, V = tf.linalg.svd(A)\n", "# Define Sigma and V Transpose\n", "S = tf.linalg.diag(s)\n", "V_T = tf.transpose(V)\n", "# Reconstruct the original matrix\n", "A_svd = U@S@V_T\n", "# Visualize \n", "plt.bar(range(len(s)), s);\n", "plt.xlabel(\"Singular value rank\")\n", "plt.ylabel(\"Singular value\")\n", "plt.title(\"Bar graph of singular values\");" ] }, { "cell_type": "markdown", "metadata": { "id": "6H_C9WhFACm4" }, "source": [ "`tf.einsum` 函数可用于根据 `tf.linalg.svd` 的输出直接计算矩阵重构。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TPE6QeMtADUn", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "A_svd = tf.einsum('s,us,vs -> uv',s,U,V)\n", "print('\\nReconstructed Matrix, A_svd', A_svd)" ] }, { "cell_type": "markdown", "metadata": { "id": "x1m6JIsM9DLP" }, "source": [ "## 使用 SVD 进行低秩逼近\n", "\n", "矩阵的秩 ${\\mathrm{A}}$ 由其各列所跨越的向量空间的维度决定。SVD 可用于逼近具有较低秩的矩阵,这最终会降低存储矩阵表示的信息所需数据的维数。\n", "\n", "${\\mathrm{A}}$ 在 SVD 中的秩 r 逼近由以下方程定义:\n", "\n", "$${\\mathrm{A_r}} = {\\mathrm{U_r}} \\Sigma_r {\\mathrm{V_r}}^T$$\n", "\n", "其中\n", "\n", "- $\\underset{m \\times r}{\\mathrm{U_r}}$:由 ${\\mathrm{U}}$ 的前 $r$ 列组成的矩阵\n", "- $\\underset{r \\times r}{\\Sigma_r}$:由$\\Sigma$ 中的前 $r$ 个奇异值组成的对角矩阵\n", "- $\\underset{r \\times n}{\\mathrm{V_r}}^T$:由 ${\\mathrm{V}}^T$ 
的前 $r$ 行组成的矩阵" ] }, { "cell_type": "markdown", "metadata": { "id": "nJWMJu36QyUV" }, "source": [ "> ![svd_approx](http://tensorflow.org/images/core/svd_approx.png)" ] }, { "cell_type": "markdown", "metadata": { "id": "TkiVUxeaQybq" }, "source": [ "首先,编写一个函数来计算给定矩阵的秩 r 逼近。这种低秩逼近过程用于图像压缩;因此,计算每个逼近的物理数据大小也很有帮助。为简单起见,假设秩 r 逼近矩阵的数据大小等于计算逼近所需的元素总数。接下来,编写一个函数来呈现原始矩阵 $\\mathrm{A}$、其秩 r 逼近 $\\mathrm{A}_r$ 和误差矩阵 $|\\mathrm{A} - \\mathrm{A}_r|$。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2oY3pMPagJrO", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def rank_r_approx(s, U, V, r, verbose=False):\n", " # Compute the matrices necessary for a rank-r approximation\n", " s_r, U_r, V_r = s[..., :r], U[..., :, :r], V[..., :, :r] # ... implies any number of extra batch axes\n", " # Compute the low-rank approximation and its size\n", " A_r = tf.einsum('...s,...us,...vs->...uv',s_r,U_r,V_r)\n", " A_r_size = tf.size(U_r) + tf.size(s_r) + tf.size(V_r)\n", " if verbose:\n", " print(f\"Approximation Size: {A_r_size}\")\n", " return A_r, A_r_size\n", "\n", "def viz_approx(A, A_r):\n", " # Plot A, A_r, and A - A_r\n", " vmin, vmax = 0, tf.reduce_max(A)\n", " fig, ax = plt.subplots(1,3)\n", " mats = [A, A_r, abs(A - A_r)]\n", " titles = ['Original A', 'Approximated A_r', 'Error |A - A_r|']\n", " for i, (mat, title) in enumerate(zip(mats, titles)):\n", " ax[i].pcolormesh(mat, vmin=vmin, vmax=vmax)\n", " ax[i].set_title(title)\n", " ax[i].axis('off')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "O3ZRkYCkX2FQ", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "print(f\"Original Size of A: {tf.size(A)}\")\n", "s, U, V = tf.linalg.svd(A)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "S1DR83VMX4cM", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Rank-15 approximation\n", "A_15, A_15_size = rank_r_approx(s, U, V, 15, verbose = True)\n", "viz_approx(A, 
A_15)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "KgFT70XFX57E", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "# Rank-3 approximation\n", "A_3, A_3_size = rank_r_approx(s, U, V, 3, verbose = True)\n", "viz_approx(A, A_3)" ] }, { "cell_type": "markdown", "metadata": { "id": "DS4XoSlTJgX0" }, "source": [ "正如预期的那样,使用较低的秩会得到不太准确的逼近。然而,这些低秩逼近的质量在现实世界的场景中通常足够好。另请注意,使用 SVD 进行低秩逼近的主要目标是减少数据的维数,而不是减少数据本身的磁盘空间。不过,随着输入矩阵的维度变高,许多低秩逼近也最终受益于缩减的数据大小。这种缩减的好处是该过程适用于图像压缩问题的原因。" ] }, { "cell_type": "markdown", "metadata": { "id": "IhsaiOnnZs6M" }, "source": [ "## 图像加载\n", "\n", "[Imagen](https://imagen.research.google/) 首页上提供了以下图像。Imagen 是由 Google Research 的 Brain 团队开发的文本到图像扩散模型。AI 根据提示创建了这张图像:“一张柯基犬在时代广场骑自行车的照片。它戴着墨镜和沙滩帽。”多么酷啊!您还可以将下面的网址更改为任何 .jpg 链接以加载选择的自定义图像。\n", "\n", "首先,读入并呈现图像。读取 JPEG 文件后,Matplotlib 会输出一个形状为 $(m \\times n \\times 3)$ 的矩阵 ${\\mathrm{I}}$,它表示一个二维图像,具有分别对应于红色、绿色和蓝色的 3 个颜色通道。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OVsZOQUAZ2C7", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "img_link = \"https://imagen.research.google/main_gallery_images/a-photo-of-a-corgi-dog-riding-a-bike-in-times-square.jpg\"\n", "img_path = requests.get(img_link, stream=True).raw\n", "I = imread(img_path, 0)\n", "print(\"Input Image Shape:\", I.shape)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Qvs7uftcZ54x", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def show_img(I):\n", " # Display the image in matplotlib\n", " img = plt.imshow(I)\n", " plt.axis('off')\n", " return" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ZbesXO3HZ6Qs", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "show_img(I)" ] }, { "cell_type": "markdown", "metadata": { "id": "tdnUBVg_JoOa" }, "source": [ "## 图像压缩算法\n", "\n", "现在,使用 SVD 计算样本图像的低秩逼近。回想一下,图像的形状为 $(1024 \\times 1024 \\times 3)$,并且 SVD 
理论仅适用于二维矩阵。这意味着必须将样本图像批处理为 3 个大小相等的矩阵,这些矩阵对应于 3 个颜色通道中的每一个。这可以通过将矩阵转置为形状 $(3 \\times 1024 \\times 1024)$ 来实现。为了清楚地呈现逼近误差,将图像的 RGB 值从 $[0,255]$ 重新缩放到 $[0,1]$。记得在呈现它们之前将逼近值裁剪到此区间内。`tf.clip_by_value` 函数对此十分有用。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "i7DDp0h7oSIk", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def compress_image(I, r, verbose=False):\n", " # Compress an image with the SVD given a rank \n", " I_size = tf.size(I)\n", " print(f\"Original size of image: {I_size}\")\n", " # Compute SVD of image\n", " I = tf.convert_to_tensor(I)/255\n", " I_batched = tf.transpose(I, [2, 0, 1]) # einops.rearrange(I, 'h w c -> c h w')\n", " s, U, V = tf.linalg.svd(I_batched)\n", " # Compute low-rank approximation of image across each RGB channel\n", " I_r, I_r_size = rank_r_approx(s, U, V, r)\n", " I_r = tf.transpose(I_r, [1, 2, 0]) # einops.rearrange(I_r, 'c h w -> h w c')\n", " I_r_prop = (I_r_size / I_size)\n", " if verbose:\n", " # Display compressed image and attributes\n", " print(f\"Number of singular values used in compression: {r}\")\n", " print(f\"Compressed image size: {I_r_size}\")\n", " print(f\"Proportion of original size: {I_r_prop:.3f}\")\n", " ax_1 = plt.subplot(1,2,1)\n", " show_img(tf.clip_by_value(I_r,0.,1.))\n", " ax_1.set_title(\"Approximated image\")\n", " ax_2 = plt.subplot(1,2,2)\n", " show_img(tf.clip_by_value(0.5+abs(I-I_r),0.,1.))\n", " ax_2.set_title(\"Error\")\n", " return I_r, I_r_prop" ] }, { "cell_type": "markdown", "metadata": { "id": "RGQ_rTyKDX9F" }, "source": [ "现在,计算以下秩的秩 r 逼近:100、50、10" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7GlKkVLGDjre", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "I_100, I_100_prop = compress_image(I, 100, verbose=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XdvUkF5_E75D", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "I_50, I_50_prop = 
compress_image(I, 50, verbose=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MsCNZ8416Sbk", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "I_10, I_10_prop = compress_image(I, 10, verbose=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "RfYYBhcuNkvH" }, "source": [ "## 评估逼近\n", "\n", "可以通过多种有趣的方法衡量有效性并更好地控制矩阵逼近。" ] }, { "cell_type": "markdown", "metadata": { "id": "D2Lotde9Zg7v" }, "source": [ "### 压缩因子与秩\n", "\n", "对于上述每个逼近,观察数据大小如何随秩变化。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "O1ariNQe6Wbl", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "plt.figure(figsize=(11,6))\n", "plt.plot([100, 50, 10], [I_100_prop, I_50_prop, I_10_prop])\n", "plt.xlabel(\"Rank\")\n", "plt.ylabel(\"Proportion of original image size\")\n", "plt.title(\"Compression factor vs rank\");" ] }, { "cell_type": "markdown", "metadata": { "id": "dvHcLRj2QoDg" }, "source": [ "基于这张图,逼近图像的压缩因子与其秩之间存在线性关系。为了进一步探索这一点,回想一下,逼近矩阵 ${\\mathrm{A}}_r$ 的数据大小被定义为其计算所需的元素总数。下面的方程可用于找出压缩因子与秩之间的关系:\n", "\n", "$$x = (m \\times r) + r + (r \\times n) = r \\times (m + n + 1)$$\n", "\n", "$$c = \\large \\frac{x}{y} = \\frac{r \\times (m + n + 1)}{m \\times n}$$\n", "\n", "其中\n", "\n", "- $x$:${\\mathrm{A_r}}$ 的大小\n", "- $y$:${\\mathrm{A}}$ 的大小\n", "- $c = \\frac{x}{y}$:压缩因子\n", "- $r$:逼近的秩\n", "- $m$ 和 $n$:${\\mathrm{A}}$ 的行维度和列维度\n", "\n", "为了找到将图像压缩到所需因子 $c$ 所需的秩 $r$,可以重新排列上述方程以求解 $r$:\n", "\n", "$$r = ⌊{\\large\\frac{c \\times m \\times n}{m + n + 1}}⌋$$\n", "\n", "请注意,此公式与颜色通道维度无关,因为各个 RGB 逼近不会相互影响。现在,编写一个函数来压缩给定所需压缩因子的输入图像。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "viVO-I60QynI", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def compress_image_with_factor(I, compression_factor, verbose=False):\n", " # Returns a compressed image based on a desired compression factor\n", " m,n,o = I.shape\n", " r = int((compression_factor * m * n)/(m + n + 1))\n", 
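" # Note (added for clarity): int() takes the floor of the positive ratio above,\n", " # implementing r = ⌊(c * m * n) / (m + n + 1)⌋ from the preceding equation.\n", " # Worked example: a 1024x1024 channel with c = 0.15 gives int(0.15*1024*1024/2049) = 76.\n",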
" I_r, I_r_prop = compress_image(I, r, verbose=verbose)\n", " return I_r" ] }, { "cell_type": "markdown", "metadata": { "id": "gWSv58J6LSRQ" }, "source": [ "将图像压缩到其原始大小的 15%。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HVeeloIwQ1b6", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "compression_factor = 0.15\n", "I_r_img = compress_image_with_factor(I, compression_factor, verbose=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "LkeRyms7jZMd" }, "source": [ "### 奇异值的累积和\n", "\n", "奇异值的累积总和可作为秩 r 逼近捕获的能量量的有用指标。呈现样本图像中奇异值的 RGB 平均累积比例。`tf.cumsum` 函数对此十分有用。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "CteJ6VbKlndu", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def viz_energy(I):\n", " # Visualize the energy captured based on rank\n", " # Computing SVD\n", " I = tf.convert_to_tensor(I)/255\n", " I_batched = tf.transpose(I, [2, 0, 1]) \n", " s, U, V = tf.linalg.svd(I_batched)\n", " # Plotting average proportion across RGB channels \n", " props_rgb = tf.map_fn(lambda x: tf.cumsum(x)/tf.reduce_sum(x), s)\n", " props_rgb_mean = tf.reduce_mean(props_rgb, axis=0)\n", " plt.figure(figsize=(11,6))\n", " plt.plot(range(len(I)), props_rgb_mean, color='k')\n", " plt.xlabel(\"Rank / singular value number\")\n", " plt.ylabel(\"Cumulative proportion of singular values\")\n", " plt.title(\"RGB-averaged proportion of energy captured by the first 'r' singular values\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Vl9PKow-GgCp", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "viz_energy(I)" ] }, { "cell_type": "markdown", "metadata": { "id": "vQtwimKuQP19" }, "source": [ "看起来这个图像中超过 90% 的能量是在前 100 个奇异值中捕获的。现在,编写一个函数来压缩给定所需能量保留因子的输入图像。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "fum5Cvm7R5vH", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "def compress_image_with_energy(I, 
energy_factor, verbose=False):\n", " # Returns a compressed image based on a desired energy factor\n", " # Computing SVD\n", " I_rescaled = tf.convert_to_tensor(I)/255\n", " I_batched = tf.transpose(I_rescaled, [2, 0, 1]) \n", " s, U, V = tf.linalg.svd(I_batched)\n", " # Extracting singular values\n", " props_rgb = tf.map_fn(lambda x: tf.cumsum(x)/tf.reduce_sum(x), s)\n", " props_rgb_mean = tf.reduce_mean(props_rgb, axis=0)\n", " # Find closest r that corresponds to the energy factor\n", " r = tf.argmin(tf.abs(props_rgb_mean - energy_factor)) + 1\n", " actual_ef = props_rgb_mean[r]\n", " I_r, I_r_prop = compress_image(I, r, verbose=verbose)\n", " print(f\"Proportion of energy captured by the first {r} singular values: {actual_ef:.3f}\")\n", " return I_r" ] }, { "cell_type": "markdown", "metadata": { "id": "Y_rChG0OLby1" }, "source": [ "压缩图像以保留 75% 的能量。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xDXBaZQ4c5jF", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "energy_factor = 0.75\n", "I_r_img = compress_image_with_energy(I, energy_factor, verbose=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "2tmqTW0CYX-v" }, "source": [ "### 误差和奇异值\n", "\n", "逼近误差与奇异值之间也存在一个有趣的关系。事实证明,逼近的平方 Frobenius 范数等于其被省略的奇异值的平方和:\n", "\n", "$${||A - A_r||}^2 = \\sum_{i=r+1}^{R}σ_i^2$$\n", "\n", "使用本教程开头的示例矩阵的秩 10 逼近来测试这种关系。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hctOvN8BckiS", "vscode": { "languageId": "python" } }, "outputs": [], "source": [ "s, U, V = tf.linalg.svd(A)\n", "A_10, A_10_size = rank_r_approx(s, U, V, 10)\n", "squared_norm = tf.norm(A - A_10)**2\n", "s_squared_sum = tf.reduce_sum(s[10:]**2)\n", "print(f\"Squared Frobenius norm: {squared_norm:.3f}\")\n", "print(f\"Sum of squared singular values left out: {s_squared_sum:.3f}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "vgGQuV-yqYZH" }, "source": [ "## 结论\n", "\n", "此笔记本介绍了使用 TensorFlow 
实现奇异值分解并将其应用于编写图像压缩算法的过程。下面是一些可能有所帮助的提示:\n", "\n", "- [TensorFlow Core API](https://tensorflow.google.cn/guide/core) 可用于各种高性能科学计算用例。\n", "- 要详细了解 TensorFlow 线性代数功能,请访问 [linalg 模块](https://tensorflow.google.cn/api_docs/python/tf/linalg)的文档。\n", "- SVD 也可应用于构建[推荐系统](https://developers.google.com/machine-learning/recommendation/labs/movie-rec-programming-exercise)。\n", "\n", "有关使用 TensorFlow Core API 的更多示例,请查阅[指南](https://tensorflow.google.cn/guide/core)。如果您想详细了解如何加载和准备数据,请参阅有关[图像数据加载](https://tensorflow.google.cn/tutorials/load_data/images)或 [CSV 数据加载](https://tensorflow.google.cn/tutorials/load_data/csv)的教程。" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "matrix_core.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }