{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "l-23gBrt4x2B"
},
"outputs": [],
"source": [
"##### Copyright 2021 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "HMUDt0CiUJk9",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "77z2OchJTk0l"
},
"source": [
"# 将 `tf.feature_column` 迁移到 Keras 预处理层\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-5jGPDA2PDPI"
},
"source": [
"训练模型通常会伴随一些特征预处理,尤其是在处理结构化数据时。在 TensorFlow 1 中训练 `tf.estimator.Estimator` 时,通常使用 `tf.feature_column` API 执行特征预处理。在 TensorFlow 2 中,您可以直接使用 Keras 预处理层执行此操作。\n",
"\n",
"本迁移指南演示了使用特征列和预处理层的常见特征转换,然后使用这两种 API 训练一个完整的模型。\n",
"\n",
"首先,从几个必要的导入开始:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "iE0vSfMXumKI",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import tensorflow.compat.v1 as tf1\n",
"import math"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NVPYTQAWtDwH"
},
"source": [
"接下来,添加一个用于调用特征列的效用函数进行演示:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LAaifuuytJjM",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"def call_feature_columns(feature_columns, inputs):\n",
" # This is a convenient way to call a `feature_column` outside of an estimator\n",
" # to display its output.\n",
" feature_layer = tf1.keras.layers.DenseFeatures(feature_columns)\n",
" return feature_layer(inputs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZJnw07hYDGYt"
},
"source": [
"## 输入处理\n",
"\n",
"要将特征列与 Estimator 一起使用,模型输入始终应为张量的字典:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "y0WUpQxsKEzf",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"input_dict = {\n",
" 'foo': tf.constant([1]),\n",
" 'bar': tf.constant([0]),\n",
" 'baz': tf.constant([-1])\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xYsC6H_BJ8l3"
},
"source": [
"每个特征列都需要有一个键来索引到源数据。所有特征列的输出串联并由 Estimator 模型使用。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3fvIe3V8Ffjt",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"columns = [\n",
" tf1.feature_column.numeric_column('foo'),\n",
" tf1.feature_column.numeric_column('bar'),\n",
" tf1.feature_column.numeric_column('baz'),\n",
"]\n",
"call_feature_columns(columns, input_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hvPfCK2XGTyl"
},
"source": [
"在 Keras 中,模型输入更加灵活。`tf.keras.Model` 可以处理单个张量输入、张量特征列表或张量特征字典。可以通过在模型创建时传递 `tf.keras.Input` 的字典来处理字典输入。输入不会自动串联,这样它们便能以更灵活的方式使用。它们可以与 `tf.keras.layers.Concatenate` 串联。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5sYWENkgLWJ2",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"inputs = {\n",
" 'foo': tf.keras.Input(shape=()),\n",
" 'bar': tf.keras.Input(shape=()),\n",
" 'baz': tf.keras.Input(shape=()),\n",
"}\n",
"# Inputs are typically transformed by preprocessing layers before concatenation.\n",
"outputs = tf.keras.layers.Concatenate()(inputs.values())\n",
"model = tf.keras.Model(inputs=inputs, outputs=outputs)\n",
"model(input_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GXkmiuwXTS-B"
},
"source": [
"## 独热编码整数 ID\n",
"\n",
"常见的特征转换是对已知范围内的整数输入进行独热编码。下面是一个使用特征列的示例:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XasXzOgatgRF",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"categorical_col = tf1.feature_column.categorical_column_with_identity(\n",
" 'type', num_buckets=3)\n",
"indicator_col = tf1.feature_column.indicator_column(categorical_col)\n",
"call_feature_columns(indicator_col, {'type': [0, 1, 2]})"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iSCkJEQ6U-ru"
},
"source": [
"利用 Keras 预处理层,这些列可被替换为单个 `tf.keras.layers.CategoryEncoding` 层,其中 `output_mode` 设置为 `'one_hot'`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "799lbMNNuAVz",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"one_hot_layer = tf.keras.layers.CategoryEncoding(\n",
" num_tokens=3, output_mode='one_hot')\n",
"one_hot_layer([0, 1, 2])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kNzRtESU7tga"
},
"source": [
"注:对于大型独热编码,使用输出的稀疏表示会更高效。如果将 `sparse=True` 传递给 `CategoryEncoding` 层,则该层的输出将是 `tf.sparse.SparseTensor`,它可以作为 `tf.keras.layers.Dense` 层的输入高效地处理。"
]
},
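{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the note above (reusing the same three-token example; the variable name is illustrative), passing `sparse=True` yields a `tf.sparse.SparseTensor`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"# Sparse one-hot output; `tf.sparse.to_dense` is used here only for display.\n",
"sparse_one_hot_layer = tf.keras.layers.CategoryEncoding(\n",
"    num_tokens=3, output_mode='one_hot', sparse=True)\n",
"tf.sparse.to_dense(sparse_one_hot_layer([0, 1, 2]))"
]
},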
{
"cell_type": "markdown",
"metadata": {
"id": "Zf7kjhTiAErK"
},
"source": [
"## 归一化数字特征\n",
"\n",
"在处理具有特征列的连续浮点特征时,需要使用 `tf.feature_column.numeric_column`。在输入已经归一化的情况下,将其转换为 Keras 的操作十分简单。可以直接在模型中使用 `tf.keras.Input`,如上面所示。\n",
"\n",
"`numeric_column` 也可用于归一化输入:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HbTMGB9XctGx",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"def normalize(x):\n",
" mean, variance = (2.0, 1.0)\n",
" return (x - mean) / math.sqrt(variance)\n",
"numeric_col = tf1.feature_column.numeric_column('col', normalizer_fn=normalize)\n",
"call_feature_columns(numeric_col, {'col': tf.constant([[0.], [1.], [2.]])})"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "M9cyhPR_drOz"
},
"source": [
"相比之下,使用 Keras,这种归一化可以使用 `tf.keras.layers.Normalization` 完成。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8bcgG-yOdqUH",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"normalization_layer = tf.keras.layers.Normalization(mean=2.0, variance=1.0)\n",
"normalization_layer(tf.constant([[0.], [1.], [2.]]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d1InD_4QLKU-"
},
"source": [
"## 对数字特征进行分桶和独热编码"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "k5e0b8iOLRzd"
},
"source": [
"连续浮点输入的另一种常见转换是分桶为固定范围的整数。\n",
"\n",
"在特征列中,可以使用 `tf.feature_column.bucketized_column` 实现:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_rbx6qQ-LQx7",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"numeric_col = tf1.feature_column.numeric_column('col')\n",
"bucketized_col = tf1.feature_column.bucketized_column(numeric_col, [1, 4, 5])\n",
"call_feature_columns(bucketized_col, {'col': tf.constant([1., 2., 3., 4., 5.])})\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PCYu-XtwXahx"
},
"source": [
"在 Keras 中,可以使用 `tf.keras.layers.Discretization` 代替:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QK1WOG2uVVsL",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"discretization_layer = tf.keras.layers.Discretization(bin_boundaries=[1, 4, 5])\n",
"one_hot_layer = tf.keras.layers.CategoryEncoding(\n",
" num_tokens=4, output_mode='one_hot')\n",
"one_hot_layer(discretization_layer([1., 2., 3., 4., 5.]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5bm9tJZAgpt4"
},
"source": [
"## 使用词汇表对字符串数据进行独热编码\n",
"\n",
"处理字符串特征通常需要词汇查找来将字符串转换为索引。下面是一个使用特征列查找字符串,然后对索引进行独热编码的示例:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3fG_igjhukCO",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"vocab_col = tf1.feature_column.categorical_column_with_vocabulary_list(\n",
" 'sizes',\n",
" vocabulary_list=['small', 'medium', 'large'],\n",
" num_oov_buckets=0)\n",
"indicator_col = tf1.feature_column.indicator_column(vocab_col)\n",
"call_feature_columns(indicator_col, {'sizes': ['small', 'medium', 'large']})"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8rBgllRtY738"
},
"source": [
"利用 Keras 预处理层,可以使用 `tf.keras.layers.StringLookup` 层,并将 `output_mode` 设置为 `'one_hot'`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "arnPlSrWvDMe",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"string_lookup_layer = tf.keras.layers.StringLookup(\n",
" vocabulary=['small', 'medium', 'large'],\n",
" num_oov_indices=0,\n",
" output_mode='one_hot')\n",
"string_lookup_layer(['small', 'medium', 'large'])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f76MVVYO8LB5"
},
"source": [
"注:对于大型独热编码,使用输出的稀疏表示会更高效。如果将 `sparse=True` 传递给 `StringLookup` 层,则该层的输出将是 `tf.sparse.SparseTensor`,它可以作为 `tf.keras.layers.Dense` 层的输入高效地处理。"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "c1CmfSXQZHE5"
},
"source": [
"## 使用词汇表嵌入字符串数据\n",
"\n",
"对于较大的词汇表,通常需要嵌入向量才能获得良好的性能。下面是一个使用特征列嵌入字符串特征的示例:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "C3RK4HFazxlU",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"vocab_col = tf1.feature_column.categorical_column_with_vocabulary_list(\n",
" 'col',\n",
" vocabulary_list=['small', 'medium', 'large'],\n",
" num_oov_buckets=0)\n",
"embedding_col = tf1.feature_column.embedding_column(vocab_col, 4)\n",
"call_feature_columns(embedding_col, {'col': ['small', 'medium', 'large']})"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3aTRVJ6qZZH0"
},
"source": [
"利用 Keras 预处理层,可以通过组合 `tf.keras.layers.StringLookup` 层和 `tf.keras.layers.Embedding` 层来实现。`StringLookup` 的默认输出将是可直接馈送到嵌入向量中的整数索引。\n",
"\n",
"注:`Embedding` 层包含可训练参数。虽然 `StringLookup` 层可以应用于模型内部或外部的数据,但 `Embedding` 必须始终是可训练 Keras 模型的一部分才能正确训练。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8resGZPo0Fho",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"string_lookup_layer = tf.keras.layers.StringLookup(\n",
" vocabulary=['small', 'medium', 'large'], num_oov_indices=0)\n",
"embedding = tf.keras.layers.Embedding(3, 4)\n",
"embedding(string_lookup_layer(['small', 'medium', 'large']))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UwqvADV6HRdC"
},
"source": [
"## 对加权分类数据求和\n",
"\n",
"在某些情况下,您需要处理分类数据,其中类别的每次出现都附带关联的权重。在特征列中,这由 `tf.feature_column.weighted_categorical_column` 处理。与 `indicator_column` 配对时,效果是对每个类别的权重求和。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "02HqjPLMRxWn",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"ids = tf.constant([[5, 11, 5, 17, 17]])\n",
"weights = tf.constant([[0.5, 1.5, 0.7, 1.8, 0.2]])\n",
"\n",
"categorical_col = tf1.feature_column.categorical_column_with_identity(\n",
" 'ids', num_buckets=20)\n",
"weighted_categorical_col = tf1.feature_column.weighted_categorical_column(\n",
" categorical_col, 'weights')\n",
"indicator_col = tf1.feature_column.indicator_column(weighted_categorical_col)\n",
"call_feature_columns(indicator_col, {'ids': ids, 'weights': weights})"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "98jaq7Q3S9aG"
},
"source": [
"在 Keras 中,这可以通过将 `count_weights` 输入传递给 `tf.keras.layers.CategoryEncoding` 来完成,其中 `output_mode='count'`。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "JsoYUUgRS7hu",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"ids = tf.constant([[5, 11, 5, 17, 17]])\n",
"weights = tf.constant([[0.5, 1.5, 0.7, 1.8, 0.2]])\n",
"\n",
"# Using sparse output is more efficient when `num_tokens` is large.\n",
"count_layer = tf.keras.layers.CategoryEncoding(\n",
" num_tokens=20, output_mode='count', sparse=True)\n",
"tf.sparse.to_dense(count_layer(ids, count_weights=weights))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gBJxb6y2GasI"
},
"source": [
"## 嵌入加权分类数据\n",
"\n",
"您可能还想嵌入加权分类输入。在特征列中, `embedding_column` 包含 `combiner` 参数。如果任何样本包含一个类别的多个条目,则它们将根据参数设置进行组合(默认为 `'mean'`)。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "AjOt1wgmT5mM",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"ids = tf.constant([[5, 11, 5, 17, 17]])\n",
"weights = tf.constant([[0.5, 1.5, 0.7, 1.8, 0.2]])\n",
"\n",
"categorical_col = tf1.feature_column.categorical_column_with_identity(\n",
" 'ids', num_buckets=20)\n",
"weighted_categorical_col = tf1.feature_column.weighted_categorical_column(\n",
" categorical_col, 'weights')\n",
"embedding_col = tf1.feature_column.embedding_column(\n",
" weighted_categorical_col, 4, combiner='mean')\n",
"call_feature_columns(embedding_col, {'ids': ids, 'weights': weights})"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fd6eluARXndC"
},
"source": [
"在 Keras 中,`tf.keras.layers.Embedding` 没有 `combiner` 选项,但可以使用 `tf.keras.layers.Dense` 实现相同的效果。上面的 `embedding_column` 只是根据类别权重线性组合嵌入向量。虽然一开始并不明显,但它完全等效于将您的分类输入表示为大小为 `(num_tokens)` 的稀疏权重向量,随后将它们乘以形状为 `(embedding_size, num_tokens)` 的 `Dense` 内核 。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Y-vZvPyiYilE",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"ids = tf.constant([[5, 11, 5, 17, 17]])\n",
"weights = tf.constant([[0.5, 1.5, 0.7, 1.8, 0.2]])\n",
"\n",
"# For `combiner='mean'`, normalize your weights to sum to 1. Removing this line\n",
"# would be equivalent to an `embedding_column` with `combiner='sum'`.\n",
"weights = weights / tf.reduce_sum(weights, axis=-1, keepdims=True)\n",
"\n",
"count_layer = tf.keras.layers.CategoryEncoding(\n",
" num_tokens=20, output_mode='count', sparse=True)\n",
"embedding_layer = tf.keras.layers.Dense(4, use_bias=False)\n",
"embedding_layer(count_layer(ids, count_weights=weights))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3I5loEx80MVm"
},
"source": [
"## 完整的训练示例\n",
"\n",
"为了展示完整的训练工作流,首先准备一些具有三种不同类型特征的数据:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "D_7nyBee0ZBV",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"features = {\n",
" 'type': [0, 1, 1],\n",
" 'size': ['small', 'small', 'medium'],\n",
" 'weight': [2.7, 1.8, 1.6],\n",
"}\n",
"labels = [1, 1, 0]\n",
"predict_features = {'type': [0], 'size': ['foo'], 'weight': [-0.7]}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e_4Xx2c37lqD"
},
"source": [
"为 TensorFlow 1 和 TensorFlow 2 工作流定义一些通用常量:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3cyfQZ7z8jZh",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"vocab = ['small', 'medium', 'large']\n",
"one_hot_dims = 3\n",
"embedding_dims = 4\n",
"weight_mean = 2.0\n",
"weight_variance = 1.0"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ywCgU7CMIfTH"
},
"source": [
"### 使用特征列\n",
"\n",
"特征列在创建时必须作为列表传递给 Estimator,并在训练期间隐式调用。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Wsdhlm-uipr1",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"categorical_col = tf1.feature_column.categorical_column_with_identity(\n",
" 'type', num_buckets=one_hot_dims)\n",
"# Convert index to one-hot; e.g. [2] -> [0,0,1].\n",
"indicator_col = tf1.feature_column.indicator_column(categorical_col)\n",
"\n",
"# Convert strings to indices; e.g. ['small'] -> [1].\n",
"vocab_col = tf1.feature_column.categorical_column_with_vocabulary_list(\n",
" 'size', vocabulary_list=vocab, num_oov_buckets=1)\n",
"# Embed the indices.\n",
"embedding_col = tf1.feature_column.embedding_column(vocab_col, embedding_dims)\n",
"\n",
"normalizer_fn = lambda x: (x - weight_mean) / math.sqrt(weight_variance)\n",
"# Normalize the numeric inputs; e.g. [2.0] -> [0.0].\n",
"numeric_col = tf1.feature_column.numeric_column(\n",
" 'weight', normalizer_fn=normalizer_fn)\n",
"\n",
"estimator = tf1.estimator.DNNClassifier(\n",
" feature_columns=[indicator_col, embedding_col, numeric_col],\n",
" hidden_units=[1])\n",
"\n",
"def _input_fn():\n",
" return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)\n",
"\n",
"estimator.train(_input_fn)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qPIeG_YtfNV1"
},
"source": [
"在模型上运行推断时,特征列也将用于转换输入数据。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "K-AIIB8CfSqt",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"def _predict_fn():\n",
" return tf1.data.Dataset.from_tensor_slices(predict_features).batch(1)\n",
"\n",
"next(estimator.predict(_predict_fn))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "baMA01cBIivo"
},
"source": [
"### 使用 Keras 预处理层\n",
"\n",
"Keras 预处理层在调用它们的位置上更加灵活。层可以直接应用于张量,在 `tf.data` 输入流水线内使用,或者直接构建到可训练的 Keras 模型中。\n",
"\n",
"在此示例中,您将在 `tf.data` 输入流水线中应用预处理层。为此,可以定义一个单独的 `tf.keras.Model` 来预处理您的输入特征。此模型不可训练,但可以方便地对预处理层进行分组。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NMz8RfMQdCZf",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"inputs = {\n",
" 'type': tf.keras.Input(shape=(), dtype='int64'),\n",
" 'size': tf.keras.Input(shape=(), dtype='string'),\n",
" 'weight': tf.keras.Input(shape=(), dtype='float32'),\n",
"}\n",
"# Convert index to one-hot; e.g. [2] -> [0,0,1].\n",
"type_output = tf.keras.layers.CategoryEncoding(\n",
" one_hot_dims, output_mode='one_hot')(inputs['type'])\n",
"# Convert size strings to indices; e.g. ['small'] -> [1].\n",
"size_output = tf.keras.layers.StringLookup(vocabulary=vocab)(inputs['size'])\n",
"# Normalize the numeric inputs; e.g. [2.0] -> [0.0].\n",
"weight_output = tf.keras.layers.Normalization(\n",
" axis=None, mean=weight_mean, variance=weight_variance)(inputs['weight'])\n",
"outputs = {\n",
" 'type': type_output,\n",
" 'size': size_output,\n",
" 'weight': weight_output,\n",
"}\n",
"preprocessing_model = tf.keras.Model(inputs, outputs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NRfISnj3NGlW"
},
"source": [
"注:作为在层创建时提供词汇表和归一化统计信息的替代方式,许多预处理层提供了一个 `adapt()` 方法,可用于直接从输入数据学习层状态。请参阅[预处理指南](https://tensorflow.google.cn/guide/keras/preprocessing_layers#the_adapt_method) ,了解更多详细信息。\n",
"\n",
"您现在可以在对 `tf.data.Dataset.map` 的调用中应用此模型。请注意,传递给 `map` 的函数将自动转换为 `tf.function`,并且用于编写 `tf.function` 代码的通常注意事项适用(无副作用)。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c_6xAUnbNREh",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"# Apply the preprocessing in tf.data.Dataset.map.\n",
"dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)\n",
"dataset = dataset.map(lambda x, y: (preprocessing_model(x), y),\n",
" num_parallel_calls=tf.data.AUTOTUNE)\n",
"# Display a preprocessed input sample.\n",
"next(dataset.take(1).as_numpy_iterator())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8_4u3J4NdJ8R"
},
"source": [
"接下来,可以定义一个包含可训练层的单独 `Model`。请注意此模型的输入现在如何反映预处理的特征类型和形状。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "kC9OZO5ldmP-",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"inputs = {\n",
" 'type': tf.keras.Input(shape=(one_hot_dims,), dtype='float32'),\n",
" 'size': tf.keras.Input(shape=(), dtype='int64'),\n",
" 'weight': tf.keras.Input(shape=(), dtype='float32'),\n",
"}\n",
"# Since the embedding is trainable, it needs to be part of the training model.\n",
"embedding = tf.keras.layers.Embedding(len(vocab), embedding_dims)\n",
"outputs = tf.keras.layers.Concatenate()([\n",
" inputs['type'],\n",
" embedding(inputs['size']),\n",
" tf.expand_dims(inputs['weight'], -1),\n",
"])\n",
"outputs = tf.keras.layers.Dense(1)(outputs)\n",
"training_model = tf.keras.Model(inputs, outputs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ir-cn2H_d5R7"
},
"source": [
"您现在可以使用 `tf.keras.Model.fit` 训练 `training_model`。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6TS3YJ2vnvlW",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"# Train on the preprocessed data.\n",
"training_model.compile(\n",
" loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))\n",
"training_model.fit(dataset)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pSaEbOE4ecsy"
},
"source": [
"最后,在推断时,不妨将这些单独的阶段组合成处理原始特征输入的单一模型。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QHjbIZYneboO",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"inputs = preprocessing_model.input\n",
"outputs = training_model(preprocessing_model(inputs))\n",
"inference_model = tf.keras.Model(inputs, outputs)\n",
"\n",
"predict_dataset = tf.data.Dataset.from_tensor_slices(predict_features).batch(1)\n",
"inference_model.predict(predict_dataset)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "O01VQIxCWBxU"
},
"source": [
"可以将此组合模型保存为 .keras 文件供以后使用。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6tsyVZgh7Pve",
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"inference_model.save('model.keras')\n",
"restored_model = tf.keras.models.load_model('model.keras')\n",
"restored_model.predict(predict_dataset)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IXMBwzggwUjI"
},
"source": [
"注:预处理层不可训练,这允许您使用 tf.data
异步应用它们。这样做可以获得性能优势,因为您既可以预提取预处理的批次,又可以释放任何加速器以专注于模型的可微分部分(请在使用 tf.data
API 提升性能指南的预提取部分了解更多信息)。如本指南中所示,在训练期间分离预处理并在推断期间组合预处理是一种利用这些性能提升的灵活方式。但是,如果您的模型很小或预处理时间可以忽略不计,那么从一开始就将预处理构建为一个完整的模型可能会更简单。为此,您可以从 `tf.keras.Input` 开始构建单一模型,接着是预处理层,最后是可训练层。"
]
},
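{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative sketch of that last option (not part of the original workflow, and using only the `weight` feature), a single model can chain a raw input, a preprocessing layer, and a trainable layer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"# Raw input -> preprocessing layer -> trainable layer, all in one model.\n",
"single_input = tf.keras.Input(shape=(), dtype='float32')\n",
"normalized = tf.keras.layers.Normalization(\n",
"    axis=None, mean=weight_mean, variance=weight_variance)(single_input)\n",
"logits = tf.keras.layers.Dense(1)(tf.expand_dims(normalized, -1))\n",
"single_model = tf.keras.Model(single_input, logits)\n",
"single_model(tf.constant([2.7, 1.8, 1.6]))"
]
},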
{
"cell_type": "markdown",
"metadata": {
"id": "2pjp7Z18gRCQ"
},
"source": [
"## 特征列对应关系表\n",
"\n",
"作为参考,下面是特征列和 Keras 预处理层之间的大致对应关系:\n",
"\n",
"\n",
"
\n",
" 特征列 | \n",
" Keras 层 | \n",
"
\n",
" \n",
" `tf.feature_column.bucketized_column` | \n",
" `tf.keras.layers.Discretization` | \n",
"
\n",
" \n",
" `tf.feature_column.categorical_column_with_hash_bucket` | \n",
" `tf.keras.layers.Hashing` | \n",
"
\n",
" \n",
" `tf.feature_column.categorical_column_with_identity` | \n",
" `tf.keras.layers.CategoryEncoding` | \n",
"
\n",
" \n",
" `tf.feature_column.categorical_column_with_vocabulary_file` | \n",
" `tf.keras.layers.StringLookup` 或 `tf.keras.layers.IntegerLookup` | \n",
"
\n",
" \n",
" `tf.feature_column.categorical_column_with_vocabulary_list` | \n",
" `tf.keras.layers.StringLookup` 或 `tf.keras.layers.IntegerLookup` | \n",
"
\n",
" \n",
" `tf.feature_column.crossed_column` | \n",
" `tf.keras.layers.experimental.preprocessing.HashedCrossing` | \n",
"
\n",
" \n",
" `tf.feature_column.embedding_column` | \n",
" `tf.keras.layers.Embedding` | \n",
"
\n",
" \n",
" `tf.feature_column.indicator_column` | \n",
" `output_mode='one_hot'` 或 `output_mode='multi_hot'`* | \n",
"
\n",
" \n",
" `tf.feature_column.numeric_column` | \n",
" `tf.keras.layers.Normalization` | \n",
"
\n",
" \n",
" `tf.feature_column.sequence_categorical_column_with_hash_bucket` | \n",
" `tf.keras.layers.Hashing` | \n",
"
\n",
" \n",
" `tf.feature_column.sequence_categorical_column_with_identity` | \n",
" `tf.keras.layers.CategoryEncoding` | \n",
"
\n",
" \n",
" `tf.feature_column.sequence_categorical_column_with_vocabulary_file` | \n",
" `tf.keras.layers.StringLookup`、`tf.keras.layers.IntegerLookup` 或 `tf.keras.layer.TextVectorization`† | \n",
"
\n",
" \n",
" `tf.feature_column.sequence_categorical_column_with_vocabulary_list` | \n",
" `tf.keras.layers.StringLookup`、`tf.keras.layers.IntegerLookup` 或 `tf.keras.layer.TextVectorization`† | \n",
"
\n",
" \n",
" `tf.feature_column.sequence_numeric_column` | \n",
" `tf.keras.layers.Normalization` | \n",
"
\n",
" \n",
" `tf.feature_column.weighted_categorical_column` | \n",
" `tf.keras.layers.CategoryEncoding` | \n",
"
\n",
"\n",
"
\n",
"\n",
"`output_mode` 可以传递给 `tf.keras.layers.CategoryEncoding`、`tf.keras.layers.StringLookup`、`tf.keras.layers.IntegerLookup` 和 `tf.keras.layers.TextVectorization`。\n",
"\n",
"† `tf.keras.layers.TextVectorization` 可以直接处理自由格式的文本输入(例如,整个句子或段落)。这不是 TensorFlow 1 中分类序列处理的一对一替代,但可以为临时文本预处理提供方便的替代。\n",
"\n",
"注:线性 Estimator(例如 `tf.estimator.LinearClassifier`)可以在没有 `embedding_column` 或 `indicator_column` 的情况下处理直接分类输入(整数索引)。但是,整数索引不能直接传递给 `tf.keras.layers.Dense` 或 `tf.keras.experimental.LinearModel`。在调用 `Dense` 或 `LinearModel` 之前,应当首先使用 `tf.layers.CategoryEncoding` 对这些输入进行编码,其中 `output_mode='count'`(如果类别大小很大,则为 `sparse=True`)。"
]
},
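{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small sketch of that note (the data here is made up), integer indices can be count-encoded before a `Dense` layer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "python"
}
},
"outputs": [],
"source": [
"# Encode integer category indices as count vectors, then apply a `Dense` layer.\n",
"category_ids = tf.constant([[0], [1], [2]])\n",
"count_encoding_layer = tf.keras.layers.CategoryEncoding(\n",
"    num_tokens=3, output_mode='count')\n",
"tf.keras.layers.Dense(1)(count_encoding_layer(category_ids))"
]
},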
{
"cell_type": "markdown",
"metadata": {
"id": "AQCJ6lM3YDq_"
},
"source": [
"## 后续步骤\n",
"\n",
"- 有关 Keras 预处理层的更多信息,请转到[使用预处理层](https://tensorflow.google.cn/guide/keras/preprocessing_layers)指南。\n",
"- 有关将预处理层应用于结构化数据的更深入示例,请参阅[使用 Keras 预处理层对结构化数据进行分类](../../tutorials/structured_data/preprocessing_layers.ipynb)教程。"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "migrating_feature_columns.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}