tvm.relay.testing#

Utilities for testing and benchmarks

Classes:

Prelude([mod])

Contains standard definitions.

Functions:

check_grad(func[, inputs, test_inputs, eps, ...])

Perform numerical gradient checking given a relay function.

count(prelude, n)

Takes a ConstructorValue corresponding to a nat ADT and converts it into a Python integer.

count_ops(expr)

Count the number of times each op is called in the graph.

create_workload(net[, initializer, seed])

Helper function to create benchmark image classification workload.

enabled_targets()

Get all enabled targets with associated devices.

gradient(expr[, mod, mode])

Transform the input function, returning a function that calculates the original result, paired with the gradient of the input.

make_nat_expr(prelude, n)

Given a non-negative Python integer, constructs a Relay expression representing that integer's value as a nat.

make_nat_value(prelude, n)

The inverse of count(): Given a non-negative Python integer, constructs a ConstructorValue representing that value as a nat.

run_as_python(expr[, mod, target])

Converts the given Relay expression into a Python script and executes it.

to_python(expr[, mod, target])

Converts the given Relay expression into a Python script (as a Python AST object).

class tvm.relay.testing.Prelude(mod=None)[source]#

Contains standard definitions.

Methods:

get_ctor(ty_name, canonical, dtype)

Get constructor corresponding to the canonical name

get_ctor_static(ty_name, name, dtype, shape)

Get constructor corresponding to the canonical name

get_global_var(canonical, dtype)

Get global var corresponding to the canonical name

get_global_var_static(canonical, dtype, shape)

Get var corresponding to the canonical name

get_name(canonical, dtype)

Get name corresponding to the canonical name

get_name_static(canonical, dtype, shape[, ...])

Get name corresponding to the canonical name

get_tensor_ctor_static(name, dtype, shape)

Get constructor corresponding to the canonical name

get_type(canonical, dtype)

Get type corresponding to the canonical name

get_type_static(canonical, dtype, shape)

Get type corresponding to the canonical name

load_prelude()

Parses the Prelude from Relay's text format into a module.

get_ctor(ty_name, canonical, dtype)[source]#

Get constructor corresponding to the canonical name

get_ctor_static(ty_name, name, dtype, shape)[source]#

Get constructor corresponding to the canonical name

get_global_var(canonical, dtype)[source]#

Get global var corresponding to the canonical name

get_global_var_static(canonical, dtype, shape, batch_dim=None)[source]#

Get var corresponding to the canonical name

get_name(canonical, dtype)[source]#

Get name corresponding to the canonical name

get_name_static(canonical, dtype, shape, batch_dim=None)[source]#

Get name corresponding to the canonical name

get_tensor_ctor_static(name, dtype, shape)[source]#

Get constructor corresponding to the canonical name

get_type(canonical, dtype)[source]#

Get type corresponding to the canonical name

get_type_static(canonical, dtype, shape)[source]#

Get type corresponding to the canonical name

load_prelude()[source]#

Parses the Prelude from Relay’s text format into a module.
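Example#

A minimal sketch of constructing a Prelude and querying the module it populates. The "List" type name reflects the standard prelude definitions and may differ across TVM versions:

    import tvm
    from tvm.relay.testing import Prelude

    mod = tvm.IRModule()
    prelude = Prelude(mod)  # fills mod with the standard ADT definitions

    # the populated module can be queried directly, e.g. for the list ADT
    list_var = prelude.mod.get_global_type_var("List")
    print(list_var)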

tvm.relay.testing.check_grad(func, inputs=None, test_inputs=None, eps=1e-06, atol=1e-05, rtol=0.001, scale=None, mean=0, mode='higher_order', target_devices=None, executor_kind='debug')[source]#

Perform numerical gradient checking given a relay function.

Compare analytical gradients to numerical gradients derived from two-sided approximation. Note that this test may fail if your function input types are not of high enough precision.

Parameters#

func: tvm.relay.Function

The relay function to test.

inputs: List[np.array]

Optional user-provided input parameters to use. If not given, will generate random normal inputs scaled to be close to the chosen epsilon value to avoid numerical precision loss.

test_inputs: List[np.array]

The inputs to test for gradient matching. Useful in cases where some inputs are not differentiable, such as symbolic inputs to dynamic ops. If not given, all inputs are tested.

eps: float

The epsilon value to use for computing numerical gradient approximation.

atol: float

The absolute tolerance on difference between numerical and analytical gradients. Note that this needs to be scaled appropriately relative to the chosen eps and inputs.

rtol: float

The relative tolerance on difference between numerical and analytical gradients. Note that this needs to be scaled appropriately relative to the chosen eps.

scale: float

The standard deviation of the inputs.

mean: float

The mean of the inputs.

target_devices: Optional[List[Tuple[tvm.target.Target, tvm.runtime.Device]]]

A list of targets/devices on which the gradient should be tested. If not specified, will default to tvm.testing.enabled_targets().
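Example#

A minimal sketch of checking a small elementwise function with the default settings; the function and the float64 dtype are arbitrary illustrative choices:

    import tvm
    from tvm import relay
    from tvm.relay.testing import check_grad

    # float64 inputs keep the two-sided finite differences well conditioned
    x = relay.var("x", shape=(4,), dtype="float64")
    y = relay.var("y", shape=(4,), dtype="float64")
    func = relay.Function([x, y], x * y + relay.exp(x))

    # compares analytical gradients to numerical ones on all enabled targets
    check_grad(func)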

tvm.relay.testing.count(prelude, n)[source]#

Takes a ConstructorValue corresponding to a nat ADT and converts it into a Python integer. This is an example of using an ADT value in Python.

tvm.relay.testing.count_ops(expr)[source]#

Count the number of times each op is called in the graph.
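Example#

A small sketch; the return value is assumed here to behave like a mapping from op name to call count:

    from tvm import relay
    from tvm.relay.testing import count_ops

    x = relay.var("x", shape=(4,))
    expr = relay.add(relay.add(x, x), x)
    print(count_ops(expr))  # expected to report two calls to "add"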

tvm.relay.testing.create_workload(net, initializer=None, seed=0)[source]#

Helper function to create benchmark image classification workload.

Parameters#

net : tvm.relay.Function

The selected function of the network.

initializer : Initializer

The initializer used

seed : int

The seed used in initialization.

Returns#

mod : tvm.IRModule

The created relay module.

params : dict of str to NDArray

The parameters.
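Example#

A sketch of wrapping a hand-built network into a (module, params) pair; the single conv2d layer is an arbitrary illustration:

    from tvm import relay
    from tvm.relay.testing import create_workload

    data = relay.var("data", shape=(1, 3, 224, 224))
    weight = relay.var("weight")  # shape left to inference
    body = relay.nn.conv2d(data, weight, channels=16, kernel_size=(3, 3), padding=(1, 1))
    func = relay.Function(relay.analysis.free_vars(body), body)

    # "data" stays a runtime input; "weight" becomes a randomly initialized parameter
    mod, params = create_workload(func)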

tvm.relay.testing.enabled_targets()[source]#

Get all enabled targets with associated devices.

In most cases, you should use tvm.testing.parametrize_targets() instead of this function.

In this context, enabled means that TVM was built with support for this target, the target name appears in the TVM_TEST_TARGETS environment variable, and a suitable device for running this target exists. If TVM_TEST_TARGETS is not set, it defaults to variable DEFAULT_TEST_TARGETS in this module.

If you use this function in a test, you must decorate the test with tvm.testing.uses_gpu() (otherwise it will never be run on the gpu).

Returns#

targets: list

A list of (target, device) pairs for all enabled targets.
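Example#

A sketch of the intended usage inside a test:

    import tvm.testing
    from tvm.relay.testing import enabled_targets

    @tvm.testing.uses_gpu  # without this decorator, GPU targets are never exercised
    def test_on_all_targets():
        for target, dev in enabled_targets():
            print(target, dev)  # compile and run the case per (target, device)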

tvm.relay.testing.gradient(expr, mod=None, mode='higher_order')[source]#

Transform the input function, returning a function that calculates the original result, paired with the gradient of the input.

Parameters#

expr : tvm.relay.Expr

The input expression, which is a Function or a GlobalVar.

mod : Optional[tvm.IRModule]

mode : Optional[String]

The mode of the automatic differentiation algorithm. ‘first_order’ only works on first-order code and will not produce references or closures. ‘higher_order’ works on all code, using references and closures.

Returns#

expr : tvm.relay.Expr

The transformed expression.
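Example#

A sketch of differentiating a small function. run_infer_type is used because the transform expects a type-checked expression, which matches common usage in the TVM test suite:

    from tvm import relay
    from tvm.relay.testing import gradient, run_infer_type

    x = relay.var("x", shape=(3,), dtype="float32")
    func = run_infer_type(relay.Function([x], x * x))

    # the result computes (x * x, (dx,)): the value paired with the input gradients
    back_func = gradient(func, mode='higher_order')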

tvm.relay.testing.make_nat_expr(prelude, n)[source]#

Given a non-negative Python integer, constructs a Relay expression representing that integer's value as a nat.

tvm.relay.testing.make_nat_value(prelude, n)[source]#

The inverse of count(): Given a non-negative Python integer, constructs a ConstructorValue representing that value as a nat.
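Example#

A round-trip sketch through the nat ADT. In recent TVM versions the nat definitions live in a separate standard library file, hence the import_from_std call; adjust for your version:

    import tvm
    from tvm.relay.testing import Prelude, count, make_nat_value

    mod = tvm.IRModule()
    prelude = Prelude(mod)
    prelude.mod.import_from_std("nat.rly")  # assumed location of the nat ADT

    three = make_nat_value(prelude, 3)  # ConstructorValue for s(s(s(z)))
    assert count(prelude, three) == 3   # back to a Python int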

tvm.relay.testing.run_as_python(expr, mod=None, target='llvm -keys=cpu -mtriple=x86_64-pc-linux-gnu')[source]#

Converts the given Relay expression into a Python script and executes it.

Note that closures will be returned as PackedFuncs.

Parameters#

expr : RelayExpr

tvm.relay.testing.to_python(expr, mod=None, target='llvm -keys=cpu -mtriple=x86_64-pc-linux-gnu')[source]#

Converts the given Relay expression into a Python script (as a Python AST object). For easiest debugging, import the astor package and use to_source().

Parameters#

expr : RelayExpr
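Example#

A sketch combining to_python with the astor package, as suggested above; astor is a third-party dependency:

    import astor
    from tvm import relay
    from tvm.relay.testing import to_python

    x = relay.var("x", shape=(2, 2))
    func = relay.Function([x], x + x)

    py_ast = to_python(func)        # a Python AST object
    print(astor.to_source(py_ast))  # render it as readable source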

A simple multilayer perceptron.

tvm.relay.testing.mlp.get_net(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype='float32')[source]#

Get a simple multilayer perceptron network.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

net : relay.Function

The dataflow.

tvm.relay.testing.mlp.get_workload(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype='float32')[source]#

Get benchmark workload for a simple multilayer perceptron.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

mod : tvm.IRModule

The relay module that contains an MLP network.

params : dict of str to NDArray

The parameters.
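Example#

A sketch of compiling and running the MLP workload with the graph executor; the llvm target and random input are arbitrary choices:

    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor
    from tvm.relay.testing import mlp

    mod, params = mlp.get_workload(batch_size=1)

    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)

    m = graph_executor.GraphModule(lib["default"](tvm.cpu()))
    m.set_input("data", np.random.uniform(size=(1, 1, 28, 28)).astype("float32"))
    m.run()
    print(m.get_output(0).numpy().shape)  # (1, 10) class scores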

Adapted from tornadomeet/ResNet. Original author: Wei Wu.

Implements the following paper:

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. “Identity Mappings in Deep Residual Networks”

tvm.relay.testing.resnet.get_net(batch_size, num_classes, num_layers=50, image_shape=(3, 224, 224), layout='NCHW', dtype='float32', **kwargs)[source]#

Adapted from tornadomeet/ResNet. Original author: Wei Wu.

tvm.relay.testing.resnet.get_workload(batch_size=1, num_classes=1000, num_layers=18, image_shape=(3, 224, 224), layout='NCHW', dtype='float32', **kwargs)[source]#

Get benchmark workload for ResNet.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

num_layers : int, optional

Number of layers

image_shape : tuple, optional

The input image shape

layout : str

The data layout for conv2d

dtype : str, optional

The data type

kwargs : dict

Extra arguments

Returns#

mod : tvm.IRModule

The relay module that contains a ResNet network.

params : dict of str to NDArray

The parameters.
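Example#

A sketch of selecting a ResNet variant through num_layers; 50 is an arbitrary choice:

    from tvm.relay.testing import resnet

    # ResNet-50 with bottleneck blocks in the default NCHW layout
    mod, params = resnet.get_workload(batch_size=1, num_layers=50)
    print(mod["main"])  # inspect the generated Relay function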

tvm.relay.testing.resnet.residual_unit(data, num_filter, stride, dim_match, name, bottle_neck=True, data_layout='NCHW', kernel_layout='IOHW')[source]#

Return a ResNet unit symbol for building ResNet.

Parameters#

data : str

Input data

num_filter : int

Number of output channels

bnf : int

Bottleneck channel factor with regard to num_filter

stride : tuple

Stride used in convolution

dim_match : bool

True if the number of input and output channels is the same; otherwise they differ

name : str

Base name of the operators

tvm.relay.testing.resnet.resnet(units, num_stages, filter_list, num_classes, data_shape, bottle_neck=True, layout='NCHW', dtype='float32')[source]#

Return ResNet Program.

Parameters#

units : list

Number of units in each stage

num_stages : int

Number of stages

filter_list : list

Channel size of each stage

num_classes : int

Output size of symbol

data_shape : tuple of int

The shape of input data.

bottle_neck : bool

Whether to apply the bottleneck transformation.

layout : str

The data layout for conv2d

dtype : str

The global data type.

Net of the DCGAN generator.

Adapted from: tqchen/mxnet-gan

Reference: Radford, Alec, Luke Metz, and Soumith Chintala. “Unsupervised representation learning with deep convolutional generative adversarial networks.” arXiv preprint arXiv:1511.06434 (2015).

tvm.relay.testing.dcgan.deconv2d(data, ishape, oshape, kshape, layout, name, stride=(2, 2))[source]#

A deconv layer that enlarges the feature map.

tvm.relay.testing.dcgan.deconv2d_bn_relu(data, prefix, **kwargs)[source]#

A block of deconv + batch norm + relu.

tvm.relay.testing.dcgan.get_net(batch_size, random_len=100, oshape=(3, 64, 64), ngf=128, code=None, layout='NCHW', dtype='float32')[source]#

Get the net of the DCGAN generator.

tvm.relay.testing.dcgan.get_workload(batch_size, oshape=(3, 64, 64), ngf=128, random_len=100, layout='NCHW', dtype='float32')[source]#

Get benchmark workload for a DCGAN generator

Parameters#

batch_size : int

The batch size used in the model

oshape : tuple, optional

The shape of output image, layout="CHW"

ngf : int, optional

The number of final feature maps in the generator

random_len : int, optional

The length of random input

layout : str, optional

The layout of conv2d transpose

dtype : str, optional

The data type

Returns#

mod : tvm.IRModule

The relay module that contains a DCGAN network.

params : dict of str to NDArray

The parameters.
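Example#

A sketch; the generator consumes one length-random_len noise vector per batch element:

    import numpy as np
    from tvm.relay.testing import dcgan

    mod, params = dcgan.get_workload(batch_size=1, oshape=(3, 64, 64), random_len=100)

    # the expected generator input has shape (batch_size, random_len)
    noise = np.random.normal(size=(1, 100)).astype("float32")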

Port of NNVM version of MobileNet to Relay.

tvm.relay.testing.mobilenet.conv_block(data, name, channels, kernel_size=(3, 3), strides=(1, 1), padding=(1, 1), epsilon=1e-05, layout='NCHW')[source]#

Helper function to construct a conv-bn-relu block

tvm.relay.testing.mobilenet.get_workload(batch_size=1, num_classes=1000, image_shape=(3, 224, 224), dtype='float32', layout='NCHW')[source]#

Get benchmark workload for MobileNet.

Parameters#

batch_size : int, optional

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape; must agree with layout

dtype : str, optional

The data type

layout : str, optional

The data layout of image_shape and of the operators that consume it

Returns#

mod : tvm.IRModule

The relay module that contains a MobileNet network.

params : dict of str to NDArray

The parameters.

tvm.relay.testing.mobilenet.mobile_net(num_classes=1000, data_shape=(1, 3, 224, 224), dtype='float32', alpha=1.0, is_shallow=False, layout='NCHW')[source]#

Function to construct a MobileNet

tvm.relay.testing.mobilenet.separable_conv_block(data, name, depthwise_channels, pointwise_channels, kernel_size=(3, 3), downsample=False, padding=(1, 1), epsilon=1e-05, layout='NCHW', dtype='float32')[source]#

Helper function to get a separable conv block

Implementation of a Long Short-Term Memory (LSTM) cell.

Adapted from: https://gist.github.com/merrymercy/5eb24e3b019f84200645bd001e9caae9

tvm.relay.testing.lstm.get_net(iterations, num_hidden, batch_size=1, dtype='float32')[source]#

Constructs an unrolled RNN with LSTM cells

tvm.relay.testing.lstm.get_workload(iterations, num_hidden, batch_size=1, dtype='float32')[source]#

Get benchmark workload for an LSTM RNN.

Parameters#

iterations : int

The number of iterations in the desired LSTM RNN.

num_hidden : int

The size of the hidden state

batch_size : int, optional (default 1)

The batch size used in the model

dtype : str, optional (default "float32")

The data type

Returns#

mod : tvm.IRModule

The relay module that contains an LSTM network.

params : dict of str to NDArray

The parameters.
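Example#

A sketch of requesting an RNN unrolled for a fixed number of steps:

    from tvm.relay.testing import lstm

    # five unrolled LSTM steps with a hidden state of size 32
    mod, params = lstm.get_workload(iterations=5, num_hidden=32)
    print(mod["main"])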

tvm.relay.testing.lstm.lstm_cell(num_hidden, batch_size=1, dtype='float32', name='')[source]#

Long Short-Term Memory (LSTM) network cell.

Parameters#

num_hidden : int

Number of units in output symbol.

batch_size : int

Batch size (length of states).

Returns#

result : tvm.relay.Function

A Relay function that evaluates an LSTM cell. The function takes in a tensor of input data, a tuple of two states, and weights and biases for dense operations on the inputs and on the state. It returns a tuple with two members, an output tensor and a tuple of two new states.

Inception V3, suitable for images of size around 299 x 299.

Reference: Szegedy, Christian, et al. “Rethinking the Inception Architecture for Computer Vision.” arXiv preprint arXiv:1512.00567 (2015).

Adapted from apache/incubator-mxnet, example/image-classification/symbols/inception-v3.py.

tvm.relay.testing.inception_v3.get_net(batch_size, num_classes, image_shape, dtype)[source]#

Get an Inception V3 network.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

net : relay.Function

The dataflow.

tvm.relay.testing.inception_v3.get_workload(batch_size=1, num_classes=1000, image_shape=(3, 299, 299), dtype='float32')[source]#

Get benchmark workload for Inception V3.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

mod : tvm.IRModule

The relay module that contains an Inception V3 network.

params : dict of str to NDArray

The parameters.

Symbol of SqueezeNet

Reference: Iandola, Forrest N., et al. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size." (2016).

tvm.relay.testing.squeezenet.get_net(batch_size, image_shape, num_classes, version, dtype)[source]#

Get symbol of SqueezeNet

Parameters#

batch_size : int

The batch size used in the model

image_shape : tuple, optional

The input image shape

num_classes : int

The number of classification results

version : str, optional

"1.0" or "1.1" of SqueezeNet

tvm.relay.testing.squeezenet.get_workload(batch_size=1, num_classes=1000, version='1.0', image_shape=(3, 224, 224), dtype='float32')[source]#

Get benchmark workload for SqueezeNet

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

version : str, optional

"1.0" or "1.1" of SqueezeNet

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

mod : tvm.IRModule

The relay module that contains a SqueezeNet network.

params : dict of str to NDArray

The parameters.

References:

Simonyan, Karen, and Andrew Zisserman. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 (2014).

tvm.relay.testing.vgg.get_classifier(input_data, num_classes)[source]#

Get VGG classifier layers as fc layers.

tvm.relay.testing.vgg.get_feature(internal_layer, layers, filters, batch_norm=False)[source]#

Get VGG feature body as stacks of convolutions.

tvm.relay.testing.vgg.get_net(batch_size, image_shape, num_classes, dtype, num_layers=11, batch_norm=False)[source]#

Parameters#

batch_size : int

The batch size used in the model

image_shape : tuple, optional

The input image shape

num_classes : int, optional

Number of classes

dtype : str, optional

The data type

num_layers : int

Number of layers for the VGG variant. Options are 11, 13, 16, 19.

batch_norm : bool, default False

Use batch normalization.

tvm.relay.testing.vgg.get_workload(batch_size, num_classes=1000, image_shape=(3, 224, 224), dtype='float32', num_layers=11, batch_norm=False)[source]#

Get benchmark workload for VGG nets.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

num_layers : int

Number of layers for the VGG variant. Options are 11, 13, 16, 19.

batch_norm : bool

Use batch normalization.

Returns#

mod : tvm.IRModule

The relay module that contains a VGG network.

params : dict of str to NDArray

The parameters.

Port of the MXNet version of DenseNet to Relay. Adapted from apache/incubator-mxnet.

tvm.relay.testing.densenet._make_dense_block(data, num_layers, bn_size, growth_rate, index)[source]#

Makes a block of dense layers of the specified size.

tvm.relay.testing.densenet._make_dense_layer(data, growth_rate, bn_size, index)[source]#

A single DenseNet layer.

tvm.relay.testing.densenet._make_dense_net(num_init_features, growth_rate, block_config, data_shape, data_dtype, bn_size=4, classes=1000)[source]#

Builds up a DenseNet.

tvm.relay.testing.densenet._make_transition(data, num_output_features, index)[source]#

Transition between layers.

tvm.relay.testing.densenet.get_workload(densenet_size=121, classes=1000, batch_size=4, image_shape=(3, 224, 224), dtype='float32')[source]#

Gets benchmark workload for DenseNet.

Parameters#

densenet_size : int, optional (default 121)

Parameter for the network size. The supported sizes are 121, 161, 169, and 201.

classes : int, optional (default 1000)

The number of classes.

batch_size : int, optional (default 4)

The batch size for the network.

image_shape : shape, optional (default (3, 224, 224))

The shape of the input data.

dtype : data type, optional (default 'float32')

The data type of the input data.

Returns#

mod : tvm.IRModule

The relay module that contains a DenseNet network.

params : dict of str to NDArray

The benchmark parameters.