tvm.relay.testing#

a simple multilayer perceptron

tvm.relay.testing.mlp.get_net(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype='float32')[source]#

Get a simple multilayer perceptron network.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

net : relay.Function

The dataflow.

tvm.relay.testing.mlp.get_workload(batch_size, num_classes=10, image_shape=(1, 28, 28), dtype='float32')[source]#

Get benchmark workload for a simple multilayer perceptron.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

mod : tvm.IRModule

The relay module that contains an MLP network.

params : dict of str to NDArray

The parameters.

Adapted from tornadomeet/ResNet. Original author: Wei Wu.

Implements the following paper:

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Identity Mappings in Deep Residual Networks"

tvm.relay.testing.resnet.get_net(batch_size, num_classes, num_layers=50, image_shape=(3, 224, 224), layout='NCHW', dtype='float32', **kwargs)[source]#

Adapted from tornadomeet/ResNet. Original author: Wei Wu.

tvm.relay.testing.resnet.get_workload(batch_size=1, num_classes=1000, num_layers=18, image_shape=(3, 224, 224), layout='NCHW', dtype='float32', **kwargs)[source]#

Get benchmark workload for ResNet.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

num_layers : int, optional

Number of layers

image_shape : tuple, optional

The input image shape

layout : str

The data layout for conv2d

dtype : str, optional

The data type

kwargs : dict

Extra arguments

Returns#

mod : tvm.IRModule

The relay module that contains a ResNet network.

params : dict of str to NDArray

The parameters.

tvm.relay.testing.resnet.residual_unit(data, num_filter, stride, dim_match, name, bottle_neck=True, data_layout='NCHW', kernel_layout='IOHW')[source]#

Return a ResNet unit symbol for building ResNet.

Parameters#

data : str

Input data

num_filter : int

Number of output channels

bnf : int

Bottleneck channels factor with regard to num_filter

stride : tuple

Stride used in convolution

dim_match : bool

True means the channel number between input and output is the same, otherwise it differs

name : str

Base name of the operators

tvm.relay.testing.resnet.resnet(units, num_stages, filter_list, num_classes, data_shape, bottle_neck=True, layout='NCHW', dtype='float32')[source]#

Return a ResNet program.

Parameters#

units : list

Number of units in each stage

num_stages : int

Number of stages

filter_list : list

Channel size of each stage

num_classes : int

Output size of symbol

data_shape : tuple of int

The shape of input data.

bottle_neck : bool

Whether to apply the bottleneck transformation.

layout : str

The data layout for conv2d

dtype : str

The global data type.

Generator network of DCGAN.

Adapted from: tqchen/mxnet-gan

Reference: Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).

tvm.relay.testing.dcgan.deconv2d(data, ishape, oshape, kshape, layout, name, stride=(2, 2))[source]#

A deconv layer that enlarges the feature map.

tvm.relay.testing.dcgan.deconv2d_bn_relu(data, prefix, **kwargs)[source]#

A block of deconv + batch norm + relu.

tvm.relay.testing.dcgan.get_net(batch_size, random_len=100, oshape=(3, 64, 64), ngf=128, code=None, layout='NCHW', dtype='float32')[source]#

Get the DCGAN generator network.

tvm.relay.testing.dcgan.get_workload(batch_size, oshape=(3, 64, 64), ngf=128, random_len=100, layout='NCHW', dtype='float32')[source]#

Get benchmark workload for a DCGAN generator.

Parameters#

batch_size : int

The batch size used in the model

oshape : tuple, optional

The shape of the output image, layout="CHW"

ngf : int, optional

The number of final feature maps in the generator

random_len : int, optional

The length of the random input

layout : str, optional

The layout of conv2d transpose

dtype : str, optional

The data type

Returns#

mod : tvm.IRModule

The relay module that contains a DCGAN network.

params : dict of str to NDArray

The parameters.

Port of the NNVM version of MobileNet to Relay.

tvm.relay.testing.mobilenet.conv_block(data, name, channels, kernel_size=(3, 3), strides=(1, 1), padding=(1, 1), epsilon=1e-05, layout='NCHW')[source]#

Helper function to construct a conv-bn-relu block.

tvm.relay.testing.mobilenet.get_workload(batch_size=1, num_classes=1000, image_shape=(3, 224, 224), dtype='float32', layout='NCHW')[source]#

Get benchmark workload for MobileNet.

Parameters#

batch_size : int, optional

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape; must be consistent with layout

dtype : str, optional

The data type

layout : str, optional

The data layout used by image_shape and by the operators

Returns#

mod : tvm.IRModule

The relay module that contains a MobileNet network.

params : dict of str to NDArray

The parameters.

tvm.relay.testing.mobilenet.mobile_net(num_classes=1000, data_shape=(1, 3, 224, 224), dtype='float32', alpha=1.0, is_shallow=False, layout='NCHW')[source]#

Function to construct a MobileNet.

tvm.relay.testing.mobilenet.separable_conv_block(data, name, depthwise_channels, pointwise_channels, kernel_size=(3, 3), downsample=False, padding=(1, 1), epsilon=1e-05, layout='NCHW', dtype='float32')[source]#

Helper function to get a separable conv block.

Implementation of a Long Short-Term Memory (LSTM) cell.

Adapted from: https://gist.github.com/merrymercy/5eb24e3b019f84200645bd001e9caae9

tvm.relay.testing.lstm.get_net(iterations, num_hidden, batch_size=1, dtype='float32')[source]#

Constructs an unrolled RNN with LSTM cells.

tvm.relay.testing.lstm.get_workload(iterations, num_hidden, batch_size=1, dtype='float32')[source]#

Get benchmark workload for an LSTM RNN.

Parameters#

iterations : int

The number of iterations in the desired LSTM RNN.

num_hidden : int

The size of the hidden state

batch_size : int, optional (default 1)

The batch size used in the model

dtype : str, optional (default "float32")

The data type

Returns#

mod : tvm.IRModule

The relay module that contains an LSTM network.

params : dict of str to NDArray

The parameters.

tvm.relay.testing.lstm.lstm_cell(num_hidden, batch_size=1, dtype='float32', name='')[source]#

Long Short-Term Memory (LSTM) network cell.

Parameters#

num_hidden : int

Number of units in output symbol.

batch_size : int

Batch size (length of states).

Returns#

result : tvm.relay.Function

A Relay function that evaluates an LSTM cell. The function takes in a tensor of input data, a tuple of two states, and weights and biases for dense operations on the inputs and on the state. It returns a tuple with two members, an output tensor and a tuple of two new states.

Inception V3, suitable for images of size around 299 x 299.

Reference: Szegedy, Christian, et al. "Rethinking the Inception Architecture for Computer Vision." arXiv preprint arXiv:1512.00567 (2015).

Adapted from apache/incubator-mxnet

example/image-classification/symbols/inception-v3.py

tvm.relay.testing.inception_v3.get_net(batch_size, num_classes, image_shape, dtype)[source]#

Get an Inception V3 network.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

net : relay.Function

The dataflow.

tvm.relay.testing.inception_v3.get_workload(batch_size=1, num_classes=1000, image_shape=(3, 299, 299), dtype='float32')[source]#

Get benchmark workload for Inception V3.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

mod : tvm.IRModule

The relay module that contains an Inception V3 network.

params : dict of str to NDArray

The parameters.

Symbol of SqueezeNet.

Reference: Iandola, Forrest N., et al. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size." (2016).

tvm.relay.testing.squeezenet.get_net(batch_size, image_shape, num_classes, version, dtype)[source]#

Get symbol of SqueezeNet.

Parameters#

batch_size : int

The batch size used in the model

image_shape : tuple, optional

The input image shape

num_classes : int

The number of classification results

version : str, optional

"1.0" or "1.1" of SqueezeNet

tvm.relay.testing.squeezenet.get_workload(batch_size=1, num_classes=1000, version='1.0', image_shape=(3, 224, 224), dtype='float32')[source]#

Get benchmark workload for SqueezeNet.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

version : str, optional

"1.0" or "1.1" of SqueezeNet

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

Returns#

mod : tvm.IRModule

The relay module that contains a SqueezeNet network.

params : dict of str to NDArray

The parameters.

References:

Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).

tvm.relay.testing.vgg.get_classifier(input_data, num_classes)[source]#

Get VGG classifier layers as fc layers.

tvm.relay.testing.vgg.get_feature(internal_layer, layers, filters, batch_norm=False)[source]#

Get VGG feature body as stacks of convolutions.

tvm.relay.testing.vgg.get_net(batch_size, image_shape, num_classes, dtype, num_layers=11, batch_norm=False)[source]#

Parameters#

batch_size : int

The batch size used in the model

image_shape : tuple, optional

The input image shape

num_classes : int, optional

Number of classes

dtype : str, optional

The data type

num_layers : int

Number of layers for the variant of VGG. Options are 11, 13, 16, 19.

batch_norm : bool, default False

Use batch normalization.

tvm.relay.testing.vgg.get_workload(batch_size, num_classes=1000, image_shape=(3, 224, 224), dtype='float32', num_layers=11, batch_norm=False)[source]#

Get benchmark workload for VGG nets.

Parameters#

batch_size : int

The batch size used in the model

num_classes : int, optional

Number of classes

image_shape : tuple, optional

The input image shape

dtype : str, optional

The data type

num_layers : int

Number of layers for the variant of VGG. Options are 11, 13, 16, 19.

batch_norm : bool

Use batch normalization.

Returns#

mod : tvm.IRModule

The relay module that contains a VGG network.

params : dict of str to NDArray

The parameters.

Port of the MXNet version of DenseNet to Relay (apache/incubator-mxnet).

tvm.relay.testing.densenet._make_dense_block(data, num_layers, bn_size, growth_rate, index)[source]#

Makes a block of dense layers of the specified size.

tvm.relay.testing.densenet._make_dense_layer(data, growth_rate, bn_size, index)[source]#

A single DenseNet layer.

tvm.relay.testing.densenet._make_dense_net(num_init_features, growth_rate, block_config, data_shape, data_dtype, bn_size=4, classes=1000)[source]#

Builds up a DenseNet.

tvm.relay.testing.densenet._make_transition(data, num_output_features, index)[source]#

Transition between layers.

tvm.relay.testing.densenet.get_workload(densenet_size=121, classes=1000, batch_size=4, image_shape=(3, 224, 224), dtype='float32')[source]#

Get benchmark workload for DenseNet.

Parameters#

densenet_size : int, optional (default 121)

Parameter for the network size. The supported sizes are 121, 161, 169, and 201.

classes : int, optional (default 1000)

The number of classes.

batch_size : int, optional (default 4)

The batch size for the network.

image_shape : shape, optional (default (3, 224, 224))

The shape of the input data.

dtype : data type, optional (default 'float32')

The data type of the input data.

Returns#

mod : tvm.IRModule

The relay module that contains a DenseNet network.

params : dict of str to NDArray

The benchmark parameters.