vta.top.graphpack#

A Relay implementation of graph packing.

Exceptions#

BT

Common base class for all non-exit exceptions.

Classes#

ExprDeviceAnnot

Visitor to perform graph annotation on an AST.

ExprLocator

Visitor to locate ops on an AST.

ExprPack

Visitor to perform graph packing on an AST.

Functions#

_channel_const_match(channel_length, cfactor_out)

Round the channel length up if the value is not divisible by cfactor_out.

_const_shape_match(data, dshape, cfactor_out)

Pad the constant if shape[0] is not divisible by cfactor_out.

_get_tensor_shape(node)

Get node shape.

_get_tensor_type(node)

Get node type.

_operator_idx_inc(expr, count_meta, operator_current_idx)

Increase the operator index.

_pack_batch_channel(data, dshape, bfactor, cfactor)

Pack the data channel dimension.

_pack_const(data, dshape, dtype, bfactor, cfactor)

Pack a constant parameter.

_pack_weight(data, dshape, cfactor)

Pack the weight into packed format.

_pack_weight_conv2d_transpose(data, dshape, cfactor)

Pack a conv2d_transpose weight into packed format.

_to_shape(shape)

Convert a shape into a tuple.

_unpack_batch_channel(data, old_shape[, unpack_transpose])

Unpack the data channel dimension.

_weight_shape_match(data, dshape, channels, cfactor_out)

Pad the weight if shape[0] is not divisible by cfactor_out.

_weight_shape_match_transpose(data, dshape, channels, ...)

Pad the weight if shape[1] is not divisible by cfactor_out.

get_subgraph(expr, start_name, stop_name, ...)

We assume stop_name only appears once for simplicity.

graph_pack(expr, bfactor, cfactor, weight_bits[, ...])

Pack the graph into a batch- and channel-packed format.

run_opt_pass(expr, opt_pass)

Execute a Relay pass.

Module Contents#

exception vta.top.graphpack.BT[source]#

Bases: Exception

Common base class for all non-exit exceptions.

class vta.top.graphpack.ExprDeviceAnnot(start=-1, end=-1)[source]#

Bases: tvm.relay.ExprMutator

Visitor to perform graph annotation on an AST.

Parameters#

start: int

The start location of the region to run on VTA (inclusive).

end: int

The end location of the region to run on VTA (exclusive).

Returns#

None

is_float_op(call)[source]#

Check whether this op is a float op. In general a float op's output dtype is float; a special case is the float-to-int cast, which follows the op sequence multiply(float) -> round(float) -> clip(float) -> cast(int).

visit_call(call)[source]#

Visit the children.

cast[source]#
counter[source]#
cpu_dev[source]#
end[source]#
ext_dev[source]#
start[source]#
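
A sketch of how this mutator can be driven on its own; graph_pack constructs it internally when device_annot=True, and func here is an assumed relay.Function:

    # Hypothetical standalone use: mark calls in [start, end) to run on VTA.
    annot = ExprDeviceAnnot(start=0, end=42)
    annotated_func = annot.visit(func)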
class vta.top.graphpack.ExprLocator[source]#

Bases: tvm.relay.ExprMutator

Visitor to locate ops on an AST.

visit_call(call)[source]#

Visit the children.

counter[source]#
op2nodes[source]#
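
A sketch of typical use; the exact keying of op2nodes is an implementation detail, so treat this as illustrative:

    # Hypothetical use: index call nodes by operator before annotation.
    locator = ExprLocator()
    locator.visit(expr)        # expr: an assumed relay expression to scan
    nodes = locator.op2nodes   # maps operators to the call nodes located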
class vta.top.graphpack.ExprPack(bfactor, cfactor, weight_bits)[source]#

Bases: tvm.relay.ExprMutator

Visitor to perform graph packing on an AST.

visit_call(call)[source]#

Visit the children.

add[source]#
bfactor[source]#
bias_add[source]#
bitpack_end[source]#
bitpack_start[source]#
cfactor[source]#
conv2d[source]#
conv2d_transpose[source]#
multiply[source]#
number_of_conv2d = 0[source]#
pad[source]#
reshape[source]#
start_pack = False[source]#
unpack_transpose = True[source]#
upsampling[source]#
weight_bits[source]#
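
A sketch of standalone use; graph_pack normally constructs this after isolating the packable subgraph, and the factor values below are illustrative:

    # Hypothetical standalone use on an already-isolated subgraph `expr`.
    packer = ExprPack(bfactor=1, cfactor=16, weight_bits=8)
    packed_expr = packer.visit(expr)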
vta.top.graphpack._channel_const_match(channel_length, cfactor_out)[source]#

Round the channel length up if the value is not divisible by cfactor_out.
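
A sketch of the rounding logic, assuming the function returns both the pad amount and the rounded-up length:

    def _channel_const_match(channel_length, cfactor_out):
        # Round channel_length up to the next multiple of cfactor_out.
        diff = int(channel_length) % cfactor_out
        if diff != 0:
            diff = cfactor_out - diff
            channel_length = channel_length + diff
        return diff, channel_length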

vta.top.graphpack._const_shape_match(data, dshape, cfactor_out)[source]#

Pad the constant if shape[0] is not divisible by cfactor_out.

vta.top.graphpack._get_tensor_shape(node)[source]#

Get node shape.

vta.top.graphpack._get_tensor_type(node)[source]#

Get node type.

vta.top.graphpack._operator_idx_inc(expr, count_meta, operator_current_idx)[source]#

Increase the operator index.

vta.top.graphpack._pack_batch_channel(data, dshape, bfactor, cfactor)[source]#

Pack the data channel dimension.
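
Conceptually this is a reshape followed by a transpose, taking NCHW data into the NCHWnc layout VTA expects. A minimal sketch, assuming the caller has already checked divisibility by bfactor and cfactor:

    from tvm.relay import op

    def _pack_batch_channel(data, dshape, bfactor, cfactor):
        # NCHW -> (N//b, b, C//c, c, H, W) -> (N//b, C//c, H, W, b, c)
        data = op.reshape(
            data,
            newshape=(int(dshape[0]) // bfactor, bfactor,
                      int(dshape[1]) // cfactor, cfactor,
                      int(dshape[2]), int(dshape[3])))
        data = op.transpose(data, axes=(0, 2, 4, 5, 1, 3))
        return data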

vta.top.graphpack._pack_const(data, dshape, dtype, bfactor, cfactor)[source]#

Pack a constant parameter.
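
A sketch of the constant-packing step, assuming a bias-like (C, H, W) constant whose channel axis is split by cfactor and whose batch-factor axis is then broadcast to bfactor:

    from tvm.relay import op

    def _pack_const(data, dshape, dtype, bfactor, cfactor):
        # (C, H, W) -> (C//c, c, H, W, 1) -> (C//c, H, W, 1, c),
        # then broadcast the batch-factor axis from 1 to bfactor.
        data = op.reshape(
            data,
            newshape=(int(dshape[0]) // cfactor, cfactor,
                      int(dshape[1]), int(dshape[2]), 1))
        data = op.transpose(data, axes=(0, 2, 3, 4, 1))
        data = op.broadcast_to(
            data,
            shape=(int(dshape[0]) // cfactor, int(dshape[1]),
                   int(dshape[2]), bfactor, cfactor))
        return data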

vta.top.graphpack._pack_weight(data, dshape, cfactor)[source]#

Pack the weight into packed format.
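
Weight packing mirrors the data packing, splitting both the output- and input-channel axes of an OIHW kernel. A sketch, assuming both axes are multiples of cfactor:

    from tvm.relay import op

    def _pack_weight(data, dshape, cfactor):
        # OIHW -> (O//c, c, I//c, c, H, W) -> (O//c, I//c, H, W, c, c)
        data = op.reshape(
            data,
            newshape=(int(dshape[0]) // cfactor, cfactor,
                      int(dshape[1]) // cfactor, cfactor,
                      int(dshape[2]), int(dshape[3])))
        data = op.transpose(data, axes=(0, 2, 4, 5, 1, 3))
        return data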

vta.top.graphpack._pack_weight_conv2d_transpose(data, dshape, cfactor)[source]#

Pack a conv2d_transpose weight into packed format.

vta.top.graphpack._to_shape(shape)[source]#

Convert a shape into a tuple.
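
In sketch form this just casts the IntImm entries of a Relay shape to plain Python integers:

    def _to_shape(shape):
        # Relay shapes carry tvm.tir.IntImm entries; cast them to ints.
        return tuple(int(sh) for sh in shape)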

vta.top.graphpack._unpack_batch_channel(data, old_shape, unpack_transpose=False)[source]#

Unpack the data channel dimension.
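
The inverse of the packing above: optionally undo the transpose, then reshape back to the original layout. A sketch:

    from tvm.relay import op

    def _unpack_batch_channel(data, old_shape, unpack_transpose=False):
        # (N//b, C//c, H, W, b, c) -> NCHW
        if unpack_transpose:
            data = op.transpose(data, axes=(0, 4, 1, 5, 2, 3))
        data = op.reshape(data, newshape=old_shape)
        return data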

vta.top.graphpack._weight_shape_match(data, dshape, channels, cfactor_out, transpose=False)[source]#

Pad the weight if shape[0] is not divisible by cfactor_out.
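
A sketch of the padding step for an OIHW kernel, assuming the function also grows the channels count to match (the transpose parameter is omitted here for brevity):

    from tvm.relay import op

    def _weight_shape_match(data, dshape, channels, cfactor_out):
        # Zero-pad the output-channel axis up to a multiple of cfactor_out.
        diff = int(dshape[0]) % cfactor_out
        if diff != 0:
            pad = cfactor_out - diff
            data = op.nn.pad(data, pad_width=((0, pad), (0, 0), (0, 0), (0, 0)))
            dshape = (int(dshape[0]) + pad,) + tuple(int(x) for x in dshape[1:])
            channels = int(channels) + pad
        return data, dshape, channels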

vta.top.graphpack._weight_shape_match_transpose(data, dshape, channels, cfactor_out)[source]#

Pad the weight if shape[1] is not divisible by cfactor_out.

vta.top.graphpack.get_subgraph(expr, start_name, stop_name, start_name_idx, stop_name_idx, count_meta)[source]#

We assume stop_name only appears once for simplicity. This constraint will be lifted in the future. bitpack_start and bitpack_end are both inclusive.

vta.top.graphpack.graph_pack(expr, bfactor, cfactor, weight_bits, start_name='nn.max_pool2d', stop_name='nn.global_avg_pool2d', start_name_idx=None, stop_name_idx=None, count_meta=False, device_annot=False, annot_start_name='nn.conv2d', annot_end_name='annotation.stop_fusion')[source]#

Pack the graph into a batch- and channel-packed format.

Parameters#

expr: relay.Expr

The input program.

bfactor: int

The packing factor in the batch dimension.

cfactor: int

The packing factor in the channel dimension.

weight_bits: int

The bit-width of the weights.

start_name: str, optional

Start packing from a certain known node when start_name_idx is None.

stop_name: str, optional

Stop packing at a certain known node when stop_name_idx is None.

start_name_idx: int, optional

When start_name_idx is not None, start packing only when the node name equals start_name and the node index equals start_name_idx.

stop_name_idx: int, optional

When stop_name_idx is not None, stop packing only when the node name equals stop_name and the node index equals stop_name_idx.

count_meta: boolean, optional

When count_meta is False, the operator-index logic does not count meta nodes of type 'relay.expr.Constant', and start_name_idx and stop_name_idx follow the indexing from 'expr.astext(show_meta_data=False)'. When count_meta is True, meta nodes are counted as well.

device_annot: boolean, optional

Whether to annotate the device type.

annot_start_name: str, optional

The device-annotation start node, from which nodes are marked as ext_dev.

annot_end_name: str, optional

The device-annotation end node, after which nodes are marked as 'cpu'.

Returns#

expr: Expr

The transformed expression.
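
For reference, a typical invocation adapted from the VTA tutorials; mod is assumed to be an already-quantized Relay module, and the start/stop names depend on the network:

    import vta
    from vta.top import graph_pack

    env = vta.get_env()
    relay_prog = graph_pack(
        mod["main"],          # assumed: a quantized Relay module
        env.BATCH,            # bfactor: batch packing factor
        env.BLOCK_OUT,        # cfactor: channel packing factor
        env.WGT_WIDTH,        # weight_bits
        start_name="nn.max_pool2d",
        stop_name="nn.global_avg_pool2d",
    )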

vta.top.graphpack.run_opt_pass(expr, opt_pass)[source]#

Execute a Relay pass.
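
A minimal sketch of what this helper does, assuming the standard TVM pass infrastructure:

    import tvm
    from tvm import relay

    def run_opt_pass(expr, opt_pass):
        # Wrap the expression in a module, run the pass, and unwrap.
        mod = tvm.IRModule.from_expr(expr)
        mod = opt_pass(mod)
        entry = mod["main"]
        return entry if isinstance(expr, relay.Function) else entry.body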