tvm.topi#

TVM Operator Inventory.

TOPI is the operator collection library for TVM, providing sugar for constructing compute declarations as well as optimized schedules.

Some of the schedule functions may have been specially optimized for a specific workload.

Exceptions:

InvalidShapeError

Invalid shape for a topi function.

Classes:

Cast(dtype, value[, span])

Cast expression.

PrimExpr()

Base class of all primitive expressions.

Functions:

abs(x)

Take absolute value of the input x, element-wise.

acos(x)

Take arc cos of input x.

acosh(x)

Take arc cosh of input x.

add(lhs, rhs)

Addition with auto-broadcasting

adv_index(data, indices)

Numpy style indexing with tensors.

all(data[, axis, keepdims])

Logical AND of array elements over a given axis or a list of axes

any(data[, axis, keepdims])

Logical OR of array elements over a given axis or a list of axes

arange(start[, stop, step, dtype])

Creates a tensor with evenly spaced values within a given interval.

argmax(data[, axis, keepdims, select_last_index])

Returns the indices of the maximum values along an axis.

argmin(data[, axis, keepdims, select_last_index])

Returns the indices of the minimum values along an axis.

argsort(data[, valid_count, axis, ...])

Performs sorting along the given axis and returns an array of indices having the same shape as an input array that index data in sorted order.

argwhere(output_shape, condition)

Find the indices of elements of a tensor that are non-zero.

asin(x)

Take arc sin of input x.

asinh(x)

Take arc sinh of input x.

atan(x)

Take atan of input x.

atanh(x)

Take atanh of input x.

binary_search(ib, sequence_offset, ...)

Common IR generator for binary search used by CPU and GPU backends.

bitwise_and(lhs, rhs)

Compute element-wise bitwise and of data.

bitwise_not(data)

Compute element-wise bitwise not of data.

bitwise_or(lhs, rhs)

Compute element-wise bitwise or of data.

bitwise_xor(lhs, rhs)

Compute element-wise bitwise xor of data.

broadcast_to(data, shape)

Broadcast the src to the target shape

cast(x, dtype[, span])

Cast input to specified data type.

ceil(x)

Take ceil of input x.

ceil_log2(x)

Compute integer ceil log2 with a special code path for Vulkan, since SPIR-V does not support log2 on fp64.

clip(x, a_min, a_max)

Clip (limit) the values in an array.

collapse_sum(data, target_shape)

Return a summation of data to the given shape.

concatenate(a_tuple[, axis])

Join a sequence of arrays along an existing axis.

const_vector(vector[, name])

Convert a const numpy 1-dimensional vector to a TVM tensor.

cos(x)

Take cos of input x.

cosh(x)

Take cosh of input x.

cumprod(data[, axis, dtype, exclusive])

Numpy style cumprod op.

cumsum(data[, axis, dtype, exclusive])

Numpy style cumsum op.

decl_buffer(shape[, dtype, name, data, ...])

Declare a new symbolic buffer.

dft(re_data, im_data, inverse)

Computes the discrete Fourier transform of input (calculation along the last axis).

div(a, b[, span])

Compute a / b as in C/C++ semantics.

divide(lhs, rhs)

Division with auto-broadcasting

dynamic_strided_slice(a, begin, end, ...)

Slice of an array.

einsum(subscripts, *operand)

Evaluates the Einstein summation convention on the operands.

elemwise_sum(xs)

Perform element-wise sum on inputs

equal(lhs, rhs)

Compute (lhs==rhs) with auto-broadcasting

erf(x)

Take gauss error function of input x.

erf_legalize(attrs, inputs, types)

Legalizes ERF op.

exp(x)

Take exponential of input x.

expand_dims(a, axis[, num_newaxis])

Expand the shape of an array.

expand_like(a, shape_like, axis)

Expand an input array with the shape of second array.

extern(shape, inputs, fcompute[, name, ...])

Compute several tensors via an extern function.

fast_erf(x)

Take gauss error function of input x using fast_erf implementation.

fast_exp(x)

Take exponential of input x using fast_exp implementation

fast_tanh(x)

Take hyperbolic tangent of input x using fast_tanh implementation

fixed_point_multiply(x, multiplier, shift)

Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits

fixed_point_multiply_per_axis(x, y, lshift, ...)

Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits

flip(a[, axis])

Flip/reverse elements of an array in a particular axis.

floor(x)

Take floor of input x.

floor_divide(lhs, rhs)

Floor division with auto-broadcasting

floor_mod(lhs, rhs)

Floor modulus with auto-broadcasting

floordiv(a, b[, span])

Compute the floordiv of two expressions.

floormod(a, b[, span])

Compute the floormod of two expressions.

full(shape, dtype, fill_value)

Fill tensor with fill_value

full_like(x, fill_value)

Construct a tensor with the same shape as the input tensor, then fill it with fill_value.

gather(data, axis, indices)

Gather values along given axis from given indices.

gather_nd(a, indices)

Gather elements from an n-dimensional array.

get_const_tuple(in_tuple)

Verifies input tuple is IntImm or Var, returns tuple of int or Var.

greater(lhs, rhs)

Compute (lhs>rhs) with auto-broadcasting

greater_equal(lhs, rhs)

Compute (lhs>=rhs) with auto-broadcasting

hybrid_argwhere_1d(output_shape, condition)

Find the indices of elements of a 1-D tensor that are non-zero.

hybrid_argwhere_2d(output_shape, condition)

Find the indices of elements of a 2-D tensor that are non-zero.

hybrid_argwhere_3d(output_shape, condition)

Find the indices of elements of a 3-D tensor that are non-zero.

hybrid_argwhere_4d(output_shape, condition)

Find the indices of elements of a 4-D tensor that are non-zero.

hybrid_argwhere_5d(output_shape, condition)

Find the indices of elements of a 5-D tensor that are non-zero.

identity(x)

Take identity of input x.

invert_permutation(data)

Computes the inverse permutation of data.

isfinite(x)

Check if value of x is finite, element-wise.

isinf(x)

Check if value of x is infinite, element-wise.

isnan(x)

Check if value of x is NaN, element-wise.

layout_transform(array, src_layout, dst_layout)

Transform the layout according to src_layout and dst_layout

left_shift(lhs, rhs)

Left shift with auto-broadcasting

less(lhs, rhs)

Compute (lhs<rhs) with auto-broadcasting

less_equal(lhs, rhs)

Compute (lhs<=rhs) with auto-broadcasting

log(x)

Take logarithm of input x.

log10(x)

Take logarithm to the base 10 of input x.

log2(x)

Take logarithm to the base 2 of input x.

logical_and(lhs, rhs)

Compute element-wise logical and of data.

logical_not(data)

Compute element-wise logical not of data.

logical_or(lhs, rhs)

Compute element-wise logical or of data.

logical_xor(lhs, rhs)

Compute element-wise logical xor of data.

make_idx(b, e, s, z, i)

Return the array position in the selection that corresponds to an array position in the full array.

matmul(a, b[, transp_a, transp_b])

Creates an operation that calculates a matrix multiplication (row-major notation): A(i, k) * B(k, j) if transp_a == transp_b; the usual transposed combinations otherwise.

matrix_set_diag(data, diagonal[, k, align])

Returns a tensor with the diagonals of input tensor replaced with the provided diagonal values.

max(data[, axis, keepdims])

Maximum of array elements over a given axis or a list of axes

maximum(lhs, rhs)

Take element-wise maximum of two tensors with auto-broadcasting

meshgrid(a_tuple, indexing)

Create coordinate matrices from coordinate vectors.

min(data[, axis, keepdims])

Minimum of array elements over a given axis or a list of axes

minimum(lhs, rhs)

Take element-wise minimum of two tensors with auto-broadcasting

mod(lhs, rhs)

Modulus with auto-broadcasting

multiply(lhs, rhs)

Multiplication with auto-broadcasting

ndarray_size(array[, dtype])

Get the number of elements of input array

negative(x)

Take negation of input x.

not_equal(lhs, rhs)

Compute (lhs!=rhs) with auto-broadcasting

one_hot(indices, on_value, off_value, depth, ...)

Returns a one-hot tensor where the locations represented by indices take value on_value, other locations take value off_value.

power(lhs, rhs)

Power with auto-broadcasting

prod(data[, axis, keepdims])

Product of array elements over a given axis or a list of axes

reinterpret(x, dtype)

Reinterpret input to specified data type.

repeat(a, repeats, axis)

Repeats elements of an array.

reshape(a, newshape)

Reshape the array

reverse_sequence(a, seq_lengths[, seq_axis, ...])

Reverse the tensor for variable length slices.

right_shift(lhs, rhs)

Right shift with auto-broadcasting

round(x)

Round elements of x to nearest integer.

rsqrt(x)

Take inverse square root of input x.

scanop(data, binop, identity_value, op_name)

Cumulative binary operator (scan) with similar axis behavior as np.cumsum and np.cumprod.

scatter_elements(data, indices, updates[, ...])

Scatter elements from updates to corresponding indices of copied data.

scatter_nd(data, indices, updates, mode)

Scatter elements from an n-dimensional array.

searchsorted(sorted_sequence, values[, ...])

Find indices where elements should be inserted to maintain order.

sequence_mask(data, valid_length[, ...])

Sets all elements outside the expected length of the sequence to a constant value.

shape(array[, dtype])

Get the shape of input array

sigmoid(x)

Take sigmoid of input x.

sign(x)

Returns -1, 0, 1 based on sign of x.

sin(x)

Take sin of input x.

sinh(x)

Take sinh of input x.

sliding_window(data, axis, window_shape, strides)

Slide a window over the data tensor.

sort(data[, axis, is_ascend])

Performs sorting along the given axis and returns an array in sorted order.

sparse_reshape(sparse_indices, prev_shape, ...)

Reshape a Sparse Tensor

sparse_to_dense(sparse_indices, ...[, ...])

Converts a sparse representation into a dense tensor.

split(ary, indices_or_sections[, axis])

Split an array into multiple sub-arrays.

sqrt(x)

Take square root of input x.

squeeze(a[, axis])

Remove single-dimensional entries from the shape of an array.

stack(a, axis)

Join a sequence of arrays along a new axis.

stft(data, n_fft, hop_length, win_length, ...)

The STFT computes the Fourier transform of short overlapping windows of the input, giving the frequency components of the signal as they change over time.

strided_set(a, v, begin, end[, strides])

Set slice of an array.

strided_slice(a, begin, end[, strides, ...])

Slice of an array.

subtract(lhs, rhs)

Subtraction with auto-broadcasting

sum(data[, axis, keepdims])

Sum of array elements over a given axis or a list of axes

take(a, indices[, axis, batch_dims, mode])

Take elements from an array along an axis.

take_legalize(attrs, inputs, types)

Legalizes take op.

tan(x)

Take tan of input x.

tanh(x)

Take hyperbolic tangent of input x.

tensordot(a, b, axes)

A generalization of matrix multiplication to tensors.

tile(a, reps)

Repeats the whole array multiple times.

topk(data[, k, axis, ret_type, is_ascend, dtype])

Get the top k elements in an input tensor along the given axis.

transpose(a[, axes])

Permute the dimensions of an array.

trilu(data, k, upper)

Given a 2-D matrix or batches of 2-D matrices, returns the upper or lower triangular part of the tensor.

trunc(x)

Take truncated value of the input x, element-wise.

unique(data[, is_sorted, return_counts])

Find the unique elements of a 1-D tensor.

unravel_index(indices, shape)

Convert a flat index or array of flat indices into a tuple of coordinate arrays.

where(condition, x, y)

Get the elements, either from x or y, depending on the condition.

within_index(b, e, s, i)

Return a boolean value that indicates if i is within the given index.

exception tvm.topi.InvalidShapeError[source]

Invalid shape for a topi function (e.g. calling the winograd template for a non-3x3 kernel).

class tvm.topi.Cast(dtype, value, span=None)[source]

Cast expression.

Parameters#

dtype : str

The data type

value : PrimExpr

The value of the function.

span : Optional[Span]

The location of this expression in the source code.
class tvm.topi.PrimExpr[source]

Base class of all primitive expressions.

PrimExpr is used in the low-level code optimizations and integer analysis.

tvm.topi.abs(x)[source]

Take absolute value of the input x, element-wise.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.acos(x)[source]

Take arc cos of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.acosh(x)[source]

Take arc cosh of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.add(lhs, rhs)[source]

Addition with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.adv_index(data, indices)[source]

Numpy style indexing with tensors.

Parameters#

data : tvm.te.Tensor

Input data.

indices : a list of tvm.te.Tensor

Tensor index.

Returns#

result : tvm.te.Tensor

Output tensor

tvm.topi.all(data, axis=None, keepdims=False)[source]

Logical AND of array elements over a given axis or a list of axes

Parameters#

data : tvm.te.Tensor

The input tvm boolean tensor

axis : None or int or tuple of int

Axis or axes along which a logical AND is performed. The default, axis=None, will perform logical AND over all elements of the input array. If axis is negative it counts from the last to the first axis.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns#

ret : tvm.te.Tensor

tvm.topi.any(data, axis=None, keepdims=False)[source]

Logical OR of array elements over a given axis or a list of axes

Parameters#

data : tvm.te.Tensor

The input tvm boolean tensor

axis : None or int or tuple of int

Axis or axes along which a logical OR is performed. The default, axis=None, will perform logical OR over all elements of the input array. If axis is negative it counts from the last to the first axis.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns#

ret : tvm.te.Tensor

tvm.topi.arange(start, stop=None, step=1, dtype='float32')[source]

Creates a tensor with evenly spaced values within a given interval.

Parameters#

start : tvm.Expr, optional

Start of interval. The interval includes this value. The default start value is 0.

stop : tvm.Expr

Stop of interval. The interval does not include this value.

step : tvm.Expr, optional

Spacing between values. The default step size is 1.

dtype : str, optional

The target data type.

Returns#

result : tvm.te.Tensor

The resulting tensor.
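
A minimal sketch of evaluating arange on CPU (the build flow mirrors the other examples on this page; variable names are ours):

A = topi.arange(0, 10, 2)  # shape (5,), values [0., 2., 4., 6., 8.]
s = te.create_schedule(A.op)
f = tvm.build(s, [A], "llvm")
out = tvm.nd.array(np.zeros((5,), dtype=A.dtype))
f(out)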

tvm.topi.argmax(data, axis=None, keepdims=False, select_last_index=False)[source]

Returns the indices of the maximum values along an axis.

Parameters#

data : tvm.te.Tensor

The input tvm tensor

axis : None or int or tuple of int

Axis or axes along which an argmax operation is performed. The default, axis=None, will find the indices of the maximum element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

select_last_index : bool

Whether to select the last index if the maximum element appears multiple times, else select the first index.

Returns#

ret : tvm.te.Tensor

tvm.topi.argmin(data, axis=None, keepdims=False, select_last_index=False)[source]

Returns the indices of the minimum values along an axis.

Parameters#

data : tvm.te.Tensor

The input tvm tensor

axis : None or int or tuple of int

Axis or axes along which an argmin operation is performed. The default, axis=None, will find the indices of the minimum element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

select_last_index : bool

Whether to select the last index if the minimum element appears multiple times, else select the first index.

Returns#

ret : tvm.te.Tensor

tvm.topi.argsort(data, valid_count=None, axis=-1, is_ascend=1, dtype='float32')[source]

Performs sorting along the given axis and returns an array of indices having the same shape as an input array that index data in sorted order.

Parameters#

data : tvm.te.Tensor

The input tensor.

valid_count : tvm.te.Tensor, optional

1-D tensor for valid number of boxes.

axis : int, optional

Axis along which to sort the input tensor. By default the flattened array is used.

is_ascend : boolean, optional

Whether to sort in ascending or descending order.

dtype : string, optional

DType of the output indices.

Returns#

out : tvm.te.Tensor

Sorted index tensor.

Example#

# An example to use argsort
dshape = (1, 5, 6)
data = te.placeholder(dshape, name="data")
axis = 0
is_ascend = False
out = argsort(data, axis=axis, is_ascend=is_ascend)
np_data = np.random.uniform(size=dshape).astype(data.dtype)
s = topi.generic.schedule_argsort(out)
f = tvm.build(s, [data, out], "llvm")
dev = tvm.cpu()
tvm_data = tvm.nd.array(np_data, dev)
tvm_out = tvm.nd.array(np.zeros(dshape, dtype=data.dtype), dev)
f(tvm_data, tvm_out)
tvm.topi.argwhere(output_shape, condition)[source]

Find the indices of elements of a tensor that are non-zero.

Parameters#

condition : tvm.te.Tensor

Tensor with boolean values.

Returns#

out : tvm.te.Tensor

Indices of non-zero elements.

tvm.topi.asin(x)[source]

Take arc sin of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.asinh(x)[source]

Take arc sinh of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.atan(x)[source]

Take atan of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.atanh(x)[source]

Take atanh of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.binary_search(ib, sequence_offset, search_range, sorted_sequence, value, right, out_dtype)[source]

Common IR generator for binary search used by CPU and GPU backends.

sorted_sequence is an N-D Buffer whose innermost dimension we want to search for value, and search_range is the size of the innermost dimension. sequence_offset is a 1-D linearized offset specifying which of the innermost sequences to search.

So the search for value is performed over sorted_sequence[sequence_offset:(sequence_offset + search_range)]. Note that we index the N-D Buffer by 1-D linearized indices.

tvm.topi.bitwise_and(lhs, rhs)[source]

Compute element-wise bitwise and of data.

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.bitwise_not(data)[source]

Compute element-wise bitwise not of data.

Parameters#

data : tvm.te.Tensor or Expr

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if the operand is an Expr. Otherwise returns Tensor.

tvm.topi.bitwise_or(lhs, rhs)[source]

Compute element-wise bitwise or of data.

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.bitwise_xor(lhs, rhs)[source]

Compute element-wise bitwise xor of data.

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.broadcast_to(data, shape)[source]

Broadcast the src to the target shape

We follow the numpy broadcasting rule. See also https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html

Parameters#

data : tvm.te.Tensor

The input data

shape : list or tuple

The target shape to broadcast to.

Returns#

ret : tvm.te.Tensor
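
For example, a (1, 3) tensor can be broadcast to (4, 3) following the numpy rule (a small sketch; names are ours):

A = te.placeholder((1, 3), name="A")
B = topi.broadcast_to(A, (4, 3))  # B[i, j] = A[0, j]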

tvm.topi.cast(x, dtype, span=None)[source]

Cast input to specified data type.

Parameters#

x : tvm.te.Tensor or Expr

Input argument.

dtype : str

Data type.

span : Optional[Span]

The location of the cast in the source.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.ceil(x)[source]

Take ceil of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.ceil_log2(x)[source]

Compute integer ceil log2 with a special code path for Vulkan, since SPIR-V does not support log2 on fp64. Instead, we compute integer ceil_log2 via the clz intrinsic when the target is Vulkan.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.clip(x, a_min, a_max)[source]

Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges.

Parameters#

x : tvm.te.Tensor

Input argument.

a_min : tvm.tir.PrimExpr

Minimum value.

a_max : tvm.tir.PrimExpr

Maximum value.

Returns#

y : tvm.te.Tensor

The result.
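
For instance, clamping every element into [0, 1] (a sketch; names are ours):

A = te.placeholder((4,), name="A")
B = topi.clip(A, 0.0, 1.0)  # values below 0.0 become 0.0, values above 1.0 become 1.0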

tvm.topi.collapse_sum(data, target_shape)[source]

Return a summation of data to the given shape.

collapse_sum is intended as the backward operator of topi broadcast operators in the automatic differentiation process.

We expect that data is the result of broadcasting some tensor of target_shape in some broadcast operation. Thus target_shape and data.shape must follow broadcast rules.

During computation, the axes of data.shape and target_shape are checked from right to left. For every axis, if it either exists in data but not in target_shape, or is larger than 1 in data and equal to 1 in target_shape, data will be summed over this axis.

Parameters#

data : tvm.te.Tensor

The input tensor.

target_shape : Tuple[int]

The shape to collapse to.

Returns#

ret : tvm.te.Tensor

The result tensor after summation.
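
For example, collapsing a (4, 3) tensor to target_shape (1, 3) sums over axis 0 (a sketch; names are ours):

A = te.placeholder((4, 3), name="A")
B = topi.collapse_sum(A, (1, 3))  # B[0, j] = A[0, j] + A[1, j] + A[2, j] + A[3, j]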

tvm.topi.concatenate(a_tuple, axis=0)[source]

Join a sequence of arrays along an existing axis.

Parameters#

a_tuple : tuple of tvm.te.Tensor

The arrays to concatenate

axis : int, optional

The axis along which the arrays will be joined. Default is 0.

Returns#

ret : tvm.te.Tensor

tvm.topi.const_vector(vector, name='const_vector')[source]

Convert a const numpy 1-dimensional vector to a TVM tensor.

Parameters#

vector: numpy.ndarray

Const input array

name: str, optional

The name of output op

Returns#

tensor: Tensor

The created tensor

tvm.topi.cos(x)[source]

Take cos of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.cosh(x)[source]

Take cosh of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.cumprod(data, axis=None, dtype=None, exclusive=None)[source]

Numpy style cumprod op. Return the cumulative product of the elements along a given axis.

Parameters#

data : tvm.te.Tensor

The input data to the operator.

axis : int, optional

Axis along which the cumulative product is computed. The default (None) is to compute the cumprod over the flattened array.

dtype : string, optional

Type of the returned array and of the accumulator in which the elements are multiplied. If dtype is not specified, it defaults to the dtype of data.

exclusive : bool, optional

If True, will return exclusive product in which the first element is not included. In other terms, if True, the j-th output element would be the product of the first (j-1) elements. Otherwise, it would be the product of the first j elements.

Returns#

result : tvm.te.Tensor

The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.

Return type:

Tensor

tvm.topi.cumsum(data, axis=None, dtype=None, exclusive=None)[source]

Numpy style cumsum op. Return the cumulative sum of the elements along a given axis.

Parameters#

data : tvm.te.Tensor

The input data to the operator.

axis : int, optional

Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

dtype : string, optional

Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the dtype of data.

exclusive : bool, optional

If True, will return exclusive sum in which the first element is not included. In other terms, if True, the j-th output element would be the sum of the first (j-1) elements. Otherwise, it would be the sum of the first j elements.

Returns#

result : tvm.te.Tensor

The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.

Return type:

Tensor
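
A short sketch of the inclusive and exclusive behavior described above (names are ours):

# cumsum([1, 2, 3, 4])                 -> [1, 3, 6, 10]
# cumsum([1, 2, 3, 4], exclusive=True) -> [0, 1, 3, 6]
A = te.placeholder((4,), name="A", dtype="int32")
B = topi.cumsum(A)
C = topi.cumsum(A, exclusive=True)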

tvm.topi.decl_buffer(shape, dtype=None, name='buffer', data=None, strides=None, elem_offset=None, scope='', data_alignment=-1, offset_factor=0, buffer_type='', axis_separators=None, span=None)[source]

Declare a new symbolic buffer.

Normally a buffer is created automatically during lower and build. This is only needed if the user wants to specify their own buffer layout.

See the note below for detailed discussion on usage of buffer.

Parameters#

shape : tuple of Expr

The shape of the buffer.

dtype : str, optional

The data type of the buffer.

name : str, optional

The name of the buffer.

data : Var, optional

The data pointer in the buffer.

strides: array of Expr

The stride of the buffer.

elem_offset: Expr, optional

The beginning offset of the array to data. In terms of number of elements of dtype.

scope: str, optional

The storage scope of the buffer, if not global. If scope equals empty string, it means it is global memory.

data_alignment: int, optional

The alignment of data pointer in bytes. If -1 is passed, the alignment will be set to TVM's internal default.

offset_factor: int, optional

The factor of the elem_offset field; when set, elem_offset is required to be a multiple of offset_factor. If 0 is passed, the alignment will be set to 1. If non-zero is passed, we will create a Var for elem_offset if elem_offset is not None.

buffer_type: str, optional, {"", "auto_broadcast"}

auto_broadcast buffer allows one to implement broadcast computation without considering whether dimension size equals to one. TVM maps buffer[i][j][k] -> buffer[i][0][k] if dimension j's shape equals 1.

axis_separators : list of int, optional

If passed, a list of separators between groups of axes, each of which is flattened to an output axis. For flat memory spaces, should either be None, or an empty list.

span: Optional[Span]

The location of the decl_buffer creation in the source.

Returns#

buffer : tvm.tir.Buffer

The created buffer

Example#

Here's an example of how a broadcast buffer can be used to define a symbolic broadcast operation:

m0, m1, m2 = te.var("m0"), te.var("m1"), te.var("m2")
n0, n1, n2 = te.var("n0"), te.var("n1"), te.var("n2")
o0, o1, o2 = te.var("o0"), te.var("o1"), te.var("o2")
A = te.placeholder((m0, m1, m2), name='A')
B = te.placeholder((n0, n1, n2), name='B')
C = te.compute((o0, o1, o2), lambda i, j, k: A[i, j, k] + B[i, j, k], name='C')
Ab = tvm.tir.decl_buffer(A.shape, A.dtype, name="Ab", buffer_type="auto_broadcast")
Bb = tvm.tir.decl_buffer(B.shape, B.dtype, name="Bb", buffer_type="auto_broadcast")
s = te.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target='llvm', name='bcast_add', binds={A:Ab, B:Bb})
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.uniform(size=(2, 4, 3)).astype(A.dtype), dev)
b = tvm.nd.array(np.random.uniform(size=(2, 1, 3)).astype(B.dtype), dev)
c = tvm.nd.array(np.zeros((2, 4, 3), dtype=C.dtype), dev)
fadd(a, b, c)
tvm.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy())

Note#

The Buffer data structure reflects the DLTensor structure in dlpack. While the DLTensor data structure is very general, it is usually helpful to create a function that only handles a specific case of data structure and let the compiled function benefit from it.

If the user passes strides and elem_offset as None when constructing the function, then the function will be specialized for DLTensors that are compact and aligned. If the user passes a fully generic symbolic array as the strides, then the resulting function becomes fully generic.

tvm.topi.dft(re_data, im_data, inverse)[source]

Computes the discrete Fourier transform of input (calculation along the last axis). This gives frequency components of the signal as they change over time.

Parameters#

re_data : relay.Expr

N-D tensor, real part of the input signal.

im_data : relay.Expr

N-D tensor, imaginary part of the input signal. If the signal is real, then the values of this tensor are zeros.

inverse : bool

Whether to perform the inverse discrete fourier transform.

Returns#

re_output : relay.Expr

The Fourier Transform of the input (Real part).

im_output : relay.Expr

The Fourier Transform of the input (Imaginary part).
tvm.topi.div(a, b, span=None)[source]

Compute a / b as in C/C++ semantics.

Parameters#

a : PrimExpr

The left hand operand, known to be non-negative.

b : PrimExpr

The right hand operand, known to be non-negative.

span : Optional[Span]

The location of this operator in the source.

Returns#

res : PrimExpr

The result expression.

Note#

When operands are integers, returns truncdiv(a, b, span).

tvm.topi.divide(lhs, rhs)[source]

Division with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.dynamic_strided_slice(a, begin, end, strides, output_shape)[source]

Slice of an array.

Parameters#

a : tvm.te.Tensor

The tensor to be sliced.

begin : tvm.te.Tensor

The indices to begin with in the slicing.

end : tvm.te.Tensor

Indices indicating end of the slice.

strides : tvm.te.Tensor

Specifies the stride values; it can be negative, in which case the input tensor will be reversed in that particular axis.

output_shape: list of PrimExpr

Specifies the output shape

Returns#

ret : tvm.te.Tensor

tvm.topi.einsum(subscripts, *operand)[source]

Evaluates the Einstein summation convention on the operands.

Parameters#

subscripts : string

Specifies the subscripts for summation as comma separated list of subscript labels. An implicit (classical Einstein summation) calculation is performed unless the explicit indicator '->' is included as well as subscript labels of the precise output form.

a_tuple : tuple of tvm.te.Tensor

These are the Tensors for the operation. The only difference between einsum in TVM and numpy is that it needs extra brackets for the tensors. For example, topi.einsum("ij, jk -> ik", (A, B)).

Returns#

out : tvm.te.Tensor

The calculation based on the Einstein summation convention.

tvm.topi.elemwise_sum(xs)[source]

Perform element-wise sum on inputs

Parameters#

xs : list of tvm.te.Tensor

Input arguments.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.equal(lhs, rhs)[source]

Compute (lhs==rhs) with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.erf(x)[source]

Take gauss error function of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.erf_legalize(attrs, inputs, types)[source]

Legalizes ERF op.

Parameters#

attrs : tvm.ir.Attrs

Attributes of the current op

inputs : list of tvm.relay.Expr

The args of the Relay expr to be legalized

types : list of types

List of input and output types

Returns#

result : tvm.relay.Expr

The legalized expr.

tvm.topi.exp(x)[source]

Take exponential of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.expand_dims(a, axis, num_newaxis=1)[source]

Expand the shape of an array.

Parameters#

a : tvm.te.Tensor

The tensor to be expanded.

num_newaxis: int, optional

Number of newaxis to be inserted on axis

Returns#

ret : tvm.te.Tensor

tvm.topi.expand_like(a, shape_like, axis)[source]

Expand an input array with the shape of second array. This operation can always be composed of unsqueezing and expanding dims on those unsqueezed axes.

Examples#

input = [ 12.  19.  27.]
input.shape = (3,)

new_shape_array = [[[1,2],[2,3],[1,3]],
                [[1,4],[4,3],[5,2]],
                [[7,1],[7,2],[7,3]]]
new_shape_array.shape = (3, 3, 2)

expand_like(input, new_shape_array, [1,2]) =
                [[[12,12],[12,12],[12,12]],
                [[19,19],[19,19],[19,19]],
                [[27,27],[27,27],[27,27]]]

Parameters#

a : tvm.te.Tensor

The tensor to be expanded.

shape_like : tvm.te.Tensor

The tensor with the target shape.

axis: list of int

Axis to be expanded on

Returns#

ret : tvm.te.Tensor

tvm.topi.extern(shape, inputs, fcompute, name='extern', dtype=None, in_buffers=None, out_buffers=None, tag='', attrs=None)[source]

Compute several tensors via an extern function.

Parameters#

shape: tuple or list of tuples.

The shape of the outputs.

inputs: list of Tensor

The inputs

fcompute: lambda function of inputs, outputs-> stmt

Specifies the IR statement to do the computation. See the following note for the function signature of fcompute

Note

Parameters of fcompute:

  • ins (list of tvm.tir.Buffer) - Placeholders for each input.

  • outs (list of tvm.tir.Buffer) - Placeholders for each output.

Returns of fcompute:

  • stmt (tvm.tir.Stmt) - The statement that carries out array computation.

name: str, optional

The name hint of the tensor

dtype: str or list of str, optional

The data types of outputs, by default dtype will be same as inputs.

in_buffers: tvm.tir.Buffer or list of tvm.tir.Buffer, optional

Input buffers.

out_buffers: tvm.tir.Buffer or list of tvm.tir.Buffer, optional

Output buffers.

tag: str, optional

Additional tag information about the compute.

attrs: dict, optional

The additional auxiliary attributes about the compute.

Returns#

tensor: Tensor or list of Tensors

The created tensor, or a tuple of tensors if it contains multiple outputs.

Example#

In the code below, C is generated by calling the external PackedFunc tvm.contrib.cblas.matmul

A = te.placeholder((n, l), name="A")
B = te.placeholder((l, m), name="B")
C = te.extern((n, m), [A, B],
               lambda ins, outs: tvm.tir.call_packed(
                  "tvm.contrib.cblas.matmul",
                    ins[0], ins[1], outs[0], 0, 0), name="C")
tvm.topi.fast_erf(x)[source]

Take gauss error function of input x using fast_erf implementation.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.fast_exp(x)[source]

Take exponential of input x using fast_exp implementation

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.fast_tanh(x)[source]

Take hyperbolic tangent of input x using fast_tanh implementation

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.fixed_point_multiply(x, multiplier, shift)[source]

Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits

Parameters#

x : tvm.te.Tensor or Expr

Input argument.

multiplier : int

Multiplier of a fixed floating point number described as multiplier*2^(-shift).

shift : int

Shift of a fixed floating point number described as multiplier*2^(-shift).

Returns#

y : tvm.te.Tensor

The result.
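
As a worked example of the encoding (our illustration, not taken from the docs): to scale int32 data by roughly 0.4, write 0.4 = 0.8 * 2^(-1), so multiplier = round(0.8 * 2^31) = 1717986918 and shift = 1:

x = te.placeholder((16,), name="x", dtype="int32")
y = topi.fixed_point_multiply(x, 1717986918, 1)  # y ~= 0.4 * x, computed in fixed point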

tvm.topi.fixed_point_multiply_per_axis(x, y, lshift, rshift, is_lshift_required, is_rshift_required, axes)[source]

Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits

Parameters#

x : tvm.te.Tensor

Input argument.

y : tvm.te.Tensor

Multiplier of a fixed floating point number described as multiplier*2^(-shift).

lshift : tvm.te.Tensor

Left shifts of a fixed floating point number described as multiplier*2^(-shift).

rshift : tvm.te.Tensor

Right shifts of a fixed floating point number described as multiplier*2^(-shift).

is_lshift_required : int

Whether we need to do left shift or not.

is_rshift_required : int

Whether we need to do right shift or not.

Returns#

z : tvm.te.Tensor

The result.
tvm.topi.flip(a, axis=0)[source]

Flip/reverse elements of an array in a particular axis.

Parameters#

a : tvm.te.Tensor

The tensor to be reversed.

axis : int, optional

The axis along which the tensors will be reversed.

Returns#

ret : tvm.te.Tensor

tvm.topi.floor(x)[source]

Take floor of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.floor_divide(lhs, rhs)[source]

Floor division with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.floor_mod(lhs, rhs)[source]

Floor modulus with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.floordiv(a, b, span=None)[source]

Compute the floordiv of two expressions.

Parameters#

a : PrimExpr

The left hand operand

b : PrimExpr

The right hand operand

span : Optional[Span]

The location of this operator in the source.

Returns#

res : PrimExpr

The result expression.

tvm.topi.floormod(a, b, span=None)[source]

Compute the floormod of two expressions.

Parameters#

a : PrimExpr

The left hand operand

b : PrimExpr

The right hand operand

span : Optional[Span]

The location of this operator in the source.

Returns#

res : PrimExpr

The result expression.

tvm.topi.full(shape, dtype, fill_value)[source]

Fill tensor with fill_value

Parameters#

shape : tuple

Input tensor shape.

dtype : str

Data type

fill_value : float

Value to be filled

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.full_like(x, fill_value)[source]

Construct a tensor with the same shape as the input tensor, then fill it with fill_value.

Parameters#

x : tvm.te.Tensor

Input argument.

fill_value : float

Value to be filled

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.gather(data, axis, indices)[source]

Gather values along given axis from given indices.

E.g. for a 3D tensor, output is computed as:

out[i][j][k] = data[indices[i][j][k]][j][k]  # if axis == 0
out[i][j][k] = data[i][indices[i][j][k]][k]  # if axis == 1
out[i][j][k] = data[i][j][indices[i][j][k]]  # if axis == 2

indices must have the same shape as data, except at dimension axis, which just must be non-empty. Output will have the same shape as indices.

Parameters#

data : tvm.te.Tensor

The input data to the operator.

axis: int

The axis along which to index.

indices : tvm.te.Tensor

The indices of the values to extract.

Returns#

ret : tvm.te.Tensor
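
For instance, with the axis == 1 rule above (values worked out by hand):

data = [[1, 2], [3, 4]]
indices = [[0, 0], [1, 0]]
# out[i][j] = data[i][indices[i][j]]
topi.gather(data, 1, indices) = [[1, 1], [4, 3]]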

tvm.topi.gather_nd(a, indices)[source]

Gather elements from an n-dimensional array.

Parameters#

a : tvm.te.Tensor

The source array.

indices : tvm.te.Tensor

The indices of the values to extract.

Returns#

ret : tvm.te.Tensor

tvm.topi.get_const_tuple(in_tuple)[source]

Verifies input tuple is IntImm or Var, returns tuple of int or Var.

Parameters#

in_tuple : tuple of Expr

The input.

Returns#

out_tuple : tuple of int

The output.

tvm.topi.greater(lhs, rhs)[source]

Compute (lhs>rhs) with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.greater_equal(lhs, rhs)[source]

Compute (lhs>=rhs) with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.hybrid_argwhere_1d(output_shape, condition)[source]

Find the indices of elements of a 1-D tensor that are non-zero.

Parameters#

condition : tvm.te.Tensor

1-D tensor with boolean values.

Returns#

out : tvm.te.Tensor

Indices of non-zero elements.

tvm.topi.hybrid_argwhere_2d(output_shape, condition)[source]

Find the indices of elements of a 2-D tensor that are non-zero.

Parameters#

condition : tvm.te.Tensor

2-D tensor with boolean values.

Returns#

out : tvm.te.Tensor

Indices of non-zero elements.

tvm.topi.hybrid_argwhere_3d(output_shape, condition)[source]

Find the indices of elements of a 3-D tensor that are non-zero.

Parameters#

condition : tvm.te.Tensor

3-D tensor with boolean values.

Returns#

out : tvm.te.Tensor

Indices of non-zero elements.

tvm.topi.hybrid_argwhere_4d(output_shape, condition)[source]

Find the indices of elements of a 4-D tensor that are non-zero.

Parameters#

condition : tvm.te.Tensor

4-D tensor with boolean values.

Returns#

out : tvm.te.Tensor

Indices of non-zero elements.

tvm.topi.hybrid_argwhere_5d(output_shape, condition)[source]

Find the indices of elements of a 5-D tensor that are non-zero.

Parameters#

condition : tvm.te.Tensor

5-D tensor with boolean values.

Returns#

out : tvm.te.Tensor

Indices of non-zero elements.

tvm.topi.identity(x)[source]

Take identity of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.invert_permutation(data)[source]

Computes the inverse permutation of data.

Parameters#

data : tvm.te.Tensor

Input data

Returns#

result : tvm.te.Tensor

Output tensor

Examples#

data = [3, 4, 0, 2, 1]
topi.invert_permutation(data) = [2, 4, 3, 0, 1]
tvm.topi.isfinite(x)[source]

Check if value of x is finite, element-wise.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.isinf(x)[source]

Check if value of x is infinite, element-wise.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.isnan(x)[source]

Check if value of x is NaN, element-wise.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.layout_transform(array, src_layout, dst_layout, schedule_rule='None')[source]

Transform the layout according to src_layout and dst_layout

Parameters#

array : tvm.te.Tensor

The source array.

src_layout : str

The source layout.

dst_layout : str

The destination layout.

schedule_rule : str

The schedule rule to apply, if any.

tvm.topi.left_shift(lhs, rhs)[source]

Left shift with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.less(lhs, rhs)[source]

Compute (lhs<rhs) with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.less_equal(lhs, rhs)[source]

Compute (lhs<=rhs) with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.log(x)[source]

Take logarithm of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.log10(x)[source]

Take logarithm to the base 10 of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.log2(x)[source]

Take logarithm to the base 2 of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.logical_and(lhs, rhs)[source]

Compute element-wise logical and of data.

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.logical_not(data)[source]

Compute element-wise logical not of data.

Parameters#

data : tvm.te.Tensor or Expr

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if the operand is an Expr. Otherwise returns Tensor.

tvm.topi.logical_or(lhs, rhs)[source]

Compute element-wise logical or of data.

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.logical_xor(lhs, rhs)[source]

Compute element-wise logical xor of data.

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.make_idx(b, e, s, z, i)[source]

Return the array position in the selection that corresponds to an array position in the full array.

The returned value is only meaningful if within_index() returns True for the same set of parameters.

Parameters#

b : Expr

beginning of the index

e : Expr

end of the index

s : Expr

strides of index

z : Expr

size of the indexed dimension

i : Expr

array position

Returns#

position: Expr

int expression that corresponds to an array position in the selection.

tvm.topi.matmul(a, b, transp_a=False, transp_b=False)[source]

Creates an operation that calculates a matrix multiplication (row-major notation): A(i, k) * B(k, j) if transp_a == transp_b; the usual transposed combinations otherwise.

Parameters#

a : The matrix A

b : The matrix B

transp_a : Is A's layout transposed?

transp_b : Is B's layout transposed?

Returns#

A Tensor whose op member is the matmul operation
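
A minimal declaration sketch with symbolic shapes (names are ours):

n, m, k = te.var("n"), te.var("m"), te.var("k")
A = te.placeholder((n, k), name="A")
B = te.placeholder((k, m), name="B")
C = topi.matmul(A, B)  # C has shape (n, m); pass transp_a/transp_b for transposed layouts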

tvm.topi.matrix_set_diag(data, diagonal, k=0, align='RIGHT_LEFT')[source]

Returns a tensor with the diagonals of input tensor replaced with the provided diagonal values.

Parameters#

data : relay.Expr

Input Tensor.

diagonal : relay.Expr

Values to be filled in the diagonal.

k : int or tuple of int, optional

Diagonal Offset(s). The diagonal or range of diagonals to set. (0 by default) Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1].

align : string, optional

Some diagonals are shorter than max_diag_len and need to be padded. align is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. There are four possible alignments: "RIGHT_LEFT" (default), "LEFT_RIGHT", "LEFT_LEFT", and "RIGHT_RIGHT". "RIGHT_LEFT" aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row). It is the packing format LAPACK uses. cuSPARSE uses "LEFT_RIGHT", which is the opposite alignment.

Returns#

result : relay.Expr

New tensor with given diagonal values.

Examples#

data = [[[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 7, 7]],
        [[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 7, 7]]]

diagonal = [[1, 2, 3],
            [4, 5, 6]]

topi.matrix_set_diag(data, diagonal) =
    [[[1, 7, 7, 7],
      [7, 2, 7, 7],
      [7, 7, 3, 7]],
     [[4, 7, 7, 7],
      [7, 5, 7, 7],
      [7, 7, 6, 7]]]
tvm.topi.max(data, axis=None, keepdims=False)[source]

Maximum of array elements over a given axis or a list of axes

Parameters#

data : tvm.te.Tensor

The input tvm tensor

axis : None or int or tuple of int

Axis or axes along which the max operation is performed. The default, axis=None, will find the max element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns#

ret : tvm.te.Tensor

tvm.topi.maximum(lhs, rhs)[source]

Take element-wise maximum of two tensors with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.meshgrid(a_tuple, indexing)[source]

Create coordinate matrices from coordinate vectors.

Parameters#

a_tuple : tuple of tvm.te.Tensor

The coordinate vectors or scalars.

indexing : str

Indexing mode, either "ij" or "xy".

Returns#

result : tuple of tvm.te.Tensor

The resulting grids for each axis.

tvm.topi.min(data, axis=None, keepdims=False)[source]

Minimum of array elements over a given axis or a list of axes

Parameters#

data : tvm.te.Tensor

The input tvm tensor

axis : None or int or tuple of int

Axis or axes along which a minimum operation is performed. The default, axis=None, will find the minimum element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns#

ret : tvm.te.Tensor

tvm.topi.minimum(lhs, rhs)[source]

Take element-wise minimum of two tensors with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.mod(lhs, rhs)[source]

Modulus with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.multiply(lhs, rhs)[source]

Multiplication with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.ndarray_size(array, dtype='int32')[source]

Get the number of elements of input array

Parameters#

array : tvm.te.Tensor

The source tensor.

dtype : str, optional

The target data type.

Returns#

result : tvm.te.Tensor

The resulting tensor.

tvm.topi.negative(x)[source]

Take negation of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.not_equal(lhs, rhs)[source]

Compute (lhs!=rhs) with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.one_hot(indices, on_value, off_value, depth, axis, dtype)[source]

Returns a one-hot tensor where the locations represented by indices take value on_value, other locations take value off_value. Final dimension is <indices outer dimensions> x depth x <indices inner dimensions>.

Parameters#

indices : tvm.te.Tensor

Locations to set to on_value.

on_value : tvm.te.Tensor

Value to fill at indices.

off_value : tvm.te.Tensor

Value to fill at all other positions besides indices.

depth : int

Depth of the one-hot dimension.

axis : int

Axis to fill.

dtype : relay.DataType

Data type of the output tensor.

Returns#

ret : relay.Expr

The one-hot tensor.

Examples#

indices = [0, 1, 2]

relay.one_hot(indices, 3) =
    [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
tvm.topi.power(lhs, rhs)[source]

Power with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.prod(data, axis=None, keepdims=False)[source]

Product of array elements over a given axis or a list of axes

Parameters#

data : tvm.te.Tensor

The input tvm tensor

axis : None or int or tuple of int

Axis or axes along which a prod operation is performed. The default, axis=None, will get the prod element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns#

ret : tvm.te.Tensor

tvm.topi.reinterpret(x, dtype)[source]

Reinterpret input to specified data type.

Parameters#

x : tvm.te.Tensor

Input argument.

dtype : str

Data type.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.repeat(a, repeats, axis)[source]

Repeats elements of an array.

Parameters#

a : tvm.te.Tensor

The tensor to be repeated.

repeats: int, required

Number of repetitions for each element

axis: int, optional

The axis along which to repeat values

Returns#

ret : tvm.te.Tensor

tvm.topi.reshape(a, newshape)[source]

Reshape the array

Parameters#

a : tvm.te.Tensor

The tensor to be reshaped

newshape : tuple of ints

The new shape

Returns#

ret : tvm.te.Tensor

tvm.topi.reverse_sequence(a, seq_lengths, seq_axis=1, batch_axis=0)[source]

Reverse the tensor for variable length slices. Input is first sliced along batch axis and then elements are reversed along seq axis.

Parameters#

a : tvm.te.Tensor

The tensor to be reversed.

seq_lengths : tvm.te.Tensor

A 1D Tensor with length a.dims[batch_axis]. Must be one of the following types: int32, int64. If seq_lengths[i] > a.dims[seq_axis], it is rounded to a.dims[seq_axis]; if seq_lengths[i] < 1, it is rounded to 1.

seq_axis : int, optional

The axis along which the elements will be reversed. Default is 1.

batch_axis : int, optional

The axis along which the tensor will be sliced. Default is 0.

Returns#

ret : tvm.te.Tensor

The computed result, of the same shape and type as the input.
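
As an illustration, with the default seq_axis=1 and batch_axis=0, the first seq_lengths[i] elements of row i are reversed (values worked out from the rule above):

a = [[0, 1, 2],
     [3, 4, 5],
     [6, 7, 8]]
seq_lengths = [1, 2, 3]
topi.reverse_sequence(a, seq_lengths) = [[0, 1, 2],
                                         [4, 3, 5],
                                         [8, 7, 6]]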

tvm.topi.right_shift(lhs, rhs)[source]

Right shift with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.round(x)[source]

Round elements of x to nearest integer.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.rsqrt(x)[source]

Take inverse square root of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.scanop(data, binop, identity_value, op_name, axis=None, dtype=None, exclusive=None)[source]

Cumulative binary operator (scan) with similar axis behavior as np.cumsum and np.cumprod.

See cumprod and cumsum for an example of use.

E.g. if * is your binary operator and the input tensor is [1, 2, 3, 4] the output may be [1, 1 * 2, 1 * 2 * 3, 1 * 2 * 3 * 4]

Parameters#

data : tvm.te.Tensor

The input data to the operator.

binop: Callable (tvm.Expr, tvm.Expr) -> tvm.Expr

A binary operator which should be associative and commutative. E.g. if * is your operator then a * (b * c) = (a * b) * c and a * b = b * a

identity_value: tvm.Expr

A value for the binary operation which provides the identity property. E.g. if * is your operator and i is the identity_value then a * i = a for all a in the domain of your operation.

axis : int, optional

Axis along which the operation is computed. The default (None) is to compute the cumulative operation over the flattened array.

dtype : string, optional

Type of the returned array and of the accumulator in which the elements are computed. If dtype is not specified, it defaults to the dtype of data.

exclusive : bool, optional

If True will return exclusive cumulative operation in which the first element is not included. In other terms, if True, the j-th output element would be the cumulative operation of the first (j-1) elements. Otherwise, it would be the cumulative operation of the first j elements. The cumulative operation of zero elements is assumed to be the identity_value.

Returns#

result : tvm.te.Tensor

The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.

Parameters:
  • data (Tensor)

  • binop (Callable[[tvm.Expr, tvm.Expr], tvm.Expr])

  • identity_value (tvm.Expr)

  • op_name (str)

  • axis (int | None)

  • dtype (str | None)

  • exclusive (bool | None)

Return type:

Tensor
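
For instance, cumsum can be expressed through scanop roughly as follows (a sketch assuming tvm.tir.generic.add as the binary op; my_cumsum is our own name):

from tvm.tir import generic

def my_cumsum(data, axis=None, dtype=None, exclusive=None):
    # 0 is the identity of addition, so an exclusive scan starts from 0
    return topi.scanop(data=data, binop=generic.add, identity_value=0,
                       op_name="my_cumsum", axis=axis, dtype=dtype, exclusive=exclusive)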

tvm.topi.scatter_elements(data, indices, updates, axis=0, reduction='update')[源代码]

Scatter elements from updates to corresponding indices of copied data.

Data, indices, updates and output have the same shape. Indices cannot contain duplicates (if idx1 != idx2, then indices[idx1] != indices[idx2]) when reduction == "update".

output[indices[i][j]][j] = f(output[indices[i][j]][j], updates[i][j]) if axis = 0
output[i][indices[i][j]] = f(output[i][indices[i][j]], updates[i][j]) if axis = 1

where the update function f is determined by the reduction. Five types of the function are supported: “update”, “add”, “mul”, “min” and “max” (see below)

Parameters#

data : tvm.te.Tensor

The source array.

indices : tvm.te.Tensor

The indices of the values to extract.

updates : tvm.te.Tensor

The updates to apply at the indices

axis : int, optional

The axis to scatter on. It is zero by default.

reduction : string, optional

The update mode for the algorithm, either "update", "add", "mul", "min" or "max". If "update", the update values replace the input data. If "add", the update values are added to the input data. If "mul", the input data is multiplied by the update values. If "min", the minimum of the update values and the input data is chosen. If "max", the maximum of the update values and the input data is chosen. It is "update" by default.

Returns#

ret : tvm.te.Tensor
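
A compute-declaration sketch (shapes and names are illustrative; per the shape rule above, data, indices and updates share one shape):

from tvm import te, topi

data = te.placeholder((3, 4), name="data", dtype="float32")
indices = te.placeholder((3, 4), name="indices", dtype="int64")
updates = te.placeholder((3, 4), name="updates", dtype="float32")
# out[indices[i][j]][j] = updates[i][j] for axis=0, reduction="update"
out = topi.scatter_elements(data, indices, updates, axis=0, reduction="update")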

tvm.topi.scatter_nd(data, indices, updates, mode)[源代码]

Scatter elements from a n-dimension array.

Given updates with shape (Y_0, …, Y_{K-1}, X_M, …, X_{N-1}), indices with shape (M, Y_0, …, Y_{K-1}), and output copied from data with shape (X_0, X_1, …, X_{N-1}), scatter_nd computes

output[indices[0, y_0, ..., y_{K-1}],
       ...,
       indices[M-1, y_0, ..., y_{K-1}],
       x_M,
       ...,
       x_{N-1}
      ] = f(output[...], updates[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}])

where the update function f is determined by the mode.

Parameters#

data : tvm.te.Tensor

The source array.

indices : tvm.te.Tensor

The indices of the values to extract.

updates : tvm.te.Tensor

The updates to apply at the indices

mode : string

The update mode for the algorithm, either "update" or "add". If "update", the update values replace the input data. If "add", the update values are added to the input data.

Returns#

ret : tvm.te.Tensor

tvm.topi.searchsorted(sorted_sequence, values, right=False, out_dtype='int64')[源代码]

Find indices where elements should be inserted to maintain order.

If sorted_sequence is N-dimensional, the innermost dimension of values is searched in the corresponding dimension of sorted_sequence.

Parameters#

sorted_sequence : te.Tensor

N-D or 1-D Tensor, containing monotonically increasing sequence on the innermost dimension.

values : te.Tensor

N-D Tensor containing the search values. When sorted_sequence is 1-D, the shape of values can be arbitrary. Otherwise, ranks of sorted_sequence and values must be the same, and outer N-1 axes must have the same size.

right : bool, optional

Controls which index is returned if a value lands exactly on one of sorted values. If False, the index of the first suitable location found is given. If True, return the last such index. If there is no suitable index, return either 0 or N (where N is the size of the innermost dimension).

out_dtype : string, optional

The data type of the output indices.

Returns#

indices : te.Tensor

Tensor with same shape as values, representing the indices of elements of values if they are inserted in sorted_sequence.
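
A compute-declaration sketch (shapes are illustrative):

from tvm import te, topi

seq = te.placeholder((128,), name="sorted_sequence", dtype="float32")
vals = te.placeholder((10,), name="values", dtype="float32")
# idx[i] is where vals[i] would be inserted into seq to keep it sorted
idx = topi.searchsorted(seq, vals, right=False, out_dtype="int64")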

tvm.topi.sequence_mask(data, valid_length, mask_value=0, axis=0)[源代码]

Sets all elements outside the expected length of the sequence to a constant value.

This function takes an n-dimensional input array of the form [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] and returns an array of the same shape.

axis means the axis of the length dimension and can only be 0 or 1. If axis is 0, the data must have shape [MAX_LENGTH, batch_size, …]. Otherwise (axis=1), the data must have shape [batch_size, MAX_LENGTH, …].

valid_length gives the length of each sequence. valid_length should be a 1D int array with positive ints and has dimension [batch_size,].

Parameters#

data : tvm.te.Tensor

N-D with shape [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] depending on the value of axis.

valid_length : tvm.te.Tensor

1-D with shape [batch_size,]

mask_value : float, optional

The masking value, default 0

axis : int, optional

axis of the length dimension, must be 0 or 1, default 0

Returns#

output : tvm.te.Tensor

N-D with shape [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] depending on the value of axis.
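
A compute-declaration sketch under the axis=0 layout described above (shapes are illustrative):

from tvm import te, topi

data = te.placeholder((5, 2), name="data", dtype="float32")       # [MAX_LENGTH, batch_size]
valid = te.placeholder((2,), name="valid_length", dtype="int32")  # one length per batch
# with axis=0, rows t >= valid_length[b] of column b become mask_value
masked = topi.sequence_mask(data, valid, mask_value=0.0, axis=0)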

tvm.topi.shape(array, dtype='int32')[源代码]

Get the shape of input array

Parameters#

array : tvm.te.Tensor

The source tensor.

dtype : str, optional

The target data type.

Returns#

result : tvm.te.Tensor

The resulting tensor.

tvm.topi.sigmoid(x)[源代码]

Take sigmoid of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.sign(x)[源代码]

Returns -1, 0, 1 based on sign of x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.sin(x)[源代码]

Take sin of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.sinh(x)[源代码]

Take sinh of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.sliding_window(data, axis, window_shape, strides)[源代码]

Slide a window over the data tensor.

Parameters#

data : relay.Expr

The input data to the operator.

axis : int

What axis the window begins sliding over. Window will be slid over this axis and all following axes. The axis value determines the window shape (and thus, the number of strides): window shape and strides must both be of length data.ndim-axis.

window_shape : List[int]

The window shape to form over the input. Window shape must be of length data.ndim-axis.

strides : List[int]

How to stride the window along each dimension. Strides must be of length data.ndim-axis.

Returns#

result : relay.Expr

The resulting tensor.

tvm.topi.sort(data, axis=-1, is_ascend=1)[源代码]

Performs sorting along the given axis and returns an array in sorted order.

Parameters#

data : tvm.te.Tensor

The input tensor.

axis : int, optional

Axis along which to sort the input tensor. By default the flattened array is used.

is_ascend : boolean, optional

Whether to sort in ascending or descending order.

Returns#

out : tvm.te.Tensor

The sorted tensor.
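
A compute-declaration sketch (sort is typically lowered via an extern kernel, so the declaration is shown without a build step):

from tvm import te, topi

data = te.placeholder((4, 8), name="data", dtype="float32")
out = topi.sort(data, axis=1, is_ascend=1)  # compute declaration only
# pair with an appropriate schedule for your target before building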

tvm.topi.sparse_reshape(sparse_indices, prev_shape, new_shape, new_sparse_indices_shape, new_shape_shape)[源代码]

Reshape a Sparse Tensor

Parameters#

sparse_indices : relay.Expr

A 2-D tensor[N, n_dim] of integers containing location of sparse values, where N is the number of sparse values and n_dim is the number of dimensions of the dense_shape

prev_shape : relay.Expr

A 1-D tensor containing the previous shape of the dense tensor

new_shape : relay.Expr

A 1-D tensor containing the new shape of the dense tensor

Returns#

result: relay.Expr

Output tensor.

Examples#

sparse_indices = [[0, 0, 0],
                  [0, 0, 1],
                  [0, 1, 0],
                  [1, 0, 0],
                  [1, 2, 3]]
prev_shape = [2, 3, 4]
new_shape = [9, -1]
new_sparse_indices, new_shape = relay.sparse_reshape(
    sparse_indices, prev_shape, new_shape)
new_sparse_indices = [[0, 0],
                      [0, 1],
                      [1, 2],
                      [4, 2],
                      [8, 1]]
new_shape = [9, 4]

tvm.topi.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0)[源代码]

Converts a sparse representation into a dense tensor.

Example: sparse_to_dense([[0, 0], [1, 1]], [2, 2], [3, 3], 0) = [[3, 0], [0, 3]]

Parameters#

sparse_indices : tvm.te.Tensor

A 0-D, 1-D, or 2-D tensor of integers containing location of sparse values.

output_shape : a list of integers

Shape of the dense output tensor.

sparse_values : tvm.te.Tensor

A 0-D or 1-D tensor containing the sparse values for the sparse indices.

default_value : tvm.te.Tensor

A 0-D tensor containing the default value for the remaining locations. Defaults to 0.

Returns#

result : tvm.te.Tensor

Dense tensor of shape output_shape. Has the same type as sparse_values.

tvm.topi.split(ary, indices_or_sections, axis=0)[源代码]

Split an array into multiple sub-arrays.

Parameters#

ary : tvm.te.Tensor

indices_or_sections : int or 1-D array

axis : int

Returns#

ret : tuple of tvm.te.Tensor

tvm.topi.sqrt(x)[源代码]

Take square root of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.squeeze(a, axis=None)[源代码]

Remove single-dimensional entries from the shape of an array.

Parameters#

a : tvm.te.Tensor

axis : None or int or tuple of ints, optional

Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.

Returns#

squeezed : tvm.te.Tensor

tvm.topi.stack(a, axis)[源代码]

Join a sequence of arrays along a new axis.

Parameters#

a : tvm.te.Tensor

The tensor to be stacked.

axis : int, optional

The axis in the result array along which the input arrays are stacked.

Returns#

ret : tvm.te.Tensor

tvm.topi.stft(data, n_fft, hop_length, win_length, window, normalized, onesided, output_shape)[源代码]

The STFT computes the Fourier transform of short overlapping windows of the input. This gives frequency components of the signal as they change over time.

Parameters#

data : relay.Expr

Either a 1-D tensor or a 2-D batch tensor.

n_fft : int

The size of Fourier transform

hop_length : int

The distance between neighboring sliding window frames

win_length : int

The size of window frame and STFT filter

window : relay.Expr

A 1-D tensor window frame

normalized : bool

Whether to return the normalized STFT results

onesided : bool

Whether to return onesided result or fill with conjugate symmetry

Returns#

output : relay.Expr

Tensor containing the STFT result

Examples#

data = [1, 2, 3, 4, 5, 6]
window = [4, 3, 2]
[n_fft, hop_length, win_length, normalized, onesided] = [3, 3, 3, False, True]
relay.stft(data, n_fft, hop_length, win_length, window, normalized, onesided)
-> [[[15.0000,  0.0000], [34.0000,  0.0000]], [[ 4.5000,  0.8660], [ 1.0000, -1.7321]]]

tvm.topi.strided_set(a, v, begin, end, strides=None)[源代码]

Set slice of an array.

Parameters#

a : tvm.te.Tensor

The tensor to be sliced.

v : tvm.te.Tensor

The values to set

begin : tvm.te.Tensor

The indices to begin with in the slicing.

end : tvm.te.Tensor

Indices indicating end of the slice.

strides : tvm.te.Tensor, optional

Specifies the stride values; it can be negative, in which case the input tensor will be reversed in that particular axis.

Returns#

ret : tvm.te.Tensor

tvm.topi.strided_slice(a, begin, end, strides=None, axes=None, slice_mode='end')[源代码]

Slice of an array.

Parameters#

a : tvm.te.Tensor

The tensor to be sliced.

begin : list of int

The indices to begin with in the slicing.

end : list of int

Indices indicating end of the slice.

strides : list of int, optional

Specifies the stride values; it can be negative, in which case the input tensor will be reversed in that particular axis.

axes : list of int, optional

Axes along which slicing is applied. When it is specified, begin, end, strides, and axes need to be lists of integers of the same length.

slice_mode : str, optional

The slice mode [end, size]. end - The ending indices for the slice [default]. size - The input strides will be ignored; in this mode, end indicates the size of a slice starting at the location specified by begin. If end[i] is -1, all remaining elements in that dimension are included in the slice.

Returns#

ret : tvm.te.Tensor
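
A compute-declaration sketch (shapes are illustrative):

from tvm import te, topi

A = te.placeholder((3, 4, 5), name="a", dtype="float32")
# roughly A[0:3:1, 0:2:1, 1:5:2] in numpy terms, giving shape (3, 2, 2)
B = topi.strided_slice(A, begin=[0, 0, 1], end=[3, 2, 5], strides=[1, 1, 2])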

tvm.topi.subtract(lhs, rhs)[源代码]

Subtraction with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.sum(data, axis=None, keepdims=False)[源代码]

Sum of array elements over a given axis or a list of axes

Parameters#

data : tvm.te.Tensor

The input tvm tensor

axis : None or int or tuple of int

Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns#

ret : tvm.te.Tensor

tvm.topi.take(a, indices, axis=None, batch_dims=0, mode='clip')[源代码]

Take elements from an array along an axis.

Parameters#

a : tvm.te.Tensor

The source array.

indices : tvm.te.Tensor

The indices of the values to extract.

axis : int, optional

The axis over which to select values. By default, the flattened input array is used.

batch_dims : int

The number of batch dimensions. By default is 0.

mode : str, optional

Specifies how out-of-bound indices will behave. clip - clip to the range (default). wrap - wrap around the indices. fast - no clip or wrap around (user must make sure indices are in-bound).

Returns#

ret : tvm.te.Tensor
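
A compute-declaration sketch (shapes are illustrative):

from tvm import te, topi

A = te.placeholder((4, 5), name="a", dtype="float32")
idx = te.placeholder((3,), name="indices", dtype="int32")
B = topi.take(A, idx, axis=1)  # selects three columns of A; output shape (4, 3)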

tvm.topi.take_legalize(attrs, inputs, types)[源代码]

Legalizes the take op.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current op

inputs : list of tvm.relay.Expr

The args of the Relay expr to be legalized

types : list of types

List of input and output types

Returns#

result : tvm.relay.Expr

The legalized expr

tvm.topi.tan(x)[源代码]

Take tan of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.tanh(x)[源代码]

Take hyperbolic tangent of input x.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.tensordot(a, b, axes)[源代码]

A generalization of matrix multiplication to tensors.

Parameters#

a : The tensor A

b : The tensor B

axes : The number of dimensions to reduce over

Returns#

A Tensor computing the result

tvm.topi.tile(a, reps)[源代码]

Repeats the whole array multiple times.

Parameters#

a : tvm.te.Tensor

The tensor to be tiled.

reps: tuple of ints, required

The number of times for repeating the tensor

Returns#

ret : tvm.te.Tensor

tvm.topi.topk(data, k=1, axis=-1, ret_type='both', is_ascend=False, dtype='int64')[源代码]

Get the top k elements in an input tensor along the given axis.

Parameters#

data : tvm.te.Tensor

The input tensor.

k : int or tvm.te.Tensor, optional

Number of top elements to select. Return all elements if k < 1.

axis : int, optional

Axis along which to sort the input tensor.

ret_type : str, optional

The return type [both, values, indices]. "both": return both top k data and indices. "values": return top k data only. "indices": return top k indices only.

is_ascend : boolean, optional

Whether to sort in ascending or descending order.

dtype : string, optional

The data type of the indices output.

Returns#

out : tvm.te.Tensor or List[tvm.te.Tensor]

The computed result.
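
A compute-declaration sketch (shapes are illustrative):

from tvm import te, topi

data = te.placeholder((2, 16), name="data", dtype="float32")
# ret_type="both" yields a list: [top-k values, top-k indices]
values, indices = topi.topk(data, k=4, axis=-1, ret_type="both",
                            is_ascend=False, dtype="int64")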

tvm.topi.transpose(a, axes=None)[源代码]

Permute the dimensions of an array.

Parameters#

a : tvm.te.Tensor

The tensor to be transposed.

axes : tuple of ints, optional

By default, reverse the dimensions.

Returns#

ret : tvm.te.Tensor

tvm.topi.trilu(data, k, upper)[源代码]

Given a 2-D matrix or batches of 2-D matrices, returns the upper or lower triangular part of the tensor.

Parameters#

data: tvm.te.Tensor

The tensor that trilu will be applied to. Must be either a 2D matrix or a tensor of batches of 2D matrices.

k: tvm.te.Tensor

The number of diagonals above or below the main diagonal to exclude or include.

upper: bool

If True, only upper triangular values of input are kept; if False, the lower triangular values are kept.

Returns#

ret : relay.Expr

The new tensor with appropriate diagonals set to zero.

Examples#

x = [[0, 1, 2],
     [3, 4, 5],
     [6, 7, 8]]

relay.trilu(x, 0, True) =
    [[0, 1, 2],
     [0, 4, 5],
     [0, 0, 8]]

tvm.topi.trunc(x)[源代码]

Take truncated value of the input of x, element-wise.

Parameters#

x : tvm.te.Tensor

Input argument.

Returns#

y : tvm.te.Tensor

The result.

tvm.topi.unique(data, is_sorted=True, return_counts=False)[源代码]

Find the unique elements of a 1-D tensor. Note that the output and counts are both padded to the length of data, and elements with index >= num_unique[0] have undefined values.

Parameters#

data : tvm.te.Tensor

A 1-D tensor of integers.

is_sorted : bool

Whether to sort the unique elements in ascending order before returning as output.

return_counts : bool

Whether to return the count of each unique element.

Returns#

unique : tvm.te.Tensor

A 1-D tensor containing the unique elements of the input data tensor. The same size as the input data. If there are fewer unique elements than input data, the end of the tensor is padded with zeros.

indices : tvm.te.Tensor

A 1-D tensor. The same size as output. For each entry in output, it contains the index of its first occurrence in the input data. The end of the tensor is padded with the length of the input data.

inverse_indices : tvm.te.Tensor

A 1-D tensor. For each entry in data, it contains the index of that data element in the unique array. (Note that inverse_indices is very similar to indices if output is not sorted.)

num_unique : tvm.te.Tensor

A 1-D tensor with size=1 containing the number of unique elements in the input data tensor.

counts (optional) : tvm.te.Tensor

A 1-D tensor containing the count of each unique element in the output.

Examples#

[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, False)
output          =  [4, 5, 1, 2, 3, _, _, _]
indices         =  [0, 1, 2, 3, 4, _, _, _]
inverse_indices =  [0, 1, 2, 3, 4, 4, 0, 1]
num_unique      =  [5]

[output, indices, num_unique, counts] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, True)
output          =  [4, 5, 1, 2, 3, _, _, _]
indices         =  [0, 1, 2, 3, 4, _, _, _]
inverse_indices =  [0, 1, 2, 3, 4, 4, 0, 1]
num_unique      =  [5]
counts          =  [2, 2, 1, 1, 2, _, _, _]

[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], True)
output          =  [1, 2, 3, 4, 5, _, _, _]
indices         =  [2, 3, 4, 0, 1, _, _, _]
inverse_indices =  [3, 4, 0, 1, 2, 2, 3, 4]
num_unique      =  [5]

tvm.topi.unravel_index(indices, shape)[源代码]

Convert a flat index or array of flat indices into a tuple of coordinate arrays.

Example: unravel_index([22, 41, 37], [7, 6]) = [[3, 6, 6], [4, 5, 1]]

Parameters#

indices : relay.Expr

An integer array containing indices.

shape : relay.Expr

The shape of the array.

Returns#

result : relay.Expr

The tuple of coordinate arrays.

tvm.topi.where(condition, x, y)[源代码]

Get the elements, either from x or y, depending on the condition.

Parameters#

condition : tvm.te.Tensor

The condition array.

x : tvm.te.Tensor

First array to be selected.

y : tvm.te.Tensor

Second array to be selected.

Returns#

result : tvm.te.Tensor

A Tensor selected from x or y depending on condition.
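
A compute-declaration sketch (shapes and dtypes are illustrative):

from tvm import te, topi

cond = te.placeholder((4,), name="condition", dtype="int32")
x = te.placeholder((4,), name="x", dtype="float32")
y = te.placeholder((4,), name="y", dtype="float32")
out = topi.where(cond, x, y)  # out[i] = x[i] if cond[i] != 0 else y[i]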

tvm.topi.within_index(b, e, s, i)[源代码]

Return a boolean value that indicates if i is within the given index.

Parameters#

b : Expr

beginning of the index

e : Expr

end of the index

s : Expr

strides of index

i : Expr

array position

Returns#

selected : Expr

bool expression that is True if the array position would be selected by the index and False otherwise

tvm.topi.nn#

Neural network operators

Classes:

Workload(in_dtype, out_dtype, height, width, ...)

Functions:

adaptive_pool(data, output_size, pool_type)

Perform pooling on height and width dimension of data.

adaptive_pool1d(data, output_size, pool_type)

Perform pooling on one dimensional data.

adaptive_pool3d(data, output_size, pool_type)

Perform pooling on three dimensional data.

add(lhs, rhs)

Addition with auto-broadcasting

add_alter_layout(_attrs, _inputs, _tinfos, ...)

Change add layout.

batch_matmul(tensor_a, tensor_b[, oshape, ...])

Compute batch matrix multiplication of tensor_a and tensor_b.

batch_matmul_legalize(attrs, inputs, types)

Legalizes batch_matmul op.

batch_norm(data, gamma, beta, moving_mean, ...)

Batch normalization layer (Ioffe and Szegedy, 2014).

batch_to_space_nd(data, block_shape, ...)

Perform batch to space transformation on the data

bias_add_legalize(_attrs, _inputs, _tinfos)

Legalize bias_add layout.

binarize_pack(data[, axis, name])

Binarization and bit-packing along a certain axis.

binary_dense(data, weight)

Binary matrix multiplication using xor and bit-count.

bitpack(data, bits, pack_axis, bit_axis, ...)

Packs data into format necessary for bitserial computation

bitserial_conv2d_legalize(attrs, inputs, types)

Legalizes Bitserial Conv2D op.

bitserial_conv2d_nchw(data, kernel, stride, ...)

Bitserial Conv2D operator.

bitserial_conv2d_nhwc(data, kernel, stride, ...)

Bitserial Conv2D operator.

bitserial_dense(data, weight, data_bits, ...)

The default implementation of bitserial dense in topi.

concatenate(a_tuple[, axis])

Join a sequence of arrays along an existing axis.

conv(inp, filt, stride, padding, dilation, ...)

Convolution operator in NCHW or NHWC layout.

conv1d(data, kernel[, strides, padding, ...])

1D convolution forward operator.

conv1d_ncw(data, kernel[, strides, padding, ...])

1D convolution in NCW layout.

conv1d_nwc(data, kernel[, strides, padding, ...])

1D convolution in NWC layout.

conv1d_transpose_ncw(data, kernel, stride, ...)

Transposed 1D convolution ncw forward operator.

conv2d(input, filter, strides, padding, dilation)

Conv2D operator.

conv2d_NCHWc(data, kernel, stride, padding, ...)

Conv2D operator for nChw[x]c layout.

conv2d_NCHWc_int8(data, kernel, stride, ...)

Conv2D operator for nChw[x]c layout.

conv2d_alter_layout(attrs, inputs, tinfos, ...)

Change Conv2D layout.

conv2d_gemm_weight_transform(kernel, tile_N, ...)

Weight transformation for GEMM-based conv2d

conv2d_hwcn(Input, Filter, stride, padding, ...)

Convolution operator in HWCN layout.

conv2d_infer_layout(workload, cfg)

Infer input/output shapes and layouts from a workload and cfg.

conv2d_legalize(attrs, inputs, types)

Legalizes Conv2D op.

conv2d_nchw(Input, Filter, stride, padding, ...)

Convolution operator in NCHW layout.

conv2d_nhwc(Input, Filter, stride, padding, ...)

Convolution operator in NHWC layout.

conv2d_transpose_alter_layout(attrs, inputs, ...)

Change Conv2D_Transpose layout.

conv2d_transpose_legalize(attrs, inputs, types)

Legalizes Transposed 2D convolution op.

conv2d_transpose_nchw(Input, Filter, ...)

Transposed 2D convolution nchw forward operator.

conv2d_transpose_nchw_preprocess(data, ...)

Preprocess data and kernel to make the compute pattern of conv2d_transpose the same as conv2d

conv2d_winograd_nchw(data, weight, strides, ...)

Conv2D Winograd in NCHW layout.

conv2d_winograd_nchw_without_weight_transform(...)

Conv2D Winograd without layout transform in NCHW layout.

conv2d_winograd_nhwc(data, weight, strides, ...)

Conv2D Winograd in NHWC layout.

conv2d_winograd_nhwc_without_weight_transform(...)

Conv2D Winograd without layout transform in NHWC layout.

conv2d_winograd_nnpack_weight_transform(...)

Weight transformation for winograd

conv2d_winograd_weight_transform(kernel, ...)

Weight transformation for winograd

conv3d_alter_layout(attrs, inputs, tinfos, ...)

Change Conv3D layout.

conv3d_ncdhw(Input, Filter, stride, padding, ...)

Conv3D operator in NCDHW layout.

conv3d_ndhwc(Input, Filter, stride, padding, ...)

Convolution operator in NDHWC layout.

conv3d_transpose_legalize(attrs, inputs, types)

Legalizes Transposed 3D convolution op.

conv3d_transpose_ncdhw(Input, Filter, ...)

Transposed 3D convolution ncdhw forward operator.

conv3d_transpose_ncdhw_preprocess(data, ...)

Preprocess data and kernel to make the compute pattern of conv3d_transpose the same as conv3d

conv3d_winograd_weight_transform(kernel, ...)

Weight transformation for 3D winograd

correlation_nchw(data1, data2, kernel_size, ...)

Correlation operator in NCHW layout.

declaration_conv2d_transpose_impl(data, ...)

Implementation of conv2d transpose

declaration_conv3d_transpose_impl(data, ...)

Implementation of conv3d transpose

deformable_conv2d_nchw(data, offset, kernel, ...)

Deformable conv2D operator in NCHW layout.

deformable_conv2d_nhwc(data, offset, kernel, ...)

Deformable conv2D operator in NHWC layout.

dense(data, weight[, bias, out_dtype, ...])

The default implementation of dense in topi.

dense_alter_layout(attrs, inputs, tinfos, ...)

Change dense layout.

dense_legalize(attrs, inputs, types)

Legalizes dense op.

dense_pack(data, weight[, bias, out_dtype])

The default implementation of dense_pack in topi.

depth_to_space(data, block_size[, layout, mode])

Perform depth to space transformation on the data

depthwise_conv2d_NCHWc(Input, Filter, ...[, ...])

Depthwise convolution NCHW[x]c forward operator.

depthwise_conv2d_backward_input_nhwc(Filter, ...)

Depthwise convolution nhwc backward wrt input operator.

depthwise_conv2d_backward_weight_nhwc(Input, ...)

Depthwise convolution nhwc backward wrt weight operator.

depthwise_conv2d_infer_layout(workload, cfg)

Infer input/output shapes and layouts from a workload and cfg.

depthwise_conv2d_nchw(Input, Filter, stride, ...)

Depthwise convolution nchw forward operator.

depthwise_conv2d_nhwc(Input, Filter, stride, ...)

Depthwise convolution nhwc forward operator.

dilate(data, strides[, dilation_value, name])

Dilate data with given dilation value (0 by default).

equal_const_int(expr, value)

Returns if expr equals value.

fast_softmax(x[, axis])

Perform softmax activation on the data.

fifo_buffer(data, buffer, axis)

FIFO buffer to enable computation reuse in CNNs with sliding window input

flatten(data)

Flattens the input array into a 2-D array by collapsing the higher dimensions.

get_const_int(expr)

Verifies expr is integer and get the constant value.

get_const_tuple(in_tuple)

Verifies input tuple is IntImm or Var, returns tuple of int or Var.

get_pad_tuple(padding, kernel)

Common code to get the pad option

get_pad_tuple1d(padding, kernel)

Common code to get the pad option

get_pad_tuple3d(padding, kernel)

Common code to get the pad option

get_pad_tuple_generic(padding, kernel)

Common code to get the pad option

global_pool(data, pool_type[, layout])

Perform global pooling on height and width dimension of data.

group_conv1d_ncw(data, kernel[, strides, ...])

1D convolution forward operator for NCW layout.

group_conv1d_nwc(data, kernel[, strides, ...])

1D convolution forward operator for NWC layout.

group_conv1d_transpose_ncw(data, kernel, ...)

Transposed 1D group convolution ncw forward operator.

group_conv2d_nchw(Input, Filter, stride, ...)

Group convolution operator in NCHW layout.

group_conv2d_nhwc(Input, Filter, stride, ...)

Group convolution operator in NHWC layout.

group_conv2d_transpose_nchw(data, kernel, ...)

Group convolution operator in NCHW layout.

group_conv3d_transpose_ncdhw(data, kernel, ...)

Transposed group 3D convolution ncdhw forward operator.

group_norm(data, gamma, beta, num_groups, ...)

Group normalization operator.

instance_norm(data, gamma, beta, axis[, epsilon])

Instance normalization operator.

layer_norm(data, gamma, beta, axis[, epsilon])

Layer normalization operator.

layout_transform(tensor, current_layout, ...)

Transform a tensor with the current layout to the desired layout.

leaky_relu(x, alpha)

Take leaky relu of input x.

log_softmax(x[, axis])

Perform log softmax activation on the data

lrn(data, size[, axis, alpha, beta, bias])

Perform the across channels local response normalisation on the input data.

lstm(Xs, Wi, Wh[, Bi, Bh, h_init, c_init, ...])

General LSTM implemented using TE scan.

matmul(tensor_a, tensor_b[, bias, ...])

The default implementation of matmul in topi.

matmul_legalize(attrs, inputs, types)

Legalizes matmul op.

mirror_pad(data, pad_before[, pad_after, ...])

Pad Input with mirroring either symmetric or reflected.

namedtuple(typename, field_names, *[, ...])

Returns a new subclass of tuple with named fields.

nll_loss(predictions, targets, weights, ...)

Negative log likelihood loss on the input data.

pad(data, pad_before[, pad_after, ...])

Pad Input with zeros.

pool1d(data, kernel, stride, dilation, ...)

Perform pooling on width dimension of data.

pool2d(data, kernel, stride, dilation, ...)

Perform pooling on height and width dimension of data.

pool3d(data, kernel, stride, dilation, ...)

Perform pooling on depth, height and width dimension of data.

pool_grad(grads, data, kernel, stride, ...)

Gradient of pooling on height and width dimension of data.

prelu(x, slope[, axis])

PReLU.

qnn_conv2d_alter_layout(_attrs, _inputs, ...)

Change qnn.conv2d layout.

qnn_dense_alter_layout(_attrs, _inputs, ...)

Change qnn.dense layout.

qnn_requantize_alter_layout(_attrs, _inputs, ...)

Change requantize layout.

reduce(function, iterable[, initial])

Apply a function of two arguments cumulatively to the items of a sequence or iterable, from left to right, so as to reduce the iterable to a single value.

relu(x)

Take relu of input x.

rms_norm(data, weight, axis[, epsilon])

Root mean square normalization operator.

scale_shift_nchw(Input, Scale, Shift)

Batch normalization operator in inference.

scale_shift_nchwc(Input, Scale, Shift)

Batch normalization operator in inference.

scale_shift_nhwc(Input, Scale, Shift)

Batch normalization operator in inference.

simplify(expr)

Simplify the expression if it is Expr, directly return if it is int.

simulated_dequantize(data, in_dtype[, ...])

Simulated QNN dequantize operator that mimics QNN outputs without changing datatype.

simulated_quantize(data, out_dtype[, ...])

Simulated QNN quantize operator that mimics QNN outputs without changing datatype.

softmax(x[, axis])

Perform softmax activation on the data.

softmax_common(x, axis, use_fast_exp)

The common part of softmax and fast_softmax

space_to_batch_nd(data, block_shape, ...[, ...])

Perform space to batch transformation on the data

space_to_depth(data, block_size[, layout])

Perform space to depth transformation on the data

sparse_add(dense_data, sparse_data, ...)

Computes sparse-dense addition

sparse_conv2d(dense_data, sparse_data, ...)

Computes sparse-conv2d(1*1) of data and (weight_data, weight_indices, weight_indptr)

sparse_dense(dense_data, sparse_data, ...[, ...])

Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T, if sparse_lhs=False or Computes sparse-dense matrix multiplication of (data_data, data_indices, data_indptr) and weight.T, if sparse_lhs=True

sparse_dense_alter_layout(_attrs, _inputs, ...)

Change Sparse Dense layout.

sparse_dense_sp_lhs(data_data, data_indices, ...)

Computes sparse-dense matrix multiplication of (data_data, data_indices, data_indptr) and weight.T

sparse_dense_sp_rhs(data, weight_data, ...)

Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T

sparse_transpose(sparse_data, ...)

Transpose a square sparse matrix, A is an n-by-n sparse matrix in the CSR format.

strided_slice(a, begin, end[, strides, ...])

Slice of an array.

try_get_conv2d_sparse_input(args)

Analyze the input data from the given args.

try_get_sparse_input(args)

Analyze the input data from the given args.

unpack_NCHWc_to_nchw(packed_out, out_dtype)

Unpack conv2d_NCHWc output from layout NCHWc to NCHW

upsampling(data, scale_h, scale_w[, layout, ...])

Perform upsampling on the data.

upsampling3d(data, scale_d, scale_h, scale_w)

Perform upsampling on the data.

winograd_transform_matrices(tile_size, ...)

Compute the A, B, and G transform matrices for tile_size as a tvm.Expr.

class tvm.topi.nn.Workload(in_dtype, out_dtype, height, width, in_filter, out_filter, kernel_h, kernel_w, padt, padl, padb, padr, dilation_h, dilation_w, stride_h, stride_w)#

Attributes:

dilation_h

Alias for field number 12

dilation_w

Alias for field number 13

height

Alias for field number 2

in_dtype

Alias for field number 0

in_filter

Alias for field number 4

kernel_h

Alias for field number 6

kernel_w

Alias for field number 7

out_dtype

Alias for field number 1

out_filter

Alias for field number 5

padb

Alias for field number 10

padl

Alias for field number 9

padr

Alias for field number 11

padt

Alias for field number 8

stride_h

Alias for field number 14

stride_w

Alias for field number 15

width

Alias for field number 3

dilation_h#

Alias for field number 12

dilation_w#

Alias for field number 13

height#

Alias for field number 2

in_dtype#

Alias for field number 0

in_filter#

Alias for field number 4

kernel_h#

Alias for field number 6

kernel_w#

Alias for field number 7

out_dtype#

Alias for field number 1

out_filter#

Alias for field number 5

padb#

Alias for field number 10

padl#

Alias for field number 9

padr#

Alias for field number 11

padt#

Alias for field number 8

stride_h#

Alias for field number 14

stride_w#

Alias for field number 15

width#

Alias for field number 3

tvm.topi.nn.adaptive_pool(data, output_size, pool_type, layout='NCHW')[源代码]#

Perform pooling on height and width dimension of data.

The pooling kernel and stride sizes are automatically chosen for desired output sizes. It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters#

data : tvm.te.Tensor

n-D with shape of layout

output_size : tuple of int

output height and width.

pool_type : str

Pool type, 'max' or 'avg'

layout : string

Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

Returns#

output : tvm.te.Tensor

n-D in the same layout
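
A compute-declaration sketch (shapes are illustrative):

from tvm import te, topi

data = te.placeholder((1, 3, 32, 32), name="data", dtype="float32")
# pool the 32x32 spatial dims down to 8x8; kernel and stride are derived internally
out = topi.nn.adaptive_pool(data, output_size=(8, 8), pool_type="avg", layout="NCHW")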

tvm.topi.nn.adaptive_pool1d(data, output_size, pool_type, layout='NCW')[源代码]#

Perform pooling on one dimensional data. See the two dimensional version above for details.

tvm.topi.nn.adaptive_pool3d(data, output_size, pool_type, layout='NCDHW')[源代码]#

Perform pooling on three dimensional data. See the two dimensional version above for details.

tvm.topi.nn.add(lhs, rhs)[源代码]#

Addition with auto-broadcasting

Parameters#

lhs : tvm.te.Tensor or Expr

The left operand

rhs : tvm.te.Tensor or Expr

The right operand

Returns#

ret : tvm.te.Tensor or Expr

Returns Expr if both operands are Expr. Otherwise returns Tensor.

tvm.topi.nn.add_alter_layout(_attrs, _inputs, _tinfos, _out_type)[源代码]#

Change add layout.

Add is not a QNN-specific function, but this generic exists so that bias add operations can be fused with input zero point add optimizations, which only happens if the previous operator is quantized.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current convolution

inputs : tvm.relay.Expr

Grouped input symbols

tinfos : list

Input shape and dtype

out_type: type

The output type

Note#

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.batch_matmul(tensor_a, tensor_b, oshape=None, out_dtype=None, transpose_a=False, transpose_b=True, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)[源代码]#

Compute batch matrix multiplication of tensor_a and tensor_b.

Both tensor_a and tensor_b can be transposed. For legacy reasons, we use NT format (transpose_a=False, transpose_b=True) by default.

Parameters#

tensor_a : tvm.te.Tensor

3-D with shape [batch, M, K] or [batch, K, M].

tensor_b : tvm.te.Tensor

3-D with shape [batch, K, N] or [batch, N, K].

oshape : List[Optional]

Explicit intended output shape of the computation. Can be useful in cases with dynamic input shapes.

out_dtype : Optional[str]

Specifies the output data type for mixed precision batch matmul.

transpose_a : Optional[bool] = False

Whether the first tensor is in transposed format.

transpose_b : Optional[bool] = True

Whether the second tensor is in transposed format.

auto_scheduler_rewritten_layout : Optional[str] = ""

The layout after auto-scheduler's layout rewrite pass.

meta_schedule_original_shape : Optional[List[PrimExpr]] = None

The original shape of the tensor

Returns#

output : tvm.te.Tensor

3-D with shape [batch, M, N]
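
A compute-declaration sketch using the default NT format (shapes are illustrative):

from tvm import te, topi

A = te.placeholder((8, 32, 64), name="tensor_a", dtype="float32")  # [batch, M, K]
B = te.placeholder((8, 16, 64), name="tensor_b", dtype="float32")  # [batch, N, K] (NT format)
C = topi.nn.batch_matmul(A, B)  # default transpose_b=True, giving [8, 32, 16]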

tvm.topi.nn.batch_matmul_legalize(attrs, inputs, types)[源代码]#

Legalizes batch_matmul op.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current batch_matmul

inputs : list of tvm.relay.Expr

The args of the Relay expr to be legalized

types : list of types

List of input and output types

Returns#

result : tvm.relay.Expr

The legalized expr

tvm.topi.nn.batch_norm(data, gamma, beta, moving_mean, moving_var, axis=None, epsilon=None, center=None, scale=None, training=None, momentum=None)[源代码]#

Batch normalization layer (Ioffe and Szegedy, 2014).

Normalizes the input at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.

Parameters#

data : tvm.te.Tensor

Input to be batch-normalized.

gamma : tvm.te.Tensor

Scale factor to be applied to the normalized tensor.

beta : tvm.te.Tensor

Offset to be applied to the normalized tensor.

moving_mean : tvm.te.Tensor

Running mean of input.

moving_var : tvm.te.Tensor

Running variance of input.

axis : int, optional, default=1

Specify along which shape axis the normalization should occur.

epsilon : float, optional, default=1e-5

Small float added to variance to avoid dividing by zero.

center : bool, optional, default=True

If True, add offset of beta to normalized tensor. If False, beta is ignored.

scale : bool, optional, default=True

If True, scale normalized tensor by gamma. If False, gamma is ignored.

training : bool, optional, default=False

Indicating whether it is in training mode. If True, update moving_mean and moving_var.

momentum : float, optional, default=0.1

The value used for the moving_mean and moving_var update.

Returns#

output : list of tvm.te.Tensor

Normalized data with same shape as input

moving_mean : tvm.te.Tensor

Running mean of input.

moving_var : tvm.te.Tensor

Running variance of input.

Return type:

List[Tensor]

tvm.topi.nn.batch_to_space_nd(data, block_shape, crop_begin_list, crop_end_list)[源代码]#

Perform batch to space transformation on the data

Parameters#

data : tvm.te.Tensor

N-D Tensor with shape [batch, spatial_shape, remaining_shapes], where spatial_shape has M dimensions.

block_shape : list of ints

list of size [M] where M is number of spatial dims, specifies block size for each spatial dimension.

crop_begin_list : list of ints

list of shape [M] where M is number of spatial dims, specifies begin crop size for each spatial dimension.

crop_end_list : list of ints

list of shape [M] where M is number of spatial dims, specifies end crop size for each spatial dimension.

Returns#

output : tvm.te.Tensor

tvm.topi.nn.bias_add_legalize(_attrs, _inputs, _tinfos)[源代码]#

Legalize bias_add layout.

Bias add is not a QNN-specific function, but this generic exists so that empty channels can be excised from quantized conv2d operators and folded into bias adds.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current convolution

inputs : tvm.relay.Expr

Grouped input symbols

tinfos : list

Input shape and dtype

tvm.topi.nn.binarize_pack(data, axis=None, name='PackedInput')[源代码]#

Binarization and bit-packing along a certain axis.

Parameters#

data : tvm.te.Tensor

n-D input, can be any layout.

axis : None or int

The axis along which to do binarization and bit-packing, default is the last axis.

name : str, optional

The name prefix operators generate.

Returns#

output : tvm.te.Tensor

n-D, the same layout as input, dtype is uint32.

tvm.topi.nn.binary_dense(data, weight)[源代码]#

Binary matrix multiplication using xor and bit-count.

Parameters#

data : tvm.te.Tensor

2-D with shape [batch, in_dim], dtype is uint32.

weight : tvm.te.Tensor

2-D with shape [out_dim, in_dim], dtype is uint32.

Returns#

output : tvm.te.Tensor

2-D with shape [batch, out_dim], dtype is float32.

tvm.topi.nn.bitpack(data, bits, pack_axis, bit_axis, pack_type, name='QuantizeInput')[源代码]#

Packs data into format necessary for bitserial computation

Parameters#

pack_axis : int

index of the axis to pack in data

bit_axis : int

index of axis to place bit axis in resulting packed data

tvm.topi.nn.bitserial_conv2d_legalize(attrs, inputs, types)[源代码]#

Legalizes Bitserial Conv2D op.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current convolution

inputs : list of tvm.relay.Expr

The args of the Relay expr to be legalized

types : list of types

List of input and output types

Returns#

result : tvm.relay.Expr

The legalized expr

tvm.topi.nn.bitserial_conv2d_nchw(data, kernel, stride, padding, activation_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)[源代码]#

Bitserial Conv2D operator.

Parameters#

data : tvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

kernel : tvm.te.Tensor

4-D with shape [num_filter, in_channel, filter_height, filter_width]

stride : int or a list/tuple of two ints

stride size, or [stride_height, stride_width]

padding : int or a list/tuple of two or four ints

padding size, [pad_height, pad_width], [pad_top, pad_left, pad_down, pad_right]

activation_bits: int

number of bits used for activations/input elements

weight_bits: int

number of bits used for weight elements

out_dtype: str

return type of convolution

pack_dtype: str

bit packing type

unipolar: bool

if binarization style is in unipolar 1/0 format, instead of bipolar -1/+1 format

Returns#

output : tvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.bitserial_conv2d_nhwc(data, kernel, stride, padding, activation_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)[源代码]#

Bitserial Conv2D operator.

Parameters#

data : tvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

kernel : tvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, num_filter]

stride : int or a list/tuple of two ints

stride size, or [stride_height, stride_width]

padding : int or a list/tuple of two or four ints

padding size, [pad_height, pad_width], [pad_top, pad_left, pad_down, pad_right]

activation_bits: int

number of bits used for activations/input elements

weight_bits: int

number of bits used for weight elements

out_dtype: str

return type of convolution

pack_dtype: str

bit packing type

unipolar: bool

if binarization style is in unipolar 1/0 format, instead of bipolar -1/+1 format

Returns#

output : tvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

tvm.topi.nn.bitserial_dense(data, weight, data_bits, weight_bits, pack_dtype='uint32', out_dtype='int16', unipolar=True)[源代码]#

The default implementation of bitserial dense in topi.

Parameters#

data : tvm.te.Tensor

2-D with shape [batch, in_dim]

weight : tvm.te.Tensor

2-D with shape [out_dim, in_dim] or 3-D with shape [out_dim, weight_bits, in_dim]

Returns#

output : tvm.te.Tensor

2-D with shape [batch, out_dim]

tvm.topi.nn.concatenate(a_tuple, axis=0)[源代码]#

Join a sequence of arrays along an existing axis.

Parameters#

a_tuple : tuple of tvm.te.Tensor

The arrays to concatenate

axis : int, optional

The axis along which the arrays will be joined. Default is 0.

Returns#

ret : tvm.te.Tensor

tvm.topi.nn.conv(inp, filt, stride, padding, dilation, groups, data_layout, kernel_layout='', out_dtype=None, auto_scheduler_rewritten_layout=None, meta_schedule_original_shape=None, auto_scheduler_should_rewrite_layout=False)[源代码]#

Convolution operator in NCHW or NHWC layout.

Supports 1D, 2D, 3D, … and grouping.

Parameters#

inp : tvm.te.Tensor

N-D with shape [batch, in_channel, in_height, in_width, …] in data_layout

filt : tvm.te.Tensor

N-D with shape [num_filter, in_channel // groups, filter_height, filter_width, …] in kernel_layout

stride : int or a list/tuple of dim ints

(where dim=2 for NCHW, dim=1 for NCH, etc.) Stride size, or [stride_height, stride_width, …]

padding : int or a list/tuple of dim or 2*dim ints

(where dim=2 for NCHW, dim=1 for NCH, etc.) padding size, or [pad_height, pad_width, …] for dim ints, or [pad_top, pad_left, pad_bottom, pad_right] for 2*dim ints

dilation : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

groups : int

number of groups

data_layout : str

Layout of the input. N indicates batch dimension, C indicates channels, any other character indicates HW (or H or HWD for 1D and 3D).

kernel_layout : Optional[str]

Layout of the filter. I indicates input channels, O indicates output channels, any other character indicates HW dimension of the filter (or H or HWD for 1D and 3D). If kernel_layout is empty, use data_layout to infer the default kernel_layout. Default kernel_layout is OIHW for NCHW data layout, HWIO for NHWC data layout.

out_dtype : str

Elements are converted to this type before elementwise multiplication and summation.

auto_scheduler_rewritten_layout : str

Layout from autoscheduler's layout rewriting.

meta_schedule_original_shape : Optional[List[PrimExpr]]

The original shape of the input tensor.

auto_scheduler_should_rewrite_layout : bool

Should auto scheduler be allowed to rewrite the layout of the filter tensor. Defaults to false. This can cause errors if used with grouped convs.

Returns#

Output : tvm.te.Tensor

N-D with shape [batch, out_channel, out_height, out_width, …] in data_layout

tvm.topi.nn.conv1d(data, kernel, strides=1, padding='VALID', dilation=1, groups=1, data_layout='NCW', kernel_layout='', out_dtype=None)[源代码]#

1D convolution forward operator.

Parameters#

data : tvm.te.Tensor

3-D input shape [batch, in_channel, in_width] for data_layout == 'NCW' and [batch, in_width, in_channel] for data_layout == 'NWC'

kernel : tvm.te.Tensor

3-D kernel with shape [num_filter, in_channel, filter_size] for kernel_layout == 'OIW' and [filter_size, in_channel, num_filter] for kernel_layout == 'WIO'

strides : int or tuple

The spatial stride along width

padding : int or str

Padding size, or ['VALID', 'SAME']

dilation : int or tuple

Dilation rate if convolution should be dilated.

data_layout : str

How input data is laid out, must be one of ['NCW', 'NWC']

kernel_layout : Optional[str]

The layout of the kernel. If unspecified, use default layout. "OIW" if data_layout == "NCW", "WIO" if data_layout == "NWC".

out_dtype : str

The output data type. If None then output is same type as input.

tvm.topi.nn.conv1d_ncw(data, kernel, strides=1, padding='VALID', dilation=1, out_dtype=None)[源代码]#

1D convolution in NCW layout. See conv() for details on parameters

tvm.topi.nn.conv1d_nwc(data, kernel, strides=1, padding='VALID', dilation=1, out_dtype=None)[源代码]#

1D convolution in NWC layout. See conv() for details on parameters

tvm.topi.nn.conv1d_transpose_ncw(data, kernel, stride, padding, out_dtype, output_padding)[源代码]#

Transposed 1D convolution ncw forward operator.

Parameters#

data : tvm.te.Tensor

3-D with shape [batch, in_channel, in_width]

kernel : tvm.te.Tensor

3-D with shape [in_channel, num_filter, filter_width]

stride : ints

The spatial stride along width

padding : int or str

Padding size, or ['VALID', 'SAME']

out_dtype : str

The output data type. This is used for mixed precision.

output_padding : ints

Used to recover the actual output shape in case there are more than one possible shape. Must be smaller than stride.

Returns#

output : tvm.te.Tensor

3-D with shape [batch, out_channel, out_width]

tvm.topi.nn.conv2d(input, filter, strides, padding, dilation, data_layout='NCHW', kernel_layout='', out_dtype=None)[源代码]#

Conv2D operator.

Parameters#

input : tvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width] in data_layout

filter : tvm.te.Tensor

4-D with shape [num_filter, in_channel, filter_height, filter_width] in kernel_layout

strides : int or a list/tuple of two ints

stride size, or [stride_height, stride_width]

padding : int or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

dilation : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

data_layout : str

layout of data

kernel_layout : Optional[str]

layout of kernel. If unspecified, use default layout inferred from data_layout. "OIHW" if data_layout == "NCHW", "HWIO" if data_layout == "NHWC".

Returns#

output : tvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]
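
A compute-declaration sketch (shapes are illustrative):

from tvm import te, topi

data = te.placeholder((1, 3, 224, 224), name="input", dtype="float32")
weight = te.placeholder((64, 3, 7, 7), name="filter", dtype="float32")
# a 7x7, stride-2 convolution; the output shape works out to [1, 64, 112, 112]
out = topi.nn.conv2d(data, weight, strides=(2, 2), padding=(3, 3),
                     dilation=(1, 1), data_layout="NCHW")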

tvm.topi.nn.conv2d_NCHWc(data, kernel, stride, padding, dilation, layout, out_layout, out_dtype='float32')[源代码]#

Conv2D operator for nChw[x]c layout.

Parameters#

data : tvm.te.Tensor

5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]

kernel : tvm.te.Tensor

6-D with shape [num_filter_chunk, in_channel_chunk, filter_height, filter_width, in_channel_block, num_filter_block]

stride : int or a list/tuple of two ints

stride size, or [stride_height, stride_width]

padding : int or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

dilation : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

layout : str

Input data layout

out_layout : str

Output data layout

out_dtype : str

output data type

Returns#

output : tvm.te.Tensor

5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]

tvm.topi.nn.conv2d_NCHWc_int8(data, kernel, stride, padding, dilation, layout, out_layout, out_dtype='int32', n_elems=4)[源代码]#

Conv2D operator for nChw[x]c layout.

Parameters#

data : tvm.te.Tensor

5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]

kernel : tvm.te.Tensor

7-D with shape [num_filter_chunk, in_channel_chunk, filter_height, filter_width, in_channel_block/4, num_filter_block, 4]

stride : int or a list/tuple of two ints

stride size, or [stride_height, stride_width]

padding : int or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

dilation : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

layout : str

Input data layout

out_layout : str

Output data layout

out_dtype : str

output data type

n_elems : int

number of int8 elements accumulated

Returns#

output : tvm.te.Tensor

5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]

tvm.topi.nn.conv2d_alter_layout(attrs, inputs, tinfos, out_type)[源代码]#

Change Conv2D layout.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current convolution

inputs : tvm.relay.Expr

Grouped input symbols

tinfos : list

Input shape and dtype

out_type: type

The output type

Note#

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.conv2d_gemm_weight_transform(kernel, tile_N, tile_K, use_scalable_vectors=False)[源代码]#

Weight transformation for GEMM-based conv2d (ConvGemm).

Parameters#

kernel: Tensor

The raw kernel tensor with layout “NHWC”.

tile_N: int

Tile size across N axis of the weight transformation for ConvGemm. (N = OC)

tile_K: int

Tile size across K axis of the weight transformation for ConvGemm. (K = KW * KH * IC)

use_scalable_vectors : bool

determines if operations on scalable vectors are expected

Returns#

output : tvm.te.Tensor

2-D with shape [CI*KH*KW, CO]

tvm.topi.nn.conv2d_hwcn(Input, Filter, stride, padding, dilation, out_dtype=None)[源代码]#

Convolution operator in HWCN layout.

Parameters#

Input : tvm.te.Tensor

4-D with shape [in_height, in_width, in_channel, batch]

Filter : tvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, num_filter]

stride : int or a list/tuple of two ints

Stride size, or [stride_height, stride_width]

padding : int or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

dilation : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

Returns#

output : tvm.te.Tensor

4-D with shape [out_height, out_width, out_channel, batch]

tvm.topi.nn.conv2d_infer_layout(workload, cfg)[源代码]#

Infer input/output shapes and layouts from a workload and cfg.

Parameters#

workload : tuple

conv2d workload

cfg : tuple

tvm.autotvm config

Returns#

Output : [tuple of tuple and str, tuple of tuple and str]

Input shapes and layouts, and output shapes and layouts

tvm.topi.nn.conv2d_legalize(attrs, inputs, types)[源代码]#

Legalizes Conv2D op.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current convolution

inputs : list of tvm.relay.Expr

The args of the Relay expr to be legalized

types : list of types

List of input and output types

Returns#

result : tvm.relay.Expr

The legalized expr

tvm.topi.nn.conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)[源代码]#

Convolution operator in NCHW layout.

Parameters#

Input : tvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

Filter : tvm.te.Tensor

4-D with shape [num_filter, in_channel, filter_height, filter_width]

stride : int or a list/tuple of two ints

Stride size, or [stride_height, stride_width]

padding : int or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

dilation : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

Returns#

Output : tvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.conv2d_nhwc(Input, Filter, stride, padding, dilation, out_dtype='float32', auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)[源代码]#

Convolution operator in NHWC layout.

Parameters#

Input : tvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

Filter : tvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, num_filter]

stride : int or a list/tuple of two ints

Stride size, or [stride_height, stride_width]

padding : int or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

dilation: int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtype : str = "float32"

The type of output tensor

auto_scheduler_rewritten_layout: str = “”

The layout after auto-scheduler’s layout rewrite pass.

meta_schedule_original_shape: Optional[List[PrimExpr]] = None

The original shape of the input tensor.

Returns#

output : tvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

tvm.topi.nn.conv2d_transpose_alter_layout(attrs, inputs, tinfos, out_type)[源代码]#

Change Conv2D_Transpose layout.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current convolution

inputs : tvm.relay.Expr

Grouped input symbols

tinfos : list

Input shape and dtype

out_type: type

The output type

Note#

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.conv2d_transpose_legalize(attrs, inputs, types)[源代码]#

Legalizes Transposed 2D convolution op.

Parameters#

attrs : tvm.ir.Attrs

Attributes of current Transposed 2D convolution

inputs : list of tvm.relay.Expr

The args of the Relay expr to be legalized

types : list of types

List of input and output types

Returns#

result : tvm.relay.Expr

The legalized expr

tvm.topi.nn.conv2d_transpose_nchw(Input, Filter, strides, padding, out_dtype, output_padding)[源代码]#

Transposed 2D convolution nchw forward operator.

Parameters#

Input : tvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

Filter : tvm.te.Tensor

4-D with shape [in_channel, num_filter, filter_height, filter_width]

strides : tuple of two ints

The spatial stride along height and width

padding : int or str

Padding size, or ['VALID', 'SAME']

out_dtype : str

The output data type. This is used for mixed precision.

output_padding : tuple of ints

Used to get the right output shape for gradients

Returns#

Output : tvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.conv2d_transpose_nchw_preprocess(data, kernel, strides, padding, out_dtype, output_padding)[源代码]#

Preprocess data and kernel to make the compute pattern of conv2d_transpose the same as conv2d

tvm.topi.nn.conv2d_winograd_nchw(data, weight, strides, padding, dilation, out_dtype, pre_computed=False, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)[源代码]#

Conv2D Winograd in NCHW layout. This is a clean version to be used by the auto-scheduler for both CPU and GPU.

Parameters#

data : tvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

weight : tvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, num_filter]

strides : int or a list/tuple of two ints

stride size, or [stride_height, stride_width]

padding : int or a list/tuple of two ints

padding size, or [pad_height, pad_width]

dilation : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtype : str, optional

Specifies the output data type.

pre_computed: bool

Whether the kernel is precomputed

auto_scheduler_rewritten_layout: str = “”

The layout after auto-scheduler’s layout rewrite pass.

meta_schedule_original_shape: Optional[List[PrimExpr]] = None

The original shape of the input tensor.

Returns#

outputtvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.conv2d_winograd_nchw_without_weight_transform(data, weight, strides, padding, dilation, out_dtype, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)[源代码]#

Conv2D Winograd without layout transform in NCHW layout. This is a clean version to be used by meta-schedule for both CPU and GPU.

Parameters#

datatvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

weighttvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, num_filter]

stridesint or a list/tuple of two ints

stride size, or [stride_height, stride_width]

paddingint or a list/tuple of two ints

padding size, or [pad_height, pad_width]

dilation: int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtypestr, optional

Specifies the output data type.

auto_scheduler_rewritten_layout: str = “”

The layout after auto-scheduler’s layout rewrite pass.

meta_schedule_original_shape: Optional[List[PrimExpr]] = None

The original shape of the input tensor.

Returns#

outputtvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.conv2d_winograd_nhwc(data, weight, strides, padding, dilation, out_dtype, pre_computed=False, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)[源代码]#

Conv2D Winograd in NHWC layout. This is a clean version to be used by the auto-scheduler for both CPU and GPU.

Parameters#

datatvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

weighttvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, num_filter]

stridesint or a list/tuple of two ints

stride size, or [stride_height, stride_width]

paddingint or a list/tuple of two ints

padding size, or [pad_height, pad_width]

dilation: int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtypestr, optional

Specifies the output data type.

pre_computed: bool

Whether the kernel is precomputed

auto_scheduler_rewritten_layout: str = “”

The layout after auto-scheduler’s layout rewrite pass.

meta_schedule_original_shape: Optional[List[PrimExpr]] = None

The original shape of the input tensor.

Returns#

outputtvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

tvm.topi.nn.conv2d_winograd_nhwc_without_weight_transform(data, weight, strides, padding, dilation, out_dtype, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)[源代码]#

Conv2D Winograd without layout transform in NHWC layout. This is a clean version to be used by the auto-scheduler for both CPU and GPU.

Parameters#

datatvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

weighttvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, num_filter]

stridesint or a list/tuple of two ints

stride size, or [stride_height, stride_width]

paddingint or a list/tuple of two ints

padding size, or [pad_height, pad_width]

dilation: int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtypestr, optional

Specifies the output data type.

auto_scheduler_rewritten_layout: str = “”

The layout after auto-scheduler’s layout rewrite pass.

meta_schedule_original_shape: Optional[List[PrimExpr]] = None

The original shape of the input tensor.

Returns#

outputtvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

tvm.topi.nn.conv2d_winograd_nnpack_weight_transform(kernel, convolution_algorithm, out_dtype)[源代码]#

Weight transformation for winograd

Parameters#

kernel: Tensor

The raw kernel tensor with layout “NCHW”. Only 3x3 kernel is supported for now.

convolution_algorithm: int

The convolution algorithm for Winograd NNPACK.

Returns#

outputtvm.te.Tensor

4-D with shape [alpha, alpha, CO, CI]

tvm.topi.nn.conv2d_winograd_weight_transform(kernel, tile_size)[源代码]#

Weight transformation for winograd

Parameters#

kernel: Tensor

The raw kernel tensor with layout “NCHW”.

tile_size: int

Tile size of winograd transform. e.g. 2 for F(2x2, 3x3) and 4 for F(4x4, 3x3)

Returns#

outputtvm.te.Tensor

4-D with shape [alpha, alpha, CO, CI]

tvm.topi.nn.conv3d_alter_layout(attrs, inputs, tinfos, out_type)[源代码]#

Change Conv3D layout.

Parameters#

attrstvm.ir.Attrs

Attributes of current convolution

inputstvm.relay.Expr

Grouped input symbols

tinfoslist

Input shape and dtype

out_type: type

The output type

Note#

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.conv3d_ncdhw(Input, Filter, stride, padding, dilation, groups, out_dtype=None)[源代码]#

Conv3D operator in NCDHW layout.

Parameters#

Inputtvm.te.Tensor

5-D with shape [batch, in_channel, in_depth, in_height, in_width]

Filtertvm.te.Tensor

5-D with shape [num_filter, in_channel, filter_depth, filter_height, filter_width]

strideint or a list/tuple of three ints

Stride size, or [stride_depth, stride_height, stride_width]

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

dilation: int or a list/tuple of three ints

dilation size, or [dilation_depth, dilation_height, dilation_width]

groups: int

Number of groups.

Returns#

Outputtvm.te.Tensor

5-D with shape [batch, out_channel, out_depth, out_height, out_width]

tvm.topi.nn.conv3d_ndhwc(Input, Filter, stride, padding, dilation, groups, out_dtype='float32', auto_scheduler_rewritten_layout='', meta_schedule_origin_shape=None)[源代码]#

Convolution operator in NDHWC layout.

Parameters#

Inputtvm.te.Tensor

5-D with shape [batch, in_depth, in_height, in_width, in_channel]

Filtertvm.te.Tensor

5-D with shape [filter_depth, filter_height, filter_width, in_channel, num_filter]

strideint or a list/tuple of three ints

Stride size, or [stride_depth, stride_height, stride_width]

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

dilation: int or a list/tuple of three ints

dilation size, or [dilation_depth, dilation_height, dilation_width]

groups: int

Number of groups.

out_dtype: str = “float32”,

The type of output tensor

auto_scheduler_rewritten_layout: str = “”

The layout after auto-scheduler’s layout rewrite pass.

meta_schedule_origin_shape: Optional[List[PrimExpr]] = None

The original shape of the input tensor.

Returns#

Outputtvm.te.Tensor

5-D with shape [batch, out_depth, out_height, out_width, out_channel]

tvm.topi.nn.conv3d_transpose_legalize(attrs, inputs, types)[源代码]#

Legalizes Transposed 3D convolution op.

Parameters#

attrstvm.ir.Attrs

Attributes of current Transposed 3D convolution

inputslist of tvm.relay.Expr

The args of the Relay expr to be legalized

typeslist of types

List of input and output types

Returns#

resulttvm.relay.Expr

The legalized expr

tvm.topi.nn.conv3d_transpose_ncdhw(Input, Filter, strides, padding, out_dtype, output_padding)[源代码]#

Transposed 3D convolution ncdhw forward operator.

Parameters#

Inputtvm.te.Tensor

5-D with shape [batch, in_channel, in_depth, in_height, in_width]

Filtertvm.te.Tensor

5-D with shape [in_channel, num_filter, filter_depth, filter_height, filter_width]

stridesint or a list/tuple of three ints

The spatial stride along depth, height and width

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

out_dtypestr

The output data type. This is used for mixed precision.

output_paddingtuple of ints

Used to get the right output shape for gradients

Returns#

Outputtvm.te.Tensor

5-D with shape [batch, out_channel, out_depth, out_height, out_width]

tvm.topi.nn.conv3d_transpose_ncdhw_preprocess(data, kernel, strides, padding, out_dtype, output_padding)[源代码]#

Preprocess data and kernel to make the compute pattern of conv3d_transpose the same as conv3d

tvm.topi.nn.conv3d_winograd_weight_transform(kernel, tile_size)[源代码]#

Weight transformation for 3D winograd

Parameters#

kernel: Tensor

The raw kernel tensor with layout “NCDHW”.

tile_size: int

Tile size of winograd transform. e.g. 2 for F(2x2, 3x3) and 4 for F(4x4, 3x3)

Returns#

outputtvm.te.Tensor

5-D with shape [alpha, alpha, alpha, CO, CI]

tvm.topi.nn.correlation_nchw(data1, data2, kernel_size, max_displacement, stride1, stride2, padding, is_multiply)[源代码]#

Correlation operator in NCHW layout.

Parameters#

data1tvm.te.Tensor

4-D with shape [batch, channel, height, width]

data2tvm.te.Tensor

4-D with shape [batch, channel, height, width]

kernel_size: int

Kernel size for correlation, must be an odd number

max_displacement: int

Max displacement of Correlation

stride1: int

Stride for data1

stride2: int

Stride for data2 within the neighborhood centered around data1

paddingint or a list/tuple of 2 or 4 ints

Padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

is_multiply: bool

Operation type is either multiplication or subtraction

Returns#

Outputtvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.declaration_conv2d_transpose_impl(data, kernel, strides, padding, out_dtype, output_padding)[源代码]#

Implementation of conv2d transpose

tvm.topi.nn.declaration_conv3d_transpose_impl(data, kernel, strides, padding, out_dtype, output_padding)[源代码]#

Implementation of conv3d transpose

tvm.topi.nn.deformable_conv2d_nchw(data, offset, kernel, strides, padding, dilation, deformable_groups, groups, out_dtype)[源代码]#

Deformable conv2D operator in NCHW layout.

The deformable convolution operation is described in https://arxiv.org/abs/1703.06211

Parameters#

datatvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

offsettvm.te.Tensor

4-D with shape [batch, deformable_groups * filter_height * filter_width * 2, out_height, out_width].

kerneltvm.te.Tensor

4-D with shape [num_filter, in_channel, filter_height, filter_width]

stridesint or a list/tuple of two ints

stride size, or [stride_height, stride_width]

paddingint or a list/tuple of two ints

padding size, or [pad_height, pad_width]

dilationint or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

deformable_groupsint

number of deformable groups

groupsint

number of groups

Returns#

outputtvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.deformable_conv2d_nhwc(data, offset, kernel, strides, padding, dilation, deformable_groups, groups, out_dtype)[源代码]#

Deformable conv2D operator in NHWC layout.

The deformable convolution operation is described in https://arxiv.org/abs/1703.06211

Parameters#

datatvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

offsettvm.te.Tensor
4-D with shape [batch, out_height, out_width, deformable_groups * filter_height * filter_width * 2].

kerneltvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, num_filter]

stridesint or a list/tuple of two ints

stride size, or [stride_height, stride_width]

paddingint or a list/tuple of two ints

padding size, or [pad_height, pad_width]

dilationint or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

deformable_groupsint

number of deformable groups

groupsint

number of groups

Returns#

outputtvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

tvm.topi.nn.dense(data, weight, bias=None, out_dtype=None, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)[源代码]#

The default implementation of dense in topi. This is an alias of matmul_nt operator for data tensor in non-transposed format and weight tensor in transposed format.

Parameters#

datatvm.te.Tensor

2-D with shape [batch, in_dim]

weighttvm.te.Tensor

2-D with shape [out_dim, in_dim]

biasOptional[tvm.te.Tensor]

1-D with shape [out_dim]

out_dtypeOptional[str]

The output type. This is used for mixed precision.

auto_scheduler_rewritten_layout: str = “”

The layout after auto-scheduler’s layout rewrite pass.

meta_schedule_original_shape: Optional[List[PrimExpr]] = None

The original shape of the input tensor.

Returns#

outputtvm.te.Tensor

2-D with shape [batch, out_dim]
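As a hedged sketch of the alias (hypothetical sizes), note that weight is expected in transposed [out_dim, in_dim] form:

from tvm import te, topi

data = te.placeholder((8, 256), name="data")       # [batch, in_dim]
weight = te.placeholder((64, 256), name="weight")  # [out_dim, in_dim], i.e. already transposed
bias = te.placeholder((64,), name="bias")          # optional
out = topi.nn.dense(data, weight, bias)
print(out.shape)  # [8, 64]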

tvm.topi.nn.dense_alter_layout(attrs, inputs, tinfos, out_type)[源代码]#

Change dense layout.

Parameters#

attrstvm.ir.Attrs

Attributes of current convolution

inputstvm.relay.Expr

Grouped input symbols

tinfoslist

Input shape and dtype

out_type: type

The output type

Note#

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.dense_legalize(attrs, inputs, types)[源代码]#

Legalizes dense op.

Parameters#

attrstvm.ir.Attrs

Attributes of current dense

inputslist of tvm.relay.Expr

The args of the Relay expr to be legalized

typeslist of types

List of input and output types

Returns#

resulttvm.relay.Expr

The legalized expr

tvm.topi.nn.dense_pack(data, weight, bias=None, out_dtype=None)[源代码]#

The default implementation of dense_pack in topi.

Parameters#

datatvm.te.Tensor

2-D with shape [batch, in_dim]

weighttvm.te.Tensor

2-D with shape [out_dim, in_dim]

biasOptional[tvm.te.Tensor]

1-D with shape [out_dim]

out_dtypeOptional[str]

The output type. This is used for mixed precision.

Returns#

outputtvm.te.Tensor

2-D with shape [batch, out_dim]

tvm.topi.nn.depth_to_space(data, block_size, layout='NCHW', mode='DCR')[源代码]#

Perform depth to space transformation on the data

Parameters#

datatvm.te.Tensor

4-D tensor in either NCHW or NHWC layout.

block_sizeint

Size of blocks to compose from channel dimension.

layoutstring

Either NCHW or NHWC, indicating data layout.

modestring

Either DCR or CDR, indicates how channels should be accessed. In DCR, channels are interwoven in the TensorFlow style while in CDR channels are accessed sequentially as in PyTorch.

Returns#

outputtvm.te.Tensor

Output of shape [N, C / block_size**2, H * block_size, W * block_size]
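A small sketch with assumed NCHW shapes, showing how block_size moves elements from the channel dimension into the spatial dimensions:

from tvm import te, topi

data = te.placeholder((1, 16, 8, 8), name="data")  # NCHW
out = topi.nn.depth_to_space(data, block_size=2, layout="NCHW", mode="DCR")
print(out.shape)  # [1, 4, 16, 16]: C / 2**2, H * 2, W * 2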

tvm.topi.nn.depthwise_conv2d_NCHWc(Input, Filter, stride, padding, dilation, layout, out_layout, out_dtype=None)[源代码]#

Depthwise convolution NCHW[x]c forward operator.

Parameters#

Inputtvm.te.Tensor

5-D with shape [batch, in_channel_chunk, in_height, in_width, in_channel_block]

Filtertvm.te.Tensor

6-D with shape [out_channel_chunk, 1, filter_height, filter_width, 1, out_channel_block] In NCHWc depthwise convolution, we group kernel’s in_channel and channel_multiplier together then do the tiling.

stridetuple of two ints

The spatial stride along height and width

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

dilation: int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

layoutstr

Input data layout

out_layoutstr

Output data layout

out_dtype: str, optional

Output data type

Returns#

Outputtvm.te.Tensor

5-D with shape [batch, out_channel_chunk, out_height, out_width, out_channel_block]

tvm.topi.nn.depthwise_conv2d_backward_input_nhwc(Filter, Out_grad, oshape, ishape, stride, padding)[源代码]#

Depthwise convolution nhwc backward wrt input operator.

Parameters#

Filtertvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]

Out_gradtvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

stridetuple of two ints

The spatial stride along height and width

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

Returns#

Outputtvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

tvm.topi.nn.depthwise_conv2d_backward_weight_nhwc(Input, Out_grad, oshape, fshape, stride, padding)[源代码]#

Depthwise convolution nhwc backward wrt weight operator.

Parameters#

Inputtvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

Out_gradtvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

stridetuple of two ints

The spatial stride along height and width

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

Returns#

Outputtvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]

tvm.topi.nn.depthwise_conv2d_infer_layout(workload, cfg)[源代码]#

Infer input/output shapes and layouts from a workload and cfg.

Parameters#

workloadtuple

conv2d workload

cfgtuple

tvm.autotvm config

Returns#

Output[tuple of tuple and str, tuple of tuple and str]

Input shapes and layouts, and output shapes and layouts

tvm.topi.nn.depthwise_conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)[源代码]#

Depthwise convolution nchw forward operator.

Parameters#

Inputtvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

Filtertvm.te.Tensor

4-D with shape [in_channel, channel_multiplier, filter_height, filter_width]

strideint or a list/tuple of two ints

The spatial stride, or (stride_height, stride_width).

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

dilation: int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtype: str, optional

Output data type

Returns#

Outputtvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.depthwise_conv2d_nhwc(Input, Filter, stride, padding, dilation, kernel_layout='HWOI', out_dtype=None)[源代码]#

Depthwise convolution nhwc forward operator.

Parameters#

Inputtvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

Filtertvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]

stridetuple of two ints

The spatial stride along height and width

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

dilation: int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtype: str, optional

Output data type

Returns#

Outputtvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

tvm.topi.nn.dilate(data, strides, dilation_value=0.0, name='DilatedInput')[源代码]#

Dilate data with given dilation value (0 by default).

Parameters#

datatvm.te.Tensor

n-D, can be any layout.

strideslist / tuple of n ints

Dilation stride on each dimension, 1 means no dilation.

dilation_valueint/float, optional

Value used to dilate the input.

namestr, optional

The name prefix for the generated operators

Returns#

Outputtvm.te.Tensor

n-D, the same layout as data.
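A brief sketch (assumed shapes): dilating a 3x3 spatial grid with a stride of 2 inserts one dilation_value between neighboring elements along that axis:

from tvm import te, topi

data = te.placeholder((1, 1, 3, 3), name="data")
out = topi.nn.dilate(data, strides=(1, 1, 2, 2))  # no dilation on batch/channel axes
print(out.shape)  # [1, 1, 5, 5], since (3 - 1) * 2 + 1 = 5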

tvm.topi.nn.equal_const_int(expr, value)[源代码]#

Returns whether expr equals value.

Parameters#

exprtvm.Expr

The input expression.

Returns#

equalbool

Whether they are equal.

tvm.topi.nn.fast_softmax(x, axis=-1)[源代码]#

Perform softmax activation on the data. Uses an approximation of the exponent for faster computation.

Parameters#

datatvm.te.Tensor

can be any dimension

axisint

channel axis

Returns#

outputtvm.te.Tensor

output shape is the same as input

tvm.topi.nn.fifo_buffer(data, buffer, axis)[源代码]#

FIFO buffer to enable computation reuse in CNNs with sliding window input

Compute equivalent of

concat(buffer, data, axis=axis)
.slice_axis(axis=axis,
            begin=data.shape[axis],
            end=data.shape[axis]+buffer.shape[axis])

Useful for

  • Encoding explicit re-use of computation in convolution ops operated on a sliding window input

  • Implementing a FIFO queue to cache intermediate results, e.g. as in Fast WaveNet.

Parameters#

datatvm.te.Tensor

The input data

buffertvm.te.Tensor

Previous value of the FIFO buffer

axisint

Specify which axis should be used for buffering

Returns#

resulttvm.te.Tensor

Updated value for the buffer

tvm.topi.nn.flatten(data)[源代码]#

Flattens the input array into a 2-D array by collapsing the higher dimensions.

Parameters#

datatvm.te.Tensor

Input array.

Returns#

outputtvm.te.Tensor

2-D array with collapsed higher dimensions.
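A quick sketch with assumed shapes; every dimension past the first collapses:

from tvm import te, topi

data = te.placeholder((2, 3, 4, 5), name="data")
out = topi.nn.flatten(data)
print(out.shape)  # [2, 60]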

tvm.topi.nn.get_const_int(expr)[源代码]#

Verifies that expr is an integer and gets the constant value.

Parameters#

exprtvm.Expr or int

The input expression.

Returns#

out_valueint

The output.

tvm.topi.nn.get_const_tuple(in_tuple)[源代码]#

Verifies input tuple is IntImm or Var, returns tuple of int or Var.

Parameters#

in_tupletuple of Expr

The input.

Returns#

out_tupletuple of int

The output.

tvm.topi.nn.get_pad_tuple(padding, kernel)[源代码]#

Common code to get the pad option

Parameters#

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

kerneltuple of int

Conv kernel size

Returns#

pad_topint

Padding size on top

pad_leftint

Padding size on left

pad_downint

Padding size on down.

pad_rightint

Padding size on right.
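A hedged sketch of the return convention: for 'SAME' padding with a 3x3 kernel, the total padding of kernel - 1 = 2 per axis is split between the two sides:

from tvm import topi

pad_top, pad_left, pad_down, pad_right = topi.nn.get_pad_tuple("SAME", (3, 3))
print(pad_top, pad_left, pad_down, pad_right)  # 1 1 1 1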

tvm.topi.nn.get_pad_tuple1d(padding, kernel)[源代码]#

Common code to get the pad option

Parameters#

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

kerneltuple of int

Conv kernel size

Returns#

pad_leftint

Padding size on left

pad_rightint

Padding size on right.

tvm.topi.nn.get_pad_tuple3d(padding, kernel)[源代码]#

Common code to get the pad option

Parameters#

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

kerneltuple of int

Conv kernel size

Returns#

pad_frontint

Padding size on front.

pad_topint

Padding size on top

pad_leftint

Padding size on left

pad_backint

Padding size on back.

pad_downint

Padding size on down.

pad_rightint

Padding size on right.

tvm.topi.nn.get_pad_tuple_generic(padding, kernel)[源代码]#

Common code to get the pad option

Parameters#

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

kerneltuple of int

Conv kernel size

Returns#

pad_topint

Padding size on top

pad_downint

Padding size on down.

pad_leftint

Padding size on left

pad_rightint

Padding size on right.

tvm.topi.nn.global_pool(data, pool_type, layout='NCHW')[源代码]#
Perform global pooling on height and width dimension of data.

It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters#

datatvm.te.Tensor

n-D with shape of layout

pool_typestr

Pool type, ‘max’ or ‘avg’

layoutstr

Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

Returns#

outputtvm.te.Tensor

n-D in same layout with height and width dimension size of 1. e.g., for NCHW, the output shape will be [batch, channel, 1, 1]
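A minimal sketch with an assumed NCHW shape:

from tvm import te, topi

data = te.placeholder((1, 512, 7, 7), name="data")  # NCHW
out = topi.nn.global_pool(data, pool_type="avg", layout="NCHW")
print(out.shape)  # [1, 512, 1, 1]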

tvm.topi.nn.group_conv1d_ncw(data, kernel, strides=1, padding='VALID', dilation=1, groups=1, out_dtype=None)[源代码]#

1D group convolution forward operator for NCW layout.

Parameters#

datatvm.te.Tensor

3-D with shape [batch, in_channel, in_width]

kerneltvm.te.Tensor

3-D with shape [num_filter, in_channel, filter_size]

stridesint or tuple

The spatial stride along width

paddingint, tuple, or str

Padding size can be an integer for equal padding, a tuple of (left, right) or a string in [‘VALID’, ‘SAME’].

dilationint or tuple

Dilation rate if convolution should be dilated.

groupsint

Number of groups

out_dtypestr

The output data type. If None then output is same type as input.

tvm.topi.nn.group_conv1d_nwc(data, kernel, strides=1, padding='VALID', dilation=1, groups=1, out_dtype=None)[源代码]#

1D group convolution forward operator for NWC layout.

Parameters#

datatvm.te.Tensor

3-D with shape [batch, in_width, in_channel]

kerneltvm.te.Tensor

3-D with shape [filter_size, in_channel, num_filter]

stridesint or tuple

The spatial stride along width

paddingint, tuple, or str

Padding size can be an integer for equal padding, a tuple of (left, right) or a string in [‘VALID’, ‘SAME’].

dilationint or tuple

Dilation rate if convolution should be dilated.

groupsint

Number of groups

out_dtypestr

The output data type. If None then output is same type as input.

tvm.topi.nn.group_conv1d_transpose_ncw(data, kernel, stride, padding, out_dtype, output_padding, groups)[源代码]#

Transposed 1D group convolution ncw forward operator.

Parameters#

datatvm.te.Tensor

3-D with shape [batch, in_channel, in_width]

kerneltvm.te.Tensor

3-D with shape [in_channel, num_filter, filter_width]

strideints

The spatial stride along width

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

out_dtypestr

The output data type. This is used for mixed precision.

output_paddingints

Used to recover the actual output shape in case there is more than one possible shape. Must be smaller than stride.

groupsint

number of groups

Returns#

outputtvm.te.Tensor

3-D with shape [batch, out_channel, out_width]

tvm.topi.nn.group_conv2d_nchw(Input, Filter, stride, padding, dilation, groups, out_dtype=None)[源代码]#

Group convolution operator in NCHW layout.

Parameters#

Inputtvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

Filtertvm.te.Tensor

4-D with shape [num_filter, in_channel // groups, filter_height, filter_width]

strideint or a list/tuple of two ints

Stride size, or [stride_height, stride_width]

paddingint or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

dilationint or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

groupsint

number of groups

out_dtypestr

The output type. This is used for mixed precision.

Returns#

Outputtvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.group_conv2d_nhwc(Input, Filter, stride, padding, dilation, groups, out_dtype=None)[源代码]#

Group convolution operator in NHWC layout.

Parameters#

Inputtvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel, …]

Filtertvm.te.Tensor

4-D with shape [filter_height, filter_width, in_channel // groups, num_filter]

strideint or a list/tuple of two ints

Stride size, or [stride_height, stride_width]

paddingint or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

dilationint or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

groupsint

number of groups

out_dtypestr

The output type. This is used for mixed precision.

Returns#

Outputtvm.te.Tensor

4-D with shape [batch, out_height, out_width, out_channel]

tvm.topi.nn.group_conv2d_transpose_nchw(data, kernel, stride, padding, out_dtype, output_padding, groups)[源代码]#

Transposed group convolution operator in NCHW layout.

Parameters#

datatvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

kerneltvm.te.Tensor

4-D with shape [in_channel, out_channel // groups, filter_height, filter_width]

strideint or a list/tuple of two ints

Stride size, or [stride_height, stride_width]

paddingint or a list/tuple of 2 or 4 ints

padding size, or [pad_height, pad_width] for 2 ints, or [pad_top, pad_left, pad_bottom, pad_right] for 4 ints

out_dtypestr

The output data type. This is used for mixed precision.

output_paddingtuple of ints

Used to get the right output shape for gradients

groupsint

number of groups

Returns#

Outputtvm.te.Tensor

4-D with shape [batch, out_channel, out_height, out_width]

tvm.topi.nn.group_conv3d_transpose_ncdhw(data, kernel, strides, padding, out_dtype, output_padding, groups)[源代码]#

Transposed group 3D convolution ncdhw forward operator.

Parameters#

datatvm.te.Tensor

5-D with shape [batch, in_channel, in_depth, in_height, in_width]

kerneltvm.te.Tensor

5-D with shape [in_channel, num_filter, filter_depth, filter_height, filter_width]

stridesint or a list/tuple of three ints

The spatial stride along depth, height and width

paddingint or str

Padding size, or [‘VALID’, ‘SAME’]

out_dtypestr

The output data type. This is used for mixed precision.

output_paddingtuple of ints

Used to get the right output shape for gradients

groupsint

number of groups

Returns#

Outputtvm.te.Tensor

5-D with shape [batch, out_channel, out_depth, out_height, out_width]

tvm.topi.nn.group_norm(data, gamma, beta, num_groups, channel_axis, axes, epsilon=1e-05)[源代码]#

Group normalization operator. It accepts fp16 and fp32 as input data type. It will cast the input to fp32 to perform the computation. The output will have the same data type as input.

Parameters#

datatvm.te.Tensor

N-D with shape (d_0, d_1, …, d_{N-1})

gamma: tvm.te.Tensor

1-D with shape (r_0) where r_0 == d_{channel_axis}

beta: tvm.te.Tensor

Optional, 1-D with shape (r_0) where r_0 == d_{channel_axis}

num_groupsint

The number of groups

channel_axisint

The channel axis

axeslist of int

Axis over the normalization applied, excluding the channel axis

epsilonfloat

The epsilon value to avoid division by zero.

Returns#

resulttvm.te.Tensor

N-D with shape (d_0, d_1, …, d_{N-1})

tvm.topi.nn.instance_norm(data, gamma, beta, axis, epsilon=1e-05)[源代码]#

Instance normalization operator.

Parameters#

datatvm.te.Tensor

N-D with shape (d_0, d_1, …, d_{N-1})

gamma: tvm.te.Tensor

K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k

beta: tvm.te.Tensor

Optional, K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k

axislist of int

Axis over the normalization applied (the axis along which the mean and variance are computed)

epsilonfloat

The epsilon value to avoid division by zero.

Returns#

resulttvm.te.Tensor

N-D with shape (d_0, d_1, …, d_{N-1})

tvm.topi.nn.layer_norm(data, gamma, beta, axis, epsilon=1e-05)[源代码]#

Layer normalization operator. It accepts fp16 and fp32 as input data type. It will cast the input to fp32 to perform the computation. The output will have the same data type as input.

Parameters#

datatvm.te.Tensor

N-D with shape (d_0, d_1, …, d_{N-1})

gamma: tvm.te.Tensor

K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k

beta: tvm.te.Tensor

Optional, K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k

axislist of int

Axis over the normalization applied

epsilonfloat

The epsilon value to avoid division by zero.

Returns#

resulttvm.te.Tensor

N-D with shape (d_0, d_1, …, d_{N-1})
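A sketch with hypothetical (batch, hidden) sizes, normalizing over the last axis:

from tvm import te, topi

data = te.placeholder((4, 768), name="data")
gamma = te.placeholder((768,), name="gamma")
beta = te.placeholder((768,), name="beta")
out = topi.nn.layer_norm(data, gamma, beta, axis=[1])
print(out.shape)  # [4, 768], same shape as the input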

tvm.topi.nn.layout_transform(tensor, current_layout, desired_layout)[源代码]#

Transform a tensor with the current layout to the desired layout.

E.g. layout_transform(t, "NCHW", "CNHW") -> relay.transpose(t, [1, 0, 2, 3])

Parameters#

tensor: relay.Expr

The Tensor to transpose

current_layout: str

The current layout e.g. NCHW or OIHW

desired_layout: str

The desired layout, must be compatible with current_layout

Returns#

The layout_transformed tensor.

tvm.topi.nn.leaky_relu(x, alpha)[源代码]#

Take leaky relu of input x.

Parameters#

xtvm.te.Tensor

Input argument.

alphafloat

The slope for the small gradient when x < 0

Returns#

ytvm.te.Tensor

The result.

tvm.topi.nn.log_softmax(x, axis=-1)[源代码]#

Perform log softmax activation on the data

Parameters#

datatvm.te.Tensor

N-D input data

Returns#

outputtvm.te.Tensor

N-D output with same shape

tvm.topi.nn.lrn(data, size, axis=1, alpha=0.0001, beta=0.75, bias=2)[源代码]#

Perform the across channels local response normalisation on the input data.

sum_sqr_up^i{x, y} = (bias + (alpha / size) * sum_{j=max(0, i-size/2)}^{min(N-1, i+size/2)} (data^j{x, y})^2)^beta

output^i{x, y} = data^i{x, y} / sum_sqr_up^i{x, y}

where N is the number of input channels.

Parameters#

datatvm.te.Tensor

4-D with shape [batch, channel, height, width]

sizeint

normalisation window size

axisint

input data layout channel axis default value is 1 for NCHW format

biasfloat

offset to avoid dividing by 0

alphafloat

to be divided

betafloat

exponent

Returns#

outputtvm.te.Tensor

4-D output with same shape

tvm.topi.nn.lstm(Xs, Wi, Wh, Bi=None, Bh=None, h_init=None, c_init=None, proj=None, p_i=None, p_f=None, p_o=None, f_act=<function sigmoid>, g_act=<function tanh>, h_act=<function tanh>, reverse=False, weight_layout='IFGO')[源代码]#

General LSTM implemented using TE scan.

Parameters#

Xste.Tensor

Input sequence with shape (seq_len, batch_size, in_dim)

Wite.Tensor

Input weight matrix with shape (4 * hidden_dim, in_dim). The weights are packed according to weight_layout.

Whte.Tensor

Hidden weight matrix with shape (4 * hidden_dim, hidden_dim or proj_dim). Packed in the same way as Wi.

Bite.Tensor, optional

Input bias with shape (4 * hidden_dim,), by default None. Packed as Wh.

Bhte.Tensor, optional

Hidden bias with shape as Bi, by default None. Packed as Wh.

h_initte.Tensor, optional

Initial hidden state with shape (batch_size, hidden_dim or proj_dim), zero if None

c_initte.Tensor, optional

Initial cell state with same shape as h_init, zero if None

projte.Tensor, optional

Projection matrix with shape (proj_dim, hidden_dim), by default None

p_i, p_f, p_ote.Tensor, optional

Peephole LSTM matrices with shape (batch_size, hidden_dim), by default None

f_act, g_act, h_actF, optional

Gate activation functions

reversebool, optional

Whether to process Xs in reverse, by default False

weight_layoutstr, optional

The packed weight layout for gates, by default “IFGO”. Note: I = input, F = forget, G = cell, O = output.

Returns#

resultte.Tensor, te.Tensor

Tuple of hidden states (with shape (seq_len, batch_size, hidden_dim or proj_dim)), and cell states (with shape (seq_len, batch_size, hidden_dim)).

Parameters:

weight_layout (str)

tvm.topi.nn.matmul(tensor_a, tensor_b, bias=None, out_dtype=None, transpose_a=False, transpose_b=False, auto_scheduler_rewritten_layout='', meta_schedule_original_shape=None)[源代码]#

The default implementation of matmul in topi.

Parameters#

tensor_atvm.te.Tensor

2-D with shape [batch, in_dim]

tensor_btvm.te.Tensor

2-D with shape [out_dim, in_dim]

biasOptional[tvm.te.Tensor]

1-D with shape [out_dim]

out_dtypeOptional[str]

The output type. This is used for mixed precision.

transpose_aOptional[bool] = False

Whether the tensor_a is in transposed format.

transpose_bOptional[bool] = False

Whether the tensor_b is in transposed format.

auto_scheduler_rewritten_layout: Optional[str] = “”

The layout after auto-scheduler’s layout rewrite pass.

meta_schedule_original_shape: Optional[List[PrimExpr]] = None

The original shape of the input tensor.

Returns#

outputtvm.te.Tensor

2-D with shape [batch, out_dim]
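A hedged sketch of the default (non-transposed) convention, with assumed sizes; here tensor_b is taken as [in_dim, out_dim], while transpose_b=True would treat it as [out_dim, in_dim] like dense:

from tvm import te, topi

a = te.placeholder((16, 32), name="a")
b = te.placeholder((32, 8), name="b")
out = topi.nn.matmul(a, b)  # transpose_a=False, transpose_b=False
print(out.shape)  # [16, 8]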

tvm.topi.nn.matmul_legalize(attrs, inputs, types)[源代码]#

Legalizes matmul op.

Parameters#

attrstvm.ir.Attrs

Attributes of current matmul

inputslist of tvm.relay.Expr

The args of the Relay expr to be legalized

typeslist of types

List of input and output types

Returns#

resulttvm.relay.Expr

The legalized expr

tvm.topi.nn.mirror_pad(data, pad_before, pad_after=None, mode='SYMMETRIC', name='MirrorPadInput')[源代码]#

Pad input with mirroring, either symmetric or reflected.

Parameters#

datatvm.te.Tensor

n-D input, can be any layout.

pad_beforelist / tuple of n ints

Pad width on each dimension, applied before the axis begins.

pad_afterlist / tuple of n ints, optional

Pad width on each dimension, applied after the axis ends.

mode: str, optional

Type of mirror padding to apply. Must be SYMMETRIC or REFLECT

namestr, optional

The name prefix operators generated

Returns#

Outputtvm.te.Tensor

n-D, the same layout as Input.

tvm.topi.nn.namedtuple(typename, field_names, *, rename=False, defaults=None, module=None)[源代码]#

Returns a new subclass of tuple with named fields.

>>> Point = namedtuple('Point', ['x', 'y'])
>>> Point.__doc__                   # docstring for the new class
'Point(x, y)'
>>> p = Point(11, y=22)             # instantiate with positional args or keywords
>>> p[0] + p[1]                     # indexable like a plain tuple
33
>>> x, y = p                        # unpack like a regular tuple
>>> x, y
(11, 22)
>>> p.x + p.y                       # fields also accessible by name
33
>>> d = p._asdict()                 # convert to a dictionary
>>> d['x']
11
>>> Point(**d)                      # convert from a dictionary
Point(x=11, y=22)
>>> p._replace(x=100)               # _replace() is like str.replace() but targets named fields
Point(x=100, y=22)
tvm.topi.nn.nll_loss(predictions, targets, weights, reduction, ignore_index)[源代码]#

Negative log likelihood loss on the input data.

output{n, i_1, i_2, …, i_k} = -p * w

where t = target{n, i_1, i_2, …, i_k}

p = predictions{n, t, i_1, i_2, …, i_k}

w = weights{t} if t != ignore_index else 0

result = reduction(output)

Parameters#

predictionstvm.te.Tensor

(k+2)-D with shape (N, C, d_1, d_2, …, d_k), where C is the number of target classes

targetstvm.te.Tensor

(k+1)-D with shape (N, d_1, d_2, …, d_k) The target value of the input.

weightstvm.te.Tensor

1-D with shape (C,) The weight of each target value.

reductionstring

The reduction method to apply to output. Can be “mean”, “sum” or “none”.

ignore_indexint

The target value to ignore.

Returns#

outputtvm.te.Tensor

a scalar if the reduction type is “mean” or “sum”, otherwise the same shape as target.
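A minimal sketch with an assumed 3-class, batch-of-2 setup and no extra spatial dimensions (the ignore_index value here is an arbitrary assumption):

from tvm import te, topi

predictions = te.placeholder((2, 3), name="predictions")       # (N, C)
targets = te.placeholder((2,), name="targets", dtype="int32")  # (N,)
weights = te.placeholder((3,), name="weights")                 # (C,)
out = topi.nn.nll_loss(predictions, targets, weights, reduction="mean", ignore_index=-100)
# out is a scalar because reduction is "mean"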

tvm.topi.nn.pad(data, pad_before, pad_after=None, pad_value=0.0, name='PadInput', attrs=None)[源代码]#

Pad Input with zeros.

Parameters#

datatvm.te.Tensor

n-D input, can be any layout.

pad_beforelist / tuple of n ints

Pad width on each dimension, applied before the axis begins.

pad_afterlist / tuple of n ints, optional

Pad width on each dimension, applied after the axis ends.

pad_valuefloat, optional

The value to be padded.

namestr, optional

The name prefix operators generated

Returns#

Outputtvm.te.Tensor

n-D, the same layout as Input.
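A short sketch (assumed NCHW shape) padding one pixel on each side of H and W:

from tvm import te, topi

data = te.placeholder((1, 3, 32, 32), name="data")
out = topi.nn.pad(data, pad_before=(0, 0, 1, 1), pad_after=(0, 0, 1, 1), pad_value=0.0)
print(out.shape)  # [1, 3, 34, 34]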

tvm.topi.nn.pool1d(data, kernel, stride, dilation, padding, pool_type, ceil_mode=False, layout='NCW', count_include_pad=True)[源代码]#
Perform pooling on width dimension of data.

Width axis is determined according to the layout string, in which 'W' means width. The width dimension cannot be split. For example, NCW, NCW16c, etc. are valid for pool, while NCW16w is not. See parameter layout for more information on the layout string convention.

Parameters#

datatvm.te.Tensor

n-D with shape of layout

kernellist/tuple of one int or int

Kernel size, [kernel_width]

stridelist/tuple of one int or int

Stride size, [stride_width]

dilation: list/tuple of one int or int

Dilation size, [dilation_width]

paddinglist/tuple of two ints

Pad size, [pad_left, pad_right]

pool_typestr

Pool type, ‘max’ or ‘avg’

ceil_modebool

Whether to use ceil when calculating output size.

layout: string

Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCW16c can describe a 4-D tensor of [batch_size, channel, width, channel_block], in which channel_block=16 is a split of dimension channel.

count_include_pad: bool

Whether include padding in the calculation when pool_type is ‘avg’

Returns#

outputtvm.te.Tensor

n-D in the same layout

tvm.topi.nn.pool2d(data, kernel, stride, dilation, padding, pool_type, ceil_mode=False, layout='NCHW', count_include_pad=True)[源代码]#
Perform pooling on height and width dimension of data.

It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters#

datatvm.te.Tensor

n-D with shape of layout

kernellist/tuple of two ints

Kernel size, [kernel_height, kernel_width]

stridelist/tuple of two ints

Stride size, [stride_height, stride_width]

dilation: list/tuple of two ints

Dilation size, [dilation_height, dilation_width]

paddinglist/tuple of four ints

Pad size, [pad_top, pad_left, pad_bottom, pad_right]

pool_typestr

Pool type, ‘max’ or ‘avg’

ceil_modebool

Whether to use ceil when calculating output size.

layout: string

Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

count_include_pad: bool

Whether include padding in the calculation when pool_type is ‘avg’

Returns#

outputtvm.te.Tensor

n-D in the same layout
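A minimal sketch of 2x2 max pooling with stride 2 on an assumed NCHW input:

from tvm import te, topi

data = te.placeholder((1, 64, 32, 32), name="data")  # NCHW
out = topi.nn.pool2d(
    data,
    kernel=(2, 2),
    stride=(2, 2),
    dilation=(1, 1),
    padding=(0, 0, 0, 0),  # [pad_top, pad_left, pad_bottom, pad_right]
    pool_type="max",
)
print(out.shape)  # [1, 64, 16, 16]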

tvm.topi.nn.pool3d(data, kernel, stride, dilation, padding, pool_type, ceil_mode=False, layout='NCDHW', count_include_pad=True)[源代码]#
Perform pooling on depth, height and width dimension of data.

It decides the depth, height and width dimension according to the layout string, in which ‘D’, ‘W’ and ‘H’ means depth, width and height respectively. Depth, width and height dimension cannot be split. For example, NCDHW, NCDHW16c, etc. are valid for pool, while NCDHW16d, NCDHW16w, NCDHW16h are not. See parameter layout for more information of the layout string convention.

Parameters#

datatvm.te.Tensor

n-D with shape of layout

kernellist/tuple of three ints

Kernel size, [kernel_depth, kernel_height, kernel_width]

stridelist/tuple of three ints

Stride size, [stride_depth, stride_height, stride_width]

dilation: list/tuple of three ints

Dilation size, [dilation_depth, dilation_height, dilation_width]

paddinglist/tuple of six ints

Pad size, [pad_front, pad_top, pad_left, pad_back, pad_bottom, pad_right]

pool_typestr

Pool type, ‘max’ or ‘avg’

ceil_modebool

Whether to use ceil when calculating output size.

layout: string

Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCDHW16c can describe a 6-D tensor of [batch_size, channel, depth, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

count_include_pad: bool

Whether include padding in the calculation when pool_type is ‘avg’

Returns#

outputtvm.te.Tensor

n-D in the same layout

tvm.topi.nn.pool_grad(grads, data, kernel, stride, padding, pool_type, ceil_mode=False, count_include_pad=True, layout='NCHW')[源代码]#
Gradient of pooling on height and width dimension of data.

It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters#

gradstvm.te.Tensor

n-D with shape of layout

datatvm.te.Tensor

n-D with shape of layout

kernellist/tuple of two ints

Kernel size, [kernel_height, kernel_width]

stridelist/tuple of two ints

Stride size, [stride_height, stride_width]

paddinglist/tuple of four ints

Pad size, [pad_top, pad_left, pad_bottom, pad_right]

pool_typestr

Pool type, ‘max’ or ‘avg’

ceil_modebool

Whether to use ceil when calculating output size.

count_include_pad: bool

Whether include padding in the calculation when pool_type is ‘avg’

layout: string

Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

Returns#

outputtvm.te.Tensor

n-D in the same layout

tvm.topi.nn.prelu(x, slope, axis=1)[源代码]#

PReLU. It accepts two arguments: an input x and a weight array W, and computes the output as \(y = x > 0 ? x : W * x\), where \(*\) is an elementwise multiplication for each sample in the batch.

Parameters#

xtvm.te.Tensor

Input argument.

slopetvm.te.Tensor

Channelised slope tensor for prelu

axisint

The axis where the channel data needs to be applied

Returns#

ytvm.te.Tensor

The result.

tvm.topi.nn.qnn_conv2d_alter_layout(_attrs, _inputs, _tinfos, _out_type)[源代码]#

Change qnn.conv2d layout.

Parameters#

attrstvm.ir.Attrs

Attributes of current convolution

inputstvm.relay.Expr

Grouped input symbols

tinfoslist

Input shape and dtype

out_type: type

The output type

Note#

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.qnn_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type)[源代码]#

Change qnn.dense layout. Not to change by default

Parameters#

attrstvm.ir.Attrs

Attributes of current dense op

inputstvm.relay.Expr

Grouped input symbols

tinfoslist

Input shape and dtype

out_type: type

The output type

tvm.topi.nn.qnn_requantize_alter_layout(_attrs, _inputs, _tinfos, _out_type)[源代码]#

Change requantize layout.

Parameters#

attrstvm.ir.Attrs

Attributes of current convolution

inputstvm.relay.Expr

Grouped input symbols

tinfoslist

Input shape and dtype

out_type: type

The output type

Note#

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.reduce(function, iterable[, initial]) → value#

Apply a function of two arguments cumulatively to the items of a sequence or iterable, from left to right, so as to reduce the iterable to a single value. For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates ((((1+2)+3)+4)+5). If initial is present, it is placed before the items of the iterable in the calculation, and serves as a default when the iterable is empty.

tvm.topi.nn.relu(x)[源代码]#

Take relu of input x.

Parameters#

xtvm.te.Tensor

Input argument.

Returns#

ytvm.te.Tensor

The result.
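Since relu is simple enough to compile end to end, here is a hedged build-and-run sketch using the classic TE schedule workflow (assumes an LLVM-enabled TVM build):

import numpy as np
import tvm
from tvm import te, topi

x = te.placeholder((4,), name="x")
y = topi.nn.relu(x)
s = te.create_schedule(y.op)
f = tvm.build(s, [x, y], target="llvm")

a = tvm.nd.array(np.array([-1.0, 0.0, 0.5, 2.0], dtype="float32"))
b = tvm.nd.array(np.zeros(4, dtype="float32"))
f(a, b)
print(b.numpy())  # [0.  0.  0.5 2. ]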

tvm.topi.nn.rms_norm(data, weight, axis, epsilon=1e-05)[源代码]#

Root mean square normalization operator. The output will have the same data type as input.

Parameters#

datatvm.te.Tensor

N-D with shape (d_0, d_1, …, d_{N-1})

weight: tvm.te.Tensor

K-D with shape (r_0, r_1, …, r_{K-1}) where K == len(axis) and d_{axis_k} == r_k

axislist of int

Axis over the normalization applied

epsilonfloat

The epsilon value to avoid division by zero.

Returns#

resulttvm.te.Tensor

N-D with shape (d_0, d_1, …, d_{N-1})

tvm.topi.nn.scale_shift_nchw(Input, Scale, Shift)[源代码]#

Batch normalization operator in inference.

Parameters#

Inputtvm.te.Tensor

4-D input tensor, NCHW layout [batch, channel, height, width]

Scaletvm.te.Tensor

Scale tensor, 1-D of size channel number

Shifttvm.te.Tensor

Shift tensor, 1-D of size channel number

Returns#

Outputtvm.te.Tensor

Output tensor, layout is NCHW

tvm.topi.nn.scale_shift_nchwc(Input, Scale, Shift)[源代码]#

Batch normalization operator in inference.

Parameters#

Inputtvm.te.Tensor

5-D input tensor, NCHWc layout [batch, channel_chunk, height, width, channel_block]

Scaletvm.te.Tensor

Scale tensor, 2-D of size [channel_chunk, channel_block]

Shifttvm.te.Tensor

Shift tensor, 2-D of size [channel_chunk, channel_block]

Returns#

Outputtvm.te.Tensor

Output tensor, layout is NCHWc

tvm.topi.nn.scale_shift_nhwc(Input, Scale, Shift)[源代码]#

Batch normalization operator in inference.

Parameters#

Inputtvm.te.Tensor

4-D input tensor, NHWC layout [batch, height, width, channel]

Scaletvm.te.Tensor

Scale tensor, 1-D of size channel number

Shifttvm.te.Tensor

Shift tensor, 1-D of size channel number

Returns#

Outputtvm.te.Tensor

Output tensor, layout is NHWC

tvm.topi.nn.simplify(expr)[源代码]#

Simplify the expression if it is Expr, directly return if it is int.

Parameters#

exprExpr or int

The input.

Returns#

outExpr or int

The simplified output

tvm.topi.nn.simulated_dequantize(data, in_dtype, input_scale=None, input_zero_point=None, axis=-1)[源代码]#

Simulated QNN dequantize operator that mimics QNN outputs without changing datatype. The benefit of this operator over true QNN dequantize is that this operator allows dynamic datatype selection and can operate on both per-channel and scalar scales and zero points while QNN dequantize requires both of these to be fixed at compile time.

Parameters#

data: tvm.te.Tensor

An N-D input tensor to the operator.

in_dtype: tvm.te.Tensor

A scalar variable that indicates which datatype to simulate dequantization with. Use SQNN_DTYPE_TO_CODE to convert a dtype string into the corresponding variable value.

input_scale: tvm.te.Tensor, optional

A scalar tensor representing the scale to use when dequantizing from integer datatypes. When it contains more than a single value, N must match the number of channels in data.

input_zero_point: tvm.te.Tensor, optional

A 1-D tensor representing the zero point to use when dequantizing from integer datatypes. When it contains more than a single value, N must match the number of channels in data.

axis: int, optional

The channel axis for quantization. Default value is -1 which corresponds to the last axis.

tvm.topi.nn.simulated_quantize(data, out_dtype, output_scale=None, output_zero_point=None, axis=-1)[源代码]#

Simulated QNN quantize operator that mimics QNN outputs without changing datatype. The benefit of this operator over true QNN quantize is that this operator allows dynamic datatype selection and can operate on both per-channel and scalar scales and zero points while QNN quantize requires both of these to be fixed at compile time.

Parameters#

data: tvm.te.Tensor

An N-D input tensor to the operator.

out_dtype: tvm.te.Tensor

A scalar variable that indicates which datatype to simulate quantization with. Use SQNN_DTYPE_TO_CODE to convert a dtype string into the corresponding variable value.

output_scale: tvm.te.Tensor, optional

A scalar tensor representing the scale to use when quantizing to integer datatypes. When it contains more than a single value, N must match the number of channels in data.

output_zero_point: tvm.te.Tensor, optional

A 1-D tensor representing the zero point to use when quantizing to integer datatypes. When it contains more than a single value, N must match the number of channels in data.

axis: int, optional

The channel axis for quantization. Default value is -1 which corresponds to the last axis.

tvm.topi.nn.softmax(x, axis=-1)[源代码]#

Perform softmax activation on the data.

Parameters#

datatvm.te.Tensor

can be any dimension

axisint

channel axis

Returns#

outputtvm.te.Tensor

output shape is the same as input

tvm.topi.nn.softmax_common(x, axis, use_fast_exp)[源代码]#

The common part of softmax and fast_softmax

tvm.topi.nn.space_to_batch_nd(data, block_shape, pad_before, pad_after, pad_value=0.0)[源代码]#

Perform space to batch transformation on the data

Parameters#

datatvm.te.Tensor

N-D Tensor with shape [batch, spatial_shape, remaining_shapes], where spatial_shape has M dimensions.

block_shapelist of ints

list of size [M] where M is number of spatial dims, specifies block size for each spatial dimension.

pad_beforelist of ints

list of shape [M] where M is number of spatial dims, specifies zero-padding size before each spatial dimension.

pad_afterlist of ints

list of shape [M] where M is number of spatial dims, specifies zero-padding size after each spatial dimension.

pad_valuefloat, optional

The value used for padding.

Returns#

output : tvm.te.Tensor

tvm.topi.nn.space_to_depth(data, block_size, layout='NCHW')[源代码]#

Perform space to depth transformation on the data

Parameters#

datatvm.te.Tensor

4-D tensor in either NCHW or NHWC layout.

block_sizeint

Size of blocks to decompose into channel dimension.

layoutstring

Either NCHW or NHWC, indicating data layout.

Returns#

outputtvm.te.Tensor

Output of shape [N, C * block_size**2, H / block_size, W / block_size]

tvm.topi.nn.sparse_add(dense_data, sparse_data, sparse_indices, sparse_indptr)[源代码]#

Computes sparse-dense addition

Parameters#

dense_datatvm.te.Tensor

2-D with shape [M, N]

sparse_datatvm.te.Tensor

1-D with shape [nnz] (CSR)

sparse_indicestvm.te.Tensor

1-D with shape [nnz] (CSR)

sparse_indptrtvm.te.Tensor

1-D with shape [M + 1] (CSR)

Returns#

outputtvm.te.Tensor

2-D with shape [M, N]

tvm.topi.nn.sparse_conv2d(dense_data, sparse_data, sparse_indices, sparse_indptr, layout='NHWC', kernel_size=1)[源代码]#

Computes sparse conv2d (1x1) of data and (weight_data, weight_indices, weight_indptr)

Parameters#

dense_datatvm.te.Tensor

4-D with shape [M, H, W, K] (layout=NHWC)

4-D with shape [M, K, H, W] (layout=NCHW)

sparse_datatvm.te.Tensor

2-D with shape [num_blocks, bs_r] (BSR)

3-D with shape [num_blocks, bs_r, bs_c] (BSR)

sparse_indicestvm.te.Tensor

1-D with shape [num_blocks] (BSR)

sparse_indptrtvm.te.Tensor

1-D with shape [(N + 1) // bs_r] (BSR)

layoutstr

layout of data

Returns#

outputtvm.te.Tensor

4-D with shape [M, H, W, N] (layout=NHWC)

4-D with shape [M, N, H, W] (layout=NCHW)

tvm.topi.nn.sparse_dense(dense_data, sparse_data, sparse_indices, sparse_indptr, sparse_lhs=False)[源代码]#

Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T if sparse_lhs=False, or of (data_data, data_indices, data_indptr) and weight.T if sparse_lhs=True.

Parameters#

dense_datatvm.te.Tensor

2-D with shape [M, K]

sparse_datatvm.te.Tensor

1-D with shape [nnz] (CSR) or 3-D with shape [num_blocks, bs_r, bs_c] (BSR)

sparse_indicestvm.te.Tensor

1-D with shape [nnz] (CSR) or 1-D with shape [num_blocks] (BSR)

sparse_indptrtvm.te.Tensor

1-D with shape [N + 1] (CSR) or 1-D with shape [(N + 1) // bs_r] (BSR)

sparse_lhsbool, optional

Indicates whether lhs or rhs matrix is sparse. Default value is False.

Returns#

outputtvm.te.Tensor

2-D with shape [M, N]
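A shape-only sketch for the CSR case with sparse_lhs=False (all sizes assumed): the weight is given as (data, indices, indptr) for a hypothetical [N, K] = [4, 8] matrix with 6 nonzeros:

from tvm import te, topi

data = te.placeholder((2, 8), name="data")                         # [M, K]
w_data = te.placeholder((6,), name="w_data")                       # [nnz]
w_indices = te.placeholder((6,), name="w_indices", dtype="int32")  # [nnz]
w_indptr = te.placeholder((5,), name="w_indptr", dtype="int32")    # [N + 1]
out = topi.nn.sparse_dense(data, w_data, w_indices, w_indptr)
print(out.shape)  # [2, 4]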

tvm.topi.nn.sparse_dense_alter_layout(_attrs, _inputs, _tinfos, _out_type)[源代码]#

Change Sparse Dense layout.

This is used for modifying the inputs weights so they are more amenable for the target.

Parameters#

attrstvm.ir.Attrs

Attributes of current convolution

inputstvm.relay.Expr

Grouped input symbols

tinfoslist

Input shape and dtype

out_type: type

The output type

Note#

Unlike other TOPI functions, this function operates on both graph level and operator level.

tvm.topi.nn.sparse_dense_sp_lhs(data_data, data_indices, data_indptr, weight)[源代码]#

Computes sparse-dense matrix multiplication of (data_data, data_indices, data_indptr) and weight.T

Parameters#

data_data:

1-D with shape [nnz] (CSR) or 3-D with shape [num_blocks, bs_r, bs_c] (BSR)

data_indices:

1-D with shape [nnz] (CSR) or 1-D with shape [num_blocks] (BSR)

data_indptr:

1-D with shape [M + 1] (CSR) or 1-D with shape [(M + 1) // bs_r] (BSR)

weight:

2-D with shape [N, K]

Returns#

outputtvm.te.Tensor

2-D with shape [M, N]

tvm.topi.nn.sparse_dense_sp_rhs(data, weight_data, weight_indices, weight_indptr)[源代码]#

Computes sparse-dense matrix multiplication of data and (weight_data, weight_indices, weight_indptr).T

Parameters#

datatvm.te.Tensor

2-D with shape [M, K]

weight_datatvm.te.Tensor

1-D with shape [nnz] (CSR) or 3-D with shape [num_blocks, bs_r, bs_c] (BSR)

weight_indicestvm.te.Tensor

1-D with shape [nnz] (CSR) or 1-D with shape [num_blocks] (BSR)

weight_indptrtvm.te.Tensor

1-D with shape [N + 1] (CSR) or 1-D with shape [(N + 1) // bs_r] (BSR)

Returns#

outputtvm.te.Tensor

2-D with shape [M, N]

tvm.topi.nn.sparse_transpose(sparse_data, sparse_indices, sparse_indptr)[源代码]#

Transpose a square sparse matrix A, an n-by-n sparse matrix in CSR format. (Currently only square matrices are supported.)

Parameters#

sparse_datatvm.te.Tensor

1-D with shape [nonzeros]

sparse_indicestvm.te.Tensor

1-D with shape [nonzeros], dtype of ‘int32’

sparse_indptrtvm.te.Tensor

1-D with shape [n+1], dtype of ‘int32’

Returns#

out_datatvm.te.Tensor

1-D with shape [nonzeros]

out_indicestvm.te.Tensor

1-D with shape [nonzeros], dtype of ‘int32’

out_indptrtvm.te.Tensor

1-D with shape [n+1], dtype of ‘int32’

tvm.topi.nn.strided_slice(a, begin, end, strides=None, axes=None, slice_mode='end')[source]#

Slice of an array.

Parameters#

a : tvm.te.Tensor

The tensor to be sliced.

begin : list of int

The indices to begin with in the slicing.

end : list of int

Indices indicating end of the slice.

strides : list of int, optional

Specifies the stride values; a stride can be negative, in which case the input tensor will be reversed in that particular axis.

axes : list of int, optional

Axes along which slicing is applied. When it is specified, begin, end, strides, and axes need to be lists of integers of the same length.

slice_mode : str, optional

The slice mode [end, size]. end - The ending indices for the slice [default]. size - The input strides will be ignored; in this mode end indicates the size of a slice starting at the location specified by begin. If end[i] is -1, all remaining elements in that dimension are included in the slice.

Returns#

ret : tvm.te.Tensor
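
For example, a sketch that keeps axes 0 and 1 whole and takes every other element along axis 2 (tensor sizes are illustrative):

import tvm
from tvm import te, topi

a = te.placeholder((4, 8, 16), name="a", dtype="float32")

# begin/end cover the full extent of axes 0 and 1; stride 2 on axis 2
# yields a tensor of shape (4, 8, 8).
sliced = topi.nn.strided_slice(a, begin=[0, 0, 0], end=[4, 8, 16], strides=[1, 1, 2])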

tvm.topi.nn.try_get_conv2d_sparse_input(args)[source]#

Analyze the input data from the given args.

Parameters#

args : List[Tensor]

Input/output Tensor of a TVM subgraph.

Returns#

Dict[Tensor, str] :

Map from the input Tensor to its buffer name.

Notes#

The buffer name is specially designed, and these buffers should be provided in SearchTask(…, task_inputs={…}).

tvm.topi.nn.try_get_sparse_input(args)[source]#

Analyze the input data from the given args.

Parameters#

args : List[Tensor]

Input/output Tensor of a TVM subgraph.

Returns#

Dict[Tensor, str] :

Map from the input Tensor to its buffer name.

Notes#

The buffer name is specially designed, and these buffers should be provided in SearchTask(…, task_inputs={…}).

tvm.topi.nn.unpack_NCHWc_to_nchw(packed_out, out_dtype)[source]#

Unpack conv2d_NCHWc output from layout NCHWc to NCHW

Parameters#

packed_out : tvm.te.Tensor

The output tensor of conv2d_NCHWc.

out_dtype : str

The output dtype.

Returns#

unpacked_out : tvm.te.Tensor

The unpacked output tensor in NCHW layout.

tvm.topi.nn.upsampling(data, scale_h, scale_w, layout='NCHW', method='nearest_neighbor', align_corners=False, output_shape=None)[source]#
Perform upsampling on the data.

Nearest neighbor and bilinear upsampling are supported.

Parameters#

inputs : tvm.te.Tensor

inputs is a 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

scale_h : float

Scaling factor for height

scale_w : float

Scaling factor for width

layout : string, optional

either “NCHW” or “NHWC”

method : {“bilinear”, “nearest_neighbor”, “bicubic”}

Method to be used for upsampling.

output_shape : tvm.tir.container.Array, optional

Shape to return. If left None, it will be inferred. (If the shape is determined dynamically, pass out_dtype.shape as output_shape.)

Returns#

output : tvm.te.Tensor

4-D with shape [batch, channel, in_height*scale_h, in_width*scale_w] or [batch, in_height*scale, in_width*scale, channel]
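
A minimal sketch of 2x nearest-neighbor upsampling of an NCHW tensor (sizes are illustrative):

import tvm
from tvm import te, topi

data = te.placeholder((1, 3, 32, 32), name="data", dtype="float32")

# (1, 3, 32, 32) -> (1, 3, 64, 64)
up = topi.nn.upsampling(data, scale_h=2.0, scale_w=2.0, layout="NCHW",
                        method="nearest_neighbor")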

tvm.topi.nn.upsampling3d(data, scale_d, scale_h, scale_w, layout='NCDHW', method='nearest_neighbor', coordinate_transformation_mode='half_pixel', output_shape=None)[source]#
Perform upsampling on the data.

Nearest neighbor and trilinear upsampling are supported.

Parameters#

inputs : tvm.te.Tensor

inputs is a 5-D tensor with shape [batch, channel, in_depth, in_height, in_width] or [batch, in_depth, in_height, in_width, channel]

scale_d : float

Scaling factor for depth

scale_h : float

Scaling factor for height

scale_w : float

Scaling factor for width

layout : string, optional

either “NCDHW” or “NDHWC”

method : {“trilinear”, “nearest_neighbor”}

Method to be used for upsampling.

coordinate_transformation_mode : string, optional

Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. Available options are “half_pixel”, “align_corners” and “asymmetric”.

output_shape : tvm.tir.container.Array, optional

Shape to return. If left None, it will be inferred. (If the shape is determined dynamically, pass out_dtype.shape as output_shape.)

Returns#

output : tvm.te.Tensor

5-D with shape [batch, channel, in_depth*scale_d, in_height*scale_h, in_width*scale_w] or [batch, in_depth*scale_d, in_height*scale_h, in_width*scale_w, channel]

tvm.topi.nn.winograd_transform_matrices(tile_size, kernel_size, out_dtype)[source]#

Compute the A, B, and G transform matrices for tile_size as a tvm.Expr.
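For example, the matrices for the F(4x4, 3x3) Winograd algorithm in fp32 can be obtained as below (a sketch; A, B, and G are the output, input, and kernel transforms used by the Winograd conv2d implementations):

import tvm
from tvm import topi

A, B, G = topi.nn.winograd_transform_matrices(tile_size=4, kernel_size=3,
                                              out_dtype="float32")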

tvm.topi.image#

IMAGE network operators

Functions:

affine_grid(data, target_shape)

affine_grid operator that generates 2D sampling grid.

can_convert_multiply_to_intdiv(origin_size, ...)

Check whether multiplication can be converted to integer division

crop_and_resize(data, boxes, box_indices, ...)

Perform crop and resize operation on the data.

dilation2d_nchw(input, filter, stride, ...)

Morphological dilation operator in NCHW layout.

dilation2d_nhwc(input, filter, stride, ...)

Morphological 2D dilation in NHWC layout.

get_1d_indices(indices[, layout])

Get 1d indices

get_1d_pixel(data, layout, image_width, n, ...)

Get 1d pixel

get_2d_indices(indices[, layout])

Get 2d indices

get_2d_pixel(data, layout, image_height, ...)

Get 2d pixel

get_3d_indices(indices[, layout])

Get 3d indices

get_3d_pixel(data, layout, image_depth, ...)

Get 3d pixel

get_closest_index(in_x, rounding_method, boxes)

Get the closest index to a value based on a certain rounding method

get_inx(x, image_width, target_width, ...[, ...])

Infer input x from output x with various coordinate transformation methods

get_pad_tuple(padding, kernel)

Common code to get the pad option

grid_sample(data, grid[, method, layout, ...])

Applies grid sampling to input feature map.

nchw_pack_layout(layout_info)

Check whether the layout type is NCHWinic

nchw_xc_layout(layout_info)

Check whether the layout type is NCHWxc

pad(data, pad_before[, pad_after, ...])

Pad Input with zeros.

resize1d(data, roi, size[, layout, method, ...])

Perform resize operation on the data.

resize2d(data, roi, size[, layout, method, ...])

Perform resize operation on the data.

resize3d(data, roi, size[, layout, method, ...])

Perform resize operation on the data.

simplify(expr)

Simplify the expression if it is Expr, directly return if it is int.

tvm.topi.image.affine_grid(data, target_shape)[source]#

affine_grid operator that generates 2D sampling grid.

This operation is described in https://arxiv.org/pdf/1506.02025.pdf. It generates a uniform sampling grid within the target shape and normalizes it to [-1, 1]. The provided affine transformation is then applied on the sampling grid.

Parameters#

data : tvm.Tensor

3-D with shape [batch, 2, 3]. The affine matrix.

target_shape : list/tuple of two int

Specifies the output shape (H, W).

Returns#

Output : tvm.Tensor

4-D with shape [batch, 2, target_height, target_width]
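
A minimal sketch with one affine matrix per batch element and an illustrative 16x16 target grid:

import tvm
from tvm import te, topi

theta = te.placeholder((1, 2, 3), name="theta", dtype="float32")  # affine matrices
grid = topi.image.affine_grid(theta, (16, 16))                    # shape (1, 2, 16, 16)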

tvm.topi.image.can_convert_multiply_to_intdiv(origin_size, scaled_size)[source]#

Check whether multiplication can be converted to integer division

tvm.topi.image.crop_and_resize(data, boxes, box_indices, crop_size, layout='NCHW', method='bilinear', extrapolation_value=None, out_dtype=None)[source]#

Perform crop and resize operation on the data.

Parameters#

data : tvm.te.Tensor

inputs is a 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

boxes : tvm.te.Tensor

A 2-D tensor of shape [num_boxes, 4]. Each row of the tensor specifies the coordinates of a box.

box_indices : tvm.te.Tensor

A 1-D tensor of shape [num_boxes]; box_indices[i] specifies the data that the i-th box refers to.

crop_size : Tuple

The target size of each box.

layout : string, optional

“NCHW”, “NHWC”

method : {“bilinear”, “nearest_neighbor”}

Method to be used for resizing.

extrapolation_value : float, optional

Value used for extrapolation, when applicable.

out_dtype : string, optional

Type to return. If left None, will be same as input type.

Returns#

output : tvm.te.Tensor

4-D with shape [num_boxes, channel, crop_height, crop_width] or [num_boxes, crop_height, crop_width, channel]
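
A minimal sketch that crops four boxes out of a two-image NCHW batch and resizes each to 32x32 (sizes are illustrative; each row of boxes holds the coordinates of one box):

import tvm
from tvm import te, topi

data = te.placeholder((2, 3, 64, 64), name="data", dtype="float32")
boxes = te.placeholder((4, 4), name="boxes", dtype="float32")
box_indices = te.placeholder((4,), name="box_indices", dtype="int32")

# Output shape is (4, 3, 32, 32): one crop per box.
out = topi.image.crop_and_resize(data, boxes, box_indices, crop_size=(32, 32),
                                 layout="NCHW", method="bilinear")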

tvm.topi.image.dilation2d_nchw(input, filter, stride, padding, dilations, out_dtype=None)[source]#

Morphological dilation operator in NCHW layout.

Parameters#

input : tvm.te.Tensor

4-D with shape [batch, in_channel, in_height, in_width]

filter : tvm.te.Tensor

3-D with shape [in_channel, filter_height, filter_width]

stride : int or a list/tuple of two ints

Stride size, or [stride_height, stride_width]

padding : int or str

Padding size

dilations : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtype : Optional[str]

Specifies the output data type.

Returns#

Output : tvm.te.Tensor

4-D with shape [batch, in_channel, out_height, out_width]
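
A minimal sketch with unit stride and dilation and no padding (sizes are illustrative):

import tvm
from tvm import te, topi

data = te.placeholder((1, 3, 32, 32), name="data", dtype="float32")  # NCHW input
filt = te.placeholder((3, 3, 3), name="filt", dtype="float32")       # [in_channel, fh, fw]

out = topi.image.dilation2d_nchw(data, filt, stride=1, padding=0, dilations=1)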

tvm.topi.image.dilation2d_nhwc(input, filter, stride, padding, dilations, out_dtype=None)[source]#

Morphological 2D dilation in NHWC layout.

Parameters#

input : tvm.te.Tensor

4-D with shape [batch, in_height, in_width, in_channel]

filter : tvm.te.Tensor

3-D with shape [filter_height, filter_width, in_channel]

stride : int or a list/tuple of two ints

Stride size, or [stride_height, stride_width]

padding : int

Padding size

dilations : int or a list/tuple of two ints

dilation size, or [dilation_height, dilation_width]

out_dtype : Optional[str]

Specifies the output data type.

Returns#

Output : tvm.te.Tensor

4-D with shape [batch, out_height, out_width, in_channel]

tvm.topi.image.get_1d_indices(indices, layout='NCW')[source]#

Get 1d indices

tvm.topi.image.get_1d_pixel(data, layout, image_width, n, c, x, cc, ib, ic)[source]#

Get 1d pixel

tvm.topi.image.get_2d_indices(indices, layout='NCHW')[source]#

Get 2d indices

tvm.topi.image.get_2d_pixel(data, layout, image_height, image_width, n, c, y, x, cc, ib, ic)[source]#

Get 2d pixel

tvm.topi.image.get_3d_indices(indices, layout='NCDHW')[source]#

Get 3d indices

tvm.topi.image.get_3d_pixel(data, layout, image_depth, image_height, image_width, n, c, z, y, x, cc)[source]#

Get 3d pixel

tvm.topi.image.get_closest_index(in_x, rounding_method, boxes, use_int_div=False)[source]#

Get the closest index to a value based on a certain rounding method

tvm.topi.image.get_inx(x, image_width, target_width, coordinate_transformation_mode, start_x=0, end_x=-1, use_int_div=False)[source]#

Infer input x from output x with various coordinate transformation methods

tvm.topi.image.get_pad_tuple(padding, kernel)[source]#

Common code to get the pad option

Parameters#

padding : int or str

Padding size, or [‘VALID’, ‘SAME’]

kernel : tuple of int

Conv kernel size

Returns#

pad_top : int

Padding size on top

pad_left : int

Padding size on left

pad_down : int

Padding size on bottom.

pad_right : int

Padding size on right.
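
For example, “SAME” padding for a 3x3 kernel yields one pixel of padding on every side (a sketch):

import tvm
from tvm import topi

pad_top, pad_left, pad_down, pad_right = topi.image.get_pad_tuple("SAME", (3, 3))
# pad_top == pad_left == pad_down == pad_right == 1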

tvm.topi.image.grid_sample(data, grid, method='bilinear', layout='NCHW', padding_mode='zeros', align_corners=True)[source]#

Applies grid sampling to input feature map.

Given \(data\) and \(grid\), then for 4-D the output is computed by

\[x_{src} = grid[batch, 0, y_{dst}, x_{dst}] \\ y_{src} = grid[batch, 1, y_{dst}, x_{dst}] \\ output[batch, channel, y_{dst}, x_{dst}] = G(data[batch, channel, y_{src}, x_{src}])\]

\(x_{dst}\), \(y_{dst}\) enumerate all spatial locations in \(output\), and \(G()\) denotes the interpolation function.

The out-of-boundary points will be padded with zeros if padding_mode is “zeros”, with the border pixel value if padding_mode is “border”, or with the inner pixel value if padding_mode is “reflection”.

The left-top corner (-1, -1) and right-bottom corner (1, 1) in grid will be mapped to (0, 0) and (h - 1, w - 1) of data if align_corners is “True”, or to (-0.5, -0.5) and (h - 0.5, w - 0.5) of data if align_corners is “False”.

The shape of the output will be 4-D (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3]), or 5-D (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3], grid.shape[4]).

The operator assumes that \(grid\) has been normalized to [-1, 1].

grid_sample often cooperates with affine_grid, which generates sampling grids for grid_sample.

Parameters#

data : tvm.Tensor

4-D with shape [batch, in_channel, in_height, in_width], or 5-D with shape [batch, in_channel, in_depth, in_height, in_width]

grid : tvm.Tensor

4-D with shape [batch, 2, out_height, out_width], or 5-D with shape [batch, 3, out_depth, out_height, out_width]

method : str

The interpolation method: for 4-D input, “nearest”, “bilinear”, and “bicubic” are supported; for 5-D input, “nearest” and “bilinear” (“trilinear”) are supported.

layout : str

The layout of input data and the output.

padding_mode : str

The padding mode for outside grid values; “zeros”, “border”, and “reflection” are supported.

align_corners : bool

Geometrically, we consider the pixels of the input as squares rather than points. If set to “True”, the extrema (“-1” and “1”) are considered as referring to the center points of the input corner pixels. If set to “False”, they are instead considered as referring to the corner points of the input corner pixels, making the sampling more resolution agnostic.

Returns#

Output : tvm.Tensor

4-D with shape [batch, in_channel, out_height, out_width], or 5-D with shape [batch, in_channel, out_depth, out_height, out_width]
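
A minimal sketch that pairs affine_grid with grid_sample, as described above (sizes are illustrative):

import tvm
from tvm import te, topi

data = te.placeholder((1, 4, 16, 16), name="data", dtype="float32")
theta = te.placeholder((1, 2, 3), name="theta", dtype="float32")

grid = topi.image.affine_grid(theta, (8, 8))                       # (1, 2, 8, 8), in [-1, 1]
out = topi.image.grid_sample(data, grid, method="bilinear",
                             layout="NCHW", padding_mode="zeros")  # (1, 4, 8, 8)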

tvm.topi.image.nchw_pack_layout(layout_info)[source]#

Check whether the layout type is NCHWinic

tvm.topi.image.nchw_xc_layout(layout_info)[source]#

Check whether the layout type is NCHWxc

tvm.topi.image.pad(data, pad_before, pad_after=None, pad_value=0.0, name='PadInput', attrs=None)[source]#

Pad Input with zeros.

Parameters#

data : tvm.te.Tensor

n-D input, can be any layout.

pad_before : list / tuple of n ints

Pad width on each dimension, added before the axis begin.

pad_after : list / tuple of n ints, optional

Pad width on each dimension, added after the axis end.

pad_value : float, optional

The value to be padded.

name : str, optional

The name prefix for the generated operators

Returns#

Output : tvm.te.Tensor

n-D, the same layout as Input.
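
A minimal sketch that zero-pads the two spatial axes of an NCHW tensor by one pixel on each side (sizes are illustrative):

import tvm
from tvm import te, topi

x = te.placeholder((1, 3, 32, 32), name="x", dtype="float32")

# (1, 3, 32, 32) -> (1, 3, 34, 34)
padded = topi.image.pad(x, pad_before=[0, 0, 1, 1], pad_after=[0, 0, 1, 1],
                        pad_value=0.0)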

tvm.topi.image.resize1d(data, roi, size, layout='NCW', method='linear', coordinate_transformation_mode='half_pixel', rounding_method='', bicubic_alpha=-0.5, bicubic_exclude=0, extrapolation_value=0.0, out_dtype=None, output_shape=None)[source]#

Perform resize operation on the data.

Parameters#

data : tvm.te.Tensor

inputs is a 3-D tensor with shape [batch, channel, in_width] or [batch, in_width, channel]

roi : Tuple of Float or Expr

The region of interest for cropping the input image. Expected to be of size 2, and format [start_w, end_w]. Only used if coordinate_transformation_mode is tf_crop_and_resize.

size : Tuple

Output resolution to scale to

layout : string, optional

“NCW”, “NWC”, or “NCWc”.

method : string, optional

Method of interpolation (“nearest”, “linear”, “bicubic”)

coordinate_transformation_mode : string, optional

Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. Refer to the ONNX Resize operator specification for details. [half_pixel, align_corners, asymmetric, pytorch_half_pixel, tf_half_pixel_for_nn, and tf_crop_and_resize].

rounding_method:

Method for rounding coordinate locations

bicubic_alpha : float, optional

Bicubic spline coefficient

bicubic_exclude : bool, optional

Exclude values outside the image for bicubic interpolation

extrapolation_value : float, optional

Value used for extrapolation, when applicable.

out_dtype : string, optional

Type to return. If left None, will be same as input type.

output_shape : tvm.tir.container.Array, optional

Shape to return. If left None, it will be inferred. (If the shape is determined dynamically, pass out_dtype.shape as output_shape.)

Returns#

output : tvm.te.Tensor

3-D with shape [batch, channel, in_width*scale] or [batch, in_width*scale, channel] or 4-D with shape [batch, channel-major, in_width*scale, channel-minor]

tvm.topi.image.resize2d(data, roi, size, layout='NCHW', method='linear', coordinate_transformation_mode='half_pixel', rounding_method='', bicubic_alpha=-0.5, bicubic_exclude=0, extrapolation_value=0.0, out_dtype=None, output_shape=None)[source]#

Perform resize operation on the data.

Parameters#

data : tvm.te.Tensor

inputs is a 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

roi : Tuple of Float or Expr

The region of interest for cropping the input image. Expected to be of size 4, and format [start_h, start_w, end_h, end_w]. Only used if coordinate_transformation_mode is tf_crop_and_resize.

size : Tuple

Output resolution to scale to

layout : string, optional

“NCHW”, “NHWC”, or “NCHWc”.

method : string, optional

Method of interpolation (“nearest”, “linear”, “bicubic”)

coordinate_transformation_mode : string, optional

Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. [half_pixel, align_corners, asymmetric, pytorch_half_pixel, tf_half_pixel_for_nn, and tf_crop_and_resize].

rounding_method:

Method for rounding coordinate locations

bicubic_alpha : float, optional

Bicubic spline coefficient

bicubic_exclude : bool, optional

Exclude values outside the image for bicubic interpolation

extrapolation_value : float, optional

Value used for extrapolation, when applicable.

out_dtype : string, optional

Type to return. If left None, will be same as input type.

output_shape : tvm.tir.container.Array, optional

Shape to return. If left None, it will be inferred. (If the shape is determined dynamically, pass out_dtype.shape as output_shape.)

Returns#

output : tvm.te.Tensor

4-D with shape [batch, channel, in_height*scale, in_width*scale] or [batch, in_height*scale, in_width*scale, channel] or 5-D with shape [batch, channel-major, in_height*scale, in_width*scale, channel-minor]
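
A minimal sketch of a bilinear 2x resize of an NCHW tensor; since coordinate_transformation_mode is not tf_crop_and_resize, roi is unused and zeros are passed as a placeholder (sizes are illustrative):

import tvm
from tvm import te, topi

data = te.placeholder((1, 3, 32, 32), name="data", dtype="float32")

out = topi.image.resize2d(data, roi=(0.0, 0.0, 0.0, 0.0), size=(64, 64),
                          layout="NCHW", method="linear",
                          coordinate_transformation_mode="half_pixel")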

tvm.topi.image.resize3d(data, roi, size, layout='NCDHW', method='linear', coordinate_transformation_mode='half_pixel', rounding_method='', bicubic_alpha=-0.5, bicubic_exclude=0, extrapolation_value=0.0, out_dtype=None, output_shape=None)[source]#

Perform resize operation on the data.

Parameters#

data : tvm.te.Tensor

inputs is a 5-D tensor with shape [batch, channel, in_depth, in_height, in_width] or [batch, in_depth, in_height, in_width, channel]

roi : Tuple of Float or Expr

The region of interest for cropping the input image. Expected to be of size 6, and format [start_d, start_h, start_w, end_d, end_h, end_w]. Only used if coordinate_transformation_mode is tf_crop_and_resize.

size : Tuple

Output resolution to scale to

layout : string, optional

“NCDHW”, “NDHWC”, or “NCDHWc”.

method : string, optional

Method of interpolation (“nearest”, “linear”, “bicubic”)

coordinate_transformation_mode : string, optional

Describes how to transform the coordinate in the resized tensor to the coordinate in the original tensor. [half_pixel, align_corners, asymmetric, pytorch_half_pixel, tf_half_pixel_for_nn, and tf_crop_and_resize].

rounding_method:

Method for rounding coordinate locations

bicubic_alpha : float, optional

Bicubic spline coefficient

bicubic_exclude : bool, optional

Exclude values outside the image for bicubic interpolation

extrapolation_value : float, optional

Value used for extrapolation, when applicable.

out_dtype : string, optional

Type to return. If left None, will be same as input type.

output_shape : tvm.tir.container.Array, optional

Shape to return. If left None, it will be inferred. (If the shape is determined dynamically, pass out_dtype.shape as output_shape.)

Returns#

output : tvm.te.Tensor

5-D with shape [batch, channel, in_depth*scale, in_height*scale, in_width*scale] or [batch, in_depth*scale, in_height*scale, in_width*scale, channel] or 6-D with shape [batch, channel-major, in_depth*scale, in_height*scale, in_width*scale, channel-minor]

tvm.topi.image.simplify(expr)[source]#

Simplify the expression if it is Expr, directly return if it is int.

Parameters#

expr : Expr or int

The input.

Returns#

out : Expr or int

The simplified output

tvm.topi.sparse#

Sparse operators

Functions:

csrmm(a, b[, c])

The csrmm routine performs a matrix-matrix operation defined as \(C := A*B + C\), where B and C are dense matrices, A is an m-by-k sparse matrix in the CSR format.

csrmv(a, x[, y])

The csrmv routine performs a matrix-vector operation defined as \(y := A*x + y\), where x and y are vectors, A is an m-by-k sparse matrix in the CSR format.

dense(data, weight[, bias])

Applies a linear transformation: \(Y = XW^T + b\).

tvm.topi.sparse.csrmm(a, b, c=None)[source]#

The csrmm routine performs a matrix-matrix operation defined as \(C := A*B + C\), where B and C are dense matrices, and A is an m-by-k sparse matrix in the CSR format.

Parameters#

a : tvm.contrib.sparse.CSRNDArray

2-D sparse matrix with shape [m, k]

b : tvm.te.Tensor

2-D dense matrix with shape [k, n]

c : tvm.te.Tensor, optional

1-D dense vector with shape [n]

Returns#

output : tvm.te.Tensor

2-D with shape [m, n]
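
A minimal sketch, assuming tvm.contrib.sparse.placeholder is used to declare the symbolic CSR operand (sizes are illustrative):

import tvm
from tvm import te, topi
from tvm.contrib import sparse as tvmsp

m, k, n, nnz = 8, 16, 4, 20  # illustrative sizes

a = tvmsp.placeholder((m, k), nonzeros=nnz, dtype="float32", name="A")  # CSR matrix
b = te.placeholder((k, n), name="B", dtype="float32")                   # dense matrix

c = topi.sparse.csrmm(a, b)  # dense result of shape [m, n]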

tvm.topi.sparse.csrmv(a, x, y=None)[source]#

The csrmv routine performs a matrix-vector operation defined as \(y := A*x + y\), where x and y are vectors, and A is an m-by-k sparse matrix in the CSR format.

Parameters#

a : tvm.contrib.sparse.CSRNDArray

2-D sparse matrix with shape [m, k]

x : tvm.te.Tensor

2-D dense matrix with shape [k, 1]

y : tvm.te.Tensor, optional

1-D dense vector with shape [1]

Returns#

output : tvm.te.Tensor

2-D dense matrix with shape [m, 1]

tvm.topi.sparse.dense(data, weight, bias=None)[source]#

Applies a linear transformation: \(Y = XW^T + b\). Either data or weight should be a tvm.contrib.sparse.CSRNDArray.

Parameters#

data : tvm.contrib.sparse.CSRNDArray or te.tensor.Tensor

2-D with shape [batch, in_dim]

weight : te.tensor.Tensor or tvm.contrib.sparse.CSRNDArray

2-D with shape [out_dim, in_dim]

bias : te.tensor.Tensor, optional

1-D with shape [out_dim]

Returns#

output : tvm.te.Tensor

2-D with shape [batch, out_dim]
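
A minimal sketch with sparse data and dense weight, again assuming tvm.contrib.sparse.placeholder for the CSR operand (sizes are illustrative):

import tvm
from tvm import te, topi
from tvm.contrib import sparse as tvmsp

batch, in_dim, out_dim, nnz = 4, 32, 16, 50  # illustrative sizes

x = tvmsp.placeholder((batch, in_dim), nonzeros=nnz, dtype="float32", name="x")
w = te.placeholder((out_dim, in_dim), name="w", dtype="float32")
b = te.placeholder((out_dim,), name="b", dtype="float32")

y = topi.sparse.dense(x, w, b)  # shape [batch, out_dim]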