tvm.relay#

The Relay IR namespace containing the IR definition and compiler.

Classes:

Call(op, args[, attrs, type_args, span])

Function call node in Relay.

Clause(lhs, rhs)

Clause for pattern matching in Relay.

Constant(data[, span])

A constant expression in Relay.

Expr

Alias of RelayExpr

ExprFunctor()

An abstract visitor defined over Expr.

ExprMutator()

A functional visitor over Expr.

ExprVisitor()

A visitor over Expr.

Function(params, body[, ret_type, ...])

A function declaration expression.

If(cond, true_branch, false_branch[, span])

A conditional expression in Relay.

Let(variable, value, body[, span])

Let variable binding expression.

Match(data, clauses[, complete])

Pattern matching expression in Relay.

Pattern()

Base type for pattern matching constructs.

PatternConstructor(constructor[, patterns])

Constructor pattern in Relay: Matches an ADT of the given constructor, binds recursively.

PatternTuple([patterns])

Constructor pattern in Relay: Matches a tuple, binds recursively.

PatternVar(var)

Variable pattern in Relay: Matches anything and binds it to the variable.

PatternWildcard()

Wildcard pattern in Relay: Matches any ADT and binds nothing.

Prelude([mod])

Contains standard definitions.

RefCreate(value[, span])

Create a new reference from an initial value.

RefRead(ref[, span])

Get the value inside the reference.

RefType

Alias of RelayRefType

RefWrite(ref, value[, span])

Update the value inside the reference. The whole expression evaluates to an empty tuple.

ScopeBuilder()

Scope builder class.

SequentialSpan(spans)

A sequence of source spans

Tuple(fields[, span])

Tuple expression that groups several fields together.

TupleGetItem(tuple_value, index[, span])

Get index-th item from a tuple.

TupleWrapper(tuple_value, size)

TupleWrapper.

TypeData(header, type_vars, constructors)

Stores the definition for an Algebraic Data Type (ADT) in Relay.

TypeFunctor()

An abstract visitor defined over Type.

TypeMutator()

A functional visitor over Type.

TypeVisitor()

A visitor over Type.

Functions:

ShapeVar(name)

A helper that constructs a type variable of the shape kind.

SpanCheck()

A debugging utility for reporting missing span information.

abs(data)

Compute element-wise absolute of data.

acos(data)

Compute elementwise acos of data.

acosh(data)

Compute elementwise acosh of data.

add(lhs, rhs)

Addition with numpy-style broadcasting.

adv_index(inputs)

Numpy style advanced indexing.

all(data[, axis, keepdims, exclude])

Computes the logical AND of boolean array elements over given axes.

any(data[, axis, keepdims, exclude])

Computes the logical OR of boolean array elements over given axes.

arange(start[, stop, step, dtype])

Return evenly spaced values within a given interval.

argmax(data[, axis, keepdims, exclude, ...])

Returns the indices of the maximum values along an axis.

argmin(data[, axis, keepdims, exclude, ...])

Returns the indices of the minimum values along an axis.

argsort(data[, axis, is_ascend, dtype])

Performs sorting along the given axis and returns an array of indices having same shape as an input array that index data in sorted order.

argwhere(condition)

Find the indices of elements of a tensor that are non-zero.

asin(data)

Compute elementwise asin of data.

asinh(data)

Compute elementwise asinh of data.

astext(obj[, show_meta_data, annotate])

Get the text format of the expression.

atan(data)

Compute elementwise atan of data.

atanh(data)

Compute elementwise atanh of data.

bind(expr, binds)

Bind free variables in expr or function arguments.

bitwise_and(lhs, rhs)

bitwise AND with numpy-style broadcasting.

bitwise_not(data)

Compute element-wise bitwise not of data.

bitwise_or(lhs, rhs)

bitwise OR with numpy-style broadcasting.

bitwise_xor(lhs, rhs)

bitwise XOR with numpy-style broadcasting.

broadcast_to(data, shape)

Return an array of the same type as data, broadcast to the provided shape.

broadcast_to_like(data, broadcast_type)

Broadcast data to the shape of broadcast_type.

build(ir_mod[, target, target_host, ...])

Helper function that builds a Relay function to run on TVM graph executor.

build_config([opt_level, required_pass, ...])

Configure the build behavior by setting config variables.

cast(data, dtype)

Cast input tensor to data type.

cast_like(data, dtype_like)

Cast input tensor to data type of another tensor.

ceil(data)

Compute element-wise ceil of data.

clip(a, a_min, a_max)

Clip the elements in a between a_min and a_max.

collapse_sum_like(data, collapse_type)

Return the sum of data collapsed to the shape of collapse_type.

collapse_sum_to(data, shape)

Return a summation of data to the specified shape.

concatenate(data, axis)

Concatenate the input tensors along the given axis.

const(value[, dtype, span])

Create a constant value.

copy(data)

Copy a tensor.

copy_shape_func(attrs, inputs, _)

Shape function for copy op.

cos(data)

Compute elementwise cos of data.

cosh(data)

Compute elementwise cosh of data.

create_executor([kind, mod, device, target, ...])

Factory function to create an executor.

cumprod(data[, axis, dtype, exclusive])

Numpy style cumprod op.

cumsum(data[, axis, dtype, exclusive])

Numpy style cumsum op.

device_copy(data, src_device, dst_device)

Copy data from the source device to the destination device.

dft(re_data, im_data[, inverse])

Computes the discrete Fourier transform of input (calculation along the last axis).

divide(lhs, rhs)

Division with numpy-style broadcasting.

einsum(data, equation)

Evaluates the Einstein summation convention on data

equal(lhs, rhs)

Broadcasted elementwise test for (lhs == rhs).

erf(data)

Compute elementwise error function of data.

exp(data)

Compute elementwise exp of data.

expand_dims(data, axis[, num_newaxis])

Insert num_newaxis axes at the position given by axis.

fixed_point_multiply(data, multiplier, shift)

Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits

floor(data)

Compute element-wise floor of data.

floor_divide(lhs, rhs)

Floor division with numpy-style broadcasting.

floor_mod(lhs, rhs)

Floor mod with numpy-style broadcasting.

full(fill_value[, shape, dtype])

Fill array with scalar value.

full_like(data, fill_value)

Return an array filled with fill_value, with the same shape and type as the input array.

gather(data, axis, indices)

Gather values along given axis from given indices.

gather_nd(data, indices[, batch_dims, ...])

Gather elements or slices from data and store them to a tensor whose shape is defined by indices.

greater(lhs, rhs)

Broadcasted elementwise test for (lhs > rhs).

greater_equal(lhs, rhs)

Broadcasted elementwise test for (lhs >= rhs).

invert_permutation(data)

Computes the inverse permutation of data.

isfinite(data)

Compute element-wise finiteness of data.

isinf(data)

Compute element-wise infiniteness of data.

isnan(data)

Check nan in input data element-wise.

layout_transform(data, src_layout, dst_layout)

Transform the layout of a tensor.

left_shift(lhs, rhs)

Left shift with numpy-style broadcasting.

less(lhs, rhs)

Broadcasted elementwise test for (lhs < rhs).

less_equal(lhs, rhs)

Broadcasted elementwise test for (lhs <= rhs).

load_param_dict(param_bytes)

Load a parameter dictionary from binary bytes.

log(data)

Compute elementwise log of data.

log10(data)

Compute elementwise log to the base 10 of data.

log2(data)

Compute elementwise log to the base 2 of data.

logical_and(lhs, rhs)

logical AND with numpy-style broadcasting.

logical_not(data)

Compute element-wise logical not of data.

logical_or(lhs, rhs)

logical OR with numpy-style broadcasting.

logical_xor(lhs, rhs)

logical XOR with numpy-style broadcasting.

logsumexp(data[, axis, keepdims])

Compute the log of the sum of exponentials of input elements over given axes.

matrix_set_diag(data, diagonal[, k, align])

Returns a tensor with the diagonals of input tensor replaced with the provided diagonal values.

max(data[, axis, keepdims, exclude])

Computes the max of array elements over given axes.

maximum(lhs, rhs)

Maximum with numpy-style broadcasting.

mean(data[, axis, keepdims, exclude])

Computes the mean of array elements over given axes.

mean_std(data[, axis, keepdims, exclude])

Computes the mean and standard deviation of data over given axes.

mean_variance(data[, axis, keepdims, ...])

Computes the mean and variance of data over given axes.

meshgrid(data[, indexing])

Create coordinate matrices from coordinate vectors.

min(data[, axis, keepdims, exclude])

Computes the min of array elements over given axes.

minimum(lhs, rhs)

Minimum with numpy-style broadcasting.

mod(lhs, rhs)

Mod with numpy-style broadcasting.

multiply(lhs, rhs)

Multiplication with numpy-style broadcasting.

ndarray_size(data[, dtype])

Get number of elements of input tensor.

negative(data)

Compute element-wise negative of data.

not_equal(lhs, rhs)

Broadcasted elementwise test for (lhs != rhs).

one_hot(indices, on_value, off_value, depth, ...)

Returns a one-hot tensor where the locations represented by indices take value on_value, and other locations take value off_value.

ones(shape, dtype)

Fill array with ones.

ones_like(data)

Returns an array of ones, with same type and shape as the input.

optimize(mod[, target, params])

Helper function that optimizes a Relay module.

power(lhs, rhs)

Power with numpy-style broadcasting.

pretty_print(obj)

Pretty print the object.

prod(data[, axis, keepdims, exclude])

Computes the products of array elements over given axes.

reinterpret(data, dtype)

Reinterpret input tensor to data type.

repeat(data, repeats, axis)

Repeats elements of an array.

reshape(data, newshape[, allowzero])

Reshape the input array.

reshape_like(data, shape_like[, lhs_begin, ...])

Reshapes the input tensor by the size of another tensor.

reverse(data, axis)

Reverses the order of elements along given axis while preserving array shape.

reverse_reshape(data, newshape)

Reshapes the input array where the special values are inferred from right to left.

reverse_sequence(data, seq_lengths[, ...])

Reverse the tensor for variable length slices.

right_shift(lhs, rhs)

Right shift with numpy-style broadcasting.

round(data)

Compute element-wise round of data.

rsqrt(data)

Compute elementwise rsqrt of data.

save_param_dict(params)

Save parameter dictionary to binary bytes.

scalar_type(dtype)

Creates a scalar type.

scatter_elements(data, indices, updates[, ...])

Scatter elements with updating data by reduction of values in updates at positions defined by indices.

scatter_nd(data, indices, updates[, mode])

Scatter values from an array and update.

script(pyfunc)

Decorate a python function as hybrid script.

searchsorted(sorted_sequence, values[, ...])

Find indices where elements should be inserted to maintain order.

segment_sum(data, segment_ids[, num_segments])

Computes the sum along segment_ids along axis 0.

sequence_mask(data, valid_length[, ...])

Sets all elements outside the expected length of the sequence to a constant value.

setrecursionlimit(limit, /)

Set the maximum depth of the Python interpreter stack to the given limit.

shape_of(data[, dtype])

Get shape of a tensor.

sigmoid(data)

Compute elementwise sigmoid of data.

sign(data)

Compute element-wise sign of data.

sin(data)

Compute elementwise sin of data.

sinh(data)

Compute elementwise sinh of data.

slice_like(data, shape_like[, axes])

Slice the first input with respect to the second input.

sliding_window(data, axis, window_shape, strides)

Slide a window over the data tensor.

sort(data[, axis, is_ascend])

Performs sorting along the given axis and returns data in sorted order.

sparse_fill_empty_rows(sparse_indices, ...)

Fill rows in a sparse matrix that do not contain any values.

sparse_reshape(sparse_indices, prev_shape, ...)

Reshape a sparse tensor.

sparse_to_dense(sparse_indices, ...[, ...])

Converts a sparse representation into a dense tensor.

split(data, indices_or_sections[, axis])

Split input tensor along axis by sections or indices.

sqrt(data)

Compute elementwise sqrt of data.

squeeze(data[, axis])

Squeeze axes in the array.

stack(data, axis)

Join a sequence of arrays along a new axis.

std(data[, axis, keepdims, exclude, unbiased])

Computes the standard deviation of data over given axes.

stft(data, n_fft[, hop_length, win_length, ...])

The STFT computes the Fourier transform of short overlapping windows of the input.

strided_set(data, v, begin, end[, strides])

Strided set of an array.

strided_slice(data, begin, end[, strides, ...])

Strided slice of an array.

subtract(lhs, rhs)

Subtraction with numpy-style broadcasting.

sum(data[, axis, keepdims, exclude])

Computes the sum of array elements over given axes.

take(data, indices[, axis, batch_dims, mode])

Take elements from an array along an axis.

tan(data)

Compute elementwise tan of data.

tanh(data)

Compute element-wise tanh of data.

tile(data, reps)

Repeats the whole array multiple times.

topk(data[, k, axis, ret_type, is_ascend, dtype])

Get the top k elements in an input tensor along the given axis.

transpose(data[, axes])

Permutes the dimensions of an array.

trilu(data, k[, upper])

Given a 2-D matrix or batches of 2-D matrices, returns the upper or lower triangular part of the tensor.

trunc(data)

Compute element-wise trunc of data.

trunc_divide(lhs, rhs)

Trunc division with numpy-style broadcasting.

trunc_mod(lhs, rhs)

Trunc mod with numpy-style broadcasting.

unique(data[, is_sorted, return_counts])

Find the unique elements of a 1-D tensor.

unravel_index(indices, shape)

Convert a flat index or array of flat indices into a tuple of coordinate arrays.

var(name_hint[, type_annotation, shape, ...])

Create a new tvm.relay.Var.

variance(data[, axis, keepdims, exclude, ...])

Computes the variance of data over given axes.

where(condition, x, y)

Select elements from either x or y depending on the value of the condition.

zeros(shape, dtype)

Fill array with zeros.

zeros_like(data)

Returns an array of zeros, with same type and shape as the input.

class tvm.relay.Call(op, args, attrs=None, type_args=None, span=None)[源代码]

Function call node in Relay.

Call node corresponds the operator application node in computational graph terminology.

Parameters#

op: tvm.ir.Op or any tvm.relay.Expr with function type.

The operation to be called.

args: List[tvm.relay.Expr]

The arguments to the call.

attrs: Optional[tvm.Attrs]

Attributes to the call, can be None

type_args: Optional[List[tvm.relay.Type]]

The additional type arguments; these are only used in the advanced use case of template functions.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Parameters:

span (Span | None)
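
For illustration, a minimal sketch of building a call node directly (the registered "add" operator is looked up via tvm.ir.Op.get; relay.add(x, y) produces the same node):

import tvm
from tvm import relay

x = relay.var("x", shape=(2, 3))
y = relay.var("y", shape=(2, 3))
add_op = tvm.ir.Op.get("add")       # the registered elementwise add operator
call = relay.Call(add_op, [x, y])   # equivalent to relay.add(x, y)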

class tvm.relay.Clause(lhs, rhs)[源代码]

Clause for pattern matching in Relay.

Methods:

__init__(lhs, rhs)

Construct a clause.

__init__(lhs, rhs)[源代码]

Construct a clause.

Parameters#

lhs: tvm.relay.Pattern

Left-hand side of match clause.

rhs: tvm.relay.Expr

Right-hand side of match clause.

Returns#

clause: Clause

The Clause.

class tvm.relay.Constant(data, span=None)[源代码]

A constant expression in Relay.

Parameters#

data: tvm.nd.NDArray

The data content of the constant expression.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Parameters:

span (Span | None)

tvm.relay.Expr

Alias of RelayExpr

Attributes:

checked_type

Get the checked type of tvm.relay.Expr.

struct_info

Get the struct info field

class tvm.relay.ExprFunctor[源代码]

An abstract visitor defined over Expr.

Defines the default dispatch over expressions, and implements memoization.

Methods:

visit(expr)

Apply the visitor to an expression.

visit(expr)[源代码]

Apply the visitor to an expression.

class tvm.relay.ExprMutator[源代码]

A functional visitor over Expr.

The default behavior recursively traverses the AST and reconstructs the AST.

class tvm.relay.ExprVisitor[源代码]

A visitor over Expr.

The default behavior recursively traverses the AST.
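
As an illustrative sketch (not part of the original documentation), a subclass can override per-node hooks such as visit_call and delegate back to the base class to keep traversing:

from tvm import relay

class CountCalls(relay.ExprVisitor):
    """Count every Call node reachable from an expression."""

    def __init__(self):
        super().__init__()
        self.count = 0

    def visit_call(self, call):
        self.count += 1
        super().visit_call(call)  # continue visiting the op and arguments

x = relay.var("x")
counter = CountCalls()
counter.visit(relay.add(relay.exp(x), x))
print(counter.count)  # 2: the exp call and the add call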

class tvm.relay.Function(params, body, ret_type=None, type_params=None, attrs=None, span=None)[源代码]

A function declaration expression.

Parameters#

params: List[tvm.relay.Var]

List of input parameters to the function.

body: tvm.relay.Expr

The body of the function.

ret_type: Optional[tvm.relay.Type]

The return type annotation of the function.

type_params: Optional[List[tvm.relay.TypeParam]]

The additional type parameters; these are only used in the advanced use case of template functions.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Methods:

__call__(*args)

Invoke the global function.

astext([show_meta_data, annotate])

Get the text format of the expression.

__call__(*args)[源代码]

Invoke the global function.

Parameters#

args: List[relay.Expr]

Arguments.

astext(show_meta_data=True, annotate=None)[源代码]

Get the text format of the expression.

Parameters#

show_meta_data: bool

Whether to include meta data section in the text if there is meta data.

annotate: Optional[Object->str]

Optionally annotate function to provide additional information in the comment block.

Returns#

text: str

The text format of the expression.

Notes#

The meta data section is necessary to fully parse the text format. However, it can contain dumps that are big (e.g. constant weights), so it can be helpful to skip printing the meta data section.

Parameters:

span (Span | None)
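
A minimal usage sketch (assuming the standard tvm and relay imports) that declares a one-argument function and wraps it in an IRModule:

import tvm
from tvm import relay

x = relay.var("x", shape=(10,), dtype="float32")
f = relay.Function([x], relay.add(x, x))   # f(x) = x + x
mod = tvm.IRModule.from_expr(f)
print(mod)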

class tvm.relay.If(cond, true_branch, false_branch, span=None)[源代码]

A conditional expression in Relay.

Parameters#

cond: tvm.relay.Expr

The condition.

true_branch: tvm.relay.Expr

The expression evaluated when condition is true.

false_branch: tvm.relay.Expr

The expression evaluated when condition is false.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Parameters:

span (Span | None)

class tvm.relay.Let(variable, value, body, span=None)[源代码]

Let variable binding expression.

Parameters#

variable: tvm.relay.Var

The local variable to be bound.

value: tvm.relay.Expr

The value to be bound.

body: tvm.relay.Expr

The body of the let binding.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Parameters:

span (Span | None)
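
A small sketch of a let binding (variable names are illustrative):

from tvm import relay

x = relay.var("x", shape=(), dtype="float32")
v = relay.Var("v")
# let v = x + x in v * v
expr = relay.Let(v, relay.add(x, x), relay.multiply(v, v))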

class tvm.relay.Match(data, clauses, complete=True)[源代码]

Pattern matching expression in Relay.

Methods:

__init__(data, clauses[, complete])

Construct a Match.

__init__(data, clauses, complete=True)[源代码]

Construct a Match.

Parameters#

data: tvm.relay.Expr

The value being deconstructed and matched.

clauses: List[tvm.relay.Clause]

The pattern match clauses.

complete: Optional[Bool]

Should the match be complete (cover all cases)? If yes, the type checker will generate an error if there are any missing cases.

Returns#

match: tvm.relay.Expr

The match expression.

class tvm.relay.Pattern[源代码]

Base type for pattern matching constructs.

class tvm.relay.PatternConstructor(constructor, patterns=None)[源代码]

Constructor pattern in Relay: Matches an ADT of the given constructor, binds recursively.

Methods:

__init__(constructor[, patterns])

Construct a constructor pattern.

__init__(constructor, patterns=None)[源代码]

Construct a constructor pattern.

Parameters#

constructor: Constructor

The constructor.

patterns: Optional[List[Pattern]]

Optional subpatterns: for each field of the constructor, match to the given subpattern (treated as a variable pattern by default).

Returns#

pattern: PatternConstructor

A constructor pattern.

class tvm.relay.PatternTuple(patterns=None)[源代码]

Constructor pattern in Relay: Matches a tuple, binds recursively.

Methods:

__init__([patterns])

Construct a tuple pattern.

__init__(patterns=None)[源代码]

Construct a tuple pattern.

Parameters#

patterns: Optional[List[Pattern]]

Optional subpatterns: for each field of the tuple, match to the given subpattern (treated as a variable pattern by default).

Returns#

pattern: PatternTuple

A tuple pattern.

class tvm.relay.PatternVar(var)[源代码]

Variable pattern in Relay: Matches anything and binds it to the variable.

Methods:

__init__(var)

Construct a variable pattern.

__init__(var)[源代码]

Construct a variable pattern.

Parameters#

var: tvm.relay.Var

Returns#

pv: PatternVar

A variable pattern.

class tvm.relay.PatternWildcard[源代码]

Wildcard pattern in Relay: Matches any ADT and binds nothing.

Methods:

__init__()

Constructs a wildcard pattern.

__init__()[源代码]

Constructs a wildcard pattern.

Parameters#

None

Returns#

wildcard: PatternWildcard

a wildcard pattern.
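
The pattern constructs above compose as in the following sketch, which matches a two-element tuple and falls back to a wildcard clause (illustrative only; it is not run through the type checker here):

from tvm import relay

a = relay.Var("a")
b = relay.Var("b")
value = relay.Tuple([relay.const(1), relay.const(2)])
clauses = [
    relay.Clause(
        relay.PatternTuple([relay.PatternVar(a), relay.PatternVar(b)]),
        relay.add(a, b),                                     # bind both fields and add them
    ),
    relay.Clause(relay.PatternWildcard(), relay.const(0)),   # catch-all clause
]
match = relay.Match(value, clauses)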

class tvm.relay.Prelude(mod=None)[源代码]

Contains standard definitions.

Methods:

get_ctor(ty_name, canonical, dtype)

Get constructor corresponding to the canonical name

get_ctor_static(ty_name, name, dtype, shape)

Get constructor corresponding to the canonical name

get_global_var(canonical, dtype)

Get global var corresponding to the canonical name

get_global_var_static(canonical, dtype, shape)

Get var corresponding to the canonical name

get_name(canonical, dtype)

Get name corresponding to the canonical name

get_name_static(canonical, dtype, shape[, ...])

Get name corresponding to the canonical name

get_tensor_ctor_static(name, dtype, shape)

Get constructor corresponding to the canonical name

get_type(canonical, dtype)

Get type corresponding to the canonical name

get_type_static(canonical, dtype, shape)

Get type corresponding to the canonical name

load_prelude()

Parses the Prelude from Relay's text format into a module.

get_ctor(ty_name, canonical, dtype)[源代码]

Get constructor corresponding to the canonical name

get_ctor_static(ty_name, name, dtype, shape)[源代码]

Get constructor corresponding to the canonical name

get_global_var(canonical, dtype)[源代码]

Get global var corresponding to the canonical name

get_global_var_static(canonical, dtype, shape, batch_dim=None)[源代码]

Get var corresponding to the canonical name

get_name(canonical, dtype)[源代码]

Get name corresponding to the canonical name

get_name_static(canonical, dtype, shape, batch_dim=None)[源代码]

Get name corresponding to the canonical name

get_tensor_ctor_static(name, dtype, shape)[源代码]

Get constructor corresponding to the canonical name

get_type(canonical, dtype)[源代码]

Get type corresponding to the canonical name

get_type_static(canonical, dtype, shape)[源代码]

Get type corresponding to the canonical name

load_prelude()[源代码]

Parses the Prelude from Relay’s text format into a module.

class tvm.relay.RefCreate(value, span=None)[源代码]

Create a new reference from an initial value.

Parameters#

value: tvm.relay.Expr

The initial value.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Parameters:

span (Span | None)

class tvm.relay.RefRead(ref, span=None)[源代码]

Get the value inside the reference.

Parameters#

ref: tvm.relay.Expr

The reference.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Parameters:

span (Span | None)

tvm.relay.RefType

Alias of RelayRefType

class tvm.relay.RefWrite(ref, value, span=None)[源代码]

Update the value inside the reference. The whole expression will evaluate to an empty tuple.

Parameters#

ref: tvm.relay.Expr

The reference.

value: tvm.relay.Expr

The new value.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Parameters:

span (Span | None)

class tvm.relay.ScopeBuilder[源代码]

Scope builder class.

Enables users to easily build up nested scope (let, if) expressions.

Examples#
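
(The example below is a minimal sketch of typical usage, reconstructed rather than copied from the source docstring: an if/else scope with a let binding.)

from tvm import relay

sb = relay.ScopeBuilder()
cond = relay.var("cond", shape=(), dtype="bool")
x = relay.var("x", shape=(), dtype="float32")
y = relay.var("y", shape=(), dtype="float32")
with sb.if_scope(cond):
    one = relay.const(1.0, "float32")
    t1 = sb.let(relay.var("t1"), relay.add(x, one))
    sb.ret(t1)
with sb.else_scope():
    sb.ret(y)
print(sb.get())  # the nested if/let expression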

Methods:

else_scope()

Create a new else scope.

get()

Get the generated result.

if_scope(cond)

Create a new if scope.

let(var, value)

Create a new let binding.

ret(value)

Set the return value of this scope.

type_of(expr)

Compute the type of an expression.

else_scope()[源代码]

Create a new else scope.

Returns#

scope: WithScope

The else scope.

get()[源代码]

Get the generated result.

Returns#

value: tvm.relay.expr.Expr

The final result of the expression.

if_scope(cond)[源代码]

Create a new if scope.

Parameters#

cond: tvm.relay.expr.Expr

The condition

Returns#

scope: WithScope

The if scope.

Note#

The user must follow with an else scope.

let(var, value)[源代码]

Create a new let binding.

Parameters#

var: Union[Tuple[str, relay.Type], tvm.relay.Var]

The variable or name of variable.

value: tvm.relay.Expr

The value to be bound

ret(value)[源代码]

Set the return value of this scope.

Parameters#

value: tvm.relay.expr.Expr

The return value.

type_of(expr)[源代码]

Compute the type of an expression.

Parameters#

expr: relay.Expr

The expression to compute the type of.

class tvm.relay.SequentialSpan(spans)[源代码]

A sequence of source spans.

This span is attached to an expression that results from combining multiple expressions during an IR transform.

Parameters#

spans: Array

The array of spans.

class tvm.relay.Tuple(fields, span=None)[源代码]

Tuple expression that groups several fields together.

Parameters#

fields: List[tvm.relay.Expr]

The fields in the tuple.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Methods:

astype(_)

Cast the content type of the current data to dtype.

astype(_)[源代码]

Cast the content type of the current data to dtype.

Parameters#

dtype: str

The target data type.

Note#

This function only works for TensorType Exprs.

Returns#

result: tvm.relay.Expr

The result expression.

Parameters:

span (Span | None)

class tvm.relay.TupleGetItem(tuple_value, index, span=None)[源代码]

Get index-th item from a tuple.

Parameters#

tuple_value: tvm.relay.Expr

The input tuple expression.

index: int

The index.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Parameters:

span (Span | None)
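
A short sketch combining Tuple and TupleGetItem:

from tvm import relay

t = relay.Tuple([relay.const(1), relay.const(2.0)])
first = relay.TupleGetItem(t, 0)   # selects the first field of the tuple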

class tvm.relay.TupleWrapper(tuple_value, size)[源代码]

TupleWrapper.

This class is a Python wrapper for a Relay tuple of known size. It allows for accessing the fields of the Relay tuple as though it were a Python tuple.

Parameters#

tuple_value: tvm.relay.Expr

The input tuple

size: int

The size of the tuple.

Methods:

astext()

Get the text format of the tuple expression.

astuple()

Returns the underlying Relay tuple if this wrapper is passed as an argument to an FFI function.

astext()[源代码]

Get the text format of the tuple expression.

Returns#

textstr

The text format of the tuple expression.

astuple()[源代码]

Returns the underlying Relay tuple if this wrapper is passed as an argument to an FFI function.
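
Operators that return several values, such as relay.split, hand back a TupleWrapper; the sketch below (shapes are illustrative) shows the usual ways of unpacking it:

from tvm import relay

x = relay.var("x", shape=(6, 4))
parts = relay.split(x, indices_or_sections=3, axis=0)  # TupleWrapper of size 3
first = parts[0]            # indexing yields a relay expression for that field
as_expr = parts.astuple()   # the underlying Relay tuple expression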

class tvm.relay.TypeData(header, type_vars, constructors)[源代码]

Stores the definition for an Algebraic Data Type (ADT) in Relay.

Note that ADT definitions are treated as type-level functions because the type parameters need to be given for an instance of the ADT. Thus, any global type var that is an ADT header needs to be wrapped in a type call that passes in the type params.

Parameters#

header: GlobalTypeVar

The name of the ADT. ADTs with the same constructors but different names are treated as different types.

type_vars: List[TypeVar]

Type variables that appear in constructors.

constructors: List[Constructor]

The constructors for the ADT.

class tvm.relay.TypeFunctor[源代码]

An abstract visitor defined over Type.

Defines the default dispatch over types.

Methods:

visit(typ)

Apply the visitor to a type.

visit(typ)[源代码]

Apply the visitor to a type.

class tvm.relay.TypeMutator[源代码]

A functional visitor over Type.

The default behavior recursively traverses the AST and reconstructs the AST.

class tvm.relay.TypeVisitor[源代码]

A visitor over Type.

The default behavior recursively traverses the AST.

tvm.relay.ShapeVar(name)[源代码]

A helper that constructs a type variable of the shape kind.

Parameters#

name : str

Returns#

type_var: tvm.relay.TypeVar

The shape variable.

tvm.relay.SpanCheck()[源代码]

A debugging utility for reporting missing span information.

tvm.relay.abs(data)[源代码]

Compute element-wise absolute of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.acos(data)[源代码]

Compute elementwise acos of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.acosh(data)[源代码]

Compute elementwise acosh of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.add(lhs, rhs)[源代码]

Addition with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

Examples#

x = relay.Var("a") # shape is [2, 3]
y = relay.Var("b") # shape is [2, 1]
z = relay.add(x, y)  # result shape is [2, 3]
tvm.relay.adv_index(inputs)[源代码]

Numpy style advanced indexing. Index with a list of tensors.

Parameters#

inputsUnion(List[relay.Expr], Tuple[relay.Expr])

Input tensor and indices. The first tensor is the input data and the rest are the indices.

Returns#

resultrelay.Expr

Output tensor.

tvm.relay.all(data, axis=None, keepdims=False, exclude=False)[源代码]

Computes the logical AND of boolean array elements over given axes.

Parameters#

datarelay.Expr

The input boolean tensor

axisNone or int or tuple of int

Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

Returns#

resultrelay.Expr

The computed result.

Examples#

data = relay.Constant(tvm.nd.array([[[ True,  True,  True],
                                   [ True,  True,  True],
                                   [False,  True, False]],
                                  [[ True, False, False],
                                   [ True,  True, False],
                                   [False,  True,  True]]]))

relay.all(data, axis=1)
# [[False,  True, False],
# [False, False, False]]

relay.all(data, axis=0)
# [[ True, False, False],
# [ True,  True, False],
# [False,  True, False]]
tvm.relay.any(data, axis=None, keepdims=False, exclude=False)[源代码]

Computes the logical OR of boolean array elements over given axes.

Parameters#

datarelay.Expr

The input boolean tensor

axisNone or int or tuple of int

Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

Returns#

resultrelay.Expr

The computed result.

Examples#

data = relay.Constant(tvm.nd.array([[[ True,  True,  True],
                                    [ True,  True,  True],
                                    [False,  True, False]],
                                    [[ True, False, False],
                                    [ True,  True, False],
                                    [False,  True,  True]]]))

relay.any(data, axis=1)
# [[True, True, True],
# [True,  True, True]]

relay.any(data, axis=0)
# [[ True, True, True],
# [ True,  True, True],
# [False,  True, True]]
tvm.relay.arange(start, stop=None, step=None, dtype='float32')[源代码]

Return evenly spaced values within a given interval.

Note

Similar to numpy.arange. When only one argument is given, it is used as stop instead of start while start takes default value 0.

Warning: Undefined behavior when dtype is incompatible with start/stop/step. It could lead to different results compared to numpy, MXNet, pytorch, etc.

Parameters#

startrelay.Expr, optional

Start of interval. The interval includes this value. The default start value is 0.

stoprelay.Expr

Stop of interval. The interval does not include this value.

steprelay.Expr, optional

Spacing between values. The default step size is 1.

dtypestr, optional

The target data type.

Returns#

resultrelay.Expr

The resulting tensor.

Examples#

relay.arange(5) = [0, 1, 2, 3, 4]
relay.arange(1, 5) = [1, 2, 3, 4]
relay.arange(1, 5, 1.5) = [1, 2.5, 4]
tvm.relay.argmax(data, axis=None, keepdims=False, exclude=False, select_last_index=False)[源代码]

Returns the indices of the maximum values along an axis.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a argmax operation is performed. The default, axis=None, will find the indices of the maximum element of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

select_last_indexbool

Whether to select the last index or the first index if the max element appears in multiple indices, default is False (first index).

Returns#

resultrelay.Expr

The computed result.

tvm.relay.argmin(data, axis=None, keepdims=False, exclude=False, select_last_index=False)[源代码]

Returns the indices of the minimum values along an axis.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a argmin operation is performed. The default, axis=None, will find the indices of minimum element all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

select_last_indexbool

Whether to select the last index or the first index if the min element appears in multiple indices, default is False (first index).

Returns#

resultrelay.Expr

The computed result.

tvm.relay.argsort(data, axis=-1, is_ascend=1, dtype='int32')[源代码]

Performs sorting along the given axis and returns an array of indices having same shape as an input array that index data in sorted order.

Parameters#

datarelay.Expr

The input data tensor.

axisint, optional

Axis along which to sort the input tensor.

is_ascendboolean, optional

Whether to sort in ascending or descending order.

dtypestring, optional

The data type of the output indices.

Returns#

outrelay.Expr

Tensor with same shape as data.

tvm.relay.argwhere(condition)[源代码]

Find the indices of elements of a tensor that are non-zero.

Parameters#

conditionrelay.Expr

The input condition tensor.

Returns#

resultrelay.Expr

Tensor with the indices of elements that are non-zero.

Examples#

condition = [[True, False], [False, True]]
relay.argwhere(condition) = [[0, 0], [1, 1]]
tvm.relay.asin(data)[源代码]

Compute elementwise asin of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.asinh(data)[源代码]

Compute elementwise asinh of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.astext(obj, show_meta_data=True, annotate=None)[源代码]

Get the text format of the expression.

Parameters#

objObject

The object to be printed.

show_meta_databool

Whether to include meta data section in the text if there is meta data.

annotate: Optional[Object->str]

Optionally annotate function to provide additional information in the comment block.

Returns#

textstr

The text format of the expression.

Notes#

The meta data section is necessary to fully parse the text format. However, it can contain dumps that are big (e.g. constant weights), so it can be helpful to skip printing the meta data section.

Parameters:

obj (Object)

tvm.relay.atan(data)[源代码]

Compute elementwise atan of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.atanh(data)[源代码]

Compute elementwise atanh of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.bind(expr, binds)[源代码]

Bind free variables in expr or function arguments.

We can also bind parameters of expr if it is a function.

Parameters#

exprtvm.relay.Expr

The input expression.

bindsMap[tvm.relay.Var, tvm.relay.Expr]

The specific bindings.

Returns#

resulttvm.relay.Expr

The expression or function after binding.
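
A minimal sketch (variable names are illustrative):

from tvm import relay

x = relay.var("x", shape=(), dtype="float32")
y = relay.var("y", shape=(), dtype="float32")
expr = relay.add(x, y)
bound = relay.bind(expr, {x: relay.const(1.0)})  # x is replaced by the constant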

tvm.relay.bitwise_and(lhs, rhs)[源代码]

bitwise AND with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.bitwise_not(data)[源代码]

Compute element-wise bitwise not of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.bitwise_or(lhs, rhs)[源代码]

bitwise OR with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.bitwise_xor(lhs, rhs)[源代码]

bitwise XOR with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.broadcast_to(data, shape)[源代码]

Return an array of the same type as data, broadcast to the provided shape.

Parameters#

datarelay.Expr

The input tensor.

shapetuple of int or relay.Expr

Provide the shape to broadcast to.

Returns#

resultrelay.Expr

The resulting tensor.

tvm.relay.broadcast_to_like(data, broadcast_type)[源代码]

Broadcast data to the shape of broadcast_type.

Parameters#

datarelay.Expr

The input tensor.

broadcast_typerelay.Expr

Provide the shape to broadcast to.

Returns#

resultrelay.Expr

The resulting tensor.

tvm.relay.build(ir_mod, target=None, target_host=None, executor=graph{"link-params": T.bool(False)}, runtime=cpp, workspace_memory_pools=None, constant_memory_pools=None, params=None, mod_name='default')[源代码]

Helper function that builds a Relay function to run on TVM graph executor.

Parameters#

ir_modIRModule

The IR module to build. Using relay.Function is deprecated.

targetNone, or any multi-target like object, see Target.canon_multi_target

For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets. Defaults to the current target in the environment if None.

target_hostNone, or any target like object, see Target.canon_target

Host compilation target, if target is device.

executorOptional[Executor]

The executor configuration with which to build the model. Defaults to “graph” if no executor specified.

runtimeOptional[Runtime]

Runtime configuration to use when building the model. Defaults to “cpp” if no runtime specified.

workspace_memory_poolsOptional[WorkspaceMemoryPools]

The object that contains an Array of WorkspacePoolInfo objects that hold properties of read-write workspace pools that could be used by the inference.

constant_memory_poolsOptional[ConstantMemoryPools]

The object that contains an Array of ConstantPoolInfo objects that hold properties of read-only pools that could be used by the inference.

paramsdict of str to NDArray

Input parameters to the graph that do not change during inference time. Used for constant folding.

mod_name: Optional[str]

The module name we will build

Returns#

factory_moduletvm.relay.backend.executor_factory.ExecutorFactoryModule

The runtime factory for the TVM graph executor.
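
A typical usage sketch (here mod is an existing IRModule and params a dict of weight NDArrays; both are placeholders, not defined in this example):

import tvm
from tvm import relay
from tvm.contrib import graph_executor

# mod: tvm.IRModule, params: dict of str to tvm.nd.NDArray (assumed to exist)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

dev = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](dev))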

tvm.relay.build_config(opt_level=2, required_pass=None, disabled_pass=None, trace=None)[源代码]

Configure the build behavior by setting config variables. This function will be deprecated in TVM v0.7. Instead, we should directly use tvm.transform.PassContext.

Parameters#

opt_level: int, optional

Optimization level. The optimization pass names and levels are as follows:

OPT_PASS_LEVEL = {
    "SimplifyInference": 0,
    "OpFusion": 1,
    "FoldConstant": 2,
    "FoldScaleAxis": 3,
    "AlterOpLayout": 3,
    "CanonicalizeOps": 3,
    "CanonicalizeCast": 3,
    "EliminateCommonSubexpr": 3,
    "CombineParallelConv2D": 4,
    "CombineParallelDense": 4,
    "CombineParallelBatchMatmul": 4,
    "FastMath": 4
}
required_pass: set of str, optional

Optimization passes that are required regardless of optimization level.

disabled_pass: set of str, optional

Optimization passes to be disabled during optimization.

trace: Callable[[IRModule, PassInfo, bool], None]

A tracing function for debugging or introspection.

Returns#

pass_context: PassContext

The pass context for optimizations.

tvm.relay.cast(data, dtype)[源代码]

Cast input tensor to data type.

Parameters#

datarelay.Expr

The input data to the operator.

dtypestr

The target data type.

Returns#

resultrelay.Expr

The casted result.
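
For example (a small sketch):

from tvm import relay

x = relay.var("x", shape=(2, 2), dtype="float32")
y = relay.cast(x, "int32")  # same shape, int32 elements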

tvm.relay.cast_like(data, dtype_like)[源代码]

Cast input tensor to data type of another tensor.

Parameters#

datarelay.Expr

The input data to the operator.

dtype_likerelay.Expr

The tensor to cast to.

Returns#

resultrelay.Expr

The casted result.

tvm.relay.ceil(data)[源代码]

Compute element-wise ceil of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.clip(a, a_min, a_max)[源代码]

Clip the elements in a between a_min and a_max. a_min and a_max are cast to a’s dtype.

Parameters#

arelay.Expr

The input tensor.

a_minfloat

The clip minimum.

a_maxfloat

The clip maximum.

Returns#

resultrelay.Expr

a with elements clipped between a_min and a_max.

Examples#

x = relay.Constant(tvm.nd.array([0, 1, 5, 3, 4, 2]))
relay.clip(x, 1., 4.)
# [1, 1, 4, 3, 4, 2]
tvm.relay.collapse_sum_like(data, collapse_type)[源代码]

Return the sum of data collapsed to the shape of collapse_type.

Parameters#

datarelay.Expr

The input tensor.

collapse_typerelay.Expr

Provide the shape to collapse to.

Returns#

resultrelay.Expr

The resulting tensor.

tvm.relay.collapse_sum_to(data, shape)[源代码]

Return a summation of data to the specified shape.

Parameters#

datarelay.Expr

The input tensor.

shaperelay.Expr

Shape to collapse to.

Returns#

resultrelay.Expr

The resulting tensor.

tvm.relay.concatenate(data, axis)[源代码]

Concatenate the input tensors along the given axis.

Parameters#

dataUnion(List[relay.Expr], Tuple[relay.Expr])

A list of tensors.

axisint

The axis along which the tensors are concatenated.

Returns#

result: relay.Expr

The concatenated tensor.
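
A small sketch (shapes are illustrative):

from tvm import relay

x = relay.var("x", shape=(2, 3))
y = relay.var("y", shape=(4, 3))
z = relay.concatenate([x, y], axis=0)  # result shape is (6, 3)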

tvm.relay.const(value, dtype=None, span=None)[源代码]

Create a constant value.

Parameters#

value: Union[bool, int, float, numpy.ndarray, tvm.nd.NDArray]

The constant value.

dtype: str, optional

The data type of the resulting constant.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Note#

When dtype is None, we use the following rule:

  • int maps to “int32”

  • float maps to “float32”

  • bool maps to “bool”

  • other using the same default rule as numpy.
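
A few sketches of the defaulting rules above (the dtypes noted in the comments are the expected results):

import numpy as np
from tvm import relay

relay.const(1)                                 # int32 scalar constant
relay.const(1.0)                               # float32 scalar constant
relay.const(True)                              # bool scalar constant
relay.const(np.array([1, 2], dtype="int64"))   # keeps the numpy dtype (int64)
relay.const(1, dtype="float16")                # explicit dtype overrides the default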

tvm.relay.copy(data)[源代码]

Copy a tensor.

Parameters#

datarelay.Expr

The tensor to be copied.

Returns#

result: relay.Expr

The copied result.

tvm.relay.copy_shape_func(attrs, inputs, _)[源代码]

Shape function for copy op.

tvm.relay.cos(data)[源代码]

Compute elementwise cos of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.cosh(data)[源代码]

Compute elementwise cosh of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.create_executor(kind='debug', mod=None, device=None, target='llvm', params=None)[源代码]

Factory function to create an executor.

Example#

import tvm.relay
import numpy as np

x = tvm.relay.var("x", tvm.relay.TensorType([1], dtype="float32"))
expr = tvm.relay.add(x, tvm.relay.Constant(tvm.nd.array(np.array([1], dtype="float32"))))
tvm.relay.create_executor(
    kind="vm", mod=tvm.IRModule.from_expr(tvm.relay.Function([x], expr))
).evaluate()(np.array([2], dtype="float32"))
# returns `array([3.], dtype=float32)`

Parameters#

kindstr

The type of executor. Available options are debug for the interpreter, graph for the graph executor, aot for the aot executor, and vm for the virtual machine.

modIRModule

The Relay module containing collection of functions

deviceDevice

The device to execute the code.

targetany multi-target like object, see Target.canon_multi_target

For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets. CAUTION: Though this API allows multiple targets, it does not allow multiple devices, so heterogeneous compilation is not yet supported.

paramsdict of str to NDArray

Input parameters to the graph that do not change during inference time.

Returns#

executor : Executor

tvm.relay.cumprod(data, axis=None, dtype=None, exclusive=None)[源代码]

Numpy style cumprod op. Return the cumulative inclusive product of the elements along a given axis.

Parameters#

datarelay.Expr

The input data to the operator.

axisint, optional

Axis along which the cumulative product is computed. The default (None) is to compute the cumprod over the flattened array.

dtypestring, optional

Type of the returned array and of the accumulator in which the elements are multiplied. If dtype is not specified, it defaults to the dtype of data.

exclusivebool, optional

If true will return exclusive product in which the first element is not included. In other terms, if true, the j-th output element would be the product of the first (j-1) elements. Otherwise, it would be the product of the first j elements. The product of zero elements will be 1.

Returns#

resultrelay.Expr

The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.

Examples#

a = [[1, 2, 3], [4, 5, 6]]

cumprod(a)  # if axis is not provided, cumprod is done over the flattened input.
-> [ 1,  2,  6, 24, 120, 720]

cumprod(a, dtype="float32")
-> [  1.,  2.,  6., 24., 120., 720.]

cumprod(a, axis=0)  # multiply over rows for each of the 3 columns
-> [[1, 2, 3],
    [4, 10, 18]]

cumprod(a, axis=1)
-> [[ 1,  2,  6],
    [ 4,  20, 120]]

a = [1, 1, 1, 0, 1, 1, 0]  # a is a boolean array
cumprod(a, dtype=int32)  # dtype should be provided to get the expected results
-> [1, 1, 1, 0, 0, 0, 0]
tvm.relay.cumsum(data, axis=None, dtype=None, exclusive=None)[源代码]

Numpy style cumsum op. Return the cumulative inclusive sum of the elements along a given axis.

Parameters#

datarelay.Expr

The input data to the operator.

axisint, optional

Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

dtypestring, optional

Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the dtype of data.

exclusivebool, optional

If true will return exclusive sum in which the first element is not included. In other terms, if true, the j-th output element would be the sum of the first (j-1) elements. Otherwise, it would be the sum of the first j elements.

Returns#

resultrelay.Expr

The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.

Examples#

a = [[1, 2, 3], [4, 5, 6]]

cumsum(a)  # if axis is not provided, cumsum is done over the flattened input.
-> [ 1,  3,  6, 10, 15, 21]

cumsum(a, dtype="float32")
-> [  1.,   3.,   6.,  10.,  15.,  21.]

cumsum(a, axis=0)  # sum over rows for each of the 3 columns
-> [[1, 2, 3],
    [5, 7, 9]]

cumsum(a, axis=1)
-> [[ 1,  3,  6],
    [ 4,  9, 15]]

a = [1, 0, 1, 0, 1, 1, 0]  # a is a boolean array
cumsum(a, dtype=int32)  # dtype should be provided to get the expected results
-> [1, 1, 2, 2, 3, 4, 4]
tvm.relay.device_copy(data, src_device, dst_device)[源代码]

Copy data from the source device to the destination device. This operator helps data transferring between difference devices for heterogeneous execution.

Parameters#

datatvm.relay.Expr

The tensor to be copied.

src_deviceUnion[Device, str]

The source device where the data is copied from.

dst_deviceUnion[Device, str]

The destination device where the data is copied to.

Returns#

resulttvm.relay.Expr

The copied result.

tvm.relay.dft(re_data, im_data, inverse=False)[源代码]

Computes the discrete Fourier transform of input (calculation along the last axis). This gives frequency components of the signal as they change over time.

Parameters#

re_datarelay.Expr

N-D tensor, real part of the input signal.

im_datarelay.Expr

N-D tensor, imaginary part of the input signal. If the signal is real, then the values of this tensor are zeros.

inversebool

Whether to perform the inverse discrete fourier transform.

Returns#

re_outputrelay.Expr

The Fourier Transform of the input (Real part).

im_outputrelay.Expr

The Fourier Transform of the input (Imaginary part).

tvm.relay.divide(lhs, rhs)[源代码]

Division with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.einsum(data, equation)[源代码]

Evaluates the Einstein summation convention on data

Parameters#

dataUnion(List[relay.Expr], Tuple[relay.Expr])

A list of tensors.

equationstr

The einsum expression string.

Returns#

resultrelay.Expr

The output tensor from the einsum op.
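
For instance, matrix multiplication can be written as the following sketch:

from tvm import relay

A = relay.var("A", shape=(2, 3))
B = relay.var("B", shape=(3, 4))
C = relay.einsum([A, B], "ij,jk->ik")  # result shape is (2, 4)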

tvm.relay.equal(lhs, rhs)[源代码]

Broadcasted elementwise test for (lhs == rhs).

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.erf(data)[源代码]

Compute elementwise error function of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.exp(data)[源代码]

Compute elementwise exp of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.expand_dims(data, axis, num_newaxis=1)[源代码]

Insert num_newaxis axes at the position given by axis.

Parameters#

datarelay.Expr

The input data to the operator.

axisUnion[int, Expr]

The axis at which the input array is expanded. Should lie in range [-data.ndim - 1, data.ndim]. If axis < 0, it is the first axis inserted; If axis >= 0, it is the last axis inserted in Python’s negative indexing.

num_newaxisint, optional

Number of axes to be inserted. Should be >= 0.

Returns#

resultrelay.Expr

The reshaped result.
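
A short sketch (the shapes noted in comments are the expected results):

from tvm import relay

x = relay.var("x", shape=(2, 2))
y = relay.expand_dims(x, axis=0)                  # shape (1, 2, 2)
z = relay.expand_dims(x, axis=1, num_newaxis=2)   # shape (2, 1, 1, 2)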

tvm.relay.fixed_point_multiply(data, multiplier, shift)[源代码]

Fixed point multiplication between data and a fixed point constant expressed as multiplier * 2^(-shift), where multiplier is a Q-number with 31 fractional bits

Parameters#

datarelay.Expr

The input tensor.

multiplierint

The integer multiplier of the fixed point constant.

shiftint

The integer shift of the fixed point constant.

Returns#

resultrelay.Expr

The output of the fixed point multiplication

tvm.relay.floor(data)[源代码]

Compute element-wise floor of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.floor_divide(lhs, rhs)[源代码]

Floor division with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.floor_mod(lhs, rhs)[源代码]

Floor mod with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.full(fill_value, shape=(), dtype='')[源代码]

Fill array with scalar value.

Parameters#

fill_valuerelay.Expr

The value to fill. Must be a scalar.

shapetuple of int or relay.Expr, optional

The shape of the target.

dtypedata type, optional (defaults to data type of the fill value)

The data type of the target.

Returns#

resultrelay.Expr

The resulting tensor.

tvm.relay.full_like(data, fill_value)[源代码]

Return an array filled with fill_value, with the same shape and type as the input array.

Parameters#

datarelay.Expr

The input tensor.

fill_valuerelay.Expr

The scalar value to fill.

Returns#

resultrelay.Expr

The resulting tensor.

tvm.relay.gather(data, axis, indices)[源代码]

Gather values along given axis from given indices.

E.g. for a 3D tensor, output is computed as:

out[i][j][k] = data[indices[i][j][k]][j][k]  # if axis == 0
out[i][j][k] = data[i][indices[i][j][k]][k]  # if axis == 1
out[i][j][k] = data[i][j][indices[i][j][k]]  # if axis == 2

indices must have the same shape as data, except at dimension axis, which only needs to be non-empty. The output will have the same shape as indices.

Parameters#

datarelay.Expr

The input data to the operator.

axisint

The axis along which to index. Negative axis is supported.

indicesrelay.Expr

The indices of values to gather.

Examples#

data = [[1, 2], [3, 4]]
axis = 1
indices = [[0, 0], [1, 0]]
relay.gather(data, axis, indices) = [[1, 1], [4, 3]]
tvm.relay.gather_nd(data, indices, batch_dims=0, index_rank=None)[源代码]

Gather elements or slices from data and store them to a tensor whose shape is defined by indices.

Parameters#

datarelay.Expr

The input data to the operator.

indicesrelay.Expr

The indices of the values to gather.

batch_dimsint, optional

The number of batch dimensions.

index_rankint, optional

The size of an indexing tuple, which is a fixed value and the same as indices.shape[0]. Only needed when other dimensions of indices are dynamic.

Returns#

retrelay.Expr

The computed result.

Examples#

data = [[0, 1], [2, 3]]
indices = [[1, 1, 0], [0, 1, 0]]
relay.gather_nd(data, indices) = [2, 3, 0]

data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
indices = [[0, 1], [1, 0]]
relay.gather_nd(data, indices) = [[3, 4], [5, 6]]

data = [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]
indices = [[1, 0]]
relay.gather_nd(data, indices, batch_dims=1) = [[2, 3],[4, 5]]
tvm.relay.greater(lhs, rhs)[源代码]

Broadcasted elementwise test for (lhs > rhs).

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.greater_equal(lhs, rhs)[源代码]

Broadcasted elementwise test for (lhs >= rhs).

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.invert_permutation(data)[源代码]

Computes the inverse permutation of data. This operation computes the inverse of an index permutation. It takes a 1-D integer tensor x, which represents the indices of a zero-based array and swaps each value with its index position.

For an output tensor y and an input tensor x, this operation computes the following: y[x[i]] = i for i in [0, 1, …, len(x) - 1]

Parameters#

datarelay.Expr

The source data to be invert permuted.

Returns#

retrelay.Expr

Invert permuted data. Has the same type as data.

Examples#

data = [3, 4, 0, 2, 1]
relay.invert_permutation(data) = [2, 4, 3, 0, 1]
tvm.relay.isfinite(data)[源代码]

Compute element-wise finiteness of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.isinf(data)[源代码]

Compute element-wise infiniteness of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.isnan(data)[源代码]

Check nan in input data element-wise.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.layout_transform(data, src_layout, dst_layout)[源代码]

Transform the layout of a tensor.

Parameters#

datarelay.Expr

The source tensor to be transformed.

src_layoutstr

The source layout. (e.g NCHW)

dst_layoutstr

The destination layout. (e.g. NCHW16c)

Returns#

retrelay.Expr

The transformed tensor.

tvm.relay.left_shift(lhs, rhs)[源代码]

Left shift with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.less(lhs, rhs)[源代码]

Broadcasted elementwise test for (lhs < rhs).

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.less_equal(lhs, rhs)[源代码]

Broadcasted elementwise test for (lhs <= rhs).

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.load_param_dict(param_bytes)[源代码]

Load a parameter dictionary from binary bytes.

Deprecated since version 0.9.0: Use tvm.runtime.load_param_dict() instead.

Parameters#

param_bytes: bytearray

Serialized parameters.

Returns#

paramsdict of str to NDArray

The parameter dictionary.

tvm.relay.log(data)[源代码]

Compute elementwise log of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.log10(data)[源代码]

Compute elementwise log to the base 10 of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.log2(data)[源代码]

Compute elementwise log to the base 2 of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.logical_and(lhs, rhs)[源代码]

Logical AND with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.logical_not(data)[源代码]

Compute element-wise logical not of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.logical_or(lhs, rhs)[源代码]

Logical OR with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.logical_xor(lhs, rhs)[源代码]

Logical XOR with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.logsumexp(data, axis=None, keepdims=False)[源代码]

Compute the log of the sum of exponentials of input elements over given axes.

This function is more numerically stable than log(sum(exp(input))). It avoids overflows caused by taking the exp of large inputs and underflows caused by taking the log of small inputs.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which the log-sum-exp operation is performed. The default, axis=None, will compute the log of the sum of exponentials of all elements in the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one.

Returns#

resultrelay.Expr

The computed result.
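
A quick runnable check (an editorial sketch, assuming the default llvm-backed interpreter executor) compares the result against the naive numpy computation:

import numpy as np
from tvm import relay

x_np = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], dtype="float32")
x = relay.const(x_np)

out = relay.logsumexp(x, axis=1)
res = relay.create_executor().evaluate(out).numpy()

# Should match log(sum(exp(x))) along axis 1 up to floating-point tolerance.
print(res)
print(np.log(np.sum(np.exp(x_np), axis=1)))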

tvm.relay.matrix_set_diag(data, diagonal, k=0, align='RIGHT_LEFT')[源代码]

Returns a tensor with the diagonals of input tensor replaced with the provided diagonal values.

Parameters#

datarelay.Expr

Input tensor.

diagonalrelay.Expr

Values to be filled in the diagonal.

kint or tuple of int, optional

Diagonal offset(s). The diagonal or range of diagonals to set. (0 by default) Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1].

alignstring, optional

Some diagonals are shorter than max_diag_len and need to be padded. align is a string specifying how superdiagonals and subdiagonals should be aligned, respectively. There are four possible alignments: “RIGHT_LEFT” (default), “LEFT_RIGHT”, “LEFT_LEFT”, and “RIGHT_RIGHT”. “RIGHT_LEFT” aligns superdiagonals to the right (left-pads the row) and subdiagonals to the left (right-pads the row). It is the packing format LAPACK uses. cuSPARSE uses “LEFT_RIGHT”, which is the opposite alignment.

Returns#

resultrelay.Expr

New tensor with given diagonal values.

Examples#

data = [[[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 7, 7]],
        [[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 7, 7]]]

diagonal = [[1, 2, 3],
            [4, 5, 6]]

relay.matrix_set_diag(data, diagonal) =
    [[[1, 7, 7, 7],
      [7, 2, 7, 7],
      [7, 7, 3, 7]],
     [[4, 7, 7, 7],
      [7, 5, 7, 7],
      [7, 7, 6, 7]]]
tvm.relay.max(data, axis=None, keepdims=False, exclude=False)[源代码]

Computes the max of array elements over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which the max operation is performed. The default, axis=None, will find the max element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

Returns#

resultrelay.Expr

The computed result.

tvm.relay.maximum(lhs, rhs)[源代码]

Maximum with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.mean(data, axis=None, keepdims=False, exclude=False)[源代码]

Computes the mean of array elements over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a mean operation is performed. The default, axis=None, will compute the mean of all elements in the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

Returns#

resultrelay.Expr

The computed result.

tvm.relay.mean_std(data, axis=None, keepdims=False, exclude=False)[源代码]

Computes the mean and standard deviation of data over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a mean and standard deviation operation is performed. The default, axis=None, will compute the mean and standard deviation of all elements in the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

Returns#

resultrelay.Expr

The computed result.

tvm.relay.mean_variance(data, axis=None, keepdims=False, exclude=False, unbiased=False)[源代码]

Computes the mean and variance of data over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a mean and variance operation is performed. The default, axis=None, will compute the mean and variance of all elements in the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

unbiasedbool

If this is set to True, the unbiased estimation will be used.

Returns#

resultrelay.Expr

The computed result.

tvm.relay.meshgrid(data, indexing='ij')[源代码]

Create coordinate matrices from coordinate vectors.

Note

Similar to numpy.meshgrid.

Parameters#

dataUnion(List[relay.Expr], Tuple[relay.Expr])

A list of tensors, which must be either scalars or 1-D vectors.

indexingstr, optional

Indexing mode, either “ij” for matrix indexing or “xy” for Cartesian indexing.

Returns#

retrelay.Tuple([relay.Expr, relay.Expr])

The computed result.

Examples#

x = [1, 2, 3]
y = [4, 5]

gx, gy = relay.meshgrid([x, y])

gx = [[1., 1.],
      [2., 2.],
      [3., 3.]]

gy = [[4., 5.],
      [4., 5.],
      [4., 5.]]
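
The example above can be run as follows (editorial sketch, default interpreter executor); relay.meshgrid returns a TupleWrapper whose fields can be evaluated individually:

import numpy as np
from tvm import relay

x = relay.const(np.array([1.0, 2.0, 3.0], dtype="float32"))
y = relay.const(np.array([4.0, 5.0], dtype="float32"))

gx, gy = relay.meshgrid([x, y], indexing="ij")
ex = relay.create_executor()
# Expected: gx and gy both have shape (3, 2), as shown above.
print(ex.evaluate(gx).numpy())
print(ex.evaluate(gy).numpy())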
tvm.relay.min(data, axis=None, keepdims=False, exclude=False)[源代码]

Computes the min of array elements over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a minimum operation is performed. The default, axis=None, will find the minimum element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

Returns#

resultrelay.Expr

The computed result.

tvm.relay.minimum(lhs, rhs)[源代码]

Minimum with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.mod(lhs, rhs)[源代码]

Mod with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.multiply(lhs, rhs)[源代码]

Multiplication with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.ndarray_size(data, dtype='int32')[源代码]

Get number of elements of input tensor.

Parameters#

datatvm.relay.Expr

The input tensor.

dtypestr, optional

The target data type.

Returns#

resulttvm.relay.Expr

The number of elements of input tensor.

tvm.relay.negative(data)[源代码]

Compute element-wise negative of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.not_equal(lhs, rhs)[源代码]

Broadcasted elementwise test for (lhs != rhs).

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.one_hot(indices, on_value, off_value, depth, axis, dtype)[源代码]

Returns a one-hot tensor where the locations represented by indices take value on_value, and other locations take value off_value. The final shape is <indices outer dimensions> x depth x <indices inner dimensions>.

Parameters#

indicesrelay.Expr

Locations to set to on_value.

on_valuerelay.Expr

Value to fill at indices.

off_valuerelay.Expr

Value to fill at all other positions besides indices.

depthint or relay.Expr

Depth of the one-hot dimension.

axisint

Axis to fill.

dtypestr

Data type of the output tensor.

Returns#

retrelay.Expr

The one-hot tensor.

Examples#

indices = [0, 1, 2]

relay.one_hot(indices, 3) =
    [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
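
The same example written out as runnable code (an editorial sketch, default interpreter executor); note that on_value and off_value must themselves be Relay expressions:

import numpy as np
from tvm import relay

indices = relay.const(np.array([0, 1, 2], dtype="int32"))
on_value = relay.const(1, dtype="int32")
off_value = relay.const(0, dtype="int32")

out = relay.one_hot(indices, on_value, off_value, depth=3, axis=-1, dtype="int32")
# Expected: the 3x3 one-hot matrix shown above.
print(relay.create_executor().evaluate(out).numpy())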
tvm.relay.ones(shape, dtype)[源代码]

Fill array with ones.

Parameters#

shapetuple of int or relay.Expr

The shape of the target.

dtypedata type

The data type of the target.

Returns#

resultrelay.Expr

The resulting tensor.

tvm.relay.ones_like(data)[源代码]

Returns an array of ones, with same type and shape as the input.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.optimize(mod, target=None, params=None)[源代码]

Helper function that optimizes a Relay module.

Parameters#

modIRModule

The module to build. Using relay.Function is deprecated.

targetNone, or any multi-target like object, see Target.canon_multi_target

For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets. Defaults to the current target in the environment if None.

paramsdict of str to NDArray

Input parameters to the graph that do not change during inference time. Used for constant folding.

Returns#

modIRModule

The optimized relay module.

paramsdict

The parameters of the final graph.

tvm.relay.power(lhs, rhs)[源代码]

Power with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.pretty_print(obj)[源代码]

Pretty print the object.

Parameters:

obj (Object)

Return type:

None

tvm.relay.prod(data, axis=None, keepdims=False, exclude=False)[源代码]

Computes the products of array elements over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a product is performed. The default, axis=None, will compute the product of all the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

Returns#

resultrelay.Expr

The computed result.

tvm.relay.reinterpret(data, dtype)[源代码]

Reinterpret input tensor to data type.

Parameters#

datarelay.Expr

The input data to the operator.

dtypestr

The target data type.

Returns#

resultrelay.Expr

The reinterpreted result.

tvm.relay.repeat(data, repeats, axis)[源代码]

Repeats elements of an array. By default, repeat flattens the input array into 1-D and then repeats the elements.

Parameters#

datarelay.Expr

The input tensor.

repeatsint

The number of repetitions for each element.

axis: int

The axis along which to repeat values. Negative numbers are interpreted as counting from the end. By default, the flattened input array is used and a flat output array is returned.

Returns#

retrelay.Expr

The computed result.

Examples#

x = [[1, 2], [3, 4]]
relay.repeat(x, repeats=2) = [1., 1., 2., 2., 3., 3., 4., 4.]

relay.repeat(x, repeats=2, axis=1) = [[1., 1., 2., 2.],
                                      [3., 3., 4., 4.]]
tvm.relay.reshape(data, newshape, allowzero=False)[源代码]

Reshape the input array.

For convenience, and to avoid manual shape inference, some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:

0 copy this dimension from the input to the output shape.

data.shape = (2,3,4), newshape = (4,0,2), result.shape = (4,3,2)
data.shape = (2,3,4), newshape = (2,0,0), result.shape = (2,3,4)

Note: If the parameter allowzero is manually set to true, it specifies a special case where 0 actually means a true empty tensor.

-1 infers the dimension of the output shape by using the remainder of the input dimensions keeping the size of the new array same as that of the input array. At most one dimension of shape can be -1.

data.shape = (2,3,4), newshape = (6,1,-1), result.shape = (6,1,4)
data.shape = (2,3,4), newshape = (3,-1,8), result.shape = (3,1,8)
data.shape = (2,3,4), newshape = (-1,), result.shape = (24,)

-2 copy all/remainder of the input dimensions to the output shape.

data.shape = (2,3,4), newshape = (-2,), result.shape = (2,3,4)
data.shape = (2,3,4), newshape = (2,-2), result.shape = (2,3,4)
data.shape = (2,3,4), newshape = (-2,1,1), result.shape = (2,3,4,1,1)

-3 use the product of two consecutive dimensions of the input shape as the output dimension.

data.shape = (2,3,4), newshape = (-3,4), result.shape = (6,4)
data.shape = (2,3,4,5), newshape = (-3,-3), result.shape = (6,20)
data.shape = (2,3,4), newshape = (0,-3), result.shape = (2,12)
data.shape = (2,3,4), newshape = (-3,-2), result.shape = (6,4)

-4 split one dimension of the input into two dimensions passed subsequent to -4 in shape (can contain -1).

data.shape = (2,3,4), newshape = (-4,1,2,-2), result.shape = (1,2,3,4)
data.shape = (2,3,4), newshape = (2,-4,-1,3,-2), result.shape = (2,1,3,4)

Parameters#

datarelay.Expr

The input data to the operator.

newshapeUnion[int, Tuple[int], List[int]] or relay.Expr

The new shape. Should be compatible with the original shape.

allowzeroBool, optional

If true, then treat zero as true empty tensor rather than a copy instruction.

Returns#

resultrelay.Expr

The reshaped result.
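
The special values can be checked quickly with type inference. The snippet below is an editorial sketch, assuming the standard relay.transform.InferType pass, and combines 0 and -1:

import tvm
from tvm import relay

x = relay.var("x", shape=(2, 3, 4), dtype="float32")
y = relay.reshape(x, newshape=(4, 0, -1))

mod = tvm.IRModule.from_expr(relay.Function([x], y))
mod = relay.transform.InferType()(mod)
# 0 copies the middle input dimension (3) and -1 infers 24 / (4 * 3) = 2,
# so the expected type is Tensor[(4, 3, 2), float32].
print(mod["main"].body.checked_type)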

tvm.relay.reshape_like(data, shape_like, lhs_begin=0, lhs_end=None, rhs_begin=0, rhs_end=None)[源代码]

Reshapes the input tensor by the size of another tensor. For an input tensor with shape (d0, d1, ..., d(k-1)), reshape_like operation reshapes the input tensor into an output tensor with the same shape as the second input tensor, in particular reshaping the dimensions of data in [lhs_begin, lhs_end) using the dimensions from shape_like in [rhs_begin, rhs_end).

Note

Sizes for data and the output tensor should be compatible.

Parameters#

datarelay.Expr

The input data to the operator.

shape_likerelay.Expr

The tensor to reshape data like. Should be compatible with the original shape on the reshaped dimensions.

lhs_beginint, optional

The axis of data to begin reshaping. Default is 0.

lhs_endint or None, optional

The axis of data where reshaping should stop, exclusive. Default is None which reshapes to the end.

rhs_beginint, optional

The axis of shape_like where the target shape begins. Default is 0.

rhs_endint or None, optional

The axis of shape_like where the target shape ends, exclusive. Default is None which extends to the end.

Returns#

retrelay.Expr

The computed result.

Examples#

data.shape == (1, 2, 3, 4)
shape_like.shape == (6, 2, 2, 3)

ret = relay.reshape_like(data, shape_like, lhs_begin=1, rhs_end=3)
ret.shape == (1, 6, 2, 2)
tvm.relay.reverse(data, axis)[源代码]

Reverses the order of elements along given axis while preserving array shape.

Parameters#

datarelay.Expr

The input data to the operator.

axis: int

The axis along which to reverse elements.

Returns#

retrelay.Expr

The computed result.

Examples#

x = [[1., 2.], [3., 4.]]
relay.reverse(x, axis=0) = [[3., 4.], [1., 2.]]

relay.reverse(x, axis=1) = [[2., 1.], [4., 3.]]
tvm.relay.reverse_reshape(data, newshape)[源代码]

Reshapes the input array where the special values are inferred from right to left.

The special values have the same semantics as tvm.relay.reshape. The difference is that the special values are inferred from right to left, as the examples below illustrate.

data.shape = (10,5,4), newshape = (-1,0), reshape results in (40,5)
data.shape = (10,5,4), newshape = (-1,0), reverse_reshape results in (50,4)

Parameters#

datarelay.Expr

The input data to the operator.

newshapeUnion[int, Tuple[int], List[int]]

The new shape. Should be compatible with the original shape.

Returns#

resultrelay.Expr

The reshaped result.

tvm.relay.reverse_sequence(data, seq_lengths, seq_axis=1, batch_axis=0)[源代码]

Reverse the tensor for variable length slices. Input is first sliced along batch axis and then elements are reversed along seq axis.

Parameters#

datarelay.Expr

The tensor to be reversed.

seq_lengthsrelay.Expr

A 1-D tensor of length data.dims[batch_axis]. Must be one of the following types: int32, int64. If seq_lengths[i] > data.dims[seq_axis], it is rounded to data.dims[seq_axis]. If seq_lengths[i] < 1, it is rounded to 1.

seq_axisint, optional

The axis along which the elements will be reversed. Default is 1.

batch_axisint, optional

The axis along which the tensor will be sliced. Default is 0.

Returns#

retrelay.Expr

The computed result of same shape and type as of input.

Examples#

x = [[0, 1, 2, 3],
     [4, 5, 6, 7],
     [8, 9, 10, 11],
     [12, 13, 14, 15]]
relay.reverse_sequence(x, [1, 2, 3, 4], 0, 1) = [[0, 5, 10, 15],
                                                 [4, 1, 6, 11],
                                                 [8, 9, 2, 7],
                                                 [12, 13, 14, 3]]

relay.reverse_sequence(x, [1, 2, 3, 4], 1, 0) = [[0, 1, 2, 3],
                                                 [5, 4, 6, 7],
                                                 [10, 9, 8, 11],
                                                 [15, 14, 13, 12]]
tvm.relay.right_shift(lhs, rhs)[源代码]

Right shift with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.round(data)[源代码]

Compute element-wise round of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.rsqrt(data)[源代码]

Compute elementwise rsqrt of data.

\[1/sqrt(x)\]

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.save_param_dict(params)[源代码]

Save parameter dictionary to binary bytes.

The resulting binary bytes can be loaded by the GraphModule via the “load_params” API.

Deprecated since version 0.9.0: Use tvm.runtime.save_param_dict() instead.

Parameters#

paramsdict of str to NDArray

The parameter dictionary.

Returns#

param_bytes: bytearray

Serialized parameters.

Examples#

# set up the parameter dict
params = {"param0": arr0, "param1": arr1}
# save the parameters as byte array
param_bytes = tvm.runtime.save_param_dict(params)
# We can serialize the param_bytes and load it back later.
# Pass in byte array to module to directly set parameters
tvm.runtime.load_param_dict(param_bytes)
tvm.relay.scalar_type(dtype)[源代码]

Creates a scalar type.

This function returns TensorType((), dtype)

Parameters#

dtypestr

The content data type.

Returns#

s_typetvm.relay.TensorType

The result type.

tvm.relay.scatter_elements(data, indices, updates, axis=0, reduction='update')[源代码]

Scatter elements with updating data by reduction of values in updates at positions defined by indices.

Parameters#

datarelay.Expr

The input data to the operator.

indicesrelay.Expr

The index locations to update.

updatesrelay.Expr

The values to update.

axisint

The axis to scatter elements on. It is zero by default.

reductionstring, optional

The reduction mode for scatter. The choice is from [“update”, “add”, “mul”, “mean”, “min”, “max”]. If “update”, the update values replace the input data; if “add”, the update values are added to the input data; if “mul”, the input data is multiplied by the update values; if “mean”, the result is the mean of the update values and the input data; if “min”, the minimum of the update values and the input data is taken; if “max”, the maximum is taken. It is “update” by default.

Returns#

retrelay.Expr

The computed result.
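
A small runnable illustration (an editorial sketch, default interpreter executor) that scatters three values onto the diagonal of a zero matrix:

import numpy as np
from tvm import relay

data = relay.const(np.zeros((3, 3), dtype="float32"))
indices = relay.const(np.array([[0, 1, 2]], dtype="int32"))
updates = relay.const(np.array([[1.0, 2.0, 3.0]], dtype="float32"))

# With axis=0: out[indices[i][j]][j] = updates[i][j], so the values land
# at (0, 0), (1, 1) and (2, 2), producing a diagonal matrix.
out = relay.scatter_elements(data, indices, updates, axis=0)
print(relay.create_executor().evaluate(out).numpy())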

tvm.relay.scatter_nd(data, indices, updates, mode='update')[源代码]

Scatter values from an array and update.

See tvm.topi.scatter() for how data is scattered.

Parameters#

datarelay.Expr

The input data to the operator.

indicesrelay.Expr

The index locations to update.

updatesrelay.Expr

The values to update.

modestring, optional

The accumulation mode for scatter: “update”, “add”, “mul”, “min” or “max”. If “update”, the update values replace the input data; if “add”, the update values are added to the input data; if “mul”, the input data is multiplied by the update values; if “min”, the minimum of the update values and the input data is taken; if “max”, the maximum is taken. It is “update” by default.

Returns#

retrelay.Expr

The computed result.

tvm.relay.script(pyfunc)[源代码]

Decorate a python function as hybrid script.

The hybrid function supports emulation mode and parsing to the internal language IR.

Returns#

hybrid_funcfunction

A decorated hybrid script function.

tvm.relay.searchsorted(sorted_sequence, values, right=False, dtype='int32')[源代码]
Find indices where elements should be inserted to maintain order.

If sorted_sequence is N-dimensional, the innermost dimension of values are searched in the corresponding dimension of sorted_sequence.

Parameters#

sorted_sequencerelay.Expr

N-D or 1-D Tensor, containing monotonically increasing sequence on the innermost dimension.

valuesrelay.Expr

N-D Tensor containing the search values. When sorted_sequence is 1-D, the shape of values can be arbitrary. Otherwise, ranks of sorted_sequence and values must be the same, and outer N-1 axes must have the same size.

rightbool, optional

Controls which index is returned if a value lands exactly on one of sorted values. If False, the index of the first suitable location found is given. If true, return the last such index. If there is no suitable index, return either 0 or N (where N is the size of the innermost dimension).

dtypestring, optional

The data type of the output indices.

Returns#

indicesrelay.Expr

Tensor with same shape as values, representing the indices of elements of values if they are inserted in sorted_sequence.
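
For example (an editorial sketch, default interpreter executor):

import numpy as np
from tvm import relay

sorted_sequence = relay.const(np.array([1.0, 3.0, 5.0, 7.0], dtype="float32"))
values = relay.const(np.array([2.0, 5.0, 8.0], dtype="float32"))

out = relay.searchsorted(sorted_sequence, values)
# With right=False the expected insertion points are [1, 2, 4].
print(relay.create_executor().evaluate(out).numpy())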

tvm.relay.segment_sum(data, segment_ids, num_segments=None)[源代码]

Computes the sum along segment_ids along axis 0. If multiple entries in segment_ids reference the same location, their contributions add up: result[index, j, k, ...] = sum of data[i, j, k, ...] over all i with segment_ids[i] == index. This op is best understood with the visualizations in the following links and the examples at the end of this docstring.

https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_sum https://caffe2.ai/docs/sparse-operations.html#null__unsorted-segment-reduction-ops

Parameters#

datarelay.Expr

Input tensor. It can be of any type and multi-dimensional.

segment_idsrelay.Expr

A 1-D int32/int64 tensor containing the segment_ids of the rows to calculate the output sum upon. It defines a mapping from the zeroth dimension of data onto segment_ids. The segment_ids tensor should be the size of the first dimension, d0, with consecutive IDs in the range 0 to k, where k<d0. In particular, a segmentation of a matrix tensor is a mapping of rows to segments. This tensor doesn’t need to be sorted.

num_segmentsint, optional

An integer describing the shape of the zeroth dimension. If unspecified, it is calculated equivalent to the number of unique segment_ids.

Returns#

resultrelay.Expr

Output tensor.

Examples#

data = [[1, 2, 3, 4],
        [4, -3, 2, -1],
        [5, 6, 7, 8]]

segment_ids = [0, 0, 1]

relay.segment_sum(data, segment_ids) = [[5, -1, 5, 3],
                                        [5, 6, 7, 8]]

data = [[1, 2, 3, 4],
        [4, -3, 2, -1],
        [5, 6, 7, 8]]

segment_ids = [2, 0, 0]

num_segments = 3

segment_sum(data, segment_ids, num_segments) = [[9, 3, 9, 7],
                                                [0, 0, 0, 0],
                                                [1, 2, 3, 4]]
tvm.relay.sequence_mask(data, valid_length, mask_value=0, axis=0)[源代码]

Sets all elements outside the expected length of the sequence to a constant value.

This function takes an n-dimensional input array of the form [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] and returns an array of the same shape.

Parameters#

datarelay.Expr

The input data.

valid_lengthrelay.Expr

The expected (valid) length of each sequence in the tensor.

mask_valuefloat, optional

The masking value.

axisint, optional

The axis of the length dimension.

Returns#

retrelay.Expr

The computed result.

Examples#

 x = [[[  1.,   2.,   3.], [  4.,   5.,   6.]],
      [[  7.,   8.,   9.], [ 10.,  11.,  12.]],
      [[ 13.,  14.,   15.], [ 16.,  17.,   18.]]]

relay.sequence_mask(x, valid_length=[1, 1]) =
     [[[  1.,   2.,   3.], [  4.,   5.,   6.]],
      [[  0.,   0.,   0.], [  0.,   0.,   0.]],
      [[  0.,   0.,   0.], [  0.,   0.,   0.]]]

relay.sequence_mask(x, valid_length=[2, 3], mask_value=0.1) =
     [[[  1.,   2.,   3.], [  4.,   5.,   6.]],
      [[  7.,   8.,   9.], [  10.,  11.,  12.]],
      [[  0.1,  0.1,  0.1], [  16.,  17.,  18.]]]
tvm.relay.setrecursionlimit(limit, /)

Set the maximum depth of the Python interpreter stack to n.

This limit prevents infinite recursion from causing an overflow of the C stack and crashing Python. The highest possible limit is platform-dependent.

tvm.relay.shape_of(data, dtype='int32')[源代码]

Get shape of a tensor.

Parameters#

datatvm.relay.Expr

The input tensor.

dtypestr, optional

The target data type.

Returns#

resulttvm.relay.Expr

The shape tensor.

tvm.relay.sigmoid(data)[源代码]

Compute elementwise sigmoid of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.sign(data)[源代码]

Compute element-wise sign of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.sin(data)[源代码]

Compute elementwise sin of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.sinh(data)[源代码]

Compute elementwise sinh of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.slice_like(data, shape_like, axes=None)[源代码]

Slice the first input with respect to the second input.

For an input array with shape (d1, d2, ..., dk), slice_like operation slices the input array corresponding to the size of the second array. By default will slice on all axes.

Parameters#

datarelay.Expr

The source array.

shape_likerelay.Expr

An array whose shape determines the result shape.

axesTuple[int] or List[int], optional

List of axes on which input data will be sliced according to the corresponding size of the second input. By default will slice on all axes. Negative axes mean counting in reverse.

Returns#

resultrelay.Expr

The computed result.
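
For example (an editorial sketch, default interpreter executor), slicing a 3x4 array down to the 2x2 shape of another tensor:

import numpy as np
from tvm import relay

data = relay.const(np.arange(12, dtype="float32").reshape(3, 4))
shape_like = relay.const(np.zeros((2, 2), dtype="float32"))

out = relay.slice_like(data, shape_like)
# Expected: the top-left 2x2 block, [[0. 1.] [4. 5.]]
print(relay.create_executor().evaluate(out).numpy())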

tvm.relay.sliding_window(data, axis, window_shape, strides)[源代码]

Slide a window over the data tensor.

Parameters#

datarelay.Expr

The input data to the operator.

axisint

What axis the window begins sliding over. Window will be slid over this axis and all following axes. The axis value determines the window shape (and thus, the number of strides): window shape and strides must both be of length data.ndim-axis.

window_shapeList[int]

The window shape to form over the input. Window shape must be of length data.ndim-axis.

stridesList[int]

How to stride the window along each dimension. Strides must be of length data.ndim-axis.

Returns#

resultrelay.Expr

The resulting tensor.

Examples#

# Slide a window of shape (3, 4, 5) over the x tensor, beginning with
# dimension 1, which slides the window over the two subtensors of
# shape (3, 32, 32).
x = relay.var("x", relay.TensorType((2, 3, 32, 32), "float32"))
y = relay.sliding_window(x, 1, [3, 4, 5], [1, 2, 3])

data = np.random.rand(2, 3, 32, 32).astype("float32")
result = create_executor().evaluate(y, {x: relay.const(data)}).numpy()

# The resulting shape still has batch size 2. Each dimension in
# (1, 15, 10) represents the locations where we were able to
# form a window; that is, we were able to place the window
# in one place along the dimension of length 3, 15 places along
# the dimension of length 32 (when striding by 2), and 10 places
# along the second dimension of length 32 (when striding by 3).
# The remaining dimension (3, 4, 5) represent the formed windows.
assert result.shape == (2, 1, 15, 10, 3, 4, 5)

assert np.array_equal(result[0, 0, 0, 0, :, :, :], data[0, :, 0:4, 0:5])
assert np.array_equal(result[1, 0, 7, 3, :, :, :], data[1, :, 14:18, 9:14])
assert np.array_equal(result[1, 0, 14, 9, :, :, :], data[1, :, 28:32, 27:32])
tvm.relay.sort(data, axis=-1, is_ascend=1)[源代码]

Performs sorting along the given axis and returns data in sorted order.

Parameters#

datarelay.Expr

The input data tensor.

axisint, optional

Axis along which to sort the input tensor.

is_ascendboolean, optional

Whether to sort in ascending or descending order.

Returns#

outrelay.Expr

Tensor with same shape as data.
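
For example (an editorial sketch, default interpreter executor):

import numpy as np
from tvm import relay

x = relay.const(np.array([[3.0, 1.0, 2.0], [6.0, 5.0, 4.0]], dtype="float32"))

out = relay.sort(x, axis=-1)
# Expected: each row sorted ascending -> [[1. 2. 3.] [4. 5. 6.]]
print(relay.create_executor().evaluate(out).numpy())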

tvm.relay.sparse_fill_empty_rows(sparse_indices, sparse_values, dense_shape, default_value)[源代码]

Fill rows in a sparse matrix that do not contain any values. Values are placed in the first column of empty rows. The sparse array is in COO format. It returns a TupleWrapper with 3 outputs.

Parameters#

sparse_indicesrelay.Expr

A 2-D tensor[N, ndims] of integers containing the locations of sparse values, where N is the number of sparse values and n_dim is the number of dimensions of the dense_shape. The first column of this parameter must be sorted in ascending order.

sparse_valuesrelay.Expr

A 1-D tensor[N] containing the sparse values for the sparse indices.

dense_shaperelay.Expr

A 1-D tensor[ndims] which contains the shape of the dense output tensor.

default_valuerelay.Expr

A 1-D tensor[1] containing the default value for the remaining locations.

Returns#

new_sparse_indicesrelay.Expr

A 2-D tensor[?, ndims] of integers containing location of new sparse indices. The first column outputs must be sorted in ascending order.

new_sparse_valuesrelay.Expr

A 1-D tensor[?] containing the sparse values for the sparse indices.

empty_row_indicatorrelay.Expr

A 1-D tensor[dense_shape[0]] filled with zeros and ones indicating whether the particular row is empty or full respectively.

Note#

This op exactly follows the documentation here: https://www.tensorflow.org/api_docs/python/tf/sparse/fill_empty_rows There are two exceptions: 1. Input Sparse Indices are expected to be in row-major order. 2. Empty Row Indicator has int64 output type with 1(for True) and 0(for False).

Examples#

sparse_indices = [[0, 1],
                 [0, 3],
                 [2, 0],
                 [3, 1]]

sparse_values = [1, 2, 3, 4]

default_value = [10]

dense_shape = [5, 6]

new_sparse_indices, new_sparse_values, empty_row_indicator =
                    relay.sparse_fill_empty_rows(
                    sparse_indices,
                    sparse_values,
                    dense_shape,
                    default_value)

new_sparse_indices = [[0, 1],
                      [0, 3],
                      [1, 0],
                      [2, 0],
                      [3, 1],
                      [4, 0]]

empty_row_indicator = [False, True, False, False, True]

new_sparse_values = [1, 2, 10, 3, 4, 10]
tvm.relay.sparse_reshape(sparse_indices, prev_shape, new_shape)[源代码]

Reshape a sparse tensor. The sparse array is in COO format.

Parameters#

sparse_indicesrelay.Expr

A 2-D tensor[N, n_dim] of integers containing location of sparse values, where N is the number of sparse values and n_dim is the number of dimensions of the dense_shape.

prev_shaperelay.Expr

A 1-D tensor containing the previous shape of the dense tensor.

new_shaperelay.Expr

A 1-D tensor containing the new shape of the dense tensor.

Returns#

result: relay.Expr

Output tensor.

Examples#

sparse_indices = [[0, 0, 0],
                  [0, 0, 1],
                  [0, 1, 0],
                  [1, 0, 0],
                  [1, 2, 3]]

prev_shape = [2, 3, 6]

new_shape = [9, -1]

new_sparse_indices, new_shape = relay.sparse_reshape(sparse_indices,
                                                     prev_shape,
                                                     new_shape)
new_sparse_indices = [[0, 0],
                      [0, 1],
                      [1, 2],
                      [4, 2],
                      [8, 1]]
new_shape = [9, 4]
tvm.relay.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0)[源代码]

Converts a sparse representation into a dense tensor.

Parameters#

sparse_indicesrelay.Expr

A 0-D, 1-D, or 2-D tensor of integers containing location of sparse values.

output_shaperelay.Expr

A list of integers. Shape of the dense output tensor.

sparse_valuesrelay.Expr

A 0-D or 1-D tensor containing the sparse values for the sparse indices.

default_valuerelay.Expr, optional

A 0-D tensor containing the default value for the remaining locations. Defaults to 0.

Returns#

resultrelay.Expr

Dense tensor of shape output_shape. Has the same type as sparse_values.

Examples#

relay.sparse_to_dense([[0, 0], [1, 1]], [2, 2], [3, 3], 0) =
    [[3, 0],
     [0, 3]]
tvm.relay.split(data, indices_or_sections, axis=0)[源代码]

Split input tensor along axis by sections or indices.

If indices_or_sections is an integer, the input will be divided equally along given axis. If such a split is not possible, an error is raised.

If indices_or_sections is a tuple of sorted integers, the entries indicate where along axis the array is split.

Parameters#

datarelay.Expr

The source array.

indices_or_sectionsint or tuple of int

Indices or sections to split into. Accepts an int or a tuple.

axisint, optional

The axis over which to split.

Returns#

retrelay.Tuple([relay.Expr, relay.Expr])

The computed result.
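
For example (an editorial sketch, default interpreter executor); the result is a TupleWrapper whose pieces can be indexed individually:

import numpy as np
from tvm import relay

x = relay.const(np.arange(9, dtype="float32").reshape(3, 3))

parts = relay.split(x, indices_or_sections=3, axis=0)
ex = relay.create_executor()
# Expected: three 1x3 slices, the first being [[0. 1. 2.]]
print(ex.evaluate(parts[0]).numpy())
print(ex.evaluate(parts[2]).numpy())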

tvm.relay.sqrt(data)[源代码]

Compute elementwise sqrt of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.squeeze(data, axis=None)[源代码]

Squeeze axes in the array.

Parameters#

datarelay.Expr

The input data to the operator.

axisUnion[None, int, Tuple[int], List[int]] or Expr

The set of axes to remove. If axis = None, remove all axes of dimension 1. If any specified axis has dimension that does not equal 1, it is an error.

Returns#

resultrelay.Expr

The squeezed result.

tvm.relay.stack(data, axis)[源代码]

Join a sequence of arrays along a new axis.

Parameters#

dataUnion(List[relay.Expr], relay.Expr)

A list of tensors or a Relay expression that evaluates to a tuple of tensors.

axisint

The axis in the result array along which the input arrays are stacked.

Returns#

retrelay.Expr

The stacked tensor.

tvm.relay.std(data, axis=None, keepdims=False, exclude=False, unbiased=False)[源代码]

Computes the standard deviation of data over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a standard deviation operation is performed. The default, axis=None, will compute the standard deviation of all elements in the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

unbiasedbool

If this is set to True, the unbiased estimation will be used.

Returns#

resultrelay.Expr

The computed result.

tvm.relay.stft(data, n_fft, hop_length=None, win_length=None, window=None, normalized=False, onesided=True)[源代码]

The STFT computes the Fourier transform of short overlapping windows of the input. This gives frequency components of the signal as they change over time.

Parameters#

datarelay.Expr

Either a 1-D tensor or a 2-D batch tensor.

n_fftint

The size of Fourier transform.

hop_lengthint, optional

The distance between neighboring sliding window frames. If None, it is treated as equal to floor(n_fft / 4).

win_lengthint, optional

The size of the window frame and STFT filter. If None, it is treated as equal to n_fft.

windowrelay.Expr, optional

A 1-D tensor window frame. If None (default), it is treated as if it had 1 everywhere in the window.

normalizedbool, optional

Whether to return the normalized STFT results. Default value is False.

onesidedbool, optional

Whether to return onesided result or fill with conjugate symmetry. Default value is True.

Returns#

outputrelay.Expr

Tensor containing the STFT result with shape [batch, N, T, 2], where N is the number of frequencies where STFT is applied and T is the total number of frames used.

Examples#

data = [1, 2, 3, 4, 5, 6]
window = [4, 3, 2]
[n_fft, hop_length, win_length, normalized, onesided] = [3, 3, 3, False, True]

relay.stft(data, n_fft, hop_length, win_length, window, normalized, onesided)
-> [[[16.0000,  0.0000], [43.0000,  0.0000]], [[ -2.0000,  0.0000], [ 2.5000, -2.5981]]]
tvm.relay.strided_set(data, v, begin, end, strides=None)[源代码]

Strided set of an array.

Parameters#

datarelay.Expr

The source array to be sliced.

vrelay.Expr

The data to be set.

beginrelay.Expr, Tuple[int], or List[int]

The indices to begin with in the slicing.

endrelay.Expr, Tuple[int], or List[int]

Indices indicating end of the slice.

strides: relay.Expr, Tuple[int], or List[int], optional

Specifies the stride values. It can be negative. In that case, the input tensor will be reversed in that particular axis.

Returns#

retrelay.Expr

The computed result.

tvm.relay.strided_slice(data, begin, end, strides=None, axes=None, slice_mode='end')[源代码]

Strided slice of an array.

Parameters#

datarelay.Expr

The source array to be sliced.

beginrelay.Expr, Tuple[int], or List[int]

The indices to begin with in the slicing.

endrelay.Expr, Tuple[int], or List[int]

Indices indicating end of the slice.

stridesrelay.Expr, Tuple[int], or List[int], optional

Specifies the stride values. It can be negative. In that case, the input tensor will be reversed in that particular axis.

axesTuple[int] or List[int], optional

Axes along which slicing is applied. When it is specified, the length of begin, end, strides, and axes must be equal. Moreover, begin, end, strides, and axes must be static (cannot be relay.Expr). Axes argument for dynamic parameter slicing is not supported yet.

slice_modestr, optional

The slice mode [end, size]. end: The ending indices for the slice [default]. size: The input strides will be ignored. Input end in this mode indicates the size of a slice starting at the location specified by begin. If end[i] is -1, all remaining elements in that dimension are included in the slice.

Returns#

retrelay.Expr

The computed result.
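
For example (an editorial sketch, default interpreter executor):

import numpy as np
from tvm import relay

x = relay.const(np.arange(12, dtype="float32").reshape(3, 4))

out = relay.strided_slice(x, begin=[0, 1], end=[2, 4], strides=[1, 2])
# Rows 0..1 and columns 1, 3 are selected: expected [[1. 3.] [5. 7.]]
print(relay.create_executor().evaluate(out).numpy())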

tvm.relay.subtract(lhs, rhs)[源代码]

Subtraction with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.sum(data, axis=None, keepdims=False, exclude=False)[源代码]

Computes the sum of array elements over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

Returns#

resultrelay.Expr

The computed result.

tvm.relay.take(data, indices, axis=None, batch_dims=0, mode='clip')[源代码]

Take elements from an array along an axis.

Parameters#

datarelay.Expr

The source array.

indicesrelay.Expr

The indices of the values to extract.

axisint, optional

The axis over which to select values. By default, the flattened input array is used.

batch_dimsint, optional

The number of batch dimensions. By default is 0.

modestr, optional

Specifies how out-of-bound indices will behave [clip, wrap, fast]. clip: clip to the range (default). wrap: wrap around the indices. fast: no clip or wrap around (user must make sure indices are in-bound).

Returns#

retrelay.Expr

The computed result.
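
For example (an editorial sketch, default interpreter executor):

import numpy as np
from tvm import relay

data = relay.const(np.array([[1.0, 2.0], [3.0, 4.0]], dtype="float32"))
indices = relay.const(np.array([1, 0], dtype="int32"))

out = relay.take(data, indices, axis=1)
# Columns 1 and 0 are picked in that order: expected [[2. 1.] [4. 3.]]
print(relay.create_executor().evaluate(out).numpy())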

tvm.relay.tan(data)[源代码]

Compute elementwise tan of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.tanh(data)[源代码]

Compute element-wise tanh of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.tile(data, reps)[源代码]

Repeats the whole array multiple times.

Parameters#

datarelay.Expr

The input data to the operator.

repstuple of int or relay.Expr

The number of times repeating the tensor data.

Returns#

retrelay.Expr

The computed result.

Examples#

x = [[1, 2], [3, 4]]
relay.tile(x, reps=(2,3)) = [[1., 2., 1., 2., 1., 2.],
                             [3., 4., 3., 4., 3., 4.],
                             [1., 2., 1., 2., 1., 2.],
                             [3., 4., 3., 4., 3., 4.]]

relay.tile(x, reps=(2,)) = [[1., 2., 1., 2.],
                            [3., 4., 3., 4.]]

Notes#

Each dim size of reps must be a positive integer. If reps has length d, the result will have max(d, data.ndim) dimensions. If data.ndim < d, data is promoted to be d-dimensional by prepending new axes. If data.ndim >= d, reps is promoted to length data.ndim by prepending 1's to it.

tvm.relay.topk(data, k=1, axis=-1, ret_type='both', is_ascend=False, dtype='int32')[源代码]

Get the top k elements in an input tensor along the given axis.

ret_type specifies the return type, can be one of (“both”, “values”, “indices”).

Parameters#

datarelay.Expr

The input data tensor.

kint or relay.Expr, optional

Number of top elements to select. Return all elements if k < 1.

axisint, optional

Axis along which to sort the input tensor.

ret_type: str, optional

The return type [both, values, indices]. “both”: return both top k data and indices. “values”: return top k data only. “indices”: return top k indices only.

is_ascendboolean, optional

Whether to sort in ascending or descending order.

dtypestring, optional

The data type of the indices output.

Returns#

outrelay.Expr or List[relay.Expr]

The computed result.
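
For example (an editorial sketch, default interpreter executor); with ret_type="both" the result is a TupleWrapper holding the values and the indices:

import numpy as np
from tvm import relay

data = relay.const(np.array([[1.0, 3.0, 2.0], [5.0, 4.0, 6.0]], dtype="float32"))

out = relay.topk(data, k=2, axis=-1, ret_type="both")
ex = relay.create_executor()
# Expected values [[3. 2.] [6. 5.]] and indices [[1 2] [2 0]]
print(ex.evaluate(out[0]).numpy())
print(ex.evaluate(out[1]).numpy())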

tvm.relay.transpose(data, axes=None)[源代码]

Permutes the dimensions of an array.

Parameters#

datarelay.Expr

The input data to the operator.

axesNone or List[int]

The target axes order, reverse order if not specified.

Returns#

resultrelay.Expr

The transposed result.

tvm.relay.trilu(data, k, upper=True)[源代码]

Given a 2-D matrix or batches of 2-D matrices, returns the upper or lower triangular part of the tensor.

Parameters#

datarelay.Expr

The tensor that trilu will be applied to. Must be either a 2D matrix or a tensor of batches of 2D matrices.

kint

The number of diagonals above or below the main diagonal to exclude or include.

upper: bool, optional

If True, only upper triangular values of input are kept, if False, the lower triangular values are kept.

Returns#

retrelay.Expr

The new tensor with appropriate diagonals set to zero.

Examples#

x = [[0, 1, 2],
     [3, 4, 5],
     [6, 7, 8]]

relay.trilu(x, k=0, upper=True) =
    [[0, 1, 2],
     [0, 4, 5],
     [0, 0, 8]]
tvm.relay.trunc(data)[源代码]

Compute element-wise trunc of data.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.trunc_divide(lhs, rhs)[源代码]

Trunc division with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.trunc_mod(lhs, rhs)[源代码]

Trunc mod with numpy-style broadcasting.

Parameters#

lhsrelay.Expr

The left hand side input data

rhsrelay.Expr

The right hand side input data

Returns#

resultrelay.Expr

The computed result.

tvm.relay.unique(data, is_sorted=True, return_counts=False)[源代码]

Find the unique elements of a 1-D tensor. Note that output and counts are padded to the same length as data; elements with index >= num_unique[0] have undefined values.

Parameters#

datarelay.Expr

A 1-D tensor of integers.

is_sortedbool, optional

Whether to sort the unique elements in ascending order before returning as output.

return_countsbool, optional

Whether to return the count of each unique element.

Returns#

uniquerelay.Expr

A 1-D tensor containing the unique elements of the input data tensor.

indicesrelay.Expr

A 1-D tensor containing the indices of the first occurrence of each unique value in the input tensor.

inverse_indicesrelay.Expr

A 1-D tensor. For each entry in data, it contains the index of that data element in the unique array.

num_uniquerelay.Expr

A 1-D tensor with size=1 containing the number of unique elements in the input data tensor.

countsrelay.Expr, optional

A 1-D tensor containing the count of each unique element in the output.

Examples#

[output, indices, inverse_indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5],
                                                        False,
                                                        False)
output          =  [4, 5, 1, 2, 3, _, _, _]
indices         =  [0, 1, 2, 3, 4, _, _, _]
inverse_indices =  [0, 1, 2, 3, 4, 4, 0, 1]
num_unique      =  [5]

[output, indices, inverse_indices, num_unique, counts] = unique([4, 5, 1, 2, 3, 3, 4, 5],
                                                                False,
                                                                True)
output          =  [4, 5, 1, 2, 3, _, _, _]
indices         =  [0, 1, 2, 3, 4, _, _, _]
inverse_indices =  [0, 1, 2, 3, 4, 4, 0, 1]
num_unique      =  [5]
counts          =  [2, 2, 1, 1, 2, _, _, _]

[output, indices, inverse_indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], True)
output          =  [1, 2, 3, 4, 5, _, _, _]
indices         =  [2, 3, 4, 0, 1, _, _, _]
inverse_indices =  [3, 4, 0, 1, 2, 2, 3, 4]
num_unique      =  [5]
tvm.relay.unravel_index(indices, shape)[源代码]

Convert a flat index or array of flat indices into a tuple of coordinate arrays.

Parameters#

indicesrelay.Expr

An integer array containing indices.

shaperelay.Expr

The shape of the array.

Returns#

resultrelay.Expr

The tuple of coordinate arrays.

Examples#

relay.unravel_index([22, 41, 37], [7, 6]) =
    [[3, 6, 6],
     [4, 5, 1]]
tvm.relay.var(name_hint, type_annotation=None, shape=None, dtype='float32', span=None)[源代码]

Create a new tvm.relay.Var.

This is a simple wrapper function that allows specifying the shape and dtype directly.

Parameters#

name_hint: str

The name of the variable. This name only acts as a hint, and is not used for equality.

type_annotation: Optional[tvm.relay.Type, str]

The type annotation on the variable. When type_annotation is a str, we will create a scalar variable.

shape: Optional[List[tvm.Expr]]

The shape of the tensor type.

dtype: str, optional

The data type of the tensor.

span: Optional[tvm.relay.Span]

Span that points to original source code.

Examples#

# The following 4 lines are equivalent to each other
x = tvm.relay.Var("x", tvm.relay.TensorType([1, 2]))
x = tvm.relay.var("x", tvm.relay.TensorType([1, 2]))
x = tvm.relay.var("x", shape=[1, 2])
x = tvm.relay.var("x", shape=[1, 2], dtype="float32")

# The following 2 lines are equivalent to each other.
y = tvm.relay.var("x", "float32")
y = tvm.relay.var("x", shape=(), dtype="float32")
tvm.relay.variance(data, axis=None, keepdims=False, exclude=False, unbiased=False, with_mean=None)[源代码]

Computes the variance of data over given axes.

Parameters#

datarelay.Expr

The input data

axisNone or int or tuple of int

Axis or axes along which a variance operation is performed. The default, axis=None, will compute the variance of all elements in the input array. If axis is negative it counts from the last to the first axis.

keepdimsbool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

excludebool

If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

unbiasedbool

If this is set to True, the unbiased estimation will be used.

with_meanOptional[relay.Expr]

To compute variance given an already computed mean

Returns#

resultrelay.Expr

The computed result.

tvm.relay.where(condition, x, y)[源代码]

Select elements from either x or y, depending on the value of the condition.

Note

Shapes of condition, x, and y must be broadcastable to a common shape. Semantics follow numpy where function https://numpy.org/doc/stable/reference/generated/numpy.where.html

Parameters#

conditionrelay.Expr

Where True, yield x; otherwise, yield y.

xrelay.Expr

The first array or scalar to be selected.

yrelay.Expr

The second array or scalar to be selected.

Returns#

resultrelay.Expr

The selected array. The output shape is the broadcasted shape from condition, x, and y.

Examples#

x = [[1, 2], [3, 4]]
y = [[5, 6], [7, 8]]
condition = [[0, 1], [-1, 0]]
relay.where(condition, x, y) = [[5, 2], [3, 8]]

condition = [[1], [0]]
relay.where(condition, x, y) = [[1, 2], [7, 8]]
tvm.relay.zeros(shape, dtype)[源代码]

Fill array with zeros.

Parameters#

shapetuple of int or relay.Expr

The shape of the target.

dtypedata type

The data type of the target.

Returns#

resultrelay.Expr

The resulting tensor.

tvm.relay.zeros_like(data)[源代码]

Returns an array of zeros, with same type and shape as the input.

Parameters#

datarelay.Expr

The input data

Returns#

resultrelay.Expr

The computed result.