tvm.relax.frontend#

Frontends for constructing Relax programs, including the model importers.

tvm.relax.frontend.detach_params(mod)[source]#

Detach the attribute "params" in the functions of the input IRModule as a separate dictionary of params.

Parameters#

mod : tvm.IRModule

The IRModule whose functions' "params" attributes are going to be detached.

Returns#

detached_mod : tvm.IRModule

The IRModule after the detachment.

params_dict : Dict[str, List[tvm.nd.NDArray]]

The detached params. The dict keys correspond to the names of the functions in the input IRModule that have the attribute "params".

Parameters:

mod (IRModule)

Return type:

Tuple[IRModule, Dict[str, List[NDArray]]]
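
For illustration, a minimal usage sketch (assuming mod is an IRModule whose functions carry a "params" attribute, e.g. one produced by a model exporter):

from tvm.relax.frontend import detach_params

detached_mod, params_dict = detach_params(mod)
for func_name, params in params_dict.items():
    print(func_name, [p.shape for p in params])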

tvm.relax.frontend.nn#

A PyTorch-like API to build IRModules.

class tvm.relax.frontend.nn.Any(*args, **kwargs)[source]

Special type indicating an unconstrained type.

  • Any is compatible with every type.

  • Any is assumed to have all methods.

  • All values are assumed to be instances of Any.

Note that all the above statements are true from the point of view of static type checkers. At runtime, Any should not be used with instance checks.

class tvm.relax.frontend.nn.Conv1D(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, dtype=None)[source]

Module for conv1d layer.

Parameters:
  • in_channels (int)

  • out_channels (int)

  • kernel_size (int)

  • stride (int)

  • padding (int)

  • dilation (int)

  • groups (int)

  • bias (bool)

  • dtype (str | None)

forward(x)[source]

Forward method for conv1d layer.

Parameters#

x : Tensor

The input tensor.

Returns#

ret : Tensor

The output tensor for the conv1d layer.

Parameters:

x (Tensor)

Return type:

Tensor

class tvm.relax.frontend.nn.Conv2D(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, dtype=None, data_layout='NCHW')[source]

Module for conv2d layer.

forward(x)[source]

Forward method for conv2d layer.

Parameters#

x : Tensor

The input tensor.

Returns#

ret : Tensor

The output tensor for the conv2d layer.

Parameters:

x (Tensor)

Return type:

Tensor

class tvm.relax.frontend.nn.Conv3D(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, dtype=None, data_layout='NCDHW')[source]

Module for conv3d layer.

forward(x)[source]

Forward method for conv3d layer.

Parameters#

x : Tensor

The input tensor.

Returns#

ret : Tensor

The output tensor for the conv3d layer.

Parameters:

x (Tensor)

Return type:

Tensor

class tvm.relax.frontend.nn.ConvTranspose1D(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, dilation=1, groups=1, bias=True, dtype=None)[source]

Module for ConvTranspose1D layer.

Parameters:
  • in_channels (int)

  • out_channels (int)

  • kernel_size (int)

  • stride (int)

  • padding (int)

  • output_padding (int)

  • dilation (int)

  • groups (int)

  • bias (bool)

  • dtype (str | None)

forward(x)[source]

Forward method for conv transpose 1d layer.

Parameters#

x : Tensor

The input tensor.

Returns#

ret : Tensor

The output tensor for the conv transpose 1d layer.

Parameters:

x (Tensor)

Return type:

Tensor

class tvm.relax.frontend.nn.Effect[source]

Effect is a special non-user facing type that is used to represent operations with side effects, for example, print. It is used to represent the output of a computation.

create(name_hint)[source]

Create the implicit inputs to a relax.Function that represents the side effect.

Parameters:

name_hint (str)

Return type:

List[Var]

emit_init(name_hint, builder)[source]

Emit the initialization of the effect. This method is called by the compiler to initialize the effect.

Return type:

List[DataflowVar]

finalize()[source]

Finalize the effect as the implicit return value of a relax.Function.

Return type:

List[Var]

set_state(state_vars)[source]

Set the variables that represent the effect.

Parameters:

state_vars (List[Var])

Return type:

None

to(dtype=None)[source]

Convert the effect to a specific dtype. Usually it is a no-op for most of the effects.

Parameters:

dtype (str | None)

Return type:

None

class tvm.relax.frontend.nn.Embedding(num, dim, dtype=None)[source]

Module for embedding layer.

forward(x)[source]

Forward method for embedding layer.

Parameters#

x : Tensor

The input tensor.

Returns#

ret : Tensor

The output tensor for the embedding layer.

Parameters:

x (Tensor)

class tvm.relax.frontend.nn.ExternModule(symbols)[source]

The abstract base class for external modules. External modules are designed to help incorporate user-provided handcrafted kernels into the exported TVM IRModule.

Parameters:

symbols (Dict[str, Callable])

load()[source]

Loads the external module into a TVM runtime module.

Return type:

Module

class tvm.relax.frontend.nn.GELU[source]

Module for GELU activation layer.

class tvm.relax.frontend.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True, dtype=None)[source]

Module for group norm layer.

forward(x, channel_axis=1, axes=None)[source]

Forward method for group norm layer.

Parameters#

x : Tensor

The input tensor.

channel_axis : int

Channel axis of the input data.

axes : Optional[List[int]]

Optional list of axes to compute the norm over; if not specified, assumes that the first two axes should be left alone.

Returns#

ret : Tensor

The output tensor for the group norm layer.

Parameters:
  • x (Tensor)

  • channel_axis (int)

  • axes (List[int] | None)

class tvm.relax.frontend.nn.IOEffect[source]

Modeling IO side effect, for example, printing the content of NDArrays on screen, inserting debug breakpoints, etc.

create(name_hint)[source]

Create the implicit inputs to a relax.Function that represents the side effect.

Parameters:

name_hint (str)

Return type:

List[Var]

emit_init(name_hint, builder)[source]

Emit the initialization of the effect. This method is called by the compiler to initialize the effect.

Parameters:

builder (BlockBuilder)

Return type:

List[DataflowVar]

finalize()[source]

Finalize the effect as the implicit return value of a relax.Function.

Return type:

List[Var]

set_state(state_vars)[source]

Set the variables that represent the effect.

Parameters:

state_vars (List[Var])

Return type:

None

class tvm.relax.frontend.nn.KVCache(init_seq_len, unit_shape, dtype=None)[source]

Effect to implement KVCache.

append(new_element)[source]

Append a new element in KVCache.

Parameters#

new_element : Tensor

The new tensor to append.

Parameters:

new_element (Tensor)

Return type:

None

create(name_hint)[source]

Create the implicit inputs to a relax.Function that represents the KVCache effect.

Parameters#

name_hint : str

The name hint of the relax.Var.

Returns#

ret : List[relax.Var]

The relax.Var for KVCache.

Parameters:

name_hint (str)

Return type:

List[Var]

emit_init(name_hint, bb)[source]

Emit the initialization of the KVCache effect.

Parameters#

name_hint : str

The name hint of the initialization binding Var.

bb : relax.BlockBuilder

The relax BlockBuilder to emit.

finalize()[source]

Finalize the KVCache effect as the implicit return value of a relax.Function.

Returns#

ret : List[rx.Var]

The output relax.Var as KVCache.

Return type:

List[Var]

set_state(state_vars)[source]

Set the variables that represent the effect.

Parameters:

state_vars (List[Var])

Return type:

None

to(dtype=None)[source]

Convert the KVCache effect to a specific dtype.

Parameters#

dtype : Optional[str]

The target data type to convert.

Parameters:

dtype (str | None)

Return type:

None

view(seq_len)[source]

View the last elements in KVCache.

Parameters#

seq_len : tir.Var

The number of last elements to view.

Returns#

ret : Tensor

The last tensor to view.

Parameters:

seq_len (Var)

Return type:

Tensor

class tvm.relax.frontend.nn.LayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True, dtype=None)[source]

Module for Layer Normalization

Parameters:
  • normalized_shape (int)

  • eps (float | None)

  • elementwise_affine (bool)

  • dtype (str | None)

forward(x)[source]

Forward method for layer normalization layer.

Parameters#

x : Tensor

The input tensor.

Returns#

ret : Tensor

The output tensor for the layer normalization layer.

Parameters:

x (Tensor)

Return type:

Tensor

class tvm.relax.frontend.nn.Linear(in_features, out_features, bias=True, dtype=None, out_dtype=None)[source]

Module for linear layer.

Parameters:
  • in_features (int | str | PrimExpr)

  • out_features (int | str | PrimExpr)

  • bias (bool)

  • dtype (str | None)

  • out_dtype (str | None)

forward(x)[source]

Forward method for linear layer.

Parameters#

x : Tensor

The input tensor.

Returns#

ret : Tensor

The output tensor for the linear layer.

Parameters:

x (Tensor)

Return type:

Tensor

to(dtype=None)[source]

Override to() so that we do not convert bias if there is out_dtype. Otherwise, we might run into a dtype mismatch when computing x + self.bias, since x is of type out_dtype while bias becomes dtype, which may differ.

Parameters:

dtype (str | None)

Return type:

None

class tvm.relax.frontend.nn.Module[source]

Base class for neural network components. Subclass it to build your models. Modules can nest within each other in a tree structure using regular attribute assignment.
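
A minimal sketch of the intended usage, assuming a CPU build of TVM (nn.spec.Tensor describes each input's shape and dtype for export):

from tvm.relax.frontend import nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x: nn.Tensor):
        return self.fc2(self.relu(self.fc1(x)))

# Export to an IRModule plus the list of (name, Parameter) pairs.
mod, params = MLP().export_tvm(
    spec={"forward": {"x": nn.spec.Tensor((1, 784), "float32")}}
)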

__call__(*args, **kwargs)[source]

Call the module with the given inputs and returns the output.

Return type:

Any

export_tvm(spec, debug=False, allow_extern=False)[source]

Export the module to TVM IRModule and parameters

Parameters#

spec : _spec.ModuleSpecType

A dictionary mapping each input name to a specification that defines the input's shape and dtype.

debug : bool

If set to True, then the exported module will support effects. This enables things like printing in the graph.

Returns#

irmodule : tvm.ir.IRModule

The converted tvm IR representation of the model.

params : List[Tuple[str, Parameter]]

A list of Parameters corresponding to the weights of the model.

ext_mods : List[nn.ExternModule]

A list of ExternModules that are used in the model.

Parameters:
  • spec (_spec.ModuleSpecType)

  • debug (bool)

  • allow_extern (bool)

Return type:

Tuple[IRModule, List[Tuple[str, Parameter]]] | Tuple[IRModule, List[Tuple[str, Parameter]], List[ExternModule]]

jit(spec, device='cpu', pipeline='default_build', out_format='torch', debug=False)[source]

Just-in-time compilation of an nn.Module to an executable.

Parameters:
  • spec (_spec.ModuleSpec)

  • device (str | Device)

  • pipeline (None | str | Pass)

  • out_format (str)

  • debug (bool)

Return type:

Any
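
A hedged sketch, reusing the MLP module sketched above (the subscript access pattern on the returned object is an assumption; with out_format="torch" the compiled methods exchange torch tensors):

import torch

compiled = MLP().jit(
    spec={"forward": {"x": nn.spec.Tensor((1, 784), "float32")}},
    device="cpu",
)
y = compiled["forward"](torch.rand(1, 784))  # assumed access pattern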

load_state_dict(state_dict, strict=True)[source]

This function copies parameters and buffers from the state_dict into the current module and its descendants. If strict is set to True, the keys in the state_dict must exactly match the keys returned by the state_dict() function of this module.

Parameters#

state_dict : Dict[str, Parameter]

A dictionary containing a whole state of the module

strict : bool = True

Whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function.

Returns#

(missing_keys, unexpected_keys) : Tuple[List[str], List[str]]

A tuple of two lists: the missing keys and the unexpected keys.

Return type:

Tuple[List[str], List[str]]

named_parameters(prefix='')[source]

This method provides an iterator over module parameters, yielding both the parameter name and its corresponding value.

Parameters#

prefix : str

Prefix to prepend to all parameter names.

Yields#

(str, Parameter) - Tuple containing the name and parameter

Parameters:

prefix (str)

Return type:

Iterator[Tuple[str, Parameter]]

parameters()[source]

This method provides an iterator over module parameters, yielding only the Parameter value.

Yields#

Parameter - The module's parameter

Return type:

Iterator[Parameter]

state_dict(*, prefix='', destination=None)[source]

Returns a dictionary containing references to the whole state of the module.

Parameters#

prefix : str

Prefix to prepend to all parameter names.

destination : Optional[Dict[str, Parameter]]

Dictionary to which state will be saved. If None, a new dictionary is created.

Returns#

dict : Dict[str, Parameter]

A dictionary containing a whole state of the module.

Parameters:
  • prefix (str)

  • destination (Dict[str, Parameter] | None)

Return type:

Dict[str, Parameter]
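
A short round-trip sketch (reusing the MLP module sketched above, and assuming its parameters are bound to concrete values): copy every parameter from one instance into another.

src, dst = MLP(), MLP()
missing, unexpected = dst.load_state_dict(src.state_dict())
assert not missing and not unexpected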

to(dtype=None)[source]

Convert the module to a specific dtype recursively.

Parameters:

dtype (str | None)

Return type:

None

class tvm.relax.frontend.nn.ModuleList(modules)[source]

Holds submodules in a list.

Parameters:

modules (List[Module])
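
For example, a stack of identical layers; calling the list feeds the input through each submodule in order:

blocks = nn.ModuleList([nn.Linear(64, 64) for _ in range(4)])
# inside a Module's forward method:
#     y = blocks(x)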

append(module)[source]

Add a module to the end of the ModuleList

Parameters:

module (Module)

forward(x)[source]

Feed-forward pass of the module

to(dtype=None)[source]

Convert the module to a specific dtype recursively.

Parameters:

dtype (str | None)

Return type:

None

class tvm.relax.frontend.nn.Mutator[source]

The mutator for nn.Module transform. Users can override the visit_* methods to apply transforms to different structures, or even override the visit method to change the logic of traversal.

visit(name, node)[source]

The base dispatching method for visiting of all nodes.

Parameters#

name : str

The name of the current node in parent's attribute.

node : Any

The current node to visit.

Returns#

ret_node: Any

The new node to replace current node.

Return type:

Any

visit_effect(name, node)[source]

The base visiting method for mutation of nn.Effect nodes.

Parameters#

name : str

The name of the current node in parent's attribute.

node : nn.Effect

The current node of nn.Effect to mutate.

Returns#

ret_node: Any

The new node to replace current node.

Parameters:
  • name (str)

  • node (Effect)

Return type:

Any

visit_module(name, node)[source]

The base visiting method for mutation of nn.Module nodes.

Parameters#

name : str

The name of the current node in parent's attribute.

node : nn.Module

The current node of nn.Module to mutate.

Returns#

ret_node: Any

The new node to replace current node.

Parameters:
  • name (str)

  • node (Module)

Return type:

Any

visit_modulelist(name, node)[source]

The base visiting method for mutation of nn.ModuleList nodes.

Parameters#

name : str

The name of the current node in parent's attribute.

node : nn.ModuleList

The current node of nn.ModuleList to mutate.

Returns#

ret_node: Any

The new node to replace current node.

Parameters:
  • name (str)

  • node (ModuleList)

Return type:

Any

visit_param(name, node)[source]

The base visiting method for mutation of nn.Parameter nodes.

Parameters#

name : str

The name of the current node in parent's attribute.

node : nn.Parameter

The current node of nn.Parameter to mutate.

Returns#

ret_node: Any

The new node to replace current node.

Parameters:
  • name (str)

  • node (Parameter)

Return type:

Any
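
A minimal sketch of a custom mutator, built only from the methods documented above: cast every nn.Parameter it visits to float16 and rely on the default traversal for everything else.

class CastParams(nn.Mutator):
    def visit_param(self, name, node):
        node.to("float16")  # Parameter.to changes the dtype of an unbound parameter
        return node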

class tvm.relax.frontend.nn.Object(*, _expr, _name)[source]

A wrapper on top of relax.Expr whose struct_info is the base ObjectStructInfo (rather than any of its subclasses). Object effectively represents non-tensor frontend components such as KV caches.

__init__(*, _expr, _name)[source]

Private constructor. Object is never supposed to be constructed directly by users.

Parameters:
  • _expr (RelayExpr)

  • _name (str)

Return type:

None

class tvm.relax.frontend.nn.ObjectModule(symbols, filepath)[source]

A subclass of nn.ExternModule, which allows users to provide an object .o file to be linked into the compiled artifact.

load()[source]

Loads the external module into a TVM runtime module.

Return type:

Module

class tvm.relax.frontend.nn.Parameter(shape, dtype=None)[source]

A parameter represents the weight of a neural network layer. It is a special tensor which could be bound or not bound to concrete values. If a parameter is bound to a concrete value, it is called a bound parameter, otherwise it is called an unbound parameter.

__init__(shape, dtype=None)[source]

Create a parameter with given shape and dtype. The parameter is not bound to any concrete values.

Parameters#

shape : Sequence[Union[int, str, tir.PrimExpr]]

The shape of the parameter. If it is a string name, we create a symbolic shape tvm.tir.Var(name, "int64").

dtype : Optional[str]

The data type of the parameter. If not specified, the default dtype will be used.

Return type:

None

to(dtype=None)[source]

Change the dtype of the parameter if it is not bound to any concrete data

Parameters:

dtype (str | None)

Return type:

None

property data: NDArray | None

Returns the concrete value of the parameter if it is bound to a concrete value, otherwise returns None. The returned value is a tvm.runtime.NDArray.
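
For example, a parameter with one symbolic dimension, created unbound:

w = nn.Parameter((128, "hidden"), dtype="float32")  # "hidden" becomes tvm.tir.Var("hidden", "int64")
assert w.data is None  # not bound to a concrete value yet
w.to("float16")        # change the dtype while unbound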

class tvm.relax.frontend.nn.RMSNorm(hidden_size, axes, epsilon=1e-05, bias=True, dtype=None)[source]

Module for rms norm layer.

forward(x)[source]

Forward method for rms norm layer.

Parameters#

x : Tensor

The input tensor.

Returns#

ret : Tensor

The output tensor for the rms norm layer.

Parameters:

x (Tensor)

class tvm.relax.frontend.nn.ReLU[source]

Module for ReLU activation layer.

class tvm.relax.frontend.nn.SiLU[source]

Module for SiLU activation layer.

class tvm.relax.frontend.nn.SourceModule(symbols, source_code, source_format, compile_options=None, compiler=None, output_format='obj')[source]

A subclass of nn.ExternModule. It compiles C++/CUDA source code and links it into the eventual IRModule.

Shape/dtype inference. The nn.ExternModule system requires users to provide additional information to work, namely, symbols. It is a dictionary that maps each symbol in the external object file to its shape/dtype inference function. Consider a case where the function my_func accepts two tensors, a of shape (x, y, 1) and b of shape (y, z, 5), and produces a tensor c of shape (x, y, z, 9); the shape/dtype inference function should look like:

def shape_dtype_inference(a, b):
    x, y, _ = a.shape
    _, z, _ = b.shape
    return nn.Tensor.placeholder((x, y, z, 9), dtype="float32")

and the symbols dictionary should be provided as:

symbols={
    "my_func": shape_dtype_inference,
}

Calling convention. All external modules now follow the "destination-passing-style" (DPS) calling convention, which means the returned tensors are pre-allocated by the system and passed in as arguments of the external function.

Reusing the example above, the implementation of my_func should include three parameters in its signature, where tensors are represented using DLTensor from DLPack, the de facto standard for in-memory representation of tensors. More details: dmlc/dlpack.

To expose the symbol, TVM_DLL_EXPORT_TYPED_FUNC(symbol, function) is guaranteed available:

// those headers are guaranteed to be available
#include <dlpack/dlpack.h>
#include <tvm/runtime/data_type.h>
#include <tvm/runtime/packed_func.h>

namespace {
// anonymous namespace hides the symbol `_my_func_impl` from other translation units
int _my_func_impl(DLTensor* a, DLTensor* b, DLTensor* c) {
    // `a` and `b` are inputs, and `c` is the output
}
}
// expose symbol `my_func` instead of `_my_func_impl`
TVM_DLL_EXPORT_TYPED_FUNC(my_func, _my_func_impl);

A compiler pass AttachExternModules. It is introduced to attach a list of nn.ExternModules to an IRModule at any stage of the compilation pipeline, attaching the compiled external modules as runtime.Modules in the IRModule's external_mods attribute. It is required for linking in relax.build, but with the existence of this pass, source compilation can be deferred to an arbitrary stage of TVM compilation.

Caveats. It is required to call nn.add_extern to register external modules exactly once during export_tvm. Each symbol should be registered exactly once to avoid potential conflicts; otherwise an error will be raised.
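
Putting the pieces together, a hedged construction sketch (cpp_source is assumed to hold the C++ source shown above; the symbol names follow the same example):

ext_mod = nn.SourceModule(
    symbols={"my_func": shape_dtype_inference},
    source_code=cpp_source,
    source_format="cpp",
)
nn.add_extern(ext_mod)  # register exactly once during export_tvm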

__init__(symbols, source_code, source_format, compile_options=None, compiler=None, output_format='obj')[source]

Constructs a nn.SourceModule from source code.

Parameters#

symbols : Dict[str, Callable]

The dictionary that maps each symbol in the external object file to its shape/dtype inference function.

source_code : Union[str, Path]

Source code or path to the source code to be compiled.

source_format : str

The source code format. It can be either "cpp" or "cu".

compile_options : Optional[List[str]]

The compile options. If not provided, the default compile options will be used.

compiler : Optional[str]

The compiler. If not provided, the default compiler will be used. On Windows, compilation requires clang by default.

output_format : str

The output format. It can be either "obj" or "wasm". "obj" is the default format, which is a shared object file. "wasm" is the WebAssembly format, which is a binary file.

compile(output_path)[source]

Compiles the source code in a provided directory and returns the compiled artifact.

Parameters:

output_path (Path)

Return type:

None

static get_compile_options(source_format, tvm_pkg=None)[source]

Returns the default compile options depending on source_format, including the default include paths w.r.t. tvm_home() and the default flags to configure DMLC-Core; by default, it uses "-O3" and "-std=c++17".

Parameters#

source_format : str

The source code format. It can be either "cpp" or "cu".

tvm_pkg : Optional[List[str]]

The list of packages to be included under tvm_home/3rdparty. Each element should be a relative path to tvm_home/3rdparty.

Returns#

compile_options : List[str]

The list of compilation flags.

Return type:

List[str]

static get_includes(tvm_pkg=None)[source]

Returns the default include paths according to tvm_home(). By default, it includes TVM, DLPack, and DMLC-Core. With tvm_pkg provided, it also includes the specified package under tvm_home/3rdparty.

Parameters#

tvm_pkg : Optional[List[str]]

The list of packages to be included under tvm_home/3rdparty. Each element should be a relative path to tvm_home/3rdparty.

Returns#

includes : List[pathlib.Path]

The list of include paths.

Parameters:

tvm_pkg (List[str] | None)

Return type:

List[Path]

load()[source]

Loads the external module into a TVM runtime module.

Return type:

Module

static tvm_home()[source]

Find TVM's home directory. If the TVM_HOME environment variable is set, use it. Otherwise, use the directory where the tvm Python package is installed. As a sanity check, it is required to have include and 3rdparty as direct subdirectories.

Returns#

tvm_home : pathlib.Path

The TVM home directory, and it is guaranteed to have include and 3rdparty as direct subdirectories.

Return type:

Path

class tvm.relax.frontend.nn.SubroutineMixin[source]

A mixin that contains common logic for tvm.relax.frontend.nn.Module and tvm.relax.testing.nn.Module.

classmethod __init_subclass__()[source]

Update the cls.forward of subclasses

class tvm.relax.frontend.nn.Tensor(*, _expr)[source]

A wrapper on top of relax.Expr whose struct_info is a TensorStructInfo, providing more convenient access to shape and dtype information. Tensor is always symbolic and not bound to any concrete values. Shape and dtype inference is done eagerly upon tensor creation, i.e. when operators are applied on tensors, the shape and dtype information is already available.

Parameters:

_expr (RelayExpr)

__init__(*, _expr)[source]

Private constructor. Tensor is never supposed to be constructed directly by users.

Parameters:

_expr (RelayExpr)

Return type:

None

static from_const(data)[source]

Construct a tensor from numpy constants.

Return type:

Tensor

static from_scalar(data, dtype)[source]

Construct a tensor from a scalar with dtype specified.

Return type:

Tensor

static from_struct_info(struct_info, name='tensor')[source]

Construct a nn.Tensor from relax TensorStructInfo

Return type:

Tensor

static placeholder(shape, dtype, name='tensor')[source]

Create a placeholder tensor with given shape and dtype. A placeholder tensor should never be created directly by users in usual cases, and the only exception is to indicate the shape/dtype of return values of an external function.

If shape is a string name, we create a symbolic shape tvm.tir.Var(name, "int64").

Return type:

Tensor

property dtype: str

Returns the data type of the tensor.

Returns#

dtype : str

The data type of the tensor

property ndim: int

Returns the number of dimensions of the tensor.

Returns#

ndim : int

The number of dimensions of the tensor

property shape: List[int | PrimExpr]

Returns the shape of the tensor as a list of integers.

An integer can be a python int or tvm.tir.PrimExpr, depending on whether the shape is fully static, for example, [1, 2, tvm.tir.Var("n")] is a valid shape where the last dimension is dynamic while the first two dimensions are always static constants.

Returns#

shape : List[Union[int, tir.PrimExpr]]

The shape of the tensor
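
For example, a placeholder with one symbolic dimension:

t = nn.Tensor.placeholder((1, "seq_len", 64), dtype="float32")
t.ndim   # 3
t.dtype  # "float32"
t.shape  # [1, seq_len, 64], where seq_len is a tvm.tir.Var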

class tvm.relax.frontend.nn.TypeVar(name, *constraints, bound=None, covariant=False, contravariant=False)[source]

Type variable.

Usage:

T = TypeVar('T')  # Can be anything
A = TypeVar('A', str, bytes)  # Must be str or bytes

Type variables exist primarily for the benefit of static type checkers. They serve as the parameters for generic types as well as for generic function definitions. See class Generic for more information on generic types. Generic functions work as follows:

def repeat(x: T, n: int) -> List[T]:
    '''Return a list containing n references to x.'''
    return [x]*n

def longest(x: A, y: A) -> A:
    '''Return the longest of two strings.'''
    return x if len(x) >= len(y) else y

The latter example's signature is essentially the overloading of (str, str) -> str and (bytes, bytes) -> bytes. Also note that if the arguments are instances of some subclass of str, the return type is still plain str.

At runtime, isinstance(x, T) and issubclass(C, T) will raise TypeError.

Type variables defined with covariant=True or contravariant=True can be used to declare covariant or contravariant generic types. See PEP 484 for more details. By default generic types are invariant in all type variables.

Type variables can be introspected. e.g.:

T.__name__ == 'T'
T.__constraints__ == ()
T.__covariant__ == False
T.__contravariant__ == False
A.__constraints__ == (str, bytes)

Note that only type variables defined in global scope can be pickled.

tvm.relax.frontend.nn.add(a, b, name='add')[source]

Addition with numpy-style broadcasting.

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Examples#

c = add(a, b)
Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.add_extern(mod)[source]

Add an external module to the exporter.

Parameters:

mod (ExternModule)

Return type:

None

tvm.relax.frontend.nn.argsort(data, axis=-1, descending=False, dtype='int32', name='argsort')[source]

Performs sorting along the given axis and returns an array of indices having the same shape as the input array that index data in sorted order.

Parameters#

data : Tensor

The input data tensor.

axis : int

Axis along which to sort the input tensor.

descending : bool

Whether to sort in descending order; the default is False.

dtype : str

The data type of the output indices.

name : str

Name hint.

Returns#

out : Tensor

The indices of the sorted tensor.

Parameters:
  • data (Tensor)

  • axis (int)

  • descending (bool)

  • dtype (str)

tvm.relax.frontend.nn.astype(x, dtype, name='astype')[source]

Cast input tensor to the given data type.

Parameters#

x : Tensor

The input data to the operator.

dtype: str

The target data type

name : str

Name hint.

Returns#

result : Tensor

The casted result.

Parameters:
  • x (Tensor)

  • dtype (str)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.broadcast_to(x, shape, name='broadcast_to')[source]

Broadcasts a tensor to a specified shape.

Parameters#

x : Tensor

The input data to the operator.

shape : Sequence[IntExpr]

The target shape.

name : str

Name hint.

Returns#

result : Tensor

The broadcasted tensor.

Return type:

Tensor

tvm.relax.frontend.nn.ccl_allgather(x, num_workers, name='ccl_allgather')[source]

CCL Allgather operator

Parameters#

x : relax.Expr

The input tensor.

num_workers : int

Number of workers.

name : str

Name hint for this operation.

Returns#

result : Tensor

The result tensor of allgather.

Parameters:
  • x (Tensor)

  • num_workers (int)

tvm.relax.frontend.nn.ccl_allreduce(x, op_type='sum', in_group=True, name='ccl_allreduce')[source]

CCL Allreduce operator

Parameters#

x : relax.Expr

The input tensor.

op_type : str

The type of reduction operation to be applied to the input data. Now "sum", "prod", "min", "max" and "avg" are supported.

in_group : bool

Whether the reduction operation is performed within each group (the default) or globally.

name : str

Name hint for this operation.

Returns#

result : Tensor

The result tensor of allreduce.

Parameters:
  • x (Tensor)

  • op_type (str)

  • in_group (bool)

tvm.relax.frontend.nn.ccl_broadcast_from_worker0(x, name='broadcast_from_worker')[source]

Broadcast data from worker-0 to all other workers.

Parameters#

x : Tensor

The tensor to be broadcast.

name : str

Name hint for this operation.

Returns#

result : Tensor

The same tensor, which has been broadcast to all other workers.

Parameters:

x (Tensor)

tvm.relax.frontend.nn.chunk(x, chunks, dim=0, name='chunk')[source]

Split a tensor along dim into the specified number of chunks.

Parameters#

x : Tensor

Input tensor to be split.

chunks : int

Number of pieces to slice x into.

dim : int

Which dimension to split x.

name : str

Name hint for this operation.

Returns#

result : Tuple[Tensor]

A tuple with chunks elements containing slices of x.

Parameters:
  • x (Tensor)

  • chunks (int)

  • dim (int)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.concat(x, dim, name='concat')[source]

Concatenate a list of tensors along an axis.

Parameters#

x : List[Tensor]

List of tensors to concatenate.

dim : int

Dimension to concatenate upon.

name : str

Name hint for this operator.

Returns#

result : Tensor

The concatenated result.

Return type:

Tensor

tvm.relax.frontend.nn.conv1d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, name='conv1d')[source]

1D convolution.

This operator takes the weight as the 1D convolution kernel and convolves it with data to produce an output.

In the default case, where the data_layout is NCW and kernel_layout is OIW, conv1d takes in a data Tensor with shape (batch_size, in_channels, width), and a weight Tensor with shape (channels, in_channels, kernel_w), where kernel_w is the length of the W kernel dimension, to produce an output Tensor with the following rule:

\[\mbox{out}[b, c, x] = \sum_{dx, k} \mbox{data}[b, k, \mbox{strides} * x + dx] * \mbox{weight}[c, k, dx]\]

Padding and dilation are applied to data and weight respectively before the computation. This operator accepts data layout specification. Semantically, the operator will convert the layout to the canonical layout (NCW for data and OIW for weight), perform the computation, then convert to the out_layout.

Parameters#

x : Tensor

The input data to the operator.

weight : Tensor

The weight expressions.

bias : Optional[Tensor]

Optional bias tensor of shape [O].

strides : Optional[Union[int, Tuple]]

The strides of convolution. It is required to have length 1.

padding : Optional[Union[int, Tuple, str]]

The padding of convolution on both sides of inputs before convolution. It is required to have length either 1 or 2.

dilation : Optional[Union[int, Tuple]]

Specifies the dilation rate to be used for dilated convolution. It is required to have length 1.

groups : Optional[int]

Number of groups to split the input into for grouped convolution. The number of input and output channels should be divisible by the number of groups.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • x (Tensor)

  • weight (Tensor)

  • bias (Tensor | None)

  • stride (int | Tuple | None)

  • padding (int | Tuple | str | None)

  • dilation (int | Tuple | None)

  • groups (int | None)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.conv1d_transpose(x, weight, bias=None, stride=1, padding=0, output_padding=0, dilation=1, groups=1, name='conv1d_transpose')[source]

1D transposed convolution operator.

This operator can be seen as the gradient operator of conv1d.

The output shape can be explained in the simple case when data_layout == "NCW" and kernel_layout == "IOW". Suppose data has shape (N, in_channel, in_w) and weight has shape (in_channel, out_channel, weight_w); we need to ensure that in_channel % groups == 0. The shape of the output will be (N, out_channel * groups, out_w), where

  • out_w = ((in_w - 1) * strides[0] + weight_w - 2 * padding[0] + output_padding[0])

Parameters#

data : Tensor

The input data to the operator.

weight : Tensor

The weight tensor.

strides : Union[int, Tuple[int]]

The strides of convolution. It is required to have length 1.

padding : Union[int, Tuple[int, ...]]

The padding of convolution on both sides of inputs before convolution. It is required to have length either 1 or 2.

output_padding : Union[int, Tuple[int, ...]], optional

Used to disambiguate the output shape.

dilation : Union[int, Tuple[int]]

Specifies the dilation rate to be used for dilated convolution. It is required to have length 1.

groups : int

Number of groups to split the input into for grouped convolution. The number of input and output channels should be divisible by the number of groups.

data_layout : str

Layout of the input.

kernel_layout : str

Layout of the weight.

out_layout : Optional[str]

Layout of the output. If not specified, it is the same as data_layout

out_dtype : Optional[Union[str, DataType]]

Specifies the output data type for mixed precision conv2d.

Returns#

result : Tensor

The computed result.

Return type:

Tensor

tvm.relax.frontend.nn.conv2d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, data_layout='NCHW', name='conv2d')[source]

Applies a 2D convolution over an input image composed of several input planes.

Parameters#

x : Tensor

Input tensor of shape [B, N, H, W]

weight : Tensor

Filters of shape [O, N/groups, kH, kW]

bias : Optional[Tensor]

Optional bias tensor of shape [O].

stride : Optional[Union[int, Tuple]]

The stride of the convolving kernel. Can be a single number or tuple of (sH, sW).

padding : Optional[Union[int, Tuple]]

Implicit paddings on both sides of the input.

dilation : Optional[Union[int, Tuple]]

The spacing between kernel elements. Can be a single number or a tuple (dH, dW).

groups : Optional[int]

Split input into a number of groups.

data_layout : Optional[str]

Layout of input and output data.

name : str

Name hint.

Returns#

result : Tensor

The computed result with shape [B, O, oH, oW].

Parameters:
  • x (Tensor)

  • weight (Tensor)

  • bias (Tensor | None)

  • stride (int | Tuple | None)

  • padding (int | Tuple | str | None)

  • dilation (int | Tuple | None)

  • groups (int | None)

  • data_layout (str | None)

  • name (str)

Return type:

Tensor
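
A shape sketch of usage inside a Module's forward method (all sizes hypothetical):

# x: [1, 3, 32, 32], w: [16, 3, 3, 3], b: [16]
y = nn.conv2d(x, w, bias=b, stride=1, padding=1)
# y: [1, 16, 32, 32]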

tvm.relax.frontend.nn.conv3d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, data_layout='NCDHW', name='conv3d')[source]

Applies a 3D convolution over an input image composed of several input planes.

Parameters#

x : Tensor

Input tensor of shape [B, N, D, H, W]

weight : Tensor

Filters of shape [O, N/groups, kD, kH, kW]

bias : Optional[Tensor]

Optional bias tensor of shape [O].

stride : Optional[Union[int, Tuple]]

The stride of the convolving kernel. Can be a single number or tuple of (sD, sH, sW).

padding : Optional[Union[int, Tuple]]

Implicit paddings on both sides of the input.

dilation : Optional[Union[int, Tuple]]

The spacing between kernel elements. Can be a single number or a tuple (dD, dH, dW).

groups : Optional[int]

Split input into a number of groups.

data_layout : Optional[str]

Optional layout of the input and output data.

name : str

Name hint.

Returns#

result : Tensor

The computed result with shape [B, O, oD, oH, oW].

Parameters:
  • x (Tensor)

  • weight (Tensor)

  • bias (Tensor | None)

  • stride (int | Tuple | None)

  • padding (int | Tuple | str | None)

  • dilation (int | Tuple | None)

  • groups (int | None)

  • data_layout (str | None)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.cumsum(data, axis=None, dtype=None, exclusive=None, name='cumsum')[source]

Numpy style cumsum op. Return the cumulative inclusive sum of the elements along a given axis.

Parameters#

data : Tensor

The input data to the operator.

axis : Optional[int]

Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

dtype : Optional[str]

Type of the returned array and of the accumulator in which the elements are summed. If dtype is not specified, it defaults to the dtype of data.

exclusive : Optional[bool]

If true, will return the exclusive sum in which the first element is not included.

name : str

Name hint.

Returns#

result : Tensor

The result has the same size as data, and the same shape as data if axis is not None. If axis is None, the result is a 1-d array.

Examples#

a = [[1, 2, 3], [4, 5, 6]]

cumsum(a)  # if axis is not provided, cumsum is done over the flattened input.
-> [ 1,  3,  6, 10, 15, 21]

cumsum(a, dtype="float32")
-> [  1.,   3.,   6.,  10.,  15.,  21.]

cumsum(a, axis=0)  # sum over rows for each of the 3 columns
-> [[1, 2, 3],
    [5, 7, 9]]

cumsum(a, axis=1)
-> [[ 1,  3,  6],
    [ 4,  9, 15]]

a = [1, 0, 1, 0, 1, 1, 0]  # a is a boolean array
cumsum(a, dtype=int32)  # dtype should be provided to get the expected results
-> [1, 1, 2, 2, 3, 4, 4]
Parameters:
  • data (Tensor)

  • axis (int | None)

  • dtype (str | None)

  • exclusive (bool | None)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.debug_func(name, *args, _line_info=None)[source]

Call a debug function during runtime. The debug function must be registered with the following type signature:

@tvm.register_func(name_of_debug_func)
def debug_func(lineno: str, arg_0, arg_1, ...) -> None:
    ...

Parameters#

name : str

The name of the debug function to call.

*args : Union[Tensor, _tir.PrimExpr, int, float, str]

The arguments to pass to the debug function.
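
A hedged usage sketch (the registered name "my_debug" is hypothetical; debug effects only take effect when the module is exported with debug=True):

import tvm

@tvm.register_func("my_debug")
def _my_debug(lineno, tensor):
    print(lineno, tensor.shape)

# inside a Module's forward method:
#     nn.debug_func("my_debug", x)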

tvm.relax.frontend.nn.divide(a, b, name='divide')[source]

Division with numpy-style broadcasting.

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Examples#

c = divide(a, b)
Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.empty(shape, dtype='float32', name='empty')[source]

Construct an uninitialized tensor, with the input shape and dtype.

Parameters#

shape : Sequence[IntExpr]

The shape of the created tensor.

dtype : str

The data type of the created tensor.

name : str

Name hint.

Returns#

result : Tensor

The result tensor.

Return type:

Tensor

tvm.relax.frontend.nn.equal(a, b, name='equal')[source]

Broadcasted element-wise comparison for (lhs == rhs).

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.exp(x, name='exp')[source]

Applies the exponential function.

\[\text{Exp}(x) = e^x\]

Parameters#

x : Tensor

The input data to the operator.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Note#

The input tensor is required to have float dtype

Parameters:
  • x (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.extern(name, args, out)[source]

Invoke an extern function during runtime. The extern function must be registered with the TVM runtime using TVM_REGISTER_GLOBAL (C++) or tvm.register_func (Python).

Parameters#

name : str

The name of the extern function to call.

args : Sequence[Union[Tensor, _tir.PrimExpr, int, float, str]]

The arguments to pass to the extern function.

out : Union[Tensor, List[Tensor]]

The output tensors.

Returns#

result : Tensor

The result

Return type:

OutType

tvm.relax.frontend.nn.full(shape, fill_value, dtype='float32', name='full')[source]

Fill array with scalar value.

Parameters#

shape : Sequence[IntExpr]

The shape of the created tensor.

fill_value : Tensor

The value to fill. Must be a scalar tensor.

dtype : str

The data type of the created tensor. If dtype is not given, it will by default use the dtype of fill_value.

name : str

Name hint.

Returns#

result : Tensor

The result tensor.

Return type:

Tensor

tvm.relax.frontend.nn.gelu(x, approximate=None, name='gelu')[source]

Applies the Gaussian Error Linear Units function

\[\text{GeLU}(x) = 0.5 \cdot x \cdot \left(1 + \text{erf}\left(\frac{x}{\sqrt{2}}\right)\right)\]

where \(erf\) is the Gauss Error function.

Parameters#

x : Tensor

The input data

approximate : Optional[str]

If set to tanh, use an approximation when calculating CDF.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Note#

The input tensor is required to have float dtype

Parameters:
  • x (Tensor)

  • approximate (str | None)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.get_default_dtype()[source]

Get the default parameter dtype if not specified. By default it is float32.

Returns#

dtype : str

The default dtype

Return type:

str

tvm.relax.frontend.nn.get_timestep_embedding(x, embedding_dim, flip_sin_to_cos=False, downscale_freq_shift=1, scale=1, max_period=10000, name='get_timestep_embedding')[source]

Timestep calculation as described in Denoising Diffusion Probabilistic Models.

Parameters#

x : Tensor

A 1-D Tensor of N indices.

embedding_dim : int

The dimension of the output.

flip_sin_to_cos : bool

If True, change the order of sine and cosine embeddings.

downscale_freq_shift : float

Adjusts the frequency of the sinusoidal sampling.

scale : float

Weight adjustment for embedding magnitude.

max_period : int

Controls the minimum frequency of the embeddings.

name : str

The name to label this operator with.

Returns#

result : Tensor

[N x dim] Tensor of positional embeddings.

Parameters:
  • x (Tensor)

  • embedding_dim (int)

  • flip_sin_to_cos (bool)

  • downscale_freq_shift (float)

  • scale (float)

  • max_period (int)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.greater(a, b, name='greater')[source]

Broadcasted element-wise comparison for (lhs > rhs).

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.greater_equal(a, b, name='greater_equal')[source]

Broadcasted element-wise comparison for (lhs >= rhs).

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.group_norm(x, num_groups, weight, bias, eps=1e-05, channel_axis=1, axes=None, name='group_norm')[source]

Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization

\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

Parameters#

x : Tensor

Input to which group_norm will be applied.

num_groups : int

Number of groups to separate the channels into.

weight : Tensor

The gamma scale factor.

bias : Tensor

The beta offset factor.

epsilon : float

Small float added to square mean to avoid dividing by zero.

channel_axis: int

The channel axis of the data.

axes : Optional[List[int]]

Which axes to compute the groupnorm over. If None, assumes the first two axes should be ignored.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • x (Tensor)

  • num_groups (int)

  • weight (Tensor | None)

  • bias (Tensor | None)

  • eps (float)

  • channel_axis (int)

  • axes (List[int] | None)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.interpolate(x, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None, antialias=None, data_layout='NCHW', name='interpolate')[source]

Resize a tensor using the specified mode.

Parameters#

x : Tensor

Input tensor to be resized.

size : Optional[Union[int, Tuple[int]]]

Requested output size, only one of size and scale_factor may be specified.

scale_factor : Optional[Union[float, Tuple[float]]]

Multiplier for spatial size.

mode : str

Algorithm used for sampling.

align_corners : Optional[bool]

How to map pixels before and after sampling.

recompute_scale_factor : Optional[bool]

Recompute the scale_factor for use in interpolation.

antialias : Optional[bool]

Apply antialiasing to output.

data_layout : Optional[str]

Layout of the input and output data.

name : str

Name hint for this operation.

Returns#

result : Tensor

Output tensor with requested shape.

tvm.relax.frontend.nn.layer_norm(x, normalized_shape, weight=None, bias=None, eps=1e-05, name='layer_norm')[source]

Layer normalization (Lei Ba et al., 2016). Applies layer normalization to the n-dimensional input array. This operator takes an n-dimensional input array and normalizes the input using the given axis:

\[out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis)+\epsilon}} * gamma + beta\]

Unlike batch normalization, the mean and var are computed along the channel dimension.

Assume the input has size k on axis 1, then both gamma and beta have shape (k,).

Note

This operator can be optimized away for inference.

Parameters#

x : Tensor

Input to which layer_norm will be applied.

normalized_shape: Union[int, List[int]]

The shape of axes to normalize. If a single integer is used, it is treated as a singleton list and this module will normalize over the last dimension.

weight: Tensor

The gamma scale factor.

bias: Tensor

The beta offset factor.

eps: float

Small float added to variance to avoid dividing by zero.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • x (Tensor)

  • normalized_shape (int | List[int])

  • weight (Tensor | None)

  • bias (Tensor | None)

  • eps (float)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.less(a, b, name='less')[source]

Broadcasted element-wise comparison for (lhs < rhs).

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.less_equal(a, b, name='less_equal')[source]

Broadcasted element-wise comparison for (lhs <= rhs).

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.matmul(a, b, out_dtype=None, name='matmul')[source]

General matrix multiplication of two tensors, with broadcasting on batched dimensions.

The semantics and output shape deduction rule are specified in https://data-apis.org/array-api/latest/API_specification/generated/array_api.matmul.html.

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

out_dtype: Optional[Union[str, DataType]]

The data type of the matmul result. When it is not specified, the output dtype will be the same as input dtype.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Examples#

c = matmul(a, b)
Parameters:
  • a (Tensor)

  • b (Tensor)

  • out_dtype (str | None)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.maximum(x1, x2, name='maximum')[source]

Element-wise maximum

Parameters#

x1 : Tensor

The first input tensor.

x2 : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Examples#

c = maximum(a, b)
Parameters:
  • x1 (Tensor)

  • x2 (Tensor)

  • name (str)

tvm.relax.frontend.nn.minimum(x1, x2, name='minimum')[source]

Element-wise minimum

Parameters#

x1 : Tensor

The first input tensor.

x2 : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Examples#

c = minimum(a, b)
Parameters:
  • x1 (Tensor)

  • x2 (Tensor)

  • name (str)

tvm.relax.frontend.nn.multinomial_from_uniform(prob, uniform_sample, sample_indices=None, dtype='int64', name='multinomial_from_uniform')[source]

Returns a tensor where each row contains the index sampled from the multinomial probability distribution located in the corresponding row of tensor prob.

Notes#

For better cpu performance, use 'vm.builtin.multinomial_from_uniform'. For accurate results, ensure probabilities are between 0 and 1 and sum to 1.

Parameters#

prob : Tensor

A 2-D tensor of shape (batch, vocab_size) representing probability distributions. Each row is a distribution across vocabulary for a batch, where: Values range from [0, 1], indicating the probability of each vocabulary item. The sum of values in each row is 1, forming a valid distribution.

uniform_sample : Tensor

The uniformly sampled 2-D tensor with the shape (n, 1). Values range from 0 to 1, indicating probabilities sampled uniformly.

sample_indices : Optional[Tensor]

The 2-D tensor with the shape [n, 1], which indicates the specific probability distribution to sample from. The value of sample_indices[i] determines that the ith token should be sampled from the sample_indices[i]th probability distribution. For instance, if there are 3 distinct probability distributions and the requirement is to sample 2, 3, and 4 tokens from each, then sample_indices would be [0, 0, 1, 1, 1, 2, 2, 2, 2].

dtype : str

The data type of output tensor.

Returns#

result : Tensor

The computed tensor with shape (n, 1).

Examples#

prob = [[0.2, 0.3, 0.5], [0.3, 0.4, 0.3]]
usample = [[0.4], [0.9]]
sample_indices = [[0], [1]]

multinomial_from_uniform(prob, usample)
-> [[1], [2]]
multinomial_from_uniform(prob, usample, sample_indices)
-> [[1], [2]]
Parameters:
  • prob (Tensor)

  • uniform_sample (Tensor)

  • sample_indices (Tensor | None)

  • dtype (str)

  • name (str)

tvm.relax.frontend.nn.multiply(a, b, name='mul')[source]

Multiplication with numpy-style broadcasting.

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Examples#

c = multiply(a, b)
Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.negative(x, name='neg')[source]

Numerical negative of the input tensor.

Parameters#

x : Tensor

The input data to the operator.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • x (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.not_equal(a, b, name='not_equal')[source]

Broadcasted element-wise comparison for (lhs != rhs).

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.ones(shape, dtype='float32', name='ones')[source]

Construct a tensor of all ones, with the input shape and dtype.

Parameters#

shape : Sequence[IntExpr]

The shape of the created tensor.

dtype : str

The data type of the created tensor.

name : str

Name hint.

Returns#

result : Tensor

The result tensor.

Return type:

Tensor

tvm.relax.frontend.nn.pad(x, pad, mode='constant', value=0, name='pad')[source]

Apply spatial padding to the input tensor.

Parameters#

x : Tensor

Input tensor to be padded.

pad : List[int]

List in the format of [before_0, after_0, before_1, after_1, ...] indicating how much to pad each axis of x.

mode : str

Padding mode to use; "constant" implies padded elements will use the value argument.

value : int

What to pad with in constant mode.

name : str

Name hint for this operator.

Returns#

result : Tensor

Padded output tensor.

Return type:

Tensor

tvm.relax.frontend.nn.permute(x, axes, name='permute')[source]

Permutes the dimensions of the input tensor.

Parameters#

x : Tensor

The input data to the operator.

axes : Optional[List[int]]

The target axes order.

name : str

Name hint.

Returns#

result : Tensor

The transposed result.

Return type:

Tensor

tvm.relax.frontend.nn.permute_dims(x, axes=None, name=None)[source]

Permutes the dimensions of an array.

Parameters#

x : Tensor

The input data to the operator.

axes : Optional[List[int]]

The target axes order, reverse order if not specified.

name : str

Name hint.

Returns#

result : Tensor

The transposed result.

Return type:

Tensor

tvm.relax.frontend.nn.print_(tensor)[source]

Debug printing a Tensor during runtime.

Parameters:

tensor (Tensor)

tvm.relax.frontend.nn.relu(x, name='relu')[source]

Rectified Linear Unit (ReLU) activation function.

\[\text{ReLU}(x) = \max(x, 0)\]

Parameters#

x : Tensor

The input data.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • x (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.renormalize_top_p_top_k_prob(prob, sorted_prob, top_p, top_k)[source]

Renormalizes probabilities after filtering with top_p and top_k, ensuring they sum up to 1.

Notes#

For accurate results, ensure probabilities are between 0 and 1 and sum to 1.

Parameters#

prob : Tensor

A 2-D tensor of shape (batch, vocab_size) representing probability distributions.

sorted_prob : Tensor

Probabilities sorted in descending order.

top_p : Tensor

The cumulative probability threshold with shape (batch, 1) for nucleus sampling.

top_k : Tensor

A tensor with shape (batch, 1), representing the number of top probabilities to consider for top-k sampling.

Returns#

result : Tensor

The filtered and normalized tensor with the same shape as the input prob.

tvm.relax.frontend.nn.repeat(x, repeats, axis=None, name='repeat')[source]

Repeats elements of an array.

Parameters#

data : Tensor

The input tensor.

repeats : int

The number of repetitions.

axis: Optional[int]

The axis along which to repeat values. Negative numbers are interpreted as counting from the back. By default, use the flattened input array, and return a flat output array.

name : str

Name hint.

Returns#

ret : Tensor

The computed result.

Examples#

np_x = numpy.array([[1, 2], [3, 4]])
x = Tensor.from_const(np_x)
lv1 = repeat(x, repeats=2) # lv1 == [1, 1, 2, 2, 3, 3, 4, 4]
lv2 = repeat(x, repeats=2, axis=1)   # lv2 == [[1., 1., 2., 2.],
                                     #         [3., 3., 4., 4.]]
Parameters:
  • x (Tensor)

  • repeats (int)

  • axis (int | None)

Return type:

Tensor

tvm.relax.frontend.nn.reshape(x, shape, name='reshape')[source]

Reshape the input array.

-1 infers the dimension of the output shape by using the remainder of the input dimensions, keeping the size of the new array the same as that of the input array. At most one dimension of shape can be -1.

x.shape = (2, 3, 4), shape = (6, 1, -1), result.shape = (6, 1, 4)
x.shape = (2, 3, 4), shape = (3, -1, 8), result.shape = (3, 1, 8)
x.shape = (2, 3, 4), shape = (-1,), result.shape = (24,)

Parameters#

x : Tensor

The input data to the operator.

shape : Sequence[IntExpr]

The new shape. Should be compatible with the original shape.

name : str

Name hint.

Returns#

result : Tensor

The reshaped result.

Note#

The -1 inference is only performed at compile time. That is to say, if the dimension length of -1 cannot be inferred at compile time, an error will be thrown.

Return type:

Tensor

tvm.relax.frontend.nn.rms_norm(x, weight, axes, epsilon=1e-05, name='rms_norm')[source]

Root mean square normalization (Biao Zhang et al., 2019). Applies root mean square normalization to the n-dimensional input array. This operator takes an n-dimensional input array and normalizes the input using the given axis:

\[out = \frac{data}{\sqrt{mean(data, axis)+\epsilon}} * weight\]

Parameters#

data : Tensor

Input to which rms_norm will be applied.

weight : Tensor

The scale factor.

axes : Union[int, List[int]]

The axes along which the normalization is applied.

epsilon : float

Small float added to square mean to avoid dividing by zero.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Return type:

Tensor

tvm.relax.frontend.nn.sample_top_p_top_k_from_sorted_prob(sorted_prob, sorted_index, top_p, top_k, uniform_sample, sample_indices=None)[source]

Samples indices from a sorted probability tensor based on top_p and top_k criteria.

Notes#

For accurate results, ensure probabilities are between 0 and 1 and sum to 1.

Parameters#

sorted_prob : Tensor

A 2-D tensor, with shape (batch, vocab_size), containing probabilities sorted in descending order.

sorted_index: Tensor

The indices tensor with shape (batch, vocab_size), corresponding to the sorted_prob. Potentially from applying argsort on the original probability tensor in descending order.

top_p : Tensor

The cumulative probability threshold with shape (batch, 1) for nucleus sampling.

top_k : Tensor

A tensor with shape (batch, 1), representing the number of top probabilities to consider for top-k sampling.

uniform_sample : Tensor

Uniformly sampled values with shape (n, 1) are used to select the output indices.

sample_indices : Optional[Tensor]

The 2-D tensor with the shape [n, 1], which indicates the specific probability distribution to sample from. The value of sample_indices[i] determines that the ith token should be sampled from the sample_indices[i]th probability distribution. For instance, if there are 3 distinct probability distributions and the requirement is to sample 2, 3, and 4 tokens from each, then sample_indices would be [0, 0, 1, 1, 1, 2, 2, 2, 2].

Returns#

result : Tensor

The selected indices with shape (n, 1).

Examples#

prob = [[0.1 , 0.4, 0.5],
        [0.3, 0.3, 0.4]]
sorted_prob = [[0.5, 0.4, 0.1],
               [0.4, 0.3, 0.3]]
sorted_index = [[2, 1, 0],
                [2, 0, 1]]
top_p = [[0.6],[0.9]]
top_k = [[3],[2]]
uniform_sample = [[0.5], [0.6]]
sample_indices = [[0], [1]]

sample_top_p_top_k_from_sorted_prob(
    sorted_prob, sorted_index,top_p, top_k, uniform_sample, sample_indices)
-> [2, 0]
Parameters:
  • sorted_prob (Tensor)

  • sorted_index (Tensor)

  • top_p (Tensor)

  • top_k (Tensor)

  • uniform_sample (Tensor)

  • sample_indices (Tensor | None)

tvm.relax.frontend.nn.scaled_dot_product_attention(query, key, value, attn_mask=None, is_causal=False, scale=None, name='scaled_dot_product_attention')[source]

Computes a scaled dot product attention on provided attention query, key, and values. Compliant with the functional torch implementation.

Parameters#

queryTensor

Tensor representing current attention lookup of shape [batch, seq_len, num_heads, head_size].

keyTensor

Tensor representing cross attention mapping of shape [batch, seq_len_kv, num_heads_kv, head_size].

valueTensor

Tensor representing embedded attention values of shape [batch, seq_len_kv, num_heads_kv, head_size_value].

attn_maskOptional[Tensor]

Optional mask for attention, not yet supported.

is_causalOptional[bool]

If set, uses a causal attention mask.

scaleOptional[float]

Optional extra scaling argument applied to attention.

namestr

Name hint for this function.

参数:
  • query (Tensor)

  • key (Tensor)

  • value (Tensor)

  • attn_mask (Tensor | None)

  • is_causal (bool | None)

  • scale (float | None)

  • name (str)
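
A minimal sketch wiring this into a module as causal self-attention (the spec-based nn.Module flow is assumed; names are illustrative):

from tvm.relax.frontend import nn

class Attention(nn.Module):
    def forward(self, q: nn.Tensor, k: nn.Tensor, v: nn.Tensor):
        # q/k/v: [batch, seq_len, num_heads, head_size]
        return nn.scaled_dot_product_attention(q, k, v, is_causal=True)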

tvm.relax.frontend.nn.sigmoid(x, name='sigmoid')[source]

Computes sigmoid.

\[\text{sigmoid}(x) = \frac{1}{1 + \exp(-x)}\]

Parameters#

x : Tensor

The input data to the operator.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Note#

The input tensor is required to have float dtype.

Parameters:
  • x (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.silu(x, name='silu')[source]

Sigmoid Linear Unit function.

\[\text{SiLU}(x) = x * \text{sigmoid}(x)\]

Parameters#

x : Tensor

The input data.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Note#

The input tensor is required to have float dtype.

Parameters:
  • x (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.softmax(x, axis=-1, name='softmax')[source]

Computes softmax.

\[\text{softmax}(x)_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]

Parameters#

x : Tensor

The input data to the operator.

axis : int

The axis to sum over when computing softmax. If not specified, it defaults to the last axis of the input tensor. Supports negative indexing.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Note#

The input tensor is required to have float dtype.

Parameters:
  • x (Tensor)

  • axis (int)

  • name (str)

Return type:

Tensor
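
A minimal sketch combining the elementwise activations above (names are illustrative):

from tvm.relax.frontend import nn

class Head(nn.Module):
    def forward(self, x: nn.Tensor):
        # silu(x) = x * sigmoid(x); softmax then normalizes the last axis
        return nn.softmax(nn.silu(x), axis=-1)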

tvm.relax.frontend.nn.sort(x, axis=-1, descending=False, name='sort')[source]

Performs sorting along the given axis and returns an array in sorted order.

Parameters#

x : Tensor

The input tensor.

axis : int

Axis along which to sort the input tensor. By default the last axis of the input is used.

descending : bool

Whether to sort in descending order; defaults to False.

name : str

Name hint.

Returns#

out : Tensor

The sorted tensor.

Parameters:
  • x (Tensor)

  • axis (int)

  • descending (bool)

tvm.relax.frontend.nn.split(ary, indices_or_sections, axis=0, name='split')[source]

Split an array into multiple sub-arrays.

Parameters#

ary : Tensor

Input tensor to be split.

indices_or_sections : Union[int, Sequence[int]]

Indices or sections to split into.

axis : int

The axis along which to split; defaults to 0.

name : str

Name hint.

Returns#

result : Tuple[Tensor, ...]

A list of sub-arrays as the outcome of splitting.

Parameters:
  • ary (Tensor)

  • indices_or_sections (Union[int, Sequence[int]])

  • axis (int)

  • name (str)

Return type:

Tuple[Tensor, ...]
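
A minimal sketch (numpy.split-style semantics for indices_or_sections are assumed; names are illustrative):

from tvm.relax.frontend import nn

class Splitter(nn.Module):
    def forward(self, x: nn.Tensor):
        # an int gives that many equal sections along the axis;
        # a sequence such as (2, 5) would give split points instead
        a, b, c = nn.split(x, 3, axis=-1)
        return a, b, c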

tvm.relax.frontend.nn.sqrt(x, name='sqrt')[source]

Computes the element-wise sqrt of the input tensor.

Parameters#

x : Tensor

The input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Note#

The input tensor is required to have float dtype.

Parameters:
  • x (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.square(x, name='square')[source]

Computes the element-wise square of the input tensor.

Parameters#

x : Tensor

The input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Parameters:
  • x (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.squeeze(x, axis=-1, name='squeeze')[source]

Squeeze axes in the array.

Parameters#

x : Tensor

The input data to the operator.

axis : Optional[Union[int, List[int]]]

The set of axes to remove. If axis = None, all axes of dimension 1 are removed. It is an error if any specified axis does not have dimension 1.

name : str

Name hint.

Returns#

result : Tensor

The squeezed result.

Parameters:
  • x (Tensor)

  • axis (int)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.subtract(a, b, name='subtract')[source]

Subtraction with numpy-style broadcasting.

Parameters#

a : Tensor

The first input tensor.

b : Tensor

The second input tensor.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Examples#

c = subtract(a, b)

Parameters:
  • a (Tensor)

  • b (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.sum(x, axis=None, keepdims=False, name='sum')[source]

Computes the sum of tensor elements over given axes.

Parameters#

x : Tensor

The input data tensor.

axis : Optional[Union[int, List[int]]]

Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input tensor. Negative indexing is supported.

keepdims : bool

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.

name : str

Name hint for this operation.

Returns#

result : Tensor

The computed result.

Parameters:
  • x (Tensor)

  • axis (Optional[Union[int, List[int]]])

  • keepdims (bool)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.take(x, indices, axis=None, name='take')[source]

Take elements from a tensor along an axis. Its semantics are mostly similar to numpy.take (https://numpy.org/doc/stable/reference/generated/numpy.take.html), which can cover torch.take (https://pytorch.org/docs/stable/generated/torch.take.html) and onnx Gather (https://github.com/onnx/onnx).

Parameters#

x : Tensor

The source tensor.

indices : Tensor

The indices of the values to extract.

axis : Optional[int]

The axis over which to select values. If it is None, the input tensor is required to be one-dimensional.

name : str

Name hint.

Returns#

ret : Tensor

The taken result.

Parameters:
  • x (Tensor)

  • indices (Tensor)

  • axis (int | None)

Return type:

Tensor
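
A minimal sketch of an embedding-style lookup (names are illustrative):

from tvm.relax.frontend import nn

class Lookup(nn.Module):
    def forward(self, table: nn.Tensor, ids: nn.Tensor):
        # select rows of `table` along axis 0, one per entry of `ids`
        return nn.take(table, ids, axis=0)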

tvm.relax.frontend.nn.tanh(x, name='tanh')[source]

Applies the hyperbolic tangent function.

\[\text{Tanh}(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}\]

Parameters#

x : Tensor

The input data to the operator.

name : str

Name hint.

Returns#

result : Tensor

The computed result.

Note#

The input tensor is required to have float dtype.

Parameters:
  • x (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.tensor_expr_op(tensor_expr_func, name_hint, args, *, attrs=None)[source]

Build the given tensor_expr_func with te.

Parameters#

tensor_expr_func : Callable

A function that returns a te tensor or a list of tensors.

name_hint : str

Name hint.

args : List[Union[Tensor, _tir.Var]]

Arguments passed to the function.

attrs : Optional[Dict[str, Any]]

A dict of attributes to apply to the function.

Returns#

result : Tensor

The result tensor.

Parameters:
  • tensor_expr_func (Callable)

  • name_hint (str)

  • args (List[Union[Tensor, Var]])

  • attrs (Dict[str, Any] | None)
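
A minimal sketch of wrapping a TE computation (helper names are illustrative):

from tvm import te
from tvm.relax.frontend import nn

def te_add_one(x: te.Tensor) -> te.Tensor:
    # elementwise x + 1, expressed as a TE compute
    return te.compute(x.shape, lambda *i: x(*i) + 1, name="add_one")

class AddOne(nn.Module):
    def forward(self, x: nn.Tensor):
        return nn.tensor_expr_op(te_add_one, "add_one", [x])
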
tvm.relax.frontend.nn.tensor_ir_inplace_op(func, name_hint, args, inplace_indices, out)[source]

Create a call_tir_inplace binding with the given PrimFunc.

Parameters#

func : _tir.PrimFunc

The PrimFunc to call.

name_hint : str

Name hint.

args : Union[Tensor, Sequence[Union[Tensor, rx.ShapeExpr, _tir.PrimExpr]]]

The arguments to pass to the PrimFunc.

inplace_indices : Union[int, List[int]]

Specify which arguments should be used for in-place computations. If inplace_indices is a single integer, it will be made into a singleton list. Suppose inplace_indices[i] = j, where j >= 0. Then the i-th output will be an alias of args[j]. If inplace_indices[i] = -1, then the i-th output will be a freshly allocated tensor. At least one member of inplace_indices must not be -1.

out : Union[Tensor, List[Tensor]]

The output tensors.

Returns#

result : Tensor

The result tensor.

Parameters:
  • func (PrimFunc)

  • name_hint (str)

  • args (Union[Tensor, Sequence[Union[Tensor, ShapeExpr, PrimExpr]]])

  • inplace_indices (Union[int, List[int]])

  • out (Union[Tensor, List[Tensor]])

Return type:

OutType

tvm.relax.frontend.nn.tensor_ir_op(func, name_hint, args, out)[source]

Create a call_tir binding with the given PrimFunc.

Parameters#

func : _tir.PrimFunc

The PrimFunc to call.

name_hint : str

Name hint.

args : Union[Tensor, Sequence[Union[Tensor, rx.ShapeExpr, _tir.PrimExpr]]]

The arguments to pass to the PrimFunc.

out : Union[Tensor, List[Tensor]]

The output tensors.

Returns#

result : Tensor

The result tensor.

Parameters:
  • func (PrimFunc)

  • name_hint (str)

  • args (Union[Tensor, Sequence[Union[Tensor, ShapeExpr, PrimExpr]]])

  • out (Union[Tensor, List[Tensor]])

Return type:

OutType
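
A minimal sketch calling a hand-written PrimFunc (TVMScript imports and nn.Tensor.placeholder for the out spec are assumed from the broader API; the fixed shape and names are illustrative):

from tvm.script import tir as T
from tvm.relax.frontend import nn

@T.prim_func
def add_one(A: T.Buffer((8,), "float32"), B: T.Buffer((8,), "float32")):
    for i in range(8):
        with T.block("add"):
            vi = T.axis.remap("S", [i])
            B[vi] = A[vi] + T.float32(1)

class AddOne(nn.Module):
    def forward(self, x: nn.Tensor):
        # `out` describes the shape/dtype of the freshly allocated result
        return nn.tensor_ir_op(
            add_one, "add_one", [x],
            out=nn.Tensor.placeholder((8,), "float32"),
        )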

tvm.relax.frontend.nn.topk(data, k=1, axis=-1, ret_type='both', largest=True, dtype='int32', name='topk')[source]

Get the top k elements in an input tensor along the given axis.

ret_type specifies the return type, which can be one of ("both", "values", "indices").

Parameters#

data : Tensor

The input data tensor.

k : int

Number of top elements to select. Return all elements if k < 1.

axis : int

Axis along which to sort the input tensor.

ret_type : str

The return type [both, values, indices]. "both": return both top k data and indices. "values": return top k data only. "indices": return top k indices only.

largest : bool

Whether to return the largest or smallest elements. The k smallest elements are returned if largest is False.

dtype : str

The data type of the indices output.

name : str

Name hint.

Returns#

out : Tensor or Tuple[Tensor, Tensor]

The computed result.

Parameters:
  • data (Tensor)

  • k (int)

  • axis (int)

  • ret_type (str)

  • largest (bool)

  • dtype (str)

  • name (str)
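
A minimal sketch selecting the two largest entries along the last axis (names are illustrative):

from tvm.relax.frontend import nn

class Top2(nn.Module):
    def forward(self, x: nn.Tensor):
        # ret_type="both" yields (values, indices)
        values, indices = nn.topk(x, k=2, axis=-1, ret_type="both")
        return values, indices
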
tvm.relax.frontend.nn.triu(x, diagonal=0, name='triu')[source]

Return the upper triangular part of a matrix or a batch of matrices.

Parameters#

x : Tensor

The tensor that triu will be applied to. It is required to have at least two dimensions.

diagonal : int

The index indicating the diagonal below which elements are zeroed. If diagonal = 0, it is the main diagonal. If diagonal < 0, it is below the main diagonal. If diagonal > 0, it is above the main diagonal.

name : str

Name hint.

Returns#

ret : Tensor

The result tensor.

Parameters:
  • x (Tensor)

  • diagonal (int)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.unsqueeze(x, dim, name='unsqueeze')[source]

Add a new axis to a tensor.

Parameters#

x : Tensor

Input tensor to expand.

dim : int

Dimension to expand.

name : str

Name hint for this operator.

Returns#

result : Tensor

Expanded result.

Parameters:
  • x (Tensor)

  • dim (int)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.where(condition, x1, x2, name='where')[source]

Select elements from either of the input tensors, depending on the value of the condition.

For a given position, return the corresponding value in x1 if condition is True, and return the corresponding value in x2 otherwise.

Parameters#

condition : Tensor

When True, yield x1; otherwise, yield x2. Must be broadcasting compatible with x1 and x2. Must have boolean dtype.

x1 : Tensor

The first input tensor. Must be broadcasting compatible with condition and x2.

x2 : Tensor

The second input tensor. Must be broadcasting compatible with condition and x1.

name : str

Name hint.

Returns#

result : Tensor

The result tensor.

Parameters:
  • condition (Tensor)

  • x1 (Tensor)

  • x2 (Tensor)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.nn.wrap_nested(expr, name)[source]

Wrap the given relax.Expr, emit it using the current BlockBuilder, and automatically handle nested cases if the expr represents a Tuple.

Parameters#

expr : relax.Expr

The Expr to be wrapped.

name : str

Name hint.

Returns#

result : Union[Tensor, Tuple[Tensor]]

The computed result.

Parameters:
  • expr (RelayExpr)

  • name (str)

Return type:

Tensor | Sequence[Tensor]

tvm.relax.frontend.nn.zeros(shape, dtype='float32', name='zeros')[source]

Construct a tensor of all zeros, with the given shape and dtype.

Parameters#

shape : Sequence[IntExpr]

The shape of the created tensor.

dtype : str

The data type of the created tensor.

name : str

Name hint.

Returns#

result : Tensor

The result tensor.

Parameters:
  • shape (Sequence[IntExpr])

  • dtype (str)

  • name (str)

Return type:

Tensor

tvm.relax.frontend.onnx#

Tools for converting ONNX graphs into Relax graphs.

tvm.relax.frontend.onnx.from_onnx(model, shape_dict=None, dtype_dict='float32', opset=None, keep_params_in_input=False, sanitize_input_names=True)[source]#

Convert an ONNX model into an equivalent Relax Function. ONNX graphs are represented as Python Protobuf objects.

The current implementation assumes that the input model is ONNX v1.1.0 or later.

Parameters#

model : protobuf object

ONNX ModelProto (ONNX v1.1.0 or later)

shape_dict : dict of str to tuple, optional

The input shapes to the graph

dtype_dict : str or dict of str to str, optional

The input types to the graph

opset : int, optional

Override for the autodetected opset. This can be helpful for some testing.

keep_params_in_input : bool

If True, parameters will be treated as input variables. If False, parameters are treated as constants and folded directly into the graph.

sanitize_input_names : bool, optional

Whether to sanitize the input names to ensure they are valid Relax identifiers.

Returns#

mod : tvm.IRModule

The Relax module for compilation.

Parameters:

Return type:

IRModule
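
A minimal usage sketch (assumes the onnx package; "model.onnx" and the input name/shape are illustrative):

import onnx
from tvm.relax.frontend.onnx import from_onnx

onnx_model = onnx.load("model.onnx")
mod = from_onnx(
    onnx_model,
    shape_dict={"input": (1, 3, 224, 224)},
    dtype_dict={"input": "float32"},
)
mod.show()  # inspect the imported Relax module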

tvm.relax.frontend.stablehlo#

StableHLO Frontends for constructing Relax programs, with the model importers

tvm.relax.frontend.stablehlo.from_stablehlo(stablehlo_module, input_info=None)[source]#

Convert a StableHLO Module to a Relax program.

Parameters#

stablehlo_module : Union[str, mlir.ir.Module]

The StableHLO Module to convert.

input_info : List[Tuple[Tuple[int], str]]

A list of shapes and data types of input tensors.

Returns#

output : tvm.IRModule

The result IRModule with entry function "main".

Parameters:

input_info (List[Tuple[Tuple[int], str]])

Return type:

IRModule
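
A minimal usage sketch (stablehlo_text stands for an MLIR module in textual form and is illustrative; shapes/dtypes follow the input_info format documented above):

from tvm.relax.frontend.stablehlo import from_stablehlo

stablehlo_text = "..."  # textual StableHLO/MLIR module
mod = from_stablehlo(stablehlo_text, input_info=[((1, 16), "float32")])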

tvm.relax.frontend.torch#