tvm.ir#

Common data structures across all IR variants.

Classes:

Array()

Array container of TVM.

Attrs()

Attribute node, which is mainly used for defining attributes of relay operators.

BaseExpr()

Base class of all the expressions.

BaseFunc()

Base class of all functions.

CallingConv(value[, names, module, ...])

Possible kinds of calling conventions.

ConstantMemoryPools(pools)

This object contains a list of ConstantPoolInfo objects to be used as read-only memory in the compilation

ConstantPoolInfo(pool_name, targets[, ...])

ConstantPoolInfo object holds information related to RO memory pools where the statically sized allocate nodes are pooled into.

Constructor(name_hint, inputs, belong_to)

Relay ADT constructor.

DictAttrs()

Dictionary attributes.

DummyGlobalInfo()

EnvFunc()

Environment function.

FuncType(arg_types, ret_type[, type_params, ...])

Function type.

GlobalInfo()

Base node for all global info that can appear in the IR

GlobalTypeVar(name_hint[, kind])

A global type variable that is used for defining new types or type aliases.

GlobalVar(name_hint[, type_annot])

A global variable in the IR.

IRModule([functions, type_definitions, ...])

IRModule that holds functions and type definitions.

IncompleteType([kind])

Incomplete type during type inference.

Map()

Map container of TVM.

Node()

Base class of all IR Nodes.

Op()

Primitive operator in the IR.

PointerType(element_type[, storage_scope])

PointerType used in the low-level TIR.

PoolInfo()

PoolInfo object holds information related to memory pools where the statically sized allocate nodes will be pooled into.

PoolInfoProperties([size_hint_bytes, ...])

PoolInfo object holds information related to memory pools where the statically sized allocate nodes will be pooled into.

PrimExpr()

Base class of all primitive expressions.

PrimType(dtype)

Primitive data type in the low level IR

Range(begin[, end, span])

Represent a range in TVM.

RelayExpr()

Base class of all non-primitive expressions.

RelayRefType(value)

Reference Type in relay.

SequentialSpan(spans)

A sequence of source spans

SourceName(name)

An identifier for a source location.

Span(source_name, line, end_line, column, ...)

Specifies a location in a source program.

TensorAffineType(scale, zero_point, dtype[, ...])

The quantized type of a tensor, with scale, zero point, and datatype

TensorType(shape[, dtype])

A concrete TensorType in Relay.

TupleAffineType(types)

Affine types of a node with multiple outputs

TupleType(fields)

The type of tuple values.

Type()

The base class of all types.

TypeCall(func, args)

Type function application.

TypeConstraint()

Abstract class representing a type constraint.

TypeData(header, type_vars, constructors)

Stores the definition for an Algebraic Data Type (ADT) in Relay.

TypeKind(value[, names, module, qualname, ...])

Possible kinds of TypeVars.

TypeRelation(func, args, num_inputs, attrs)

User defined type relation, it is an input-output relation on types.

TypeVar(name_hint[, kind])

Type parameter in functions.

VDevice([target, vdevice_id, memory_scope])

WorkspaceMemoryPools(pools)

This object contains a list of WorkspacePoolInfo objects to be used as workspace memory in the compilation

WorkspacePoolInfo(pool_name, targets[, ...])

WorkspacePoolInfo object holds information related to RW memory pools where the statically sized allocate nodes will be pooled into.

Functions:

assert_structural_equal(lhs, rhs[, ...])

Assert lhs and rhs are structurally equal to each other.

load_json(json_str)

Load tvm object from json_str.

make_node(type_key, **kwargs)

Make a new IR node by its type key and fields

register_intrin_lowering(op_name, target, *)

Register Op lowering function

register_op_attr(op_name, attr_key[, value, ...])

Register an operator property of an operator by name.

save_json(node)

Save tvm object as json string.

structural_equal(lhs, rhs[, map_free_vars])

Check structural equality of lhs and rhs.

structural_hash(node[, map_free_vars])

Compute structural hash of node

class tvm.ir.Array[源代码]#

Array container of TVM.

You do not need to create Array explicitly. Normally python list and tuple will be converted automatically to Array during tvm function call. You may get Array in return values of TVM function call.
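
As a minimal sketch, the conversion can also be triggered explicitly with tvm.runtime.convert:

import tvm

arr = tvm.runtime.convert([1, 2, 3])   # python list -> tvm.ir.Array
assert isinstance(arr, tvm.ir.Array)
assert len(arr) == 3 and int(arr[0]) == 1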

class tvm.ir.Attrs[源代码]#

Attribute node, which is mainly used for defining attributes of relay operators.

Used by function registered in python side, such as compute, schedule and alter_layout. Attrs is passed as the first argument to these functions.

Methods:

get_int(key)

Get a python int value of a key

get_int_tuple(key)

Get a python int tuple of a key

get_str(key)

Get a python str value of a key

keys()

Get list of names in the attribute.

list_field_info()

Get fields information

get_int(key)[源代码]#

Get a python int value of a key

Parameters#

key: str

Returns#

value: int

get_int_tuple(key)[源代码]#

Get a python int tuple of a key

Parameters#

key: str

Returns#

value: Tuple of int

get_str(key)[源代码]#

Get a python str value of a key

Parameters#

key: str

Returns#

value: str

keys()[源代码]#

Get list of names in the attribute.

Returns#

keys : list of str

List of keys

list_field_info()[源代码]#

Get fields information

Returns#

infos: list of AttrFieldInfo

List of field information

class tvm.ir.BaseExpr[源代码]#

Base class of all the expressions.

class tvm.ir.BaseFunc[源代码]#

Base class of all functions.

Methods:

with_attr(attr_key_or_dict[, attr_value])

Create a new copy of the function and update the attribute.

with_attrs(attr_map)

Copy the function and add the given attribute map to it.

without_attr(attr_key)

Create a new copy of the function without the provided attribute key.

Attributes:

attrs

Return the attrs member of the function.

with_attr(attr_key_or_dict, attr_value=None)[源代码]#

Create a new copy of the function and update the attribute.

Parameters#

attr_key_or_dict : Union[str, dict]

The attribute key to use or a dict containing multiple key value pairs.

attr_value : Object

The new attribute value.

Returns#

func : BaseFunc

A new copy of the function

Return type:

BaseFunc

with_attrs(attr_map)[源代码]#

Copy the function and add the given attribute map to it.

Parameters#

attr_map : Union[DictAttrs, Dict[str, Object]]

The attribute map

Returns#

func : BaseFunc

A new copy of the function

Parameters:

attr_map (DictAttrs | Dict[str, Object])

Return type:

BaseFunc

without_attr(attr_key)[源代码]#

Create a new copy of the function without the provided attribute key.

Parameters#

attr_key : str

The attribute key to delete from the attribute pairs.

Returns#

func : BaseFunc

A new copy of the function

Parameters:

attr_key (str)

Return type:

BaseFunc

property attrs#

Return the attrs member of the function.
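
For illustration, a small sketch of the attribute API on a relay.Function (a BaseFunc subclass); the attribute key "my_attr" is just a made-up example:

import tvm
from tvm import relay

x = relay.var("x", shape=(3,), dtype="float32")
func = relay.Function([x], relay.abs(x))

func2 = func.with_attr("my_attr", "hello")   # returns a new function
print(func2.attrs["my_attr"])                # "hello"
func3 = func2.without_attr("my_attr")        # copy without the key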

class tvm.ir.CallingConv(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[源代码]#

Possible kinds of calling conventions.

class tvm.ir.ConstantMemoryPools(pools)[源代码]#

This object contains a list of ConstantPoolInfo objects to be used as read-only memory in the compilation

Parameters#

pools : List[ConstantPoolInfo]

The list of ConstantPoolInfo objects to be used with the compilation

Parameters:

pools (List[ConstantPoolInfo])

class tvm.ir.ConstantPoolInfo(pool_name, targets, constant_info_arr=None, pool_info_properties=None)[源代码]#

ConstantPoolInfo object holds information related to RO memory pools where the statically sized allocate nodes are pooled into.

Parameters#

pool_name : str

The name of the memory pool

targets : list[Target]

Describes which targets could access the pool

pool_info_properties : PoolInfoProperties

The properties of the pool.

Parameters:

pool_name (str)

class tvm.ir.Constructor(name_hint, inputs, belong_to)[源代码]#

Relay ADT constructor.

Parameters#

name_hint : str

Name of constructor (only a hint).

inputs : List[Type]

Input types.

belong_to : GlobalTypeVar

Denotes which ADT the constructor belongs to.

Methods:

__call__(*args)

Call the constructor.

__call__(*args)[源代码]#

Call the constructor.

Parameters#

args: List[RelayExpr]

The arguments to the constructor.

Returns#

call: RelayExpr

A call to the constructor.

class tvm.ir.DictAttrs[源代码]#

Dictionary attributes.

Methods:

_dict()

Get internal dict

get(key[, default])

Get an element with a default value.

items()

Get items from the map.

keys()

Get list of names in the attribute.

_dict()[源代码]#

Get internal dict

get(key, default=None)[源代码]#

Get an element with a default value.

items()[源代码]#

Get items from the map.

keys()[源代码]#

Get list of names in the attribute.

Returns#

keys : list of str

List of keys

class tvm.ir.DummyGlobalInfo[源代码]#

class tvm.ir.EnvFunc[源代码]#

Environment function.

This is a global function object that can be serialized by its name.

Methods:

get(name)

Get a static env function

static get(name)[源代码]#

Get a static env function

Parameters#

name : str

The name of the function.

class tvm.ir.FuncType(arg_types, ret_type, type_params=None, type_constraints=None)[源代码]#

Function type.

A function type consists of a list of type parameters to enable the definition of generic functions, a set of type constraints which we omit for the time being, a sequence of argument types, and a return type.

We can informally write them as: forall (type_params), (arg_types) -> ret_type where type_constraints

Parameters#

arg_types : List[tvm.relay.Type]

The argument types

ret_type : tvm.relay.Type

The return type.

type_params : Optional[List[tvm.relay.TypeVar]]

The type parameters

type_constraints : Optional[List[tvm.relay.TypeConstraint]]

The type constraints.
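
A small illustrative sketch: building the type of a binary function over 1-D float32 tensors, with no type parameters or constraints:

import tvm

t = tvm.ir.TensorType((3,), "float32")
ftype = tvm.ir.FuncType([t, t], t)   # (t, t) -> t
print(ftype)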

class tvm.ir.GlobalInfo[源代码]#

Base node for all global info that can appear in the IR

Methods:

__eq__(other)

Compare two struct info for structural equivalence.

same_as(other)

Overload with structural equality.

__eq__(other)[源代码]#

Compare two struct info for structural equivalence.

same_as(other)[源代码]#

Overload with structural equality.

class tvm.ir.GlobalTypeVar(name_hint, kind=TypeKind.AdtHandle)[源代码]#

A global type variable that is used for defining new types or type aliases.

Parameters#

name_hint: str

The name of the type variable. This name only acts as a hint, and is not used for equality.

kind : Optional[TypeKind]

The kind of the type parameter.

Methods:

__call__(*args)

Create a type call from this type.

__call__(*args)[源代码]#

Create a type call from this type.

Parameters#

args: List[Type]

The arguments to the type call.

Returns#

call: Type

The result type call.

class tvm.ir.GlobalVar(name_hint, type_annot=None)[源代码]#

A global variable in the IR.

GlobalVar is used to refer to the global functions stored in the IRModule.

Parameters#

name_hint: str

The name of the variable.

Methods:

__call__(*args)

Call the global variable.

astext([show_meta_data, annotate])

Get the text format of the expression.

__call__(*args)[源代码]#

Call the global variable.

Parameters#

args: List[RelayExpr]

The arguments to the call.

Returns#

call: BaseExpr

A call taking the variable as a function.

Parameters:

args (RelayExpr)

Return type:

BaseExpr

astext(show_meta_data=True, annotate=None)[源代码]#

Get the text format of the expression.

Parameters#

show_meta_data : bool

Whether to include meta data section in the text if there is meta data.

annotate: Optional[Object->str]

Optionally annotate function to provide additional information in the comment block.

Returns#

text : str

The text format of the expression.

Notes#

The meta data section is necessary to fully parse the text format. However, it can contain dumps that are big (e.g constant weights), so it can be helpful to skip printing the meta data section.

Return type:

str

Parameters:

  • name_hint (str)

  • type_annot (Type | None)
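
A minimal sketch showing how a GlobalVar names a function inside an IRModule and how calling it builds a call expression; the names used here are arbitrary:

import tvm
from tvm import relay

x = relay.var("x", shape=(3,), dtype="float32")
gvar = tvm.ir.GlobalVar("identity")
mod = tvm.IRModule({gvar: relay.Function([x], x)})

call = gvar(x)   # a relay Call expression referring to the module-level function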

class tvm.ir.IRModule(functions=None, type_definitions=None, attrs=None, global_infos=None)[源代码]#

IRModule that holds functions and type definitions.

IRModule is the basic unit for all IR transformations across the stack.

Parameters#

functions: Optional[dict].

Map of global var to BaseFunc

Methods:

__getitem__(var)

Lookup a global definition by name or by variable.

__setitem__(var, val)

Add a mapping to the module.

astext([show_meta_data, annotate])

Get the text format of the expression.

from_expr(expr[, functions, type_defs])

Construct a module from a standalone expression.

functions_items()

Get items in self.functions.items() in alphabetical order.

get_attr(attr_key)

Get the IRModule attribute.

get_constructor(tag)

Look up an ADT constructor by tag.

get_global_type_var(name)

Get a global type variable in the function by name.

get_global_type_vars()

Collect all global type vars defined in this module.

get_global_var(name)

Get a global variable in the function by name.

get_global_vars()

Collect all global vars defined in this module.

update(other)

Insert functions in another Module to current one.

update_func(var, func)

Update the function corresponding to a global variable in the module.

update_global_info(name, global_info)

Update global info in the module

with_attr(attr_key, attr_value)

Copy the IRModule and add an attribute to it.

with_attrs(attr_map)

Copy the IRModule and add the given attribute map to it.

without_attr(attr_key)

Copy the IRModule and remove an attribute key and its associated value.

__getitem__(var)[源代码]#

Lookup a global definition by name or by variable.

Parameters#

var: Union[String, GlobalVar, GlobalTypeVar]

The name or global variable.

Returns#

val: Union[Function, Type]

The definition referenced by var (either a function or type).

__setitem__(var, val)[源代码]#

Add a mapping to the module.

Parameters#

var: GlobalVar

The global variable.

val: Union[Function, Type]

The value.

astext(show_meta_data=True, annotate=None)[源代码]#

Get the text format of the expression.

Parameters#

show_meta_data : bool

Whether to include meta data section in the text if there is meta data.

annotate: Optional[Object->str]

Optionally annotate function to provide additional information in the comment block.

Returns#

text : str

The text format of the expression.

Notes#

The meta data section is necessary to fully parse the text format. However, it can contain dumps that are big (e.g constant weights), so it can be helpful to skip printing the meta data section.

static from_expr(expr, functions=None, type_defs=None)[源代码]#

Construct a module from a standalone expression.

Parameters#

expr: RelayExpr

The starting expression

global_funcs: Optional[dict]

Map of global vars to function definitions

type_defs: Optional[dict]

Map of global type vars to type definitions

Returns#

mod: Module

A module containing the passed definitions, where expr is set as the entry point (wrapped in a function if necessary)
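
For example, a standalone expression can be wrapped into a module; the expression becomes the body of the "main" function (a sketch, assuming relay is available):

import tvm
from tvm import relay

x = relay.var("x", shape=(3,), dtype="float32")
mod = tvm.IRModule.from_expr(relay.abs(x))
main_fn = mod["main"]                   # the wrapping function
main_gv = mod.get_global_var("main")    # its GlobalVar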

functions_items()[源代码]#

Get items in self.functions.items() in alphabetical order.

Returns#

items: List[Tuple[GlobalVar, Function]]

The functions items.

get_attr(attr_key)[源代码]#

Get the IRModule attribute.

Parameters#

attr_key : str

The attribute key.

Returns#

attr_value : Any

Attribute value

get_constructor(tag)[源代码]#

Look up an ADT constructor by tag.

Parameters#

tag: int

The tag for a constructor.

Returns#

constructor: Constructor

The constructor associated with the given tag,

Raises#

tvm.error.TVMError if the corresponding constructor cannot be found.

get_global_type_var(name)[源代码]#

Get a global type variable in the function by name.

Parameters#

name: str

The name of the global type variable.

Returns#

global_type_var: GlobalTypeVar

The global variable mapped to name.

Raises#

tvm.error.TVMError if we cannot find corresponding global type var.

get_global_type_vars()[源代码]#

Collect all global type vars defined in this module.

Returns#

global_type_vars: Array[GlobalTypeVar]

An array of global type vars.

get_global_var(name)[源代码]#

Get a global variable in the function by name.

Parameters#

name: str

The name of the global variable.

Returns#

global_var: GlobalVar

The global variable mapped to name.

Raises#

tvm.error.TVMError if we cannot find corresponding global var.

get_global_vars()[源代码]#

Collect all global vars defined in this module.

Returns#

global_vars: Array[GlobalVar]

An array of global vars.

update(other)[源代码]#

Insert functions in another Module to current one.

Parameters#

other: IRModule

The module to merge into the current Module.

update_func(var, func)[源代码]#

Update the function corresponding to a global variable in the module.

Parameters#

var: GlobalVar

The global variable.

func: tvm.relay.Function

The function to be inserted.

update_global_info(name, global_info)[源代码]#

Update global info in the module

Parameters#

name: str

The name for the global info.

global_info: List[GlobalInfo]

The global info to be updated.

with_attr(attr_key, attr_value)[源代码]#

Copy the IRModule and add an attribute to it.

Parameters#

attr_key : str

The attribute key.

attr_value : Object

The new attribute value.

Returns#

mod : IRModule

A new copy of the IRModule with the attribute

with_attrs(attr_map)[源代码]#

Copy the IRModule and add the given attribute map to it.

Parameters#

attr_map : Union[DictAttrs, Dict[str, Object]]

The attribute map

Returns#

mod : IRModule

A new copy of the IRModule with the attribute

Parameters:

attr_map (DictAttrs | Dict[str, Object])

Return type:

IRModule

without_attr(attr_key)[源代码]#

Copy the IRModule and remove an attribute key and its associated value.

Parameters#

attr_key : str

The attribute key.

Returns#

mod : IRModule

A new copy of the IRModule without the attribute

Parameters:

attr_key (str)

Return type:

IRModule

class tvm.ir.IncompleteType(kind=TypeKind.Type)[源代码]#

Incomplete type during type inference.

kind : Optional[TypeKind]

The kind of the incomplete type.

class tvm.ir.Map[源代码]#

Map container of TVM.

You do not need to create Map explicitly. Normally python dict will be converted automatically to Map during tvm function call. You can also use tvm.runtime.convert to turn a python dict into a Map explicitly.

Methods:

get(key[, default])

Get an element with a default value.

items()

Get the items from the map

get(key, default=None)[源代码]#

Get an element with a default value.

Parameters#

key : object

The attribute key.

default : object

The default object.

Returns#

value: object

The result value.

items()[源代码]#

Get the items from the map
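
A minimal sketch of the dict-to-Map conversion, again using tvm.runtime.convert to make the conversion explicit:

import tvm

m = tvm.runtime.convert({"a": 1, "b": 2})   # python dict -> tvm.ir.Map
assert isinstance(m, tvm.ir.Map)
print(m["a"], m.get("c", None))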

class tvm.ir.Node[源代码]#

Base class of all IR Nodes.

class tvm.ir.Op[源代码]#

Primitive operator in the IR.

Methods:

add_argument(name, type, description)

Add arguments information to the function.

add_type_rel(rel_name[, type_rel_func])

Attach the type function corresponding to the return type.

astext([show_meta_data, annotate])

Get the text format of the expression.

get(op_name)

Get the Op for a given name

get_attr(attr_name)

Get additional attribute about the operator.

has_attr(attr_name)

Check whether the operator has additional attribute.

list_op_names()

List all the op names in the op registry.

reset_attr(attr_name)

Reset attribute about the operator.

set_attr(attr_name, value[, plevel])

Set attribute about the operator.

set_attrs_type_key(key)

Set the attribute type key of op.

set_num_inputs(n)

Set the number of inputs of op.

set_support_level(level)

Set the support level of op.

add_argument(name, type, description)[源代码]#

Add arguments information to the function.

Parameters#

name : str

The argument name.

type : str

The argument type.

description : str

The argument description.

add_type_rel(rel_name, type_rel_func=None)[源代码]#

Attach the type function corresponding to the return type.

Parameters#

rel_name : str

The type relation name to register.

type_rel_func : Optional[function (args: List[Type], attrs: Attrs) -> Type]

The backing relation function which can solve an arbitrary relation on variables. Differences with type_rel_func in C++:

  1. When type_rel_func is not None

    1. OpAddTypeRel on C++ side will adjust type_rel_func with TypeReporter to calling convention of relay type system.

    2. type_rel_func returns output argument’s type, return None means can’t infer output’s type.

    3. only support single output operators for now, the last argument is output tensor.

  2. when type_rel_func is None, will call predefined type_rel_funcs in relay

    according to tvm.relay.type_relation. + rel_name.

astext(show_meta_data=True, annotate=None)[源代码]#

Get the text format of the expression.

Parameters#

show_meta_data : bool

Whether to include meta data section in the text if there is meta data.

annotate: Optional[Object->str]

Optionally annotate function to provide additional information in the comment block.

Returns#

text : str

The text format of the expression.

Notes#

The meta data section is necessary to fully parse the text format. However, it can contain dumps that are big (e.g constant weights), so it can be helpful to skip printing the meta data section.

static get(op_name)[源代码]#

Get the Op for a given name

Parameters#

op_name : str

The operator name

Returns#

op : Op

The op of the corresponding name
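
For example (assuming the relay operator registry is available in the build, so that an op named "add" exists):

import tvm

op = tvm.ir.Op.get("add")
print(op.name)                          # "add"
print(len(tvm.ir.Op.list_op_names()))   # number of registered ops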

get_attr(attr_name)[源代码]#

Get additional attribute about the operator.

Parameters#

attr_name : str

The attribute name.

Returns#

value : object

The attribute value

has_attr(attr_name)[源代码]#

Check whether the operator has additional attribute.

Parameters#

attr_name : str

The attribute name.

Returns#

value : bool

Whether the operator has additional attribute

static list_op_names()[源代码]#

List all the op names in the op registry.

Returns#

value : List[str]

The registered op names

reset_attr(attr_name)[源代码]#

Reset attribute about the operator.

Parameters#

attr_name : str

The attribute name

set_attr(attr_name, value, plevel=10)[源代码]#

Set attribute about the operator.

Parameters#

attr_name : str

The attribute name

value : object

The attribute value

plevel : int

The priority level

set_attrs_type_key(key)[源代码]#

Set the attribute type key of op.

Parameters#

key : str

The type key.

set_num_inputs(n)[源代码]#

Set the number of inputs of op.

Parameters#

n : int

The input number.

set_support_level(level)[源代码]#

Set the support level of op.

Parameters#

level : int

The support level.

class tvm.ir.PointerType(element_type, storage_scope='')[源代码]#

PointerType used in the low-level TIR.

Parameters#

element_type : tvm.ir.Type

The type of pointer's element.

storage_scope : str

The storage scope into which the pointer addresses.

class tvm.ir.PoolInfo[源代码]#

PoolInfo object holds information related to memory pools where the statically sized allocate nodes will be pooled into. This is a base class for WorkspacePoolInfo and ConstantPoolInfo.

class tvm.ir.PoolInfoProperties(size_hint_bytes=-1, clock_frequency_hz=-1, read_bandwidth_bytes_per_cycle=-1, write_bandwidth_bytes_per_cycle=-1, read_latency_cycles=0, write_latency_cycles=0, target_burst_bytes=None)[源代码]#

PoolInfo object holds information related to memory pools where the statically sized allocate nodes will be pooled into.

Parameters#

size_hint_bytes : Optional[int]

The expected size hint to be used by the allocator. The default value would be -1 which means the pool is not size restricted.

clock_frequency_hz : Optional[int]

The clock frequency that the memory pool runs at in Hz. If not specified/known, this will default to -1 indicating it hasn’t been defined.

read_bandwidth_bytes_per_cycle : Optional[int]

The read bandwidth of the memory pool in bytes/cycle. If not specified/known, this will default to -1 indicating it hasn’t been defined.

write_bandwidth_bytes_per_cycle : Optional[int]

The write bandwidth of the memory pool in bytes/cycle. If not specified/known, this will default to -1 indicating it hasn’t been defined.

read_latency_cycles : Optional[int]

The read latency of the memory pool in cycles. If not specified/known, this will default to 0.

write_latency_cycles : Optional[int]

The write latency of the memory pool in cycles. If not specified/known, this will default to 0.

target_burst_bytes : Optional[Union[Dict[Target, int], None]]

The burst length of the memory pool in bytes per target. If not specified/known for a given target, a burst length of 1 byte will be assumed.

Parameters:
  • size_hint_bytes (int | None)

  • clock_frequency_hz (int | None)

  • read_bandwidth_bytes_per_cycle (int | None)

  • write_bandwidth_bytes_per_cycle (int | None)

  • read_latency_cycles (int | None)

  • write_latency_cycles (int | None)

class tvm.ir.PrimExpr[源代码]#

Base class of all primitive expressions.

PrimExpr is used in the low-level code optimizations and integer analysis.

class tvm.ir.PrimType(dtype)[源代码]#

Primitive data type in the low level IR

Parameters#

dtype : str

The runtime data type relates to the primtype.

class tvm.ir.Range(begin, end=None, span=None)[源代码]#

Represent a range in TVM.

You do not need to create a Range explicitly. Python lists and tuples will be converted automatically to a Range in API functions.

Parameters#

begin : PrimExpr

The begin value of the range when end is None. Otherwise it is the length of the range.

end : Optional[PrimExpr]

The end value of the range.

span : Optional[Span]

The location of this node in the source code.

Note#

The constructor creates the range [begin, end) if the end argument is not None. Otherwise, it creates [0, begin).

Methods:

from_min_extent(min_value, extent[, span])

Construct a Range by min and extent.

static from_min_extent(min_value, extent, span=None)[源代码]#

Construct a Range by min and extent.

This constructs a range in [min_value, min_value + extent)

Parameters#

min_value : PrimExpr

The minimum value of the range.

extent : PrimExpr

The extent of the range.

span : Optional[Span]

The location of this node in the source code.

Returns#

rng : Range

The constructed range.

Return type:

Range
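
A short sketch of the two constructor forms and from_min_extent:

import tvm

r1 = tvm.ir.Range(8)                      # the range [0, 8)
r2 = tvm.ir.Range(2, 10)                  # the range [2, 10)
r3 = tvm.ir.Range.from_min_extent(2, 8)   # also [2, 10)
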
class tvm.ir.RelayExpr[源代码]#

Base class of all non-primitive expressions.

Attributes:

checked_type

Get the checked type of tvm.relay.Expr.

struct_info

Get the struct info field

property checked_type#

Get the checked type of tvm.relay.Expr.

Returns#

checked_type : tvm.relay.Type

The checked type.

property struct_info: StructInfo | None#

Get the struct info field

Returns#

struct_info : tvm.relax.StructInfo

The struct info if available.

class tvm.ir.RelayRefType(value)[源代码]#

Reference Type in relay.

Parameters#

value: Type

The value type.

class tvm.ir.SequentialSpan(spans)[源代码]#

A sequence of source spans

This span is specific for an expression, which is from multiple expressions after an IR transform.

Parameters#

spans : Array

The array of spans.

class tvm.ir.SourceName(name)[源代码]#

An identifier for a source location.

Parameters#

name : str

The name of the source.

class tvm.ir.Span(source_name, line, end_line, column, end_column)[源代码]#

Specifies a location in a source program.

Parameters#

source : SourceName

The source name.

lineno : int

The line number.

col_offset : int

The column offset of the location.
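
A small illustrative example (the file name is made up):

import tvm

src = tvm.ir.SourceName("example.py")
span = tvm.ir.Span(src, 1, 2, 0, 10)   # lines 1-2, columns 0-10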

class tvm.ir.TensorAffineType(scale, zero_point, dtype, axis=-1)[源代码]#

The quantized type of a tensor, with scale, zero point, and datatype

The real space value is calculated as x = x_q * scale + zero_point

Parameters#

scale: Expr

The scale

zero_point: Expr

The zero_point

dtype : str

The content data type.

axis : int

The axis for per-channel quantization.

class tvm.ir.TensorType(shape, dtype='float32')[源代码]#

A concrete TensorType in Relay.

This is the type assigned to tensors with a known dtype and shape. For example, a tensor of float32 and (5, 5).

Parameters#

shape : List[tvm.ir.PrimExpr]

The shape of the Tensor

dtype : Optional[str]

The content data type.

Attributes:

concrete_shape

Get shape of the type as concrete tuple of int.

property concrete_shape#

Get shape of the type as concrete tuple of int.

Returns#

shape : List[int]

The concrete shape of the Type.

Raises#

TypeError : If the shape is symbolic
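
A brief sketch:

import tvm

ttype = tvm.ir.TensorType((5, 5), "float32")
print(ttype.dtype)            # "float32"
print(ttype.concrete_shape)   # (5, 5)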

class tvm.ir.TupleAffineType(types)[源代码]#

Affine types of a node with multiple outputs

Parameters#

types : List[TensorAffineType]

The affine types of the tuple fields.

class tvm.ir.TupleType(fields)[源代码]#

The type of tuple values.

Parameters#

fields : List[Type]

The fields in the tuple

class tvm.ir.Type[源代码]#

The base class of all types.

Methods:

__eq__(other)

Compare two types for structural equivalence.

same_as(other)

Compares two Relay types by referential equality.

__eq__(other)[源代码]#

Compare two types for structural equivalence.

same_as(other)[源代码]#

Compares two Relay types by referential equality.

class tvm.ir.TypeCall(func, args)[源代码]#

Type function application.

Parameters#

func: tvm.ir.Type

The function.

args: List[tvm.ir.Type]

The arguments.

Returns#

type_call: TypeCall

The type function application.

class tvm.ir.TypeConstraint[源代码]#

Abstract class representing a type constraint.

class tvm.ir.TypeData(header, type_vars, constructors)[源代码]#

Stores the definition for an Algebraic Data Type (ADT) in Relay.

Note that ADT definitions are treated as type-level functions because the type parameters need to be given for an instance of the ADT. Thus, any global type var that is an ADT header needs to be wrapped in a type call that passes in the type params.

Parameters#

header: GlobalTypeVar

The name of the ADT. ADTs with the same constructors but different names are treated as different types.

type_vars: List[TypeVar]

Type variables that appear in constructors.

constructors: List[Constructor]

The constructors for the ADT.

class tvm.ir.TypeKind(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[源代码]#

Possible kinds of TypeVars.

class tvm.ir.TypeRelation(func, args, num_inputs, attrs)[源代码]#

User defined type relation, it is an input-output relation on types.

TypeRelation is more generalized than TypeCall as it allows inference of both inputs and outputs.

Parameters#

func : EnvFunc

User defined relation function.

args : [tvm.ir.Type]

List of types to the func.

num_inputs : int

Number of input arguments in args, this acts as a hint for type inference.

attrs : Attrs

The attribute attached to the relation information

Returns#

type_relation : tvm.ir.TypeRelation

The type relation.

class tvm.ir.TypeVar(name_hint, kind=TypeKind.Type)[源代码]#

Type parameter in functions.

A type variable represents a type placeholder which will be filled in later on. This allows the user to write functions which are generic over types.

Parameters#

name_hint: str

The name of the type variable. This name only acts as a hint, and is not used for equality.

kind : Optional[TypeKind]

The kind of the type parameter.

Methods:

__call__(*args)

Create a type call from this type.

__call__(*args)[源代码]#

Create a type call from this type.

Parameters#

args: List[Type]

The arguments to the type call.

Returns#

call: Type

The result type call.

class tvm.ir.VDevice(target=None, vdevice_id=0, memory_scope='global')[源代码]#

Parameters:
  • vdevice_id (int)

  • memory_scope (str)

class tvm.ir.WorkspaceMemoryPools(pools)[源代码]#

This object contains a list of WorkspacePoolInfo objects to be used as workspace memory in the compilation

Parameters#

pools : List[WorkspacePoolInfo]

The list of WorkspacePoolInfo objects to be used with the compilation

Parameters:

pools (List[WorkspacePoolInfo])

class tvm.ir.WorkspacePoolInfo(pool_name, targets, pool_info_properties=None)[源代码]#

WorkspacePoolInfo object holds information related to RW memory pools where the statically sized allocate nodes will be pooled into.

Parameters#

pool_name : str

The name of the memory pool

targets : list[Target]

A list of targets which could access the pool

pool_info_properties : PoolInfoProperties

The properties of the pool.

Parameters:

pool_name (str)

tvm.ir.assert_structural_equal(lhs, rhs, map_free_vars=False)[源代码]#

Assert lhs and rhs are structurally equal to each other.

Parameters#

lhs : Object

The left operand.

rhs : Object

The right operand.

map_free_vars : bool

Whether or not we should map free vars that are not bound to any definitions as equal to each other.

Raises#

ValueError : if assertion does not hold.

See Also#

structural_equal

tvm.ir.load_json(json_str)[源代码]#

Load tvm object from json_str.

Parameters#

json_str : str

The json string

Returns#

node : Object

The loaded tvm node.

Return type:

Object
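
A minimal round-trip sketch with save_json (documented below):

import tvm

x = tvm.tir.IntImm("int32", 7)
json_str = tvm.ir.save_json(x)
y = tvm.ir.load_json(json_str)
assert tvm.ir.structural_equal(x, y)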

tvm.ir.make_node(type_key, **kwargs)[源代码]#

Make a new IR node by its type key and fields

Parameters#

type_key : str

The type key of the node.

**kwargs : dict

The fields of the node.

Returns#

node : Node

The corresponding IR Node

Note#

If the created node is instance of AttrsNode, then the creator function will also run bound checks and default value setup as supported by Attrs.

Example#

The following code constructs an IntImm object

x = tvm.ir.make_node("IntImm", dtype="int32", value=10, span=None)
assert isinstance(x, tvm.tir.IntImm)
assert x.value == 10

tvm.ir.register_intrin_lowering(op_name, target, *, f=None, level=10)[源代码]#

Register Op lowering function

Parameters#

op_name : str

The op name

target : str

The target string for given intrinsic lowering function

f : function, optional

The function to be registered.

level : int

The priority level

Returns#

fregister : function

Register op lowering function if f is not specified.

tvm.ir.register_op_attr(op_name, attr_key, value=None, level=10)[源代码]#

Register an operator property of an operator by name.

Parameters#

op_name : str

The name of operator

attr_key : str

The attribute name.

value : object, optional

The value to set

level : int, optional

The priority level

Returns#

fregister : function

Register function if value is not specified.
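
A small sketch; the attribute key "TMyFlag" is made up, and the example assumes an operator named "add" is registered:

import tvm

tvm.ir.register_op_attr("add", "TMyFlag", value="v1")
print(tvm.ir.Op.get("add").get_attr("TMyFlag"))   # "v1"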

tvm.ir.save_json(node)[源代码]#

Save tvm object as json string.

Parameters#

node : Object

A TVM object to be saved.

Returns#

json_str : str

Saved json string.

Return type:

str

tvm.ir.structural_equal(lhs, rhs, map_free_vars=False)[源代码]#

Check structural equality of lhs and rhs.

The structural equality is recursively defined in the DAG of IRNodes. There are two kinds of nodes:

  • Graph node: a graph node in lhs can only be mapped as equal to one and only one graph node in rhs.

  • Normal node: equality is recursively defined without the restriction of graph nodes.

Vars(tir::Var, TypeVar) and non-constant relay expression nodes are graph nodes. For example, it means that %1 = %x + %y; %1 + %1 is not structurally equal to %1 = %x + %y; %2 = %x + %y; %1 + %2 in relay.

A var-type node (e.g. tir::Var, TypeVar) can be mapped as equal to another var with the same type if one of the following conditions holds:

  • They appear in the same definition point (e.g. function argument).

  • They point to the same VarNode via the same_as relation.

  • They appear in the same usage point, and map_free_vars is set to be True.

The rules for var are used to remap variables that occur in function arguments and let-bindings.

Parameters#

lhs : Object

The left operand.

rhs : Object

The right operand.

map_free_vars : bool

Whether free variables (i.e. variables without a definition site) should be mapped as equal to each other.

Return#

result : bool

The comparison result.

See Also#

structural_hash, assert_structural_equal
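
A brief sketch of the free-variable behaviour described above:

import tvm
from tvm import te

x = te.var("x")
y = te.var("y")

print(tvm.ir.structural_equal(x + 1, y + 1))                      # False
print(tvm.ir.structural_equal(x + 1, y + 1, map_free_vars=True))  # True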

tvm.ir.structural_hash(node, map_free_vars=False)[源代码]#

Compute structural hash of node

The structural hash value is recursively defined in the DAG of IRNodes. There are two kinds of nodes:

  • Normal node: the hash value is defined by its content and type only.

  • Graph node: each graph node will be assigned a unique index ordered by the first occurrence during the visit. The hash value of a graph node is combined from the hash values of its contents and the index.

structural_hash is made to be consistent with structural_equal. If two nodes are structurally equal to each other, then their structural hash (with the same map_free_vars option) should be equal to each other as well.

If the structural hash of two nodes equals to each other, then it is highly likely (except for rare hash value collision cases) that the two nodes are structurally equal to each other.

Parameters#

node : Object

The input to be hashed.

map_free_vars : bool

If map_free_vars is set to true, we will hash free variables by the order of their occurrences. Otherwise, we will hash by their in-memory pointer address.

Return#

result : int

The hash result

See Also#

structural_equal

tvm.instrument#

Common pass instrumentation across IR variants.

Classes:

PassInstrument()

A pass instrument implementation.

PassPrintingInstrument(...)

A pass instrument that prints the IR before and/or after passes whose names are given.

PassTimingInstrument()

A wrapper to create a pass timing instrument implemented in C++.

PrintAfterAll(*args, **kwargs)

Print the name of the pass, the IR, only after passes execute.

PrintBeforeAll(*args, **kwargs)

Print the name of the pass, the IR, only before passes execute.

Functions:

_wrap_class_pass_instrument(pi_cls)

Wrap a python class as pass instrument

pass_instrument([pi_cls])

Decorate a pass instrument.

class tvm.instrument.PassInstrument#

A pass instrument implementation.

To use, a user class can either subclass from PassInstrument directly, or can apply the pass_instrument() wrapper. In either case, the enter_pass_ctx, exit_pass_ctx, should_run, run_before_pass, and run_after_pass methods can be defined to adjust the instrument’s behavior. See the no-op implementations in this class definition for more information on each.

Methods:

enter_pass_ctx()

Called when entering the instrumented context.

exit_pass_ctx()

Called when exiting the instrumented context.

run_after_pass(mod, info)

Instrument after the pass runs.

run_before_pass(mod, info)

Instrument before the pass runs.

should_run(mod, info)

Determine whether to run the pass or not.

enter_pass_ctx()#

Called when entering the instrumented context.

Returns#

None

exit_pass_ctx()#

Called when exiting the instrumented context.

Returns#

None

run_after_pass(mod, info)#

Instrument after the pass runs.

Called once for each pass that is run while the instrumented context is active.

Parameters#

mod : tvm.ir.module.IRModule

The module on which an optimization pass is being run.

info : tvm.transform.PassInfo

The pass information.

Returns#

None

run_before_pass(mod, info)#

Instrument before the pass runs.

Called once for each pass that is run while the instrumented context is active.

Parameters#

mod : tvm.ir.module.IRModule

The module on which an optimization pass is being run.

info : tvm.transform.PassInfo

The pass information.

Returns#

None

should_run(mod, info)#

Determine whether to run the pass or not.

Called once for each pass that is run while the instrumented context is active.

Parameters#

mod : tvm.ir.module.IRModule

The module on which an optimization pass is being run.

info : tvm.transform.PassInfo

The pass information.

Returns#

should_run : bool

True to run the pass, or False to skip the pass.

class tvm.instrument.PassPrintingInstrument(print_before_pass_names, print_after_pass_names)#

A pass instrument that prints the IR before and/or after passes whose names are given.

class tvm.instrument.PassTimingInstrument#

A wrapper to create a pass timing instrument implemented in C++.

Methods:

render()

Retrieve rendered time profile result.

static render()#

Retrieve rendered time profile result.

Returns#

string : string

The rendered string result of time profiles

Examples#

timing_inst = PassTimingInstrument()
with tvm.transform.PassContext(instruments=[timing_inst]):
    relay_mod = relay.transform.InferType()(relay_mod)
    relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
    # before exiting the context, get profile results.
    profiles = timing_inst.render()

class tvm.instrument.PrintAfterAll(*args, **kwargs)#

Print the name of the pass, the IR, only after passes execute.

class tvm.instrument.PrintBeforeAll(*args, **kwargs)#

Print the name of the pass, the IR, only before passes execute.

tvm.instrument._wrap_class_pass_instrument(pi_cls)#

Wrap a python class as pass instrument

tvm.instrument.pass_instrument(pi_cls=None)#

Decorate a pass instrument.

Parameters#

pi_class : class

Instrument class. See example below.

Examples#

@tvm.instrument.pass_instrument
class SkipPass:
    def __init__(self, skip_pass_name):
        self.skip_pass_name = skip_pass_name

    # Uncomment to customize
    # def enter_pass_ctx(self):
    #    pass

    # Uncomment to customize
    # def exit_pass_ctx(self):
    #    pass

    # If pass name contains keyword, skip it by return False. (return True: not skip)
    def should_run(self, mod, pass_info):
        if self.skip_pass_name in pass_info.name:
            return False
        return True

    # Uncomment to customize
    # def run_before_pass(self, mod, pass_info):
    #    pass

    # Uncomment to customize
    # def run_after_pass(self, mod, pass_info):
    #    pass

skip_annotate = SkipPass("AnnotateSpans")
with tvm.transform.PassContext(instruments=[skip_annotate]):
    tvm.relay.build(mod, "llvm")

tvm.transform#

Common pass infrastructure across IR variants.

Classes:

ModulePass()

A pass that works on tvm.IRModule.

Pass()

The base class of all passes.

PassContext([opt_level, required_pass, ...])

The basis on which a Relay optimization/analysis runs.

PassInfo(opt_level, name[, required, traceable])

The class contains the meta data required by a pass.

Sequential([passes, opt_level, name, ...])

A pass that works on a sequence of pass objects.

Functions:

ApplyPassToFunction(transform, func_name_regex)

Utility to apply a pass to specific functions in an IRModule

PrintIR([header, show_meta_data])

A special trace pass that prints the header and IR.

_wrap_class_module_pass(pass_cls, pass_info)

Wrap a python class as a module pass

module_pass([pass_func, opt_level, name, ...])

Decorate a module pass.

class tvm.transform.ModulePass#

A pass that works on tvm.IRModule. Users don’t need to interact with this class directly. Instead, a module pass should be created through module_pass, because the design of the module_pass API is flexible enough to handle the creation of a module pass in different manners. In addition, all members of a module pass can be accessed from the base class. The same rule applies to FunctionPass as well.

class tvm.transform.Pass#

The base class of all passes. All methods here are just simple wrappers that are implemented in the backend. They are defined for users to conveniently interact with the base class.

Methods:

__call__(mod)

Execute the pass.

Attributes:

info

Get the pass meta.

__call__(mod)#

Execute the pass. Note that for sequential pass, the dependency among different passes will be resolved in the backend.

Parameters#

modtvm.IRModule

The module that a certain optimization is performed on.

Returns#

modtvm.IRModule

The updated module after applying this pass.

property info#

Get the pass meta.

class tvm.transform.PassContext(opt_level=2, required_pass=None, disabled_pass=None, instruments=None, config=None, trace=None, trace_stack=None, make_traceable=None, num_evals=0, tuning_api_database=None)#

The basis on which a Relay optimization/analysis runs. Each pass context contains a number of auxiliary pieces of information that are used to help an optimization pass. Such information includes the error reporter to record errors during the optimization, etc.

opt_level : Optional[int]

The optimization level of this pass.

required_pass : Optional[Union[List[str], Set[str], Tuple[str]]]

The list of passes that are required by a certain pass.

disabled_pass : Optional[Union[List[str], Set[str], Tuple[str]]]

The list of passes that are disabled.

instruments : Optional[Sequence[PassInstrument]]

The list of pass instrument implementations.

config : Optional[Dict[str, Object]]

Additional configurations for specific passes.

trace: Optional[relax.tuning.Trace]

Initial trace for trace mode.

trace_stack: Optional[List[relax.tuning_api.Trace]]

Initial trace stack for trace mode.

make_traceable: Optional[List[str]]

List of passes to make traceable.

num_evals: int

initial number of evaluations conducted in the pipeline.

tuning_api_database: Optional[relax.tuning_api.JSONDatabase]

Methods:

current()

Return the current pass context.

get_current_trace()

Get the trace on the top of the stack.

get_trace_stack()

Get the current trace stack.

get_trace_stack_size()

Get the size of current stack.

get_tuning_api_database()

Get tuning api database.

inc_num_evals(num)

Increment the number of evaluations conducted in the pipeline.

list_configs()

List all registered PassContext configuration names and metadata.

override_instruments(instruments)

Override instruments within this PassContext.

pop_trace([return_current])

Pop a topmost trace from the stack.

push_trace(trace)

Push a trace into the stack.

set_num_evals(num)

Set the number of evaluations conducted in the pipeline.

static current()#

Return the current pass context.

get_current_trace()#

Get the trace on the top of the stack.

get_trace_stack()#

Get the current trace stack.

get_trace_stack_size()#

Get the size of current stack.

get_tuning_api_database()#

Get tuning api database.

inc_num_evals(num)#

Increment the number of evaluations conducted in the pipeline.

Parameters:

num (int)

static list_configs()#

List all registered PassContext configuration names and metadata.

Returns#

configs : Dict[str, Dict[str, str]]

override_instruments(instruments)#

Override instruments within this PassContext.

If there are existing instruments, their exit_pass_ctx callbacks are called. Then the new instruments are switched in and their enter_pass_ctx callbacks are called.

instruments : Sequence[PassInstrument]

The list of pass instrument implementations.

pop_trace(return_current=True)#

Pop a topmost trace from the stack.

Returns#

Trace : Optional[relax.tuning.Trace]

push_trace(trace)#

Push a trace into the stack.

set_num_evals(num)#

Set the number of evaluations conducted in the pipeline.

Parameters:

num (int)
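
A minimal sketch of entering a PassContext; the disabled pass name is just an example:

import tvm

with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldConstant"]):
    ctx = tvm.transform.PassContext.current()
    print(ctx.opt_level)   # 3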

class tvm.transform.PassInfo(opt_level, name, required=None, traceable=False)#

The class contains the meta data required by a pass. It is the container of information needed by running an optimization or analysis. This class can be extended by adding new members when more meta data is needed.

Parameters#

opt_level : int

The optimization level of this pass.

name : str

The pass name.

required : List[str]

The list of passes that are required by a certain pass.

class tvm.transform.Sequential(passes=None, opt_level=0, name='sequential', required=None, traceable=False)#

A pass that works on a sequence of pass objects. Multiple passes can be executed sequentially using this class.

Note that users can also provide a series of passes that they don’t want to apply when running a sequential pass. Pass dependency will be resolved in the backend as well.

Parameters#

passes : Optional[List[Pass]]

A sequence of passes candidate for optimization.

opt_level : Optional[int]

The optimization level of this sequential pass. The opt_level of a default sequential pass is set to 0. Note that some of the passes within the Sequential may still not be executed if their opt_level is higher than the provided opt_level.

name : Optional[str]

The name of the sequential pass.

required : Optional[List[str]]

The list of passes that the sequential pass is dependent on.
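
A short sketch composing two relay passes into a pipeline (assuming relay is available):

import tvm
from tvm import relay

seq = tvm.transform.Sequential(
    [relay.transform.InferType(), relay.transform.FoldConstant()],
    opt_level=1,
    name="my_pipeline",
)
# mod = seq(mod)   # apply the whole pipeline to an IRModule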

tvm.transform.ApplyPassToFunction(transform, func_name_regex, error_if_no_function_matches_regex=False)#

Utility to apply a pass to specific functions in an IRModule

TVM uses IRModule to IRModule transformations at all stages of lowering. These transformations may be useful when hand-writing an optimized model, or to perform optimizations on specific kernels within an IRModule. This utility allows a pass to be applied to a specified function, without altering other functions in the module.

Parameters#

transform: Pass

The IRModule to IRModule pass to be applied.

func_name_regex: str

A regex used to select the functions to be updated. The pass will be applied to all functions whose name matches the regex.

error_if_no_function_matches_regex: bool

Specifies the behavior if an IRModule does not contain any function matching the provided regex. If true, an error will be raised. If false (default), the IRModule will be returned unmodified.

Returns#

new_transform: Pass

The modified IRModule to IRModule pass.

Parameters:
  • transform (Pass)

  • func_name_regex (str)

  • error_if_no_function_matches_regex (bool)

Return type:

Pass

tvm.transform.PrintIR(header='', show_meta_data=False)#

A special trace pass that prints the header and IR.

Parameters#

header : str

The header to be displayed along with the dump.

show_meta_data : bool

A boolean flag to indicate if meta data should be printed.

Returns#

The pass

tvm.transform._wrap_class_module_pass(pass_cls, pass_info)#

Wrap a python class as a module pass

tvm.transform.module_pass(pass_func=None, opt_level=None, name=None, required=None, traceable=False)#

Decorate a module pass.

This function returns a callback when pass_func is provided. Otherwise, it serves as a decorator function.

pass_func can also be a class type with a method transform_module. This function will create a decorated ModulePass using transform_module as the pass function.

Parameters#

pass_func : Optional[Callable[(Module, PassContext) -> Module]]

The transformation function or class.

opt_level : int

The optimization level of this module pass.

name : Optional[str]

The name of the module pass. The name could be empty. In this case, the name of the optimization function will be used as the pass name.

required : Optional[List[str]]

The list of passes that the module pass is dependent on.

traceable : Boolean

Boolean variable whether the module pass is traceable

Returns#

create_module_pass : Union[Callable, ModulePass]

A decorator will be returned if pass_func is not provided, otherwise return the decorated result. The returned decorator has two behaviors depending on the input: A new ModulePass will be returned when we decorate a pass function. A new ModulePass class will be returned when we decorate a class type.

Examples#

The following code block decorates a module pass class.

@relay.transform.module_pass
class CustomPipeline:
    def __init__(self, enable_fold):
        self.enable_fold = enable_fold
        self.cse = relay.transform.EliminateCommonSubexpr()
        self.const_fold = relay.transform.FoldConstant()

    def transform_module(self, mod, ctx):
        mod = self.cse(mod, ctx)
        if self.enable_fold:
            mod = self.const_fold(mod, ctx)
        return mod

# create an instance of customized pipeline
pipeline = CustomPipeline(enable_fold=False)
assert isinstance(pipeline, transform.ModulePass)
# run the pipeline.
output_module = pipeline(input_module)

The following code creates a module pass by decorating a user defined transform function.

@relay.transform.module_pass(opt_level=2)
def transform(mod, ctx):
    tp = relay.TensorType((10,), "float32")
    x = relay.var("x", tp)
    gv = relay.GlobalVar("var")
    func = relay.Function([x], relay.abs(x))
    new_mod = tvm.IRModule({gv: func})
    new_mod.update(mod)
    return new_mod

module_pass = transform
assert isinstance(module_pass, transform.ModulePass)
assert module_pass.info.opt_level == 2

# Given a module m, the optimization could be invoked as the following:
updated_mod = module_pass(m)
# Now a function abs should be added to the module m.