tvm.relay.backend#
Backend codegen modules for relay.
The Python interface to the Relay reference interpreter.
- class tvm.relay.backend.interpreter.Executor[source]#
An abstract interface for executing Relay programs.
- evaluate(expr=None, binds=None)[source]#
Evaluate a Relay expression on the executor.
- Parameters:
expr (Optional[tvm.relay.Expr]) -- The expression to evaluate.
binds (Optional[Map[tvm.relay.Var, tvm.relay.Expr]]) -- Additional bindings of free variables.
- Returns:
val -- The evaluation result.
- Return type:
Union[function, Object]
- class tvm.relay.backend.interpreter.Interpreter(mod, device, target)[source]#
Simple interpreter interface.
- Parameters:
mod (tvm.IRModule) -- The module to support the execution.
device (Device) -- The runtime device to run the code on.
target (tvm.Target) -- The target option to build the function. Only homogeneous execution is supported.
CAUTION: Despite the API, the module is prepared upon each call to evaluate rather than once in create_executor. That is,

    executor = relay.create_executor(kind="debug", mod=module)
    a = executor.evaluate(expr)(args1)
    b = executor.evaluate(expr)(args2)

will prepare all the bindings in module twice. For efficiency, try to hoist calls to evaluate as high as possible, preferably immediately after create_executor:

    func = relay.create_executor(kind="debug", mod=module).evaluate(expr)
    a = func(args1)
    b = func(args2)
TE compiler engine (replacing legacy compile_engine).
- class tvm.relay.backend.te_compiler.CCacheKey(source_func, target)[source]#
Key in the TE Compiler.
- Parameters:
source_func (tvm.relay.Function) -- The source function.
target (tvm.Target) -- The target we want to run the function on.
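To illustrate why the cache key pairs a function with a target, here is a minimal plain-Python sketch of a compile cache in the spirit of CCacheKey/CCacheValue. The class and field names are illustrative assumptions, not TVM's actual internals.

```python
# Illustrative sketch of a (function, target)-keyed compile cache,
# analogous in spirit to CCacheKey/CCacheValue; NOT the actual TVM code.

class CacheKey:
    """Hashable key pairing a source function with a build target."""
    def __init__(self, source_func, target):
        self.source_func = source_func
        self.target = target

    def __hash__(self):
        return hash((self.source_func, self.target))

    def __eq__(self, other):
        return (self.source_func, self.target) == (other.source_func, other.target)


class CacheValue:
    """Cached lowering result plus a usage counter (usage statistics)."""
    def __init__(self, lowered):
        self.lowered = lowered
        self.use_count = 0


class Compiler:
    def __init__(self):
        self._cache = {}

    def lower(self, source_func, target):
        key = CacheKey(source_func, target)
        if key not in self._cache:
            # Stand-in for the real lowering step, done once per key.
            self._cache[key] = CacheValue(f"lowered({source_func}@{target})")
        value = self._cache[key]
        value.use_count += 1
        return value.lowered


compiler = Compiler()
compiler.lower("add", "llvm")
compiler.lower("add", "llvm")  # cache hit: same key, lowered only once
compiler.lower("add", "cuda")  # different target => different cache entry
```

The point of the two-part key is that the same Relay function lowered for different targets must produce distinct cache entries.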
- class tvm.relay.backend.te_compiler.CCacheValue[source]#
Value in the TE Compiler, including usage statistics.
- class tvm.relay.backend.te_compiler.TECompiler[source]#
TECompiler to get lowered code.
- items()[source]#
List items in the cache.
- Returns:
item_list -- The list of items.
- Return type:
List[Tuple[CCacheKey, CCacheValue]]
- tvm.relay.backend.te_compiler.get()[source]#
Get the global TE Compiler.
- Returns:
engine -- The TE Compiler.
- Return type:
tvm.relay.backend.TECompiler
- tvm.relay.backend.te_compiler.get_valid_implementations(op, attrs, inputs, out_type, target)[source]#
Get all valid implementations from the op strategy.
Note that this function doesn't support ops with symbolic input shapes.
- Parameters:
op (tvm.ir.Op) -- Relay operator.
attrs (object) -- The op attributes.
inputs (List[tvm.te.Tensor]) -- Input tensors to the op.
out_type (relay.Type) -- The output type.
target (tvm.target.Target) -- The target to compile the op for.
- Returns:
ret -- The list of all valid op implementations.
- Return type:
List[relay.op.OpImplementation]
- tvm.relay.backend.te_compiler.lower_to_primfunc(relay_func, target)[source]#
Lower a Relay Function to a TIR PrimFunc.
- Parameters:
relay_func (relay.Function) -- The source primitive function, created by FuseOps.
target (Target) -- The compilation target.
- Returns:
prim_func -- The created prim func.
- Return type:
tir.PrimFunc
- tvm.relay.backend.te_compiler.select_implementation(op, attrs, inputs, out_type, target, use_autotvm=True)[source]#
Select the best implementation from the op strategy.
If use_autotvm is True, it will first try to find the best implementation based on AutoTVM profile results. If no AutoTVM profile result is found, it will choose the implementation with the highest plevel.
If use_autotvm is False, it will directly choose the implementation with the highest plevel.
Note that this function doesn't support ops with symbolic input shapes.
- Parameters:
op (tvm.ir.Op) -- Relay operator.
attrs (object) -- The op attributes.
inputs (List[tvm.te.Tensor]) -- Input tensors to the op.
out_type (relay.Type) -- The output type.
target (tvm.target.Target) -- The target to compile the op for.
use_autotvm (bool) -- Whether to query AutoTVM to pick the best implementation.
- Returns:
ret -- The best op implementation and the corresponding output tensors.
- Return type:
tuple(relay.op.OpImplementation, List[tvm.te.Tensor])
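The selection policy described above (tuned result first, highest plevel as fallback) can be sketched in plain Python. This is an illustrative analogy with made-up implementation names and a simplified cost model, not TVM's actual implementation.

```python
# Illustrative sketch of the selection policy: prefer an implementation
# with an AutoTVM tuning record; otherwise fall back to the highest
# priority level (plevel). NOT the actual TVM code.

def select_implementation(impls, tuned_costs, use_autotvm=True):
    """impls: list of (name, plevel); tuned_costs: dict name -> measured cost."""
    if use_autotvm:
        tuned = [(name, plevel) for name, plevel in impls if name in tuned_costs]
        if tuned:
            # Pick the tuned implementation with the best (lowest) measured cost.
            return min(tuned, key=lambda i: tuned_costs[i[0]])[0]
    # No profile results (or AutoTVM disabled): pick the highest plevel.
    return max(impls, key=lambda i: i[1])[0]


# Hypothetical op strategy with three candidate implementations.
impls = [("generic", 10), ("tuned_x86", 15), ("winograd", 5)]
select_implementation(impls, {"winograd": 0.8})                      # -> "winograd"
select_implementation(impls, {})                                     # -> "tuned_x86"
select_implementation(impls, {"winograd": 0.8}, use_autotvm=False)   # -> "tuned_x86"
```

Note how a tuning record lets a low-plevel implementation win, while without records the plevel alone decides.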
A compiler from a Relay expression to TVM's graph executor.
The compiler is built from a few pieces.
First we define a compiler from a single Relay expression to the graph language. We require the expression to be a function. The function's parameters correspond to the placeholder/inputs and model parameters found in the computation graph representation. The body of the function represents the computation graph.
The compiler's output is a program in the graph language, which is composed of Node, NodeRef, InputNode, OpNode. This "little language" represents programs in TVM's graph format.
To connect to the graph executor, we use a printer that converts our graph format into TVM's JSON format. The resulting string can be loaded by contrib.graph_executor or any other TVM-runtime-compatible system.
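The Node/InputNode/OpNode "little language" and its JSON printer can be sketched as follows. The node classes and JSON field names here are simplified illustrations, not the exact schema of TVM's graph JSON.

```python
import json

# A toy version of the "little language" described above: input and op
# nodes plus a printer that serializes the graph to JSON. Field names are
# illustrative, NOT the exact schema consumed by contrib.graph_executor.

class InputNode:
    """A placeholder/input or model parameter in the graph."""
    def __init__(self, name):
        self.name = name


class OpNode:
    """An operator node whose inputs are indices of predecessor nodes."""
    def __init__(self, name, op, inputs):
        self.name = name
        self.op = op
        self.inputs = inputs


def to_json(nodes):
    """Print the node list in a JSON format a runtime could load."""
    out = []
    for node in nodes:
        if isinstance(node, InputNode):
            out.append({"op": "null", "name": node.name, "inputs": []})
        else:
            out.append({"op": node.op, "name": node.name, "inputs": node.inputs})
    return json.dumps({"nodes": out})


# z = x + y as a three-node graph: two inputs feeding one op node.
graph = [InputNode("x"), InputNode("y"), OpNode("add0", "add", [0, 1])]
print(to_json(graph))
```

The design point mirrors the real codegen: function parameters become input nodes, the function body becomes op nodes, and the printer is the only piece that knows the runtime's serialization format.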
- class tvm.relay.backend.graph_executor_codegen.GraphExecutorCodegen(mod, target)[source]#
The compiler from Relay to the TVM runtime system.
- codegen(ir_module, func)[source]#
Compile a single function into a graph.
- Parameters:
ir_module (tvm.ir.Module) -- The module to compile.
func (tvm.relay.Expr) -- The function to compile.
- Returns:
graph_json (str) -- The graph JSON that can be consumed by the runtime.
mod (IRModule or Dict[Target, IRModule]) -- The lowered functions.
params (Dict[str, tvm.nd.NDArray]) -- Additional constant parameters.
The Relay Virtual Machine.
Implements a Python interface for compiling Relay modules and executing them on the Relay VM.
- class tvm.relay.backend.vm.VMCompiler[source]#
Compiler that compiles a Relay module to a VM executable.
- get_exec()[source]#
Get the VM executable.
- Returns:
exec -- The VM executable that contains both library code and bytecode.
- Return type:
tvm.runtime.vm.Executable
- lower(mod, target=None, target_host=None)[source]#
Lower the module to VM bytecode.
- Parameters:
mod (tvm.IRModule) -- The Relay module to build.
target (any multi-target like object, see Target.canon_multi_target) -- For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.
target_host (any target-like object, see Target.canon_target) -- Host compilation target, if target is a device target.
- optimize(mod, target=None, target_host=None, params=None)[source]#
Helper method that optimizes a Relay module via the VM.
- Parameters:
mod (tvm.IRModule) -- The Relay module to optimize.
target (any multi-target like object, see Target.canon_multi_target) -- For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.
target_host (any target-like object, see Target.canon_target) -- Host compilation target, if target is a device target.
params (dict of str to NDArray) -- Input parameters to the graph that do not change during inference time. Used for constant folding.
- Returns:
mod (tvm.IRModule) -- The optimized Relay module.
params (dict) -- The parameters of the final module.
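Why binding params enables constant folding can be shown with a tiny expression folder. This is a pure-Python analogy over a toy expression language, not Relay's actual pass.

```python
# Illustrative sketch of constant folding with bound parameters: once a
# parameter's value is known at compile time, subexpressions built from
# constants collapse. NOT the actual Relay FoldConstant pass.

def fold(expr, params):
    """expr: nested tuples ('add', a, b) | ('var', name) | ('const', v)."""
    tag = expr[0]
    if tag == "const":
        return expr
    if tag == "var":
        name = expr[1]
        # A variable whose value is bound via params becomes a constant.
        return ("const", params[name]) if name in params else expr
    # tag == "add": fold both operands, then the node itself if possible.
    lhs, rhs = fold(expr[1], params), fold(expr[2], params)
    if lhs[0] == "const" and rhs[0] == "const":
        return ("const", lhs[1] + rhs[1])
    return ("add", lhs, rhs)


expr = ("add", ("var", "w"), ("var", "x"))
fold(expr, {"w": 3})            # -> ('add', ('const', 3), ('var', 'x'))
fold(expr, {"w": 3, "x": 4})    # -> ('const', 7)
```

With only some inputs bound, folding is partial; with all inputs bound, the whole expression is evaluated at compile time, which is exactly the benefit of passing weights as params.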
- class tvm.relay.backend.vm.VMExecutor(mod, device, target)[source]#
An implementation of the executor interface for the Relay VM.
A useful interface for experimentation and debugging; the VM can also be used directly through tvm.runtime.vm.
- Parameters:
mod (IRModule) -- The module to support the execution.
device (Device) -- The runtime device to run the code on.
target (any multi-target like object, see Target.canon_multi_target) -- For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.
- tvm.relay.backend.vm.compile(mod, target=None, target_host=None, params=None)[source]#
Compile the module to a VM executable. A helper function for VMCompiler.
- Parameters:
mod (tvm.IRModule) -- The Relay module to build.
target (any multi-target like object, see Target.canon_multi_target) -- For homogeneous compilation, the unique build target. For heterogeneous compilation, a dictionary or list of possible build targets.
target_host (None, or any target-like object, see Target.canon_target) -- Host compilation target, if target is a device target. When TVM compiles a device-specific program such as CUDA, we also need host (CPU) side code to interact with the driver to set up the dimensions and parameters correctly. target_host is used to specify the host-side codegen target. By default, llvm is used if it is enabled; otherwise a stackvm interpreter is used.
params (dict of str to NDArray) -- Input parameters to the graph that do not change during inference time. Used for constant folding.
- Returns:
exec -- The VM executable that contains both library code and bytecode.
- Return type:
tvm.runtime.vm.Executable