tvm.driver


Namespace for driver APIs

tvm.lower(inp, args=None, name='main', binds=None, simple_mode=False)

Lowering step before building into a target.

Parameters

inp : Union[tvm.te.schedule.Schedule, tvm.tir.PrimFunc, IRModule]

The TE schedule or TensorIR PrimFunc/IRModule to be built.

args : Optional[List[Union[tvm.tir.Buffer, tensor.Tensor, tir.Var]]]

The argument list to the function, for a TE schedule. It should be None when lowering TensorIR.

name : str

The name of the result function.

binds : Optional[Mapping[tensor.Tensor, tvm.tir.Buffer]]

Dictionary that maps each Tensor to a Buffer, specifying the data layout requirement of the function. By default, a new compact buffer is created for each tensor in the arguments.

simple_mode : bool

Whether to output only a simple and compact statement; this skips LoopPartition, API wrapper generation, and unrolling.

Returns

m : IRModule

The result IRModule.


tvm.build(inputs, args=None, target=None, target_host=None, runtime=None, name='default_function', binds=None)

Build a function with the given arguments as its signature. Code will be generated for the devices specified by the target information.

Parameters

inputs : Union[tvm.te.schedule.Schedule, tvm.tir.PrimFunc, IRModule, Mapping[str, IRModule]]

The input to be built.

args : Optional[List[Union[tvm.tir.Buffer, tensor.Tensor, tir.Var]]]

The argument list to the function.

target : Optional[Union[str, Target]]

The target and options of the compilation.

target_host : Optional[Union[str, Target]]

Host compilation target, if target is a device target. When TVM compiles a device-specific program such as CUDA, we also need host (CPU) side code to interact with the driver and set up the dimensions and parameters correctly. target_host is used to specify the host-side codegen target. By default, llvm is used if it is enabled; otherwise a stackvm interpreter is used.

runtime : Optional[Runtime]

Runtime to generate artifacts for.

name : Optional[str]

The name of the result function.

binds : Optional[Mapping[tensor.Tensor, tvm.tir.Buffer]]

Dictionary that maps a symbolic buffer to the Tensor it is bound to. By default, a new buffer is created for each tensor in the arguments.

Returns

ret : tvm.module

A module that combines both host and device code.

Examples

There are two typical uses of this function, depending on the type of the argument inputs:

1. inputs is an IRModule:

import tvm
from tvm import te

n = 2
A = te.placeholder((n,), name='A')
B = te.placeholder((n,), name='B')
C = te.compute(A.shape, lambda *i: A(*i) + B(*i), name='C')
s = tvm.te.create_schedule(C.op)
m = tvm.lower(s, [A, B, C], name="test_add")
rt_mod = tvm.build(m, target="llvm")
2. inputs is a dict mapping a compilation target to an IRModule:

import tvm
from tvm import te, topi

n = 2
A = te.placeholder((n,), name='A')
B = te.placeholder((n,), name='B')
C = te.compute(A.shape, lambda *i: A(*i) + B(*i), name='C')
s1 = tvm.te.create_schedule(C.op)
with tvm.target.cuda() as cuda_tgt:
  s2 = topi.cuda.schedule_injective(cuda_tgt, [C])
  m1 = tvm.lower(s1, [A, B, C], name="test_add1")
  m2 = tvm.lower(s2, [A, B, C], name="test_add2")
  rt_mod = tvm.build({"llvm": m1, "cuda": m2})

Note

See the note on tvm.target for the target string format.
