tvm.contrib#
Contrib APIs of TVM python package.
The contrib API provides many useful non-core features. Some of these are utilities to interact with third-party libraries and tools.
tvm.contrib.cblas#
External function interface to BLAS libraries.
- tvm.contrib.cblas.batch_matmul(lhs, rhs, transa=False, transb=False, iterative=False, **kwargs)[source]#
Create an extern op that computes the batched matrix multiplication of lhs and rhs with CBLAS. This function serves as an example of how to call external libraries.
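A minimal usage sketch (not part of the upstream docs), assuming TVM was built with a CBLAS-compatible BLAS library enabled; shapes and names are illustrative:

    import tvm
    from tvm import te
    from tvm.contrib import cblas

    # Batched matmul of (batch, n, k) x (batch, k, m) -> (batch, n, m).
    A = te.placeholder((4, 32, 64), name="A", dtype="float32")
    B = te.placeholder((4, 64, 16), name="B", dtype="float32")
    C = cblas.batch_matmul(A, B)
    s = te.create_schedule(C.op)
    # Building requires TVM to be linked against a BLAS implementation.
    mod = tvm.build(s, [A, B, C], target="llvm")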
tvm.contrib.clang#
Util to invoke clang in the system.
- tvm.contrib.clang.create_llvm(inputs, output=None, options=None, cc=None)[source]#
Create LLVM text IR; a usage sketch follows the parameter list.
- Parameters:
inputs (list of str) -- List of input file names or source code strings.
output (str, optional) -- Output file; if it is None, a temporary file is created.
options (list) -- The list of additional option strings.
cc (str, optional) -- The clang compiler; if not specified, we will try to guess the matching clang version.
- Returns:
code -- The generated LLVM text IR.
- Return type:
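A minimal sketch (an illustration, not from the upstream docs), assuming a suitable clang is installed and discoverable:

    from tvm.contrib import clang

    # Compile a small C snippet (passed as source code, not a file path) to LLVM text IR.
    src = "int add(int a, int b) { return a + b; }"
    ll = clang.create_llvm([src], options=["-O2"])
    print(ll.splitlines()[0])  # the generated LLVM IR as a string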
- tvm.contrib.clang.find_clang(required=True)[source]#
Find clang in the system.
- Parameters:
required (bool) -- Whether the compiler is required; a runtime error is raised if it is required but cannot be found.
- Returns:
valid_list -- List of possible paths.
- Return type:
Notes
This function will first search for a clang whose major LLVM version matches the one TVM was built with.
tvm.contrib.cc#
Util to invoke C/C++ compilers in the system.
- tvm.contrib.cc.create_executable(output, objects, options=None, cc=None, cwd=None, ccache_env=None)[source]#
Create executable binary.
- Parameters:
output (str) -- The target executable.
objects (List[str]) -- List of object files.
options (List[str]) -- The list of additional option strings.
cc (Optional[str]) -- The compiler command.
cwd (Optional[str]) -- The current working directory.
ccache_env (Optional[Dict[str, str]]) -- The environment variables for ccache. Set to None (the default) to disable ccache.
- tvm.contrib.cc.create_shared(output, objects, options=None, cc=None, cwd=None, ccache_env=None)[source]#
Create shared library; a usage sketch follows the parameter list.
- Parameters:
output (str) -- The target shared library.
objects (List[str]) -- List of object files.
options (List[str]) -- The list of additional option strings.
cc (Optional[str]) -- The compiler command.
cwd (Optional[str]) -- The current working directory.
ccache_env (Optional[Dict[str, str]]) -- The environment variables for ccache. Set to None (the default) to disable ccache.
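A minimal sketch of using this helper through export_library, assuming an LLVM-enabled TVM build and a system C++ compiler; the kernel and file name are illustrative:

    import tvm
    from tvm import te
    from tvm.contrib import cc

    # Build a trivial elementwise kernel and export it as a shared library.
    n = te.var("n")
    A = te.placeholder((n,), name="A", dtype="float32")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    mod = tvm.build(s, [A, B], target="llvm")
    # export_library calls cc.create_shared under the hood on Linux/macOS.
    mod.export_library("libadd.so", fcompile=cc.create_shared)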
- tvm.contrib.cc.cross_compiler(compile_func, options=None, output_format=None, get_target_triple=None, add_files=None)[source]#
Create a cross compiler function by specializing compile_func with options.
This function can be used to construct compile functions that can be passed to AutoTVM measure or export_library.
- Parameters:
compile_func (Union[str, Callable[[str, str, Optional[str]], None]]) -- Function that performs the actual compilation.
options (Optional[List[str]]) -- List of additional option strings.
output_format (Optional[str]) -- Library output format.
get_target_triple (Optional[Callable]) -- Function that can get the target triple according to the dumpmachine option of the compiler.
add_files (Optional[List[str]]) -- List of paths to additional object, source, or library files to pass as part of the compilation.
- Returns:
fcompile -- A compilation function that can be passed to export_library.
- Return type:
Examples
    from tvm.contrib import cc, ndk

    # export using arm gcc
    mod = build_runtime_module()
    mod.export_library(path_dso,
                       fcompile=cc.cross_compiler("arm-linux-gnueabihf-gcc"))

    # specialize ndk compilation options.
    specialized_ndk = cc.cross_compiler(
        ndk.create_shared,
        ["--sysroot=/path/to/sysroot", "-shared", "-fPIC", "-lm"])
    mod.export_library(path_dso, fcompile=specialized_ndk)
- tvm.contrib.cc.get_cc()[source]#
Return the path to the default C/C++ compiler.
- Returns:
out -- The path to the default C/C++ compiler, or None if none was found.
- Return type:
Optional[str]
tvm.contrib.cublas#
External function interface to cuBLAS libraries.
- tvm.contrib.cublas.batch_matmul(lhs, rhs, transa=False, transb=False, dtype=None)[source]#
Create an extern op that computes the batched matrix multiplication of lhs and rhs with cuBLAS.
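A minimal sketch, assuming TVM was built with CUDA and cuBLAS support; shapes are illustrative:

    from tvm import te
    from tvm.contrib import cublas

    # Batched matmul via cuBLAS: (8, 128, 64) x (8, 64, 32) -> (8, 128, 32).
    A = te.placeholder((8, 128, 64), name="A", dtype="float32")
    B = te.placeholder((8, 64, 32), name="B", dtype="float32")
    C = cublas.batch_matmul(A, B)
    s = te.create_schedule(C.op)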
tvm.contrib.dlpack#
Wrapping functions to bridge frameworks with DLPack support to TVM.
tvm.contrib.emcc#
Util to invoke emscripten compilers in the system.
tvm.contrib.miopen#
External function interface to MIOpen library.
- tvm.contrib.miopen.conv2d_forward(x, w, stride_h=1, stride_w=1, pad_h=0, pad_w=0, dilation_h=1, dilation_w=1, conv_mode=0, data_type=1, group_count=1)[source]#
Create an extern op that computes a 2D convolution with MIOpen; a usage sketch follows the parameter list.
- Parameters:
x (Tensor) -- input feature map
w (Tensor) -- convolution weight
stride_h (int) -- height stride
stride_w (int) -- width stride
pad_h (int) -- height pad
pad_w (int) -- width pad
dilation_h (int) -- height dilation
dilation_w (int) -- width dilation
conv_mode (int) -- 0: miopenConvolution, 1: miopenTranspose
data_type (int) -- 0: miopenHalf (fp16), 1: miopenFloat (fp32)
group_count (int) -- number of groups
- Returns:
y -- The result tensor
- Return type:
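A minimal sketch, assuming TVM was built with ROCm and MIOpen; shapes are illustrative:

    from tvm import te
    from tvm.contrib import miopen

    # 3x3 convolution, NCHW layout, stride 1, padding 1, fp32 (data_type=1).
    x = te.placeholder((1, 16, 32, 32), name="x", dtype="float32")
    w = te.placeholder((32, 16, 3, 3), name="w", dtype="float32")
    y = miopen.conv2d_forward(x, w, stride_h=1, stride_w=1, pad_h=1, pad_w=1,
                              conv_mode=0, data_type=1, group_count=1)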
- tvm.contrib.miopen.log_softmax(x, axis=-1)[source]#
Compute log softmax with MIOpen
- Parameters:
x (tvm.te.Tensor) -- The input tensor
axis (int) -- The axis to compute log softmax over
- Returns:
ret -- The result tensor
- Return type:
- tvm.contrib.miopen.softmax(x, axis=-1)[source]#
Compute softmax with MIOpen
- Parameters:
x (tvm.te.Tensor) -- The input tensor
axis (int) -- The axis to compute softmax over
- Returns:
ret -- The result tensor
- Return type:
tvm.contrib.mxnet#
MXNet bridge that wraps a TVM Function as an MXNet async function.
- tvm.contrib.mxnet.to_mxnet_func(func, const_loc=None)[source]#
Wrap a TVM function as an MXNet function; a usage sketch follows.
The MXNet function runs asynchronously via MXNet's engine.
- Parameters:
- Returns:
async_func -- A function that can take MXNet NDArray as arguments in places that used to expect TVM NDArray. It runs asynchronously in MXNet's async engine.
- Return type:
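A hypothetical sketch, assuming MXNet is installed with TVM bridge support; my_tvm_func is a placeholder for a packed TVM function that operates on NDArrays:

    import mxnet as mx
    from tvm.contrib import mxnet as mxnet_bridge

    # Wrap a TVM packed function so it accepts MXNet NDArrays and runs
    # asynchronously inside MXNet's engine.
    async_func = mxnet_bridge.to_mxnet_func(my_tvm_func)
    a = mx.nd.ones((1024,))
    b = mx.nd.ones((1024,))
    async_func(a, b)   # scheduled on MXNet's async engine
    mx.nd.waitall()    # block until the computation finishes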
tvm.contrib.ndk#
Util to invoke NDK compiler toolchain.
- tvm.contrib.ndk.create_shared#
Create shared library.
tvm.contrib.nnpack#
External function interface to NNPACK libraries.
- tvm.contrib.nnpack.convolution_inference(data, kernel, bias, padding, stride, nthreads=1, algorithm=0)[source]#
Create an extern op to do inference convolution of 4D tensor data and 4D tensor kernel and 1D tensor bias with nnpack; a usage sketch follows the parameter list.
- Parameters:
data (Tensor) -- data 4D tensor input[batch][input_channels][input_height][input_width] of FP32 elements.
kernel (Tensor) -- kernel 4D tensor kernel[output_channels][input_channels][kernel_height][kernel_width] of FP32 elements.
bias (Tensor) -- bias 1D array bias[output_channels] of FP32 elements.
padding (list) -- padding A 4-dim list of [pad_top, pad_bottom, pad_left, pad_right], which indicates the padding around the feature map.
stride (list) -- stride A 2-dim list of [stride_height, stride_width], which indicates the stride.
- Returns:
output -- output 4D tensor output[batch][output_channels][output_height][output_width] of FP32 elements.
- Return type:
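A minimal sketch, assuming TVM was built with NNPACK support; shapes follow the layout described above:

    from tvm import te
    from tvm.contrib import nnpack

    # NCHW input, OIHW kernel, per-output-channel bias.
    data = te.placeholder((1, 16, 32, 32), name="data", dtype="float32")
    kernel = te.placeholder((32, 16, 3, 3), name="kernel", dtype="float32")
    bias = te.placeholder((32,), name="bias", dtype="float32")
    out = nnpack.convolution_inference(
        data, kernel, bias,
        padding=[1, 1, 1, 1],   # [pad_top, pad_bottom, pad_left, pad_right]
        stride=[1, 1])          # [stride_height, stride_width]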
- tvm.contrib.nnpack.convolution_inference_weight_transform(kernel, nthreads=1, algorithm=0, dtype='float32')[source]#
Create an extern op to do inference convolution of 3D tensor data and 4D tensor kernel and 1D tensor bias with nnpack.
- tvm.contrib.nnpack.convolution_inference_without_weight_transform(data, transformed_kernel, bias, padding, stride, nthreads=1, algorithm=0)[source]#
Create an extern op to do inference convolution of 4D tensor data and 4D pre-transformed tensor kernel and 1D tensor bias with nnpack.
- Parameters:
data (Tensor) -- data 4D tensor input[batch][input_channels][input_height][input_width] of FP32 elements.
transformed_kernel (Tensor) -- transformed_kernel 4D tensor kernel[output_channels][input_channels][tile][tile] of FP32 elements.
bias (Tensor) -- bias 1D array bias[output_channels] of FP32 elements.
padding (list) -- padding A 4-dim list of [pad_top, pad_bottom, pad_left, pad_right], which indicates the padding around the feature map.
stride (list) -- stride A 2-dim list of [stride_height, stride_width], which indicates the stride.
- Returns:
output -- output 4D tensor output[batch][output_channels][output_height][output_width] of FP32 elements.
- Return type:
tvm.contrib.nvcc#
Utility to invoke the nvcc compiler in the system.
- tvm.contrib.nvcc.compile_cuda(code, target_format='ptx', arch=None, options=None, path_target=None)[source]#
Compile CUDA code with NVCC from the environment; a usage sketch follows.
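A minimal sketch, assuming nvcc is available on the system; the kernel and arch flag are illustrative:

    from tvm.contrib import nvcc

    # Compile a trivial CUDA kernel to PTX with the system nvcc.
    code = r"""
    extern "C" __global__ void add_one(float* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] += 1.0f;
    }
    """
    ptx = nvcc.compile_cuda(code, target_format="ptx", arch="sm_80")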
- tvm.contrib.nvcc.find_cuda_path()[source]#
Utility function to find the CUDA path.
- Returns:
path -- Path to the CUDA root.
- Return type:
- tvm.contrib.nvcc.have_fp16(compute_version)[source]#
Whether fp16 support is provided by the given compute capability.
- Parameters:
compute_version (str) -- compute capability of a GPU (e.g. "6.0")
- tvm.contrib.nvcc.have_int8(compute_version)[source]#
Whether int8 support is provided by the given compute capability.
- Parameters:
compute_version (str) -- compute capability of a GPU (e.g. "6.1")
- tvm.contrib.nvcc.have_tensorcore(compute_version=None, target=None)[source]#
Whether TensorCore support is provided by the given compute capability; a usage sketch follows the parameter list.
- Parameters:
compute_version (str, optional) -- compute capability of a GPU (e.g. "7.0").
target (tvm.target.Target, optional) -- The compilation target; used to determine the arch if compute_version isn't specified.
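A small sketch of how these checks are typically used; the capability strings are illustrative:

    from tvm.contrib import nvcc

    # Feature checks keyed on the compute capability string.
    print(nvcc.have_fp16("7.0"))                        # expected True on Volta
    print(nvcc.have_int8("7.5"))                        # expected True on Turing
    print(nvcc.have_tensorcore(compute_version="7.0"))  # expected True on Volta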
tvm.contrib.pickle_memoize#
Memoize the result of a function via pickle, used to cache test cases.
- class tvm.contrib.pickle_memoize.Cache(key, save_at_exit)[source]#
A cache object for caching results.
- Parameters:
- property cache#
Return the cache, initializing on first use.
tvm.contrib.random#
External function interface to random library.
- tvm.contrib.random.normal(loc, scale, size)[source]#
Draw samples from a normal distribution.
Return random samples from a normal distribution.
- tvm.contrib.random.randint(low, high, size, dtype='int32')[source]#
Return random integers from low (inclusive) to high (exclusive). Return random integers from the "discrete uniform" distribution of the specified dtype in the "half-open" interval [low, high).
- tvm.contrib.random.uniform(low, high, size)[source]#
Draw samples from a uniform distribution.
Samples are uniformly distributed over the half-open interval [low, high) (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by uniform.
- Parameters:
low (float) -- Lower boundary of the output interval. All values generated will be greater than or equal to low.
high (float) -- Upper boundary of the output interval. All values generated will be less than high.
size (tuple of ints) -- Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn.
- Returns:
out -- A tensor with specified size and dtype.
- Return type:
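A minimal sketch, assuming TVM was built with the contrib random library enabled:

    from tvm import te
    from tvm.contrib import random as contrib_random

    # Declare a tensor filled with uniform samples in [0.0, 1.0).
    A = contrib_random.uniform(0.0, 1.0, size=(1024,))
    s = te.create_schedule(A.op)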
tvm.contrib.relay_viz#
Relay IR Visualizer
- class tvm.contrib.relay_viz.RelayVisualizer(relay_mod: IRModule, relay_param: Dict[str, NDArray] = None, plotter: Plotter = None, parser: VizParser = None)[source]#
Relay IR Visualizer; a usage sketch follows the parameter list.
- Parameters:
relay_mod (tvm.IRModule) -- Relay IR module.
relay_param (None | Dict[str, tvm.runtime.NDArray]) -- Relay parameter dictionary. Default None.
plotter (Plotter) -- An instance of class inheriting from Plotter interface. Default is an instance of terminal.TermPlotter.
parser (VizParser) -- An instance of class inheriting from VizParser interface. Default is an instance of terminal.TermVizParser.
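A minimal sketch rendering a tiny Relay module with the default terminal plotter; the module below is illustrative:

    import tvm
    from tvm import relay
    from tvm.contrib import relay_viz

    # Build a one-operator Relay module and print it as ASCII art.
    x = relay.var("x", shape=(1, 3, 224, 224))
    w = relay.var("w", shape=(16, 3, 3, 3))
    y = relay.nn.conv2d(x, w, kernel_size=(3, 3), channels=16, padding=(1, 1))
    mod = tvm.IRModule.from_expr(relay.Function([x, w], y))
    viz = relay_viz.RelayVisualizer(mod)   # TermPlotter/TermVizParser by default
    viz.render()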
tvm.contrib.relay_viz.dot#
Visualize Relay IR by the Graphviz DOT language.
- class tvm.contrib.relay_viz.dot.DotGraph(name: str, graph_attr: Dict[str, str] = None, node_attr: Dict[str, str] = None, edge_attr: Dict[str, str] = None, get_node_attr: Callable[[VizNode], Dict[str, str]] = None)[source]#
DOT graph for relay IR.
See also
tvm.contrib.relay_viz.dot.DotPlotter
- Parameters:
name (str) -- name of this graph.
graph_attr (Optional[Dict[str, str]]) -- key-value pairs for the graph.
node_attr (Optional[Dict[str, str]]) -- key-value pairs for all nodes.
edge_attr (Optional[Dict[str, str]]) -- key-value pairs for all edges.
get_node_attr (Optional[Callable[[VizNode], Dict[str, str]]]) -- A callable returning attributes for the node.
- class tvm.contrib.relay_viz.dot.DotPlotter(graph_attr: Dict[str, str] = None, node_attr: Dict[str, str] = None, edge_attr: Dict[str, str] = None, get_node_attr: Callable[[VizNode], Dict[str, str]] = None, render_kwargs: Dict[str, Any] = None)[source]#
DOT language graph plotter.
The plotter accepts various graphviz attributes for graphs, nodes, and edges. Please refer to https://graphviz.org/doc/info/attrs.html for available attributes.
- Parameters:
graph_attr (Optional[Dict[str, str]]) -- key-value pairs for all graphs.
node_attr (Optional[Dict[str, str]]) -- key-value pairs for all nodes.
edge_attr (Optional[Dict[str, str]]) -- key-value pairs for all edges.
get_node_attr (Optional[Callable[[VizNode], Dict[str, str]]]) -- A callable returning attributes for a specific node.
render_kwargs (Optional[Dict[str, Any]]) -- keyword arguments directly passed to graphviz.Digraph.render().
Examples
    from tvm.contrib import relay_viz
    from tvm.relay.testing import resnet

    mod, param = resnet.get_workload(num_layers=18)
    # graphviz attributes
    graph_attr = {"color": "red"}
    node_attr = {"color": "blue"}
    edge_attr = {"color": "black"}

    # VizNode is passed to the callback.
    # We want to color NCHW conv2d nodes. Also give Var a different shape.
    def get_node_attr(node):
        if "nn.conv2d" in node.type_name and "NCHW" in node.detail:
            return {
                "fillcolor": "green",
                "style": "filled",
                "shape": "box",
            }
        if "Var" in node.type_name:
            return {"shape": "ellipse"}
        return {"shape": "box"}

    # Create plotter and pass it to viz. Then render the graph.
    dot_plotter = relay_viz.DotPlotter(
        graph_attr=graph_attr,
        node_attr=node_attr,
        edge_attr=edge_attr,
        get_node_attr=get_node_attr)

    viz = relay_viz.RelayVisualizer(
        mod,
        relay_param=param,
        plotter=dot_plotter,
        parser=relay_viz.DotVizParser())
    viz.render("hello")
tvm.contrib.relay_viz.terminal#
Visualize Relay IR in AST text-form.
- class tvm.contrib.relay_viz.terminal.TermGraph(name: str)[source]#
Terminal graph for a Relay IR Module.
- Parameters:
name (str) -- name of this graph.
- edge(viz_edge: VizEdge) → None[source]#
Add an edge to the terminal graph.
- Parameters:
viz_edge (VizEdge) -- A VizEdge instance.
- class tvm.contrib.relay_viz.terminal.TermNode(viz_node: VizNode)[source]#
TermNode aims to generate text more suitable for terminal visualization.
- class tvm.contrib.relay_viz.terminal.TermPlotter[source]#
Terminal plotter.
- class tvm.contrib.relay_viz.terminal.TermVizParser[source]#
TermVizParser parses nodes and edges for TermPlotter.
tvm.contrib.relay_viz.interface#
Abstract class used by tvm.contrib.relay_viz.RelayVisualizer.
- class tvm.contrib.relay_viz.interface.DefaultVizParser[source]#
DefaultVizParser provides a set of logic to parse various Relay types. The logic is inspired by and heavily based on the visualize function in https://tvm.apache.org/2020/07/14/bert-pytorch-tvm
- get_node_edges(node: RelayExpr, relay_param: Dict[str, NDArray], node_to_id: Dict[RelayExpr, str]) → Tuple[VizNode | None, List[VizEdge]][source]#
Get VizNode and VizEdges for a relay.Expr.
- Parameters:
- Returns:
rv1 (Union[VizNode, None]) -- A VizNode representing the relay.Expr. If the relay.Expr is not intended to introduce a node to the graph, return None.
rv2 (List[VizEdge]) -- A list of VizEdges describing the connectivity of the relay.Expr. It can be an empty list to indicate no connectivity.
- class tvm.contrib.relay_viz.interface.Plotter[source]#
Plotter can render a collection of Graph interfaces to a file.
- class tvm.contrib.relay_viz.interface.VizEdge(start_node: str, end_node: str)[source]#
VizEdge connects two VizNodes.
- class tvm.contrib.relay_viz.interface.VizGraph[source]#
Abstract class for graph, which is composed of nodes and edges.
- class tvm.contrib.relay_viz.interface.VizNode(node_id: str, node_type: str, node_detail: str)[source]#
VizNode carries node information for the VizGraph interface.
- class tvm.contrib.relay_viz.interface.VizParser[source]#
VizParser parses out a VizNode and VizEdges from a relay.Expr.
- abstract get_node_edges(node: RelayExpr, relay_param: Dict[str, NDArray], node_to_id: Dict[RelayExpr, str]) → Tuple[VizNode | None, List[VizEdge]][source]#
Get VizNode and VizEdges for a relay.Expr.
- Parameters:
- Returns:
rv1 (Union[VizNode, None]) -- A VizNode representing the relay.Expr. If the relay.Expr is not intended to introduce a node to the graph, return None.
rv2 (List[VizEdge]) -- A list of VizEdges describing the connectivity of the relay.Expr. It can be an empty list to indicate no connectivity.
tvm.contrib.rocblas#
External function interface to rocBLAS libraries.
- tvm.contrib.rocblas.batch_matmul(lhs, rhs, transa=False, transb=False)[source]#
Create an extern op that computes the batched matrix multiplication of lhs and rhs with rocBLAS.
tvm.contrib.rocm#
Utility for the ROCm backend.
- tvm.contrib.rocm.find_lld(required=True)[source]#
Find ld.lld in the system.
- Parameters:
required (bool) -- Whether ld.lld is required; a runtime error is raised if it is required but cannot be found.
- Returns:
valid_list -- List of possible paths.
- Return type:
Notes
This function will first search for an ld.lld whose major LLVM version matches the one TVM was built with.
- tvm.contrib.rocm.find_rocm_path()[source]#
Utility function to find the ROCm path.
- Returns:
path -- Path to the ROCm root.
- Return type:
- tvm.contrib.rocm.have_matrixcore(compute_version=None)[source]#
Whether MatrixCore support is provided by the given compute capability.
- tvm.contrib.rocm.parse_compute_version(compute_version)[source]#
Parse a compute capability string into its major and minor version numbers.
- Parameters:
compute_version (str) -- compute capability of a GPU (e.g. "6.0")
- Returns:
major (int) -- major version number
minor (int) -- minor version number
tvm.contrib.sparse#
Tensor and Operation class for computation declaration.
- class tvm.contrib.sparse.CSRNDArray(arg1, device=None, shape=None)[source]#
Sparse tensor object in CSR format.
- class tvm.contrib.sparse.CSRPlaceholderOp(shape, nonzeros, dtype, name)[source]#
Placeholder class for CSR based sparse tensor representation.
- class tvm.contrib.sparse.SparsePlaceholderOp(shape, nonzeros, dtype, name)[source]#
Placeholder class for sparse tensor representations.
- tvm.contrib.sparse.array(source_array, device=None, shape=None, stype='csr')[source]#
Construct a sparse NDArray from a numpy.ndarray; a usage sketch follows.
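A minimal sketch wrapping a mostly-zero NumPy array as a CSR tensor:

    import numpy as np
    import tvm
    from tvm.contrib import sparse

    # Zero out most entries, then store the array in CSR format on CPU.
    dense = np.random.rand(16, 32).astype("float32")
    dense[dense < 0.9] = 0.0
    csr = sparse.array(dense, device=tvm.cpu(0))   # stype defaults to "csr"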
tvm.contrib.spirv#
Utility for interacting with SPIR-V tools.
tvm.contrib.tar#
Util to invoke tarball in the system.
- tvm.contrib.tar.normalize_file_list_by_unpacking_tars(temp, file_list)[source]#
Normalize the file list by unpacking tars in the list.
When a file name is a tar archive, it is untarred into a unique directory inside temp and the list of files in the tar is returned. When a file name is a normal file, it is simply added to the list.
This is useful for untarring objects from a tar archive and then turning them into a library, as in the sketch below.
- Parameters:
temp (tvm.contrib.utils.TempDirectory) -- A temp dir to hold the untarred files.
file_list (List[str]) -- List of paths.
- Returns:
ret_list -- An updated list of files.
- Return type:
List[str]
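A hypothetical sketch (the file names are placeholders) of flattening a mixed list of object files and tar archives before linking:

    from tvm.contrib import tar, utils

    temp = utils.tempdir()
    # "lib0.o" is kept as-is; "devc.tar" is unpacked into `temp` and replaced
    # by the files it contains.
    flat_files = tar.normalize_file_list_by_unpacking_tars(
        temp, ["lib0.o", "devc.tar"])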
tvm.contrib.utils#
Common system utilities.
- exception tvm.contrib.utils.DirectoryCreatedPastAtExit[source]#
Raised when a TempDirectory is created after the atexit hook runs.
- class tvm.contrib.utils.FileLock(path)[source]#
File lock object.
- Parameters:
path (str) -- The path to the lock.
- class tvm.contrib.utils.TempDirectory(custom_path=None, keep_for_debug=None)[source]#
Helper object to manage a temp directory during testing.
Automatically removes the directory when it goes out of scope; see the sketch below.
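A minimal sketch using the tempdir helper:

    from tvm.contrib import utils

    temp = utils.tempdir()              # creates a managed temporary directory
    obj_path = temp.relpath("model.o")  # absolute path to a file inside it
    # The directory and its contents are removed when `temp` goes out of scope.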
- tvm.contrib.utils.filelock(path)[source]#
Create a file lock that locks on path.
- Parameters:
path (str) -- The path to the lock.
- Returns:
lock
- Return type:
File lock object
tvm.contrib.xcode#
Utility to invoke the Xcode compiler toolchain.
- tvm.contrib.xcode.compile_coreml(model, model_name='main', out_dir='.')[source]#
Compile a CoreML model and return the compiled model path.
- tvm.contrib.xcode.compile_metal(code, path_target=None, sdk='macosx', min_os_version=None)[source]#
Compile Metal source with the CLI tool from the environment; a usage sketch follows.
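A minimal sketch (macOS with the Xcode command-line tools installed) compiling a trivial Metal kernel; the kernel source is illustrative:

    from tvm.contrib import xcode

    metal_src = """
    #include <metal_stdlib>
    using namespace metal;
    kernel void add_one(device float *x [[buffer(0)]],
                        uint i [[thread_position_in_grid]]) {
      x[i] += 1.0f;
    }
    """
    lib = xcode.compile_metal(metal_src, sdk="macosx")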