torch.ao.quantization.quantize
prepare
    Prepares a copy of the model for quantization calibration or quantization-aware training.

convert
    Converts submodules in the input module to a different module according to mapping, by calling the from_float method on the target module class.

quantize
    Quantize the input float model with post-training static quantization.

prepare_qat
    Prepares a copy of the model for quantization calibration or quantization-aware training and converts it to a quantized version.

quantize_qat
    Do quantization-aware training and output a quantized model.

quantize_dynamic
    Converts a float model to a dynamic (i.e. weights-only) quantized model.
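To show how these entry points fit together, below is a minimal sketch of the eager-mode flow: post-training static quantization via prepare, a calibration pass, and convert, followed by weights-only dynamic quantization via quantize_dynamic. The ToyModel class, tensor shapes, calibration loop, and the "fbgemm" backend choice are illustrative assumptions, not part of this reference.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qconfig,
    prepare, convert, quantize_dynamic,
)

class ToyModel(nn.Module):
    # Hypothetical model used only to illustrate the workflow.
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks the float -> quantized boundary
        self.fc = nn.Linear(8, 4)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # marks the quantized -> float boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.fc(x))
        return self.dequant(x)

model = ToyModel().eval()

# Post-training static quantization: attach a qconfig, insert observers with
# prepare(), run representative data through the model to calibrate, then
# swap in quantized modules with convert(). The "fbgemm" backend is an
# assumption; pick a backend supported on your platform.
model.qconfig = get_default_qconfig("fbgemm")
prepared = prepare(model)            # copy of the model with observers inserted
for _ in range(4):                   # calibration pass over toy data
    prepared(torch.randn(2, 8))
quantized = convert(prepared)        # quantized version of the model

# Dynamic (weights-only) quantization: no calibration step; weights are
# quantized ahead of time and activations are quantized on the fly.
dyn_quantized = quantize_dynamic(ToyModel().eval(), {nn.Linear}, dtype=torch.qint8)

print(quantized)
print(dyn_quantized)
```

For the one-shot variants, quantize() wraps the prepare/calibrate/convert steps into a single call that takes a user-supplied calibration function, and quantize_qat() does the same for quantization-aware training on a model prepared with prepare_qat.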