torch.ao.quantization.quantize.quantize#

torch.ao.quantization.quantize.quantize(model, run_fn, run_args, mapping=None, inplace=False)[source]#

Quantize the input float model with post-training static quantization.

First it prepares the model for calibration, then it calls run_fn to run the calibration step, and finally it converts the model to a quantized model.

Parameters
  • model – input float model

  • run_fn – a calibration function for calibrating the prepared model; it is called as run_fn(model, *run_args)

  • run_args – positional arguments for run_fn

  • mapping – correspondence between original module types and their quantized counterparts

  • inplace – carry out model transformations in-place; the original module is mutated

Returns

Quantized model.
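A minimal sketch of the workflow described above: a small float model with QuantStub/DeQuantStub boundaries is given a qconfig, calibrated by a user-supplied run_fn, and converted in one call. The module class M, the calibrate function, and the random calibration data are illustrative assumptions, not part of this API.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import QuantStub, DeQuantStub, get_default_qconfig
from torch.ao.quantization.quantize import quantize

# Illustrative float model: quantize() expects quant/dequant boundaries
# marked with QuantStub and DeQuantStub for eager-mode static quantization.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.linear = nn.Linear(4, 4)
        self.dequant = DeQuantStub()

    def forward(self, x):
        return self.dequant(self.linear(self.quant(x)))

# Illustrative run_fn: quantize() calls it as run_fn(model, *run_args),
# so it receives the prepared model plus the entries of run_args.
def calibrate(model, data):
    model.eval()
    with torch.no_grad():
        for batch in data:
            model(batch)

float_model = M().eval()
float_model.qconfig = get_default_qconfig("fbgemm")  # assumes the fbgemm backend

calib_data = [torch.randn(2, 4) for _ in range(4)]  # fake calibration batches
qmodel = quantize(float_model, calibrate, [calib_data])
```

After the call, submodules such as nn.Linear have been replaced by their quantized counterparts according to mapping (the default mapping when mapping=None), and qmodel accepts the same float inputs as the original model.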