torch.ao.quantization.quantize.quantize_qat

torch.ao.quantization.quantize.quantize_qat(model, run_fn, run_args, inplace=False)[source]

Perform quantization-aware training and output a quantized model.

Parameters
  • model – input model

  • run_fn – a function used to exercise the prepared model; it can simply run the prepared model or be a full training loop

  • run_args – positional arguments for run_fn

  • inplace – carry out model transformations in-place; the original module is mutated

Returns

Quantized model.
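
Example

The following is a minimal usage sketch, not taken from the original documentation. The module M, the train_loop function, and the synthetic data are illustrative assumptions; quantize_qat calls run_fn as run_fn(prepared_model, *run_args), trains, and then converts the model.

    import torch
    import torch.nn as nn
    from torch.ao.quantization import (
        QuantStub,
        DeQuantStub,
        get_default_qat_qconfig,
        quantize_qat,
    )

    # Hypothetical float model with quant/dequant stubs so activations
    # can be quantized in eager mode.
    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = QuantStub()
            self.fc = nn.Linear(4, 2)
            self.relu = nn.ReLU()
            self.dequant = DeQuantStub()

        def forward(self, x):
            x = self.quant(x)
            x = self.relu(self.fc(x))
            return self.dequant(x)

    # run_fn: any callable that exercises the prepared model. Here it is a
    # tiny training loop over synthetic data.
    def train_loop(model, data):
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()
        model.train()
        for x, y in data:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

    model = M()
    # Attach a QAT qconfig so fake-quant modules can be inserted during preparation.
    model.qconfig = get_default_qat_qconfig("fbgemm")

    # Synthetic training data for the sketch.
    data = [(torch.randn(8, 4), torch.randn(8, 2)) for _ in range(3)]

    # Prepares the model for QAT, runs train_loop(model, data),
    # then converts it to a quantized model.
    quantized_model = quantize_qat(model, train_loop, [data])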