Automatic Quantization
- automatic_quantization(self, input_model_path: str, output_dir: str, dataset_path: str | None, weight_precision: QuantizationPrecision = QuantizationPrecision.INT8, activation_precision: QuantizationPrecision = QuantizationPrecision.INT8, metric: SimilarityMetric = SimilarityMetric.SNR, threshold: float | int = 0, input_layers: List[Dict[str, int]] | None = None, wait_until_done: bool = True, sleep_interval: int = 30) QuantizerMetadata
Apply automatic quantization to a model, specifying precisions for weights and activations.
This method quantizes layers in the model based on the specified precision levels for weights and activations, while evaluating the quality of quantization using the defined metric. Only layers that meet the specified quality threshold are quantized; layers that do not meet this threshold remain unquantized to preserve model accuracy.
- Parameters:
input_model_path (str) – The file path where the model is located.
output_dir (str) – The local folder path to save the quantized model.
dataset_path (Optional[str]) – Path to the calibration dataset; required by some quantization modes.
weight_precision (QuantizationPrecision) – Target precision for model weights.
activation_precision (QuantizationPrecision) – Target precision for model activations.
metric (SimilarityMetric) – Metric used to evaluate quantization quality.
threshold (Union[float, int]) – Quality threshold for quantization. Layers that do not meet this threshold based on the metric are not quantized.
input_layers (List[Dict[str, int]], optional) – Target input shapes for quantization (e.g., converting a dynamic batch dimension to a static one).
wait_until_done (bool) – If True, block until the quantization result is available before returning. If False, submit the quantization request and return immediately.
sleep_interval (int) – Interval, in seconds, between status checks while waiting for the result when wait_until_done is True.
- Raises:
e – If an error occurs during model quantization.
- Returns:
Metadata for the quantization task.
- Return type:
QuantizerMetadata
Example
from netspresso import NetsPresso
from netspresso.enums import QuantizationPrecision
netspresso = NetsPresso(email="YOUR_EMAIL", password="YOUR_PASSWORD")
quantizer = netspresso.quantizer()
quantization_result = quantizer.automatic_quantization(
input_model_path="./examples/sample_models/test.onnx",
output_dir="./outputs/quantized/automatic_quantization",
dataset_path="./examples/sample_datasets/pickle_calibration_dataset_128x128.npy",
weight_precision=QuantizationPrecision.INT8,
activation_precision=QuantizationPrecision.INT8,
threshold=0,
)
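To build intuition for what an SNR-based quality threshold measures, here is a minimal NumPy sketch. It is illustrative only: the `fake_quantize_int8` and `snr_db` helpers and the 30 dB threshold are assumptions for the example, not part of the NetsPresso API, and the metric NetsPresso computes internally may differ in detail.

```python
import numpy as np

def fake_quantize_int8(x: np.ndarray) -> np.ndarray:
    """Symmetric per-tensor INT8 fake quantization (quantize, then dequantize)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale

def snr_db(original: np.ndarray, approximation: np.ndarray) -> float:
    """Signal-to-noise ratio in dB: signal power over quantization-error power."""
    noise = original - approximation
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

# Simulated layer activations standing in for a real model tensor.
rng = np.random.default_rng(0)
activations = rng.normal(size=(1, 64, 32, 32)).astype(np.float32)

quantized = fake_quantize_int8(activations)
quality = snr_db(activations, quantized)

# Hypothetical threshold: a layer whose SNR falls below it would be
# left unquantized to preserve accuracy.
threshold = 30.0
print(f"SNR: {quality:.1f} dB -> {'quantize' if quality >= threshold else 'keep FP32'}")
```

A higher `threshold` makes quantization more conservative: fewer layers pass the quality check, so more of the model stays at full precision.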