Convert Model

convert_model(
    input_model_path: str | Path,
    output_dir: str,
    target_device_name: Device | List[Device],
    input_shapes: Mapping[str, Tuple[int, ...] | Tuple[Tuple[int, ...], str]] | None = None,
    options: CompileOptions | str = CompileOptions(target_runtime=Runtime.TFLITE),
    job_name: str | None = None,
    single_compile: bool = True,
    calibration_data: Dataset | Mapping[str, List[np.ndarray]] | str | None = None,
    retry: bool = True,
) → ConverterMetadata | List[ConverterMetadata]

Convert a model using QAI Hub.

Parameters:
  • input_model_path – The path to the input model.

  • output_dir – The directory to save the converted model.

  • target_device_name – The device, or list of devices, to compile the model for.

  • input_shapes – A mapping from input name to input shape, e.g. {"image": (1, 3, 640, 640)}; each entry may optionally be a (shape, dtype) tuple.

  • options – The options to use for the conversion.

  • job_name – The name of the job.

  • single_compile – When multiple target devices are given, whether to share a single compile job across devices where possible.

  • calibration_data – Calibration data used for quantization, given as a QAI Hub Dataset, a mapping from input name to a list of numpy arrays, or a dataset ID.

  • retry – Whether to retry the conversion if it fails.
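For the mapping form of calibration_data, a minimal sketch (the input name "image" and the sample shape are illustrative assumptions chosen to match the example below, not values fixed by the API):

```python
import numpy as np

# Hypothetical calibration set: map each input name to a list of sample
# arrays whose shapes match the corresponding input_shapes entry.
# The name "image" and shape (1, 3, 640, 640) are illustrative assumptions.
calibration_data = {
    "image": [
        np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(16)
    ],
}
```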

Returns:

A ConverterMetadata object on success, or a list of ConverterMetadata objects when multiple target devices are given.

Return type:

Union[ConverterMetadata, List[ConverterMetadata]]
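Because the return type is a union, downstream code may want to normalize it; a small helper sketch (not part of the NetsPresso API) could look like:

```python
def as_metadata_list(result):
    """Normalize convert_model's return value to a list.

    convert_model returns a single ConverterMetadata for one target device
    and a list for multiple devices; this helper (an illustrative sketch,
    not part of the NetsPresso API) lets callers handle both uniformly.
    """
    return result if isinstance(result, list) else [result]
```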

Note

For details, see submit_compile_job in the QAI Hub API reference.

Example

from netspresso import NPQAI
from netspresso.np_qai import Device
from netspresso.np_qai.options import CompileOptions, Runtime, ComputeUnit, QuantizeFullType

QAI_HUB_API_TOKEN = "YOUR_QAI_HUB_API_TOKEN"
np_qai = NPQAI(api_token=QAI_HUB_API_TOKEN)

converter = np_qai.converter()

convert_options = CompileOptions(
    target_runtime=Runtime.TFLITE,
    compute_unit=[ComputeUnit.NPU],
    quantize_full_type=QuantizeFullType.INT8,
    quantize_io=True,
    quantize_io_type=QuantizeFullType.INT8,
)

IMG_SIZE = 640
INPUT_MODEL_PATH = "YOUR_INPUT_MODEL_PATH"
OUTPUT_DIR = "YOUR_OUTPUT_DIR"
JOB_NAME = "YOUR_JOB_NAME"
DEVICE_NAME = "QCS6490 (Proxy)"
converted_result = converter.convert_model(
    input_model_path=INPUT_MODEL_PATH,
    output_dir=OUTPUT_DIR,
    target_device_name=Device(DEVICE_NAME),
    options=convert_options,
    input_shapes=dict(image=(1, 3, IMG_SIZE, IMG_SIZE)),
    job_name=JOB_NAME,
)

print("Conversion task started")

# Monitor task status (poll with a short sleep to avoid a busy loop)
import time

while True:
    status = converter.get_convert_task_status(converted_result.convert_task_info.convert_task_uuid)
    if status.finished:
        converted_result = converter.update_convert_task(converted_result)
        print("Conversion task completed")
        break
    else:
        print("Conversion task is still running")
        time.sleep(30)
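The polling pattern above can be factored into a generic helper; a plain-Python sketch (the helper and its interval/timeout defaults are illustrative, not part of the NetsPresso API):

```python
import time

def poll_until(check, interval=5.0, timeout=600.0):
    """Call check() until it returns a truthy value or timeout elapses.

    Generic polling sketch, not part of the NetsPresso API; check() would
    wrap e.g. converter.get_convert_task_status(...) and return the status
    object once status.finished is true.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("polling timed out")
        time.sleep(interval)
```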