
Define and Use Command-line Options for ONNX-MLIR

Command-line options can be used to alter the default behavior of onnx-mlir or onnx-mlir-opt, and to help users with experimenting, debugging, or performance tuning. We implemented command-line handling in ONNX-MLIR based on the command-line utility provided by LLVM. We did not define Option or ListOption with MLIR pass classes (see the discussion).

Organize Options

Refer to the LLVM documentation for the basic idea of how to define an option. In ONNX-MLIR, options are put into groups (llvm::cl::OptionCategory). All command-line options for onnx-mlir are in the OnnxMlirOptions group.
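As a minimal sketch of this grouping (the example-only-flag option below is purely illustrative and does not exist in the code base), an option is attached to the category with llvm::cl::cat:

```cpp
#include "llvm/Support/CommandLine.h"

// Category that groups onnx-mlir options in the --help output.
llvm::cl::OptionCategory OnnxMlirOptions(
    "ONNX-MLIR Options", "Options controlling the onnx-mlir driver.");

// Hypothetical option registered under that category.
llvm::cl::opt<bool> exampleOnlyFlag(
    "example-only-flag",
    llvm::cl::desc("Illustrative flag; not a real onnx-mlir option."),
    llvm::cl::init(false),
    llvm::cl::cat(OnnxMlirOptions));
```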

Code structure

Command-line options should be defined in src/Compiler/CompilerOptions.cpp and declared in src/Compiler/CompilerOptions.hpp.
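Continuing the illustrative example-only-flag from above, the split between the two files would look roughly like this:

```cpp
// src/Compiler/CompilerOptions.hpp (sketch): declare the option so that
// other translation units can read the parsed value.
extern llvm::cl::opt<bool> exampleOnlyFlag;

// src/Compiler/CompilerOptions.cpp (sketch): define the option and register
// it in the OnnxMlirOptions category, as shown in the previous snippet.
llvm::cl::opt<bool> exampleOnlyFlag(
    "example-only-flag",
    llvm::cl::desc("Illustrative flag; not a real onnx-mlir option."),
    llvm::cl::init(false), llvm::cl::cat(OnnxMlirOptions));
```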

Define an option

Define an option local to a transformation

Use MLIR’s Pass Options to configure passes.
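A minimal sketch of a pass-local option using MLIR's Option class template (the pass and its option are hypothetical, shown only to illustrate the pattern):

```cpp
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/Pass.h"

namespace {
// Hypothetical pass used only to illustrate a pass-local option.
struct ExampleTransformPass
    : public mlir::PassWrapper<ExampleTransformPass,
                               mlir::OperationPass<mlir::ModuleOp>> {
  ExampleTransformPass() = default;
  // A user-defined copy constructor is needed because pass options are not
  // copyable; option values are transferred separately when a pass is cloned.
  ExampleTransformPass(const ExampleTransformPass &) {}

  // Pass-local option, parsed from the textual pipeline specification,
  // e.g. example-transform{enable-feature=true}.
  Option<bool> enableFeature{*this, "enable-feature",
      llvm::cl::desc("Illustrative pass-local flag."),
      llvm::cl::init(false)};

  void runOnOperation() override {
    if (enableFeature) {
      // ... transformation guarded by the option ...
    }
  }
};
} // namespace
```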