onnx.model_container¶
ModelContainer¶
- class onnx.model_container.ModelContainer[source]¶
Implements an API to store large tensors outside the main ModelProto. It avoids copying large initializers when defining the model, and these initializers are never serialized through protobuf. No tensor is stored on disk until the user explicitly saves the model.
- enumerate_graph_protos() Iterable[GraphProto] [source]¶
Enumerates all GraphProtos in a model.
- is_in_memory_external_initializer(name: str) bool [source]¶
Tells whether an initializer name refers to an external initializer stored in memory. Such names must start with ‘#’.
- load(file_path: str, load_large_initializers: bool = True)[source]¶
Load the large model.
- Parameters:
file_path – model file
load_large_initializers – if True, loads the large initializers; if False, the model is incomplete but can still be inspected without being executed, and the method _load_large_initializers() can be used to load them later
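A minimal sketch of this second mode, assuming a file model.onnx produced by an earlier ModelContainer.save() call: the structure is loaded and inspected without the large initializers, which are brought in later on demand.

```python
from onnx.model_container import ModelContainer

container = ModelContainer()
# Load only the ModelProto; the large initializers stay on disk for now.
container.load("model.onnx", load_large_initializers=False)

# The structure can be inspected even though the model is incomplete.
for graph in container.enumerate_graph_protos():
    print(graph.name, [node.op_type for node in graph.node])

# Bring the large initializers into memory once they are actually needed.
container._load_large_initializers()
```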
- save(file_path: str, all_tensors_to_one_file: bool = False) ModelProto [source]¶
Save the large model. The function returns a ModelProto: the current one if the model did not need any modification, or a modified copy of it if changes were required, such as assigning a file name to every external tensor (a complete sketch follows the parameter list).
- Parameters:
file_path – model file
all_tensors_to_one_file – if True, saves all large tensors in one file; otherwise, one file per large tensor
- Returns:
the saved ModelProto
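A hedged end-to-end sketch combining the pieces: the container is built with make_large_tensor_proto and make_large_model (both documented below), saved to a temporary directory, then loaded back. The one-node graph, the tensor name W and the location #weights are illustrative choices, not part of the API.

```python
import os
import tempfile

import numpy as np
import onnx
import onnx.helper as oh
from onnx.model_container import (
    ModelContainer,
    make_large_model,
    make_large_tensor_proto,
)

# One MatMul node whose weight W lives outside the ModelProto.
graph = oh.make_graph(
    [oh.make_node("MatMul", ["X", "W"], ["Y"])],
    "example",
    [oh.make_tensor_value_info("X", onnx.TensorProto.FLOAT, [None, 128])],
    [oh.make_tensor_value_info("Y", onnx.TensorProto.FLOAT, [None, 64])],
    [make_large_tensor_proto("#weights", "W", onnx.TensorProto.FLOAT, (128, 64))],
)
container = make_large_model(
    graph,
    large_initializers={"#weights": np.random.rand(128, 64).astype(np.float32)},
)
# The '#'-prefixed location identifies the in-memory external initializer.
print(container.is_in_memory_external_initializer("#weights"))  # True

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.onnx")
    # Writes the ModelProto and, since all_tensors_to_one_file=True,
    # a single companion file holding every large tensor.
    container.save(path, all_tensors_to_one_file=True)

    # Round trip: reload the model and its large initializers.
    restored = ModelContainer()
    restored.load(path)
    for graph_proto in restored.enumerate_graph_protos():
        print(graph_proto.name)
```

With all_tensors_to_one_file=False, save() would instead create one data file per large tensor next to the model file.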
make_large_model¶
- onnx.model_container.make_large_model(graph: GraphProto, large_initializers: dict[str, ndarray] | None = None, **kwargs: Any) ModelContainer [source]¶
Construct a ModelContainer.
The C API and the Python API of protobuf cannot exchange protos without serializing them. This function therefore relies on the Python API of ModelContainer (see the sketch below).
- Parameters:
graph – the GraphProto returned by make_graph
large_initializers – dictionary mapping a name to a large tensor; a large tensor is any Python object supporting the DLPack protocol, its ownership is transferred to the ModelContainer, and it must define a method tobytes as numpy arrays do
**kwargs – any attribute to add to the returned instance
- Returns:
ModelContainer
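A construction-only sketch with a single hypothetical weight W: the dictionary key repeats the ‘#’-prefixed location given to make_large_tensor_proto, so the numpy array is attached to the container without ever passing through protobuf.

```python
import numpy as np
import onnx
import onnx.helper as oh
from onnx.model_container import make_large_model, make_large_tensor_proto

graph = oh.make_graph(
    [oh.make_node("MatMul", ["X", "W"], ["Y"])],
    "linear",
    [oh.make_tensor_value_info("X", onnx.TensorProto.FLOAT, [None, 4])],
    [oh.make_tensor_value_info("Y", onnx.TensorProto.FLOAT, [None, 2])],
    # Placeholder initializer: shape and type only, no data.
    [make_large_tensor_proto("#weights", "W", onnx.TensorProto.FLOAT, (4, 2))],
)

# The key matches the '#' location above; ownership of the array moves to
# the container and the data is never copied into the ModelProto.
container = make_large_model(
    graph,
    large_initializers={"#weights": np.random.rand(4, 2).astype(np.float32)},
)
print(container.is_in_memory_external_initializer("#weights"))  # True
```

Nothing is written to disk at this point; serialization only happens when ModelContainer.save() is called.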
make_large_tensor_proto¶
- onnx.model_container.make_large_tensor_proto(location: str, tensor_name: str, tensor_type: int, shape: tuple[int, ...]) TensorProto [source]¶
Create an external tensor.
- Parameters:
location – unique identifier (not necessarily a path)
tensor_name – tensor name in the graph
tensor_type – onnx type
shape – shape of the initializer
- Returns:
the created tensor
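A small sketch of the placeholder alone; the location #weights and the tensor name W are illustrative. The returned TensorProto carries only the name, type, shape and external location, and is meant to be placed in the initializer list of make_graph, with the actual data supplied later to make_large_model under the same location key.

```python
import onnx
from onnx.model_container import make_large_tensor_proto

# Placeholder for a float32 initializer named "W"; the data itself is
# provided later to make_large_model under the key "#weights".
proto = make_large_tensor_proto("#weights", "W", onnx.TensorProto.FLOAT, (1024, 1024))

print(proto.name)         # "W"
print(proto.data_type)    # 1, i.e. onnx.TensorProto.FLOAT
print(tuple(proto.dims))  # (1024, 1024)
```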