Add support for torch.export exported models #1499
tolleybot wants to merge 1 commit into dotnet:main
Conversation
@dotnet-policy-service agree
Force-pushed 1f64f5b to af266bd
Build failures: missing LibTorch 2.9.0 packages

I believe the CI builds are failing because the build system requires .sha files for LibTorch package validation, and these are missing for LibTorch 2.9.0.

Missing SHA files:

Package availability check:

Why my local tests passed: I was building against a local PyTorch Python installation. Should we wait for PyTorch to publish all LibTorch 2.9.0 packages?
@masaru-kimura-hacarus Thank you for the detailed investigation and the Gemini Deep Research report! You're absolutely right; I was looking for the wrong package name. I've just pushed the correct SHA files using the new naming convention. Let's see if the CI builds pass now.
👋 Friendly ping on this PR! It's been open for a little while and I wanted to check if there's anything I can do to help move it forward. Happy to address any feedback or make adjustments as needed.
Force-pushed f5d82b7 to b1c3dac
Rebased onto latest main with libtorch 2.10 backend. Regenerated all .pt2 test models with PyTorch 2.10. Ready for review.
Pull request overview
Adds a new TorchSharp integration for running PyTorch torch.export / AOTInductor-packaged .pt2 models (via LibTorch 2.9+ torch::inductor::AOTIModelPackageLoader), enabling inference-only execution from .NET.
Changes:
- Introduces native (C++) bindings to load and run `.pt2` packages and wires them into TorchSharp via P/Invoke.
- Adds a managed `torch.export` API (`ExportedProgram` + generic typed returns) to load/run exported programs.
- Adds `.pt2` test fixtures, a Python generator script, and new unit tests covering basic load/run scenarios.
Reviewed changes
Copilot reviewed 11 out of 17 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/Native/LibTorchSharp/THSExport.h | Declares the native API for loading/running AOTI .pt2 exported programs. |
| src/Native/LibTorchSharp/THSExport.cpp | Implements the wrapper over torch::inductor::AOTIModelPackageLoader and marshals tensor inputs/outputs. |
| src/Native/LibTorchSharp/Utils.h | Adds the ExportedProgram module typedef (and currently the AOTI header include). |
| src/Native/LibTorchSharp/THSJIT.h | Exposes helper declarations intended for sharing with export support. |
| src/Native/LibTorchSharp/CMakeLists.txt | Adds the new export source/header to the native build. |
| src/TorchSharp/PInvoke/LibTorchSharp.THSExport.cs | Adds P/Invoke declarations for the new native export APIs. |
| src/TorchSharp/Export/ExportedProgram.cs | Adds the managed torch.export.load() + ExportedProgram runtime wrapper and typed-return convenience API. |
| test/TorchSharpTest/TestExport.cs | Adds unit tests covering load/run with single output, multi-input, tuple output, and array output. |
| test/TorchSharpTest/generate_export_models.py | Adds a script to generate AOTInductor-packaged .pt2 test fixtures. |
| test/TorchSharpTest/TorchSharpTest.csproj | Ensures .pt2 fixtures are copied to the test output directory. |
| RELEASENOTES.md | Notes the new torch.export support under API changes. |
Comments suppressed due to low confidence (2)

test/TorchSharpTest/TestExport.cs:75

`ExportedProgram<TResult>` adds special handling for `ValueTuple<,,>` (3 tensor outputs), but the current tests only cover single output, `Tensor[]`, and `ValueTuple<,>`. Add a unit test (and a small generated `.pt2` fixture) that returns 3 tensors to ensure the `ValueTuple<,,>` path works end-to-end.

```csharp
public void TestLoadExport_TupleOutput()
{
    // Test loading a model that returns a tuple
    using var exported = torch.export.load<(Tensor, Tensor)>(@"tuple_out.export.pt2");
    Assert.NotNull(exported);
```

src/Native/LibTorchSharp/Utils.h:8

`Utils.h` is included by most native binding files; adding `torch/csrc/inductor/aoti_package/model_package_loader.h` here makes the entire native build depend on this internal header even when torch.export support isn't used. Since `ExportedProgramModule` is just a pointer typedef, consider forward-declaring `torch::inductor::AOTIModelPackageLoader` and/or moving the include + typedef into `THSExport.h` to keep compile dependencies localized.

```cpp
#include "torch/torch.h"
#include "torch/csrc/inductor/aoti_package/model_package_loader.h"
```
```csharp
// Free the native array (tensors are now owned by managed Tensor objects)
Marshal.FreeHGlobal(result_ptr);
```
result_ptr is freed with Marshal.FreeHGlobal, but the native side allocates the returned pointer array with C++ new[] (new Tensor[...]). This allocator/free mismatch can crash or corrupt the heap. Expose a native free API that uses delete[] (and call it here), or change the native allocation to malloc/CoTaskMemAlloc to match FreeHGlobal.
Fixed. Added a dedicated THSExport_Module_run_free_results() native function that uses delete[] to free the array. The C# side now calls this instead of Marshal.FreeHGlobal.
```cpp
// Allocate output array and copy results
*result_length = outputs.size();
*result_tensors = new Tensor[outputs.size()];
```
The returned pointer array is allocated with new Tensor[outputs.size()] but there is no corresponding exported API to free it from managed code (and FreeHGlobal is not compatible with new[]). Add an exported free function that delete[]s this array (or switch to a caller-provided allocator callback), and consider using size_t/int64_t for result_length to avoid truncation from outputs.size().
Fixed both issues. Added THSExport_Module_run_free_results() for proper delete[] cleanup, and changed result_length from int to int64_t to avoid truncation.
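The allocator-pairing fix agreed in these threads can be sketched as a minimal, self-contained C++ fragment. This is illustrative only: `Tensor` here is a stand-in handle type, the signatures are simplified (the real functions also take the module handle and input tensors), and only the function names mirror the PR.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for the opaque tensor handle the real bindings use
// (the actual code stores pointers to torch::Tensor).
using Tensor = void*;

// Sketch of the native run(): the result array is allocated with new[],
// so the managed side must NOT release it with Marshal.FreeHGlobal,
// which pairs with a different allocator.
void THSExport_Module_run(Tensor** result_tensors, int64_t* result_length)
{
    const int64_t n = 3;            // pretend the model produced 3 outputs
    *result_length = n;             // int64_t, so outputs.size() is not truncated
    *result_tensors = new Tensor[n];
    for (int64_t i = 0; i < n; ++i)
        (*result_tensors)[i] = nullptr;  // real code: new torch::Tensor(outputs[i])
}

// Matching free function exported to managed code: delete[] pairs with new[].
void THSExport_Module_run_free_results(Tensor* results)
{
    delete[] results;
}
```

On the managed side, the pattern is then: call the run P/Invoke, wrap each returned handle in a managed Tensor, and release the array pointer via the exported free function rather than Marshal.FreeHGlobal.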
Force-pushed b1c3dac to 9427efe
Hey @tolleybot, can you also address the Copilot comments? Also, there are some test failures from TestExport.
Add functionality to load and execute PyTorch models exported via torch.export (.pt2 files) using AOTInductor compilation, enabling .NET applications to run ExportedProgram models.

Native layer:
- THSExport.h/.cpp C++ wrappers using AOTIModelPackageLoader API
- ExportedProgramModule typedef localized in THSExport.h
- CMakeLists.txt updated to include THSExport sources
- Proper memory management with dedicated free function for result arrays

Managed layer:
- LibTorchSharp.THSExport.cs PInvoke declarations
- ExportedProgram and ExportedProgram&lt;TResult&gt; classes in Export namespace
- torch.export.load() API following PyTorch conventions
- Correct allocator pairing (native delete[] via THSExport_Module_run_free_results)

Capabilities:
- Load .pt2 files compiled with torch._inductor.aoti_compile_and_package()
- Inference-only forward pass with type-safe generics
- Single tensor, array, and 2/3-tuple output support
- IDisposable resource cleanup

Tests:
- 8 unit tests covering load, execute, multi-input, tuple/list/3-tuple outputs
- 7 test .pt2 models generated with PyTorch 2.10
- generate_export_models.py for model regeneration

Fixes dotnet#1498
Force-pushed 9427efe to 4b97275
@alinpahontu2912 Thanks for the review! Addressed the Copilot comments inline in each thread. The TestExport CI failures are fixed.



Add support for torch.export exported models (#1498)
Implements functionality to load and execute PyTorch models exported via torch.export (.pt2 files), enabling .NET applications to run ExportedProgram models as the PyTorch ecosystem transitions from ONNX to torch.export.
Summary
This PR adds support for loading and running AOTInductor-compiled `.pt2` models in TorchSharp using `torch::inductor::AOTIModelPackageLoader` from LibTorch 2.9+.

Key Points:
- Models are compiled with `torch._inductor.aoti_compile_and_package()` in Python

Implementation
Native Layer (C++)
Files:
- `src/Native/LibTorchSharp/Utils.h` - Added AOTIModelPackageLoader header include
- `src/Native/LibTorchSharp/THSExport.h` - C++ API declarations
- `src/Native/LibTorchSharp/THSExport.cpp` - Implementation using `torch::inductor::AOTIModelPackageLoader`

Key Changes:
Managed Layer (C#)
Files:
- `src/TorchSharp/PInvoke/LibTorchSharp.THSExport.cs` - PInvoke declarations
- `src/TorchSharp/Export/ExportedProgram.cs` - High-level C# API

API Design:
Features:
- `IDisposable` for proper resource cleanup
- `ExportedProgram<TResult>` for type-safe returns
- `run()`, `forward()`, and `call()` methods (all equivalent)

Testing
Files:
- `test/TorchSharpTest/TestExport.cs` - 7 comprehensive unit tests
- `test/TorchSharpTest/generate_export_models.py` - Python script to generate test models
- `test/TorchSharpTest/*.pt2` - 6 test models

Test Coverage:
All 7 tests pass successfully.
Dependencies
Updated:
- `build/Dependencies.props` - Updated LibTorch from 2.7.1 to 2.9.0

LibTorch 2.9.0 includes the `torch::inductor::AOTIModelPackageLoader` implementation that was previously only available in PyTorch source code.
Two .pt2 Formats
PyTorch has two different .pt2 export formats:

1. Python-only (from `torch.export.save()`)
2. AOTInductor-compiled (from `torch._inductor.aoti_compile_and_package()`)

Python Model Generation
To create compatible .pt2 files:
Limitations
Performance
According to PyTorch documentation, AOTInductor provides:
Testing
Migration Guide
For users currently using TorchScript:
Before (TorchScript):
After (torch.export):
References

- `torch/csrc/inductor/aoti_package/model_package_loader.h`

Fixes #1498