On-device training with CUDA enabled #26109
ruijinUofM asked this question in Training Q&A
Hello, I have a use case for ONNX on-device training that I'm interested in. Specifically, this line in the ONNX training documentation: "Improving data privacy and security, especially when working with sensitive data that cannot be shared with a server or a cloud."
I'd like to be able to create training artifacts and then send them to an isolated environment to train on data with sensitivity restrictions, so this sounds perfect. However, I still want to be able to use accelerators, specifically NVIDIA GPUs, on the training device. Is this possible today? I see the offline phase described here, but the training phase leads to instructions here, and it's not clear to me whether the training runtime has CUDA support.
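To make this concrete, here's a rough sketch of the two-phase workflow I have in mind, based on the artifact-generation utilities in onnxruntime-training. The model path, parameter names, and batch data are placeholders, and the `device="cuda"` argument on `Module` is exactly the part I can't confirm is supported:

```python
# Offline phase (environment without data restrictions): generate training artifacts.
import onnx
import numpy as np
from onnxruntime.training import artifacts

base_model = onnx.load("model.onnx")  # placeholder path to my inference model
artifacts.generate_artifacts(
    base_model,
    requires_grad=["fc1.weight", "fc1.bias"],  # placeholder parameter names
    loss=artifacts.LossType.CrossEntropyLoss,
    optimizer=artifacts.OptimType.AdamW,
    artifact_directory="training_artifacts",
)

# On-device phase (isolated environment with NVIDIA GPUs): consume the artifacts.
from onnxruntime.training.api import CheckpointState, Module, Optimizer

state = CheckpointState.load_checkpoint("training_artifacts/checkpoint")
module = Module(
    "training_artifacts/training_model.onnx",
    state,
    "training_artifacts/eval_model.onnx",
    device="cuda",  # <-- the part I'm unsure the on-device training runtime supports
)
optimizer = Optimizer("training_artifacts/optimizer_model.onnx", module)

module.train()
for _ in range(10):  # stand-in for iterating a real data loader
    inputs = np.random.rand(32, 784).astype(np.float32)      # placeholder batch
    labels = np.random.randint(0, 10, size=32).astype(np.int64)
    loss = module(inputs, labels)
    optimizer.step()
    module.lazy_reset_grad()
```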
Otherwise, if this isn't supported, can I do something similar using the large model training framework? Tutorials seem to hint that the graph management is internal to ORTModule, so I'm not sure if there's a clean way to separate the training graph creation and the training execution in that workflow.
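For comparison, this is roughly what I understand the ORTModule path to look like (the model and data here are stand-ins). Everything happens in one process on one machine, which is why I don't see where I could split training graph creation from training execution:

```python
# Alternative: the large model training path, where ORTModule wraps a torch module
# and builds/manages the training graph internally on the first forward pass.
import torch
from onnxruntime.training.ortmodule import ORTModule

model = torch.nn.Sequential(        # placeholder model
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).to("cuda")
model = ORTModule(model)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(10):  # stand-in for a real data loader
    inputs = torch.randn(32, 784, device="cuda")
    labels = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
```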
Thanks!