[ONNX] Add dynamic shapes support (& in-browser inference w/ Transformers.js)#79
xenova wants to merge 2 commits into roboflow:develop
Conversation
Hi @xenova 👋🏻 this looks very interesting! Could I ask you to accept the CLA? Without it, I won't be able to merge the PR. Do I understand correctly that, after the changes to the export, it will be possible to perform batch inference with any number of images, and that these images need to share the same width and height, both divisible by 56? Additionally, since the Roboflow app relies on the ONNX export output, I'll need to sync with them before merging this PR.
That is correct!
Sounds good :) The default export code will produce models with the same signature as before. However, the models I've uploaded to the HF Hub just have some of the input and output node names updated to fit with the transformers/transformers.js standards. So, if you'd like them to be 100% backwards compatible, you can just re-export with the steps outlined in the README.
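To make the batching constraint above concrete, here is a minimal sketch (function names are hypothetical, not part of the repo) that validates a batch against the rule confirmed in this thread: every image in a batch must share one height/width pair, and both must be divisible by 56.

```python
def is_valid_size(height: int, width: int, stride: int = 56) -> bool:
    """Check that a single image size satisfies the export constraint."""
    return height % stride == 0 and width % stride == 0

def validate_batch(sizes: list[tuple[int, int]], stride: int = 56) -> None:
    """All images in a batch must share one (H, W), with both divisible by `stride`."""
    if not sizes:
        raise ValueError("empty batch")
    if len(set(sizes)) != 1:
        raise ValueError(f"mixed sizes in batch: {sorted(set(sizes))}")
    h, w = sizes[0]
    if not is_valid_size(h, w, stride):
        raise ValueError(f"{h}x{w} is not divisible by {stride}")

validate_batch([(560, 560), (560, 560)])  # OK: 560 = 56 * 10
# validate_batch([(560, 600)])            # would raise: 600 % 56 != 0
```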
Sure thing! I'm having some issues with the link given by the bot above, but according to CONTRIBUTING.md, it should be okay to add a comment stating: Let me know if that works!
@xenova I believe that needs to be a separate comment: "I have read the CLA Document and I sign the CLA." But without the
I have read the CLA Document and I sign the CLA.
@xenova our automated system is having trouble right now. We accept #79 (comment) as agreement, so your PR can be merged.
@Matvezy it's the ONNX PR we spoke about yesterday. Any chance you could take a look and confirm there are no blockers for merging?
@xenova Can you try to accept the CLA again at https://cla-assistant.io/roboflow/rf-detr?pullRequest=79? Apologies for the inconvenience.
Hi! Pulling out the precomputed interpolated positional embeddings will make the ONNX graph slower for any given image size, which is why we precomputed them in the first place. Have you tested latency compared to the existing version?
CLA should be good now! 🤗 Let me know if I need to sign again or anything 👍
Latency tests for exactly 560x560 inputs would be interesting, if anyone has an environment to do benchmarking (unfortunately, latency comparisons at other sizes aren't possible, since the existing export is fixed to a single input size). Another option could be to pre-compute the 560x560 positional embeddings and use those as the default, but the problem then would be that any other input size would be slower.
You don't need the exact hardware to do a latency test, although of course it won't be comparable to the official result without it. You can use other hardware to provide some evidence that there is or is not a slowdown. But without evidence that this does not cause a slowdown (and it is likely to), this will not be merged as-is. One other option is a flag that enables it on export and defaults to false. But I'd still like to see numbers. We could run the benchmark ourselves at some point, but it will be a while before we have the bandwidth.
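For anyone wanting to gather the latency evidence requested here, a small stdlib timing harness like the following could wrap each model's inference call on whatever hardware is available. The harness itself is generic; the `session.run` usage shown in the comment assumes onnxruntime is installed and the input name matches the exported graph.

```python
import statistics
import time
from typing import Callable

def benchmark(fn: Callable[[], object], warmup: int = 5, runs: int = 30) -> dict:
    """Time `fn` after a warmup phase; return mean/median/min latency in ms."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return {
        "mean_ms": statistics.fmean(samples),
        "median_ms": statistics.median(samples),
        "min_ms": min(samples),
    }

# With onnxruntime, `fn` could be e.g.
#   lambda: session.run(None, {"input": batch})
# run once against the old (precomputed-embedding) export and once against
# the dynamic export, on the same hardware with the same 560x560 input.
stats = benchmark(lambda: sum(range(1000)), warmup=2, runs=10)
```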
@xenova Can you try to sign the CLA again? Apologies for all of the inconvenience. Of note, you will need JavaScript enabled to sign the CLA.
Hi @xenova, I am trying to export a custom ONNX model with a dynamic batch size and then create a .engine file from it. However, I am facing some problems when generating the engine: I've managed to generate it with a fixed batch size, but not with a dynamic one. Have you tested this? Thanks!

Description
This PR adds support for exporting RF-DETR to ONNX with dynamic input and output shapes. This means you can now supply images of variable batch_size, height, and width, both for backbone-only and full exports (provided width and height are divisible by 56).
I have uploaded the checkpoints (along with various quantizations) to the Hugging Face Hub:
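As a sketch of what the dynamic shapes enable, the following assembles a variable-size batch for the exported model. The helper name is hypothetical and the input node name is model-specific (this PR renames some nodes to match the transformers.js convention), so treat this as an illustration of the shape contract rather than the repo's API.

```python
import numpy as np

def make_batch(images: list[np.ndarray]) -> np.ndarray:
    """Stack HWC uint8 images into an NCHW float32 batch in [0, 1].

    All images must share one (H, W), with both divisible by 56,
    per the export's dynamic-shape constraint.
    """
    h, w = images[0].shape[:2]
    assert h % 56 == 0 and w % 56 == 0, "H and W must be divisible by 56"
    assert all(im.shape[:2] == (h, w) for im in images), "mixed sizes"
    batch = np.stack(images).astype(np.float32) / 255.0  # (N, H, W, C)
    return batch.transpose(0, 3, 1, 2)                   # (N, C, H, W)

# Any batch size and any 56-divisible resolution should now be accepted:
imgs = [np.zeros((448, 672, 3), dtype=np.uint8) for _ in range(4)]
batch = make_batch(imgs)  # shape (4, 3, 448, 672)
# With onnxruntime this could then feed the exported model, e.g.
#   session.run(None, {input_name: batch})  # input_name is model-specific
```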
Type of change
How has this change been tested? Please provide a test case or example of how you tested the change.
Exporting:
Similarly for RFDETRLarge.
Testing:
Running with Transformers.js, the model outputs the correct response (i.e., valid model, and valid dynamic shape support). See PR at huggingface/transformers.js#1260.
Any specific deployment considerations
I would recommend further optimizing with the amazing onnxslim library; you can reduce model size quite a bit (eliminating tied weights and redundant ops).
Base:
Large:
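A minimal sketch of the recommended onnxslim pass, assuming onnxslim is installed and exposes its `slim()` entry point (check its docs for the exact signature); the file paths are examples only. The size-report helper is plain Python.

```python
import os

def pct_reduction(before_bytes: int, after_bytes: int) -> float:
    """Size reduction as a percentage of the original file size."""
    return 100.0 * (before_bytes - after_bytes) / before_bytes

if __name__ == "__main__":
    # Assumption: onnxslim provides a `slim(input, output)` helper.
    from onnxslim import slim
    src, dst = "inference_model.onnx", "inference_model.slim.onnx"
    slim(src, dst)
    saved = pct_reduction(os.path.getsize(src), os.path.getsize(dst))
    print(f"model size reduced by {saved:.1f}%")
```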
Docs