Conversation
Let's hold this PR until #129 is merged.
@valavanisleonidas #129 got merged. We can work on this PR now.
Yes, I saw that, thank you. Should we also support a dynamic batch size for ONNX models, since the predict function supports batch inference?
@SkalskiP I think the PR is mostly ready. Two things to mention.
Hey @SkalskiP, any news on this PR?
Hi, dear developer, how is it going? I also need to use the ONNX model for prediction.
This PR is not up to date with the latest changes. I can update it if @SkalskiP and the team are still interested in merging it.
Hello @valavanisleonidas, any update on this PR? It's a super interesting PR which a lot of people are waiting for, I guess.
Hello @MustafaKiwaPI, it's not up to me to merge this. Since I wanted to use it as well, I forked the code with this change and I use the forked version. If the maintainers of the project are still interested, we can update the branch so it can be merged.
Description
This PR adds support for ONNX-based inference in RF-DETR, enabling faster model execution compared to PyTorch CPU inference. This change introduces an alternative inference path using the onnxruntime backend.
Ticket : #64
Type of change
How has this change been tested? Please provide a test case or example of how you tested the change.
Python test file
The bounding boxes and confidence scores for the image I tested are the same.
On GPU, inference takes 0.14 s with CUDA and the .pth model, and 0.11 s with the ONNX model.
Any specific deployment considerations
Docs