We can easily train on our custom dataset and then export optimized ONNX, TensorRT (engine), and INT8 models. This makes deployment on our edge devices, like the Jetson Orin, incredibly efficient. Review collected by and hosted on G2.com.
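The INT8 export mentioned above relies on weight quantization. As a rough illustration of why it shrinks models (4x smaller than float32), here is a minimal sketch of symmetric INT8 quantization in plain Python; this is a conceptual example, not the product's actual implementation:

```python
def quantize_int8(weights):
    """Map float weights to int8 codes in [-127, 127] using one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]

# Hypothetical weight values for illustration
weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(q)         # int8 codes, each storable in 1 byte instead of 4
print(restored)  # close to the original floats, within one quantization step
```

Real INT8 export pipelines additionally calibrate scales per layer (or per channel) on sample data, but the storage and compute savings come from this same float-to-int8 mapping.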
While generally good, the documentation can be lacking for very specific, advanced deployment scenarios. I have had issues with the RTSP pipeline and H.264/H.265 codec support.
