DALI backend on Jetson AGX #271

@jilongliao

Description

Hi,

I'm planning to run tritonserver on Jetson. I was able to pull nvcr.io/nvidia/tritonserver:25.05-py3-igpu and run the server on Jetson. However, I realized that the DALI backend is not included in the igpu image:

docker run --runtime=nvidia --network=host --rm -p 8000:8000 -p 8001:8001 -p 8002:8002 -v /mnt/data/models:/models -e YOLO_VERBOSE=false -it nvcr.io/nvidia/tritonserver:25.05-py3-igpu ls /opt/tritonserver/backends/
WARNING: Published ports are discarded when using host network mode

=============================
== Triton Inference Server ==
=============================

NVIDIA Release 25.05 (build 170551412)
Triton Server Version 2.58.0

Copyright (c) 2018-2025, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

GOVERNING TERMS: The software and materials are governed by the NVIDIA Software License Agreement
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/)
and the Product-Specific Terms for NVIDIA AI Products
(found at https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/).

fil  identity  onnxruntime  python  pytorch  tensorrt

I am not sure how to build or enable the DALI backend on Jetson. Can someone help?
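My guess is that I need to build the backend from source using the dali_backend repository, roughly like this (I'm assuming the standard CMake flow from that repo works on Jetson, and that installing into /opt/tritonserver/backends is the right target — I haven't verified either on aarch64/iGPU):

```shell
# Sketch only — assumes the dali_backend CMake build works as-is on Jetson.
git clone --recursive https://github.com/triton-inference-server/dali_backend.git
cd dali_backend
mkdir build && cd build

# Install into the same prefix the igpu container uses for backends.
cmake -DCMAKE_INSTALL_PREFIX=/opt/tritonserver ..
make -j"$(nproc)"
make install
```

After that I would expect a `dali` directory to show up next to the other backends under /opt/tritonserver/backends/, but I don't know if DALI itself ships aarch64/iGPU wheels that this build can link against.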
