
# tensor-rt

6 public repositories match this topic.

A minimal, high-performance starter kit for running AI model inference on NVIDIA GPUs using CUDA. Includes environment setup, sample kernels, and guidance for integrating ONNX/TensorRT pipelines for fast, optimized inference on modern GPU hardware.

  • Updated Nov 2, 2025
  • Cuda
