[Question] Tensor parallelism for tensorrt_llm #79

Description

@JoeLiu996

Is your feature request related to a problem? Please describe.
I am aware that PyTriton already has an example of using PyTriton with tensorrt_llm, but I noticed that the example only supports single-GPU inference. Are there any other examples or reference docs that use tensorrt_llm with PyTriton and support tensor parallelism?

Describe the solution you'd like
I think the current example is excellent, but it would be more comprehensive if it added a multi-GPU inference (tensor-parallel inference) example, since this will be one of the most widely used cases.
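For readers unfamiliar with the term, tensor parallelism splits a single layer's weight matrix across devices, so each GPU computes a partial result that is then gathered. The sketch below is purely conceptual, using NumPy arrays as stand-ins for per-GPU shards; it does not use the tensorrt_llm or PyTriton APIs, and the shard/rank naming is illustrative only.

```python
import numpy as np

# Conceptual sketch of a column-parallel linear layer (the core idea
# behind tensor parallelism). In a real tensorrt_llm deployment each
# shard would live on its own GPU, driven by one MPI rank; here plain
# NumPy arrays stand in for the per-device weight slices.

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))       # batch of input activations
W = rng.standard_normal((8, 16))      # full weight matrix

tp_size = 2                           # hypothetical tensor-parallel degree
shards = np.split(W, tp_size, axis=1) # each "GPU" holds a column slice

# Each rank computes its partial output independently...
partials = [x @ w for w in shards]
# ...and an all-gather along the feature dimension reassembles the result.
y_parallel = np.concatenate(partials, axis=1)

# The sharded computation matches the single-device reference.
y_reference = x @ W
assert np.allclose(y_parallel, y_reference)
```

In an actual multi-GPU setup the engine is built with a tensor-parallel degree and the serving script is launched with one process per GPU (e.g. via mpirun), with only one rank exposing the HTTP endpoint; the details depend on the tensorrt_llm version in use.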

Metadata

Labels

non-stale: This label can be used to prevent marking issues or PRs as Stale
