1 parent 449da6b, commit 438343d
src/transformers/integrations/eager_paged.py
@@ -23,7 +23,6 @@ def eager_paged_attention_forward(
     value: torch.Tensor,
     attention_mask: Optional[torch.Tensor],  # shape [seqlen_q, seqlen_k]
     scaling: float,
-    dropout: float = 0.0,
     **kwargs,
 ):
     # Add KV cache to the key and value tensors
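For context, the change drops the `dropout` parameter from the paged eager attention signature. Below is a minimal, illustrative sketch of an eager attention forward without a dropout argument. It is not the transformers implementation: the function name, the parameters before `value`, the tensor shapes, and the additive-mask convention are all assumptions made here for illustration, and the real eager_paged_attention_forward additionally handles the paged KV cache.

    # Hypothetical standalone sketch (not the library's implementation).
    from typing import Optional

    import torch


    def eager_attention_sketch(
        query: torch.Tensor,                     # [batch, heads, seqlen_q, head_dim] (assumed)
        key: torch.Tensor,                       # [batch, heads, seqlen_k, head_dim] (assumed)
        value: torch.Tensor,                     # [batch, heads, seqlen_k, head_dim] (assumed)
        attention_mask: Optional[torch.Tensor],  # shape [seqlen_q, seqlen_k]
        scaling: float,
        **kwargs,
    ) -> torch.Tensor:
        # Scaled dot-product attention scores.
        scores = torch.matmul(query, key.transpose(-1, -2)) * scaling
        if attention_mask is not None:
            # Assumes an additive mask: 0 for visible positions, -inf for masked ones.
            scores = scores + attention_mask
        weights = torch.softmax(scores, dim=-1)
        # No dropout step: the paged path no longer accepts a dropout argument.
        return torch.matmul(weights, value)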