
Commit 438343d

Don't list dropout in eager_paged_attention_forward (#40924)

Remove the dropout argument.

Signed-off-by: Yuanyuan Chen <cyyever@outlook.com>

1 parent 449da6b · commit 438343d

File tree

1 file changed (+0 additions, −1 deletion)
src/transformers/integrations/eager_paged.py

Lines changed: 0 additions & 1 deletion
@@ -23,7 +23,6 @@ def eager_paged_attention_forward(
     value: torch.Tensor,
     attention_mask: Optional[torch.Tensor],  # shape [seqlen_q, seqlen_k]
     scaling: float,
-    dropout: float = 0.0,
     **kwargs,
 ):
     # Add KV cache to the key and value tensors
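
For context, here is a minimal, hypothetical sketch of how the signature reads after this change: dropout is no longer a named parameter, so anything a generic caller still passes under that name simply lands in **kwargs and is ignored. Only the parameters visible in the hunk above come from the source; the leading module/query/key parameters and the plain eager-attention body are assumptions for illustration (per the context line, the real function begins by adding the KV cache to the key and value tensors).

    from typing import Optional

    import torch


    def eager_paged_attention_forward(
        module: torch.nn.Module,  # leading parameters assumed; the diff only
        query: torch.Tensor,      # shows the lines from `value` onward
        key: torch.Tensor,
        value: torch.Tensor,
        attention_mask: Optional[torch.Tensor],  # shape [seqlen_q, seqlen_k]
        scaling: float,
        **kwargs,  # a stray `dropout=...` from generic callers is absorbed here
    ):
        # Illustrative plain eager-attention body, not the paged-cache logic
        # from the actual file.
        attn_weights = torch.matmul(query, key.transpose(-1, -2)) * scaling
        if attention_mask is not None:
            attn_weights = attn_weights + attention_mask
        attn_weights = torch.nn.functional.softmax(attn_weights, dim=-1)
        attn_output = torch.matmul(attn_weights, value)
        return attn_output, attn_weights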
