
Commit 82a28be

fix after changing importance score to compressed attention values

1 parent 1de7c94

File tree

2 files changed: +2 −2 lines


native_sparse_attention_pytorch/native_sparse_attention.py

Lines changed: 1 addition & 1 deletion

@@ -225,7 +225,7 @@ def forward(
         if self.use_diff_topk:
             gates = selected_importance_values + (1. - selected_importance_values).detach()

-        fmask = selected_importance_values > mask_value
+        fmask = selected_importance_values > 1e-10

         fq = q
         fk = k
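The two lines touched by this hunk can be read together: the first applies a straight-through gate so gradients flow to the selected importance values, and the fix changes the fine-attention mask to a small positive threshold, consistent with the importance scores now being non-negative compressed attention values. A minimal sketch of the pattern, with a hypothetical `selected_importance_values` tensor standing in for the scores the module gathers after top-k selection:

```python
import torch

# Hypothetical per-block importance scores (non-negative, post-softmax-like),
# as assumed after the change described in this commit.
selected_importance_values = torch.tensor([0.7, 0.0, 0.3], requires_grad=True)

# Straight-through trick: the forward value of each gate is exactly 1.0
# (x + (1 - x) == 1), but the gradient still flows through x because only
# the (1 - x) term is detached.
gates = selected_importance_values + (1. - selected_importance_values).detach()

# The fix: compare against a tiny positive threshold instead of a large
# negative `mask_value`, so zero-score (unselected) slots are masked out.
fmask = selected_importance_values > 1e-10
```

With the earlier `> mask_value` comparison (where `mask_value` is a large negative sentinel), every non-negative score would pass, making the mask a no-op once the scores switched to compressed attention values; the `1e-10` threshold restores the intended filtering.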

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [project]
-version = "0.0.5"
+version = "0.0.6"
 name = "native-sparse-attention-pytorch"
 description = "Native Sparse Attention"
 authors = [
     { name = "Phil Wang", email = "lucidrains@gmail.com" }

0 commit comments
