Commit 7f567fc

last fix for today

1 parent 41dbb54 commit 7f567fc

File tree

2 files changed (+2 −2 lines changed)


native_sparse_attention_pytorch/native_sparse_attention.py

Lines changed: 1 addition & 1 deletion

@@ -206,7 +206,7 @@ def forward(
         ck_seq = ((arange(num_compress_blocks, device = device) + 1) * self.compress_block_size) - 1
         ck_seq = F.pad(ck_seq, (num_mem_compress_kv, 0), value = -1)

-        cmask = einx.less('j, i -> i j', ck_seq, cq_seq)
+        cmask = einx.less_equal('j, i -> i j', ck_seq, cq_seq)

        mask_value = -torch.finfo(csim.dtype).max
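The one-line change swaps a strict less-than for less-than-or-equal when building the causal mask over compressed key/value blocks: a query at position i should be allowed to attend a compressed block whose last covered position equals i, not only blocks ending strictly before it. A minimal sketch of that logic, with einx replaced by plain broadcasting and the block sizes chosen as illustrative assumptions (not values from the repo):

```python
import torch
import torch.nn.functional as F

# illustrative assumptions, not values taken from the library
compress_block_size = 4
num_compress_blocks = 3
num_mem_compress_kv = 1
seq_len = 12

# last token index covered by each compressed block: 3, 7, 11
ck_seq = ((torch.arange(num_compress_blocks) + 1) * compress_block_size) - 1

# memory compressed kv gets index -1, so every query position can attend it
ck_seq = F.pad(ck_seq, (num_mem_compress_kv, 0), value = -1)

cq_seq = torch.arange(seq_len)

# einx.less_equal('j, i -> i j', ck_seq, cq_seq) expressed with broadcasting
cmask = ck_seq[None, :] <= cq_seq[:, None]   # shape (seq_len, num blocks + mem)

# with less_equal, the query at position 3 may attend the block ending at 3;
# the previous strict `less` wrongly excluded it
assert cmask[3, 1].item()
```

The mem slot (padded to -1) stays attendable from every position, and each query now sees exactly the compressed blocks that end at or before it, which matches the usual causal convention of a token attending to itself.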

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [project]
 name = "native-sparse-attention-pytorch"
-version = "0.0.7"
+version = "0.0.8"
 description = "Native Sparse Attention"
 authors = [
     { name = "Phil Wang", email = "lucidrains@gmail.com" }
