Releases: lucidrains/native-sparse-attention-pytorch

0.2.3

15 Aug 14:51

What's Changed

New Contributors

Full Changelog: 0.2.2...0.2.3

0.2.2

11 Jun 03:26
address https://github.com/lucidrains/native-sparse-attention-pytorch…

0.2.1

16 May 00:12
fix maximum tracking in triton

0.2.0

25 Mar 02:36

What's Changed

Full Changelog: 0.1.27...0.2.0

0.1.27

24 Mar 01:34

What's Changed

  • Small change so token embeddings aren't looked up for past tokens during inference by @Pasewark in #24 (see the sketch after this release's notes)

Full Changelog: 0.1.26...0.1.27
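
A minimal sketch of the idea behind #24, assuming a standard decoder with a key/value cache: once past tokens are cached, only the newly sampled token needs an embedding lookup on each decoding step. All names below (`TinyDecoder`, the cache handling) are hypothetical and not the repository's actual code.

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    def __init__(self, num_tokens = 256, dim = 64, heads = 4):
        super().__init__()
        self.token_emb = nn.Embedding(num_tokens, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first = True)
        self.to_logits = nn.Linear(dim, num_tokens)

    def forward(self, ids, cache = None):
        if cache is not None:
            ids = ids[:, -1:]                      # embed only the newly sampled token

        x = self.token_emb(ids)                    # no lookup for already-cached past tokens
        kv = x if cache is None else torch.cat((cache, x), dim = 1)

        # causal mask: queries may only attend to keys at or before their own position
        causal = torch.full((x.shape[1], kv.shape[1]), float('-inf')).triu(kv.shape[1] - x.shape[1] + 1)
        out, _ = self.attn(x, kv, kv, attn_mask = causal)
        return self.to_logits(out), kv             # returned kv acts as the updated cache

model = TinyDecoder()
prompt = torch.randint(0, 256, (1, 8))
logits, cache = model(prompt)                      # prefill: every prompt token embedded once
next_id = logits[:, -1].argmax(dim = -1, keepdim = True)
logits, cache = model(torch.cat((prompt, next_id), dim = -1), cache = cache)  # decode: only the new token is embedded
```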

0.1.26

23 Mar 14:53

Full Changelog: 0.1.25...0.1.26

0.1.25

22 Mar 14:57
fix selection for block sizes > 16

0.1.24

20 Mar 14:38

What's Changed

  • Added CompressTransformer, a simple transformer to compress the blocks by @Pasewark in #21 (see the sketch after this release's notes)

New Contributors

Full Changelog: 0.1.23...0.1.24
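
A minimal sketch of what a block-compressing transformer like the one added in #21 might look like, assuming tokens are grouped into fixed-size blocks and each block is summarised into a single compressed token by mean pooling; the class and argument names here are hypothetical, not the PR's actual API.

```python
import torch
import torch.nn as nn
from einops import rearrange, reduce

class BlockCompressor(nn.Module):
    def __init__(self, dim = 64, block_size = 16, depth = 2, heads = 4):
        super().__init__()
        self.block_size = block_size
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first = True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        # x: (batch, seq, dim), with seq assumed divisible by block_size
        blocks = rearrange(x, 'b (n w) d -> (b n) w d', w = self.block_size)
        blocks = self.encoder(blocks)                       # full attention within each block
        pooled = reduce(blocks, 'bn w d -> bn d', 'mean')   # one summary token per block
        return rearrange(pooled, '(b n) d -> b n d', b = x.shape[0])

compressor = BlockCompressor()
tokens = torch.randn(2, 64, 64)          # (batch, seq, dim)
print(compressor(tokens).shape)          # torch.Size([2, 4, 64])
```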

0.1.23

19 Mar 23:53

Full Changelog: 0.1.22...0.1.23

0.1.21

19 Mar 15:08
sequence parallel for the NSA backwards pass