* update docs with paper and real model
* nit
* Apply suggestions from code review
Thanks to @stevhliu!
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Remove usage examples, add quantization
---------
Co-authored-by: oweller2 <oweller2@dsailogin.mgmt.ai.cluster>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
docs/source/en/model_doc/modernbert-decoder.md: 51 additions & 18 deletions
@@ -24,14 +24,18 @@ rendered properly in your Markdown viewer.
 # ModernBERT Decoder
 
-ModernBERT Decoder is the same architecture as [ModernBERT](https://huggingface.co/papers/2412.13663) but trained from scratch with a causal language modeling (CLM) objective. This allows for using the same architecture for comparing encoders and decoders. This is the decoder architecture implementation of ModernBERT, designed for autoregressive text generation tasks.
+ModernBERT Decoder has the same architecture as [ModernBERT](https://huggingface.co/papers/2412.13663) but it is trained from scratch with a causal language modeling objective from the [Ettin paper](https://huggingface.co/papers/2507.11412). This allows for using the same architecture to compare encoders and decoders. This model is the decoder architecture implementation of ModernBERT, designed for autoregressive text generation tasks.
 
-Like the encoder version, ModernBERT Decoder incorporates modern architectural improvements such as rotary positional embeddings to support sequences of up to 8192 tokens, unpadding to avoid wasting compute on padding tokens, GeGLU layers, and alternating attention patterns. However, it uses causal (unidirectional) attention to enable autoregressive generation.
+ModernBERT Decoder uses sliding window attention and rotary positional embeddings for efficiency and to handle longer sequences.
+
+You can find all the original ModernBERT Decoder checkpoints under the [jhu-clsp](https://huggingface.co/collections/jhu-clsp/encoders-vs-decoders-the-ettin-suite-686303e16142257eed8e6aeb) collection.
 
 > [!TIP]
+> This model was contributed by [orionw](https://huggingface.co/orionweller).
+>
 > Click on the ModernBERT Decoder models in the right sidebar for more examples of how to apply ModernBERT Decoder to different text generation tasks.
 
-The example below demonstrates how to use ModernBERT Decoder for text generation with [`Pipeline`], [`AutoModel`], and from the command line.
+The example below demonstrates how to use ModernBERT Decoder for text generation with [`Pipeline`], [`AutoModel`] (with and without quantization), and from the command line.
 
 <hfoptions id="usage">
 <hfoption id="Pipeline">
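For reference, a minimal sketch of the [`Pipeline`] usage the updated intro points to is shown below. This is not the exact snippet from the docs, and the checkpoint id is an assumption for illustration; pick an actual decoder checkpoint from the jhu-clsp Ettin collection.

```python
# Minimal sketch of text generation with Pipeline (not the exact docs snippet).
# NOTE: "jhu-clsp/ettin-decoder-150m" is an assumed checkpoint id; check the
# jhu-clsp collection on the Hub for the real model names.
import torch
from transformers import pipeline

generator = pipeline(
    task="text-generation",
    model="jhu-clsp/ettin-decoder-150m",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

result = generator("The future of efficient language models is", max_new_tokens=50)
print(result[0]["generated_text"])
```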
@@ -42,7 +46,7 @@ from transformers import pipeline
-The ModernBertDecoder model can be fine-tuned for various text generation tasks using the HuggingFace Transformers library. It supports efficient inference with features like:
-
-- **Causal attention**: Ensures autoregressive generation by masking future tokens
-- **Sliding window attention**: Alternates between local and global attention patterns for efficiency
-- **Rotary positional embeddings**: Enables handling of longer sequences up to 8000 tokens
-- **FlashAttention support**: Optimized attention computation for faster training and inference
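The "with and without quantization" wording in the new intro refers to loading the model through a quantization config instead of full precision. Below is a hedged sketch using bitsandbytes 4-bit quantization; the checkpoint id is again an assumption, and the snippet requires the bitsandbytes package and a CUDA GPU.

```python
# Minimal 4-bit quantization sketch with bitsandbytes (assumed checkpoint id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 quantization type
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

model_id = "jhu-clsp/ettin-decoder-1b"      # assumed id; see the jhu-clsp collection
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)

inputs = tokenizer("The future of efficient language models is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```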