Commit 1179a15
update README.md (#587)
* Update README.md
* Update README.md
* update README.md
1 parent: b1e9a00 · commit: 1179a15

File tree: 1 file changed (+4, -4 lines)


README.md

Lines changed: 4 additions & 4 deletions
@@ -53,7 +53,7 @@ Nexa SDK is an on-device inference framework that runs any model on any device,
 #### 📣 **2025.09.23: Intel NPU Support**
 - LLM inference with [DeepSeek-r1-distill-Qwen-1.5B](https://sdk.nexa.ai/model/DeepSeek-R1-Distill-Qwen-1.5B-Intel-NPU) and [Llama3.2-3B](https://sdk.nexa.ai/model/Llama3.2-3B-Intel-NPU) on Intel NPU
 
-#### 📣 **2025.09.23: Apple Neural Engine (ANE) Support**
+#### 📣 **2025.09.22: Apple Neural Engine (ANE) Support**
 - Real-time speech recognition with [Parakeet v3 model](https://sdk.nexa.ai/model/parakeet-v3-ane)
 
 #### 📣 **2025.09.15: New Models Support**
@@ -63,7 +63,7 @@ Nexa SDK is an on-device inference framework that runs any model on any device,
 - [Phi4-mini turbo](https://sdk.nexa.ai/model/phi4-mini-npu-turbo) and [Phi3.5-mini](https://sdk.nexa.ai/model/phi3.5-mini-npu) for Qualcomm NPU
 - [Parakeet V3 model](https://sdk.nexa.ai/model/parakeet-v3-npu) for Qualcomm NPU
 
-#### 📣 **2025.09.15: Turbo Engine & Unified Interface**
+#### 📣 **2025.09.05: Turbo Engine & Unified Interface**
 - [Nexa ML Turbo engine](https://nexa.ai/blogs/nexaml-turbo) for optimized NPU performance
 - Try [Phi4-mini turbo](https://sdk.nexa.ai/model/phi4-mini-npu-turbo) and [Llama3.2-3B-NPU-Turbo](https://sdk.nexa.ai/model/Llama3.2-3B-NPU-Turbo)
 - 80% faster at shorter contexts (<=2048), 33% faster at longer contexts (>2048) than current NPU solutions
@@ -76,7 +76,7 @@ Nexa SDK is an on-device inference framework that runs any model on any device,
 - Check the model and demos at [Hugging Face repo](https://huggingface.co/NexaAI/OmniNeural-4B)
 - Check our [OmniNeural-4B technical blog](https://nexa.ai/blogs/omnineural-4b)
 
-#### 📣 **2025.08.12: ASR & TTS Support in MLX format
+#### 📣 **2025.08.12: ASR & TTS Support in MLX format**
 - Parakeet and Kokoro models support in MLX format.
 - new `/mic` mode to transcribe live speech directly in your terminal.
 
@@ -98,7 +98,7 @@ curl -fsSL https://github.com/NexaAI/nexa-sdk/releases/latest/download/nexa-cli_
 
 ## Supported Models
 
-You can run any compatible GGUFMLX, or nexa model from 🤗 Hugging Face by using the `<full repo name>`.
+You can run any compatible GGUF, MLX, or nexa model from 🤗 Hugging Face by using the `<full repo name>`.
 
 ### Qualcomm NPU models
 > [!TIP]