README.md — 4 additions & 4 deletions
@@ -53,7 +53,7 @@ Nexa SDK is an on-device inference framework that runs any model on any device,
#### 📣 **2025.09.23: Intel NPU Support**
- LLM inference with [DeepSeek-r1-distill-Qwen-1.5B](https://sdk.nexa.ai/model/DeepSeek-R1-Distill-Qwen-1.5B-Intel-NPU) and [Llama3.2-3B](https://sdk.nexa.ai/model/Llama3.2-3B-Intel-NPU) on Intel NPU
-#### 📣 **2025.09.23: Apple Neural Engine (ANE) Support**
+#### 📣 **2025.09.22: Apple Neural Engine (ANE) Support**
- Real-time speech recognition with [Parakeet v3 model](https://sdk.nexa.ai/model/parakeet-v3-ane)
#### 📣 **2025.09.15: New Models Support**
@@ -63,7 +63,7 @@ Nexa SDK is an on-device inference framework that runs any model on any device,
- [Phi4-mini turbo](https://sdk.nexa.ai/model/phi4-mini-npu-turbo) and [Phi3.5-mini](https://sdk.nexa.ai/model/phi3.5-mini-npu) for Qualcomm NPU
- [Parakeet V3 model](https://sdk.nexa.ai/model/parakeet-v3-npu) for Qualcomm NPU