<!-- omit in toc -->
# FAQ
- [Install](#install)
  - [Q: ImportError: /lib/x86\_64-linux-gnu/libstdc++.so.6: version GLIBCXX\_3.4.32' not found](#q-importerror-libx86_64-linux-gnulibstdcso6-version-glibcxx_3432-not-found)
  - [Q: DeepSeek-R1 not outputting initial `<think>` token](#q-deepseek-r1-not-outputting-initial--token)
- [Usage](#usage)
  - [Q: If I got more VRAM than the model's requirement, how can I fully utilize it?](#q-if-i-got-more-vram-than-the-models-requirement-how-can-i-fully-utilize-it)
  - [Q: If I don't have enough VRAM, but I have multiple GPUs, how can I utilize them?](#q-if-i-dont-have-enough-vram-but-i-have-multiple-gpus-how-can-i-utilize-them)
  - [Q: How to get the best performance?](#q-how-to-get-the-best-performance)
  - [Q: My DeepSeek-R1 model is not thinking.](#q-my-deepseek-r1-model-is-not-thinking)
  - [Q: Loading gguf error](#q-loading-gguf-error)
  - [Q: Version \`GLIBCXX\_3.4.30' not found](#q-version-glibcxx_3430-not-found)
  - [Q: When running the bfloat16 moe model, the data shows NaN](#q-when-running-the-bfloat16-moe-model-the-data-shows-nan)
  - [Q: Using fp8 prefill very slow.](#q-using-fp8-prefill-very-slow)
  - [Q: Possible ways to run graphics cards using Volta and Turing architectures](#q-possible-ways-to-run-graphics-cards-using-volta-and-turing-architectures)

## Install
### Q: ImportError: /lib/x86_64-linux-gnu/libstdc++.so.6: version GLIBCXX_3.4.32' not found
### Q: Using fp8 prefill very slow.
The FP8 kernel is built by JIT, so the first run will be slow; subsequent runs will be faster.

### Q: Possible ways to run graphics cards using Volta and Turing architectures
1. First, download the latest source code using git.
2. Modify DeepSeek-V3-Chat-multi-gpu-4.yaml and all related yaml files in the source code, replacing every instance of KLinearMarlin with KLinearTorch (a sketch of this edit is shown after these steps).
3. Compile from the ktransformers source code until it builds successfully on your local machine.
4. Install flash-attn. It won't actually be used, but not installing it will cause an error.
5. Modify local_chat.py, replacing every instance of flash_attention_2 with eager.
6. Run local_chat.py. Be sure to follow the official tutorial's commands and adjust them according to your local machine's parameters.
7. While it is running, check the memory usage with the top command. The memory capacity on a single CPU must be greater than the complete size of the model (with multiple CPUs, each one simply holds its own copy).

Finally, confirm that the model is fully loaded into memory and that the designated weight layers are fully loaded into GPU memory. Then try entering content in the chat interface and check whether any errors appear.

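
The edits in steps 2 and 5 are plain text substitutions, so they can be scripted. The following is only a minimal sketch, assuming you run it from the repository root and that the optimize-rule yaml files live under `ktransformers/optimize/optimize_rules/`; verify the paths in your checkout before overwriting anything.

```python
# Rough sketch of the edits in steps 2 and 5 (verify paths against your checkout).
from pathlib import Path

# Step 2: swap KLinearMarlin for KLinearTorch in every optimize-rule yaml file.
for yaml_file in Path("ktransformers/optimize/optimize_rules").glob("*.yaml"):
    text = yaml_file.read_text()
    if "KLinearMarlin" in text:
        yaml_file.write_text(text.replace("KLinearMarlin", "KLinearTorch"))
        print(f"patched {yaml_file}")

# Step 5: switch the attention implementation in local_chat.py to eager.
chat_script = Path("ktransformers/local_chat.py")  # adjust if your copy lives elsewhere
chat_script.write_text(chat_script.read_text().replace("flash_attention_2", "eager"))
print(f"patched {chat_script}")
```
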
Attention: for better performance, you can check this [method](https://github.com/kvcache-ai/ktransformers/issues/374#issuecomment-2667520838) described in the issue.

4. [v3-chat_yaml](https://github.com/kvcache-ai/ktransformers/blob/main/ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat.yaml). You don't need to change the source code, as they both use q4km. But note the yaml file [here](https://github.com/kvcache-ai/ktransformers/blob/main/ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat.yaml#L29) and [here](https://github.com/kvcache-ai/ktransformers/blob/main/ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat.yaml#L18): below these lines you need to add `num_bits: 8` (in other words, add this kwarg to every entry that uses `KLinearMarlin`; a sketch of this edit appears after the MMLU-pro list below). The weight file for q4km is [here](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q4_K_M).
5. [v3-chat_yaml](https://github.com/kvcache-ai/ktransformers/blob/main/ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat.yaml). No need to change the yaml, just use the default. The weight file for q4km is [here](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q4_K_M).
6. You should check the [doc](./fp8_kernel.md) to learn how to test this case. This is a mixed tensor case.
7. You should check the [doc](./fp8_kernel.md) to learn how to test this case. This is a mixed tensor case.

- MMLU-pro test
  1. You should check the [doc](./fp8_kernel.md) to learn how to test this case. This is a mixed tensor case.
  2. [v3-chat_yaml](https://github.com/kvcache-ai/ktransformers/blob/main/ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat.yaml). No need to change the yaml, just use the default. The weight file for q4km is [here](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q4_K_M).
  3. You should check the [doc](./fp8_kernel.md) to learn how to test this case. This is a mixed tensor case.

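
For case 4 above, the `num_bits: 8` kwarg can also be added with a short script. This is only a hedged sketch: it assumes the rule file is a YAML list of entries whose `replace.kwargs` section names the linear ops, as in the linked DeepSeek-V3-Chat.yaml, and round-tripping through PyYAML drops any comments in the file. Editing the two linked lines by hand works just as well.

```python
# Hedged sketch for case 4: add `num_bits: 8` to every rule whose kwargs reference
# KLinearMarlin. The replace.kwargs layout is assumed from the linked yaml file;
# re-check it before overwriting, and note that safe_dump discards yaml comments.
import yaml

path = "ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat.yaml"
with open(path) as f:
    rules = yaml.safe_load(f)

for rule in rules:
    kwargs = rule.get("replace", {}).get("kwargs", {})
    if "KLinearMarlin" in kwargs.values():  # marlin used as a generate or prefill op
        kwargs["num_bits"] = 8

with open(path, "w") as f:
    yaml.safe_dump(rules, f, sort_keys=False)
```
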
In this document, we will show you how to install and run KTransformers on your local machine. There are two versions:
* V0.2 is the current main branch.
* V0.3 is a preview version that only provides a binary distribution for now.

Some preparation:
- At the same time, you should download and install the corresponding version of flash-attention from https://github.com/Dao-AILab/flash-attention/releases.
## Installation
### Attention

If you want to use NUMA support, not only do you need to set USE_NUMA=1, but you also need to make sure libnuma-dev is installed (`sudo apt-get install libnuma-dev` may help you).

<!-- 1. ~~Use a Docker image, see [documentation for Docker](./doc/en/Docker.md)~~ -->
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nSolve the following math problem without any tests or explanation, only one answer surrounded by '$\\boxed{{}}$'\n{prompt}\n\n### Response:"""
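
This line appears to be a fragment of a benchmark script elsewhere in the commit. Purely for readability, here is a minimal sketch of how such a template might sit inside a function; the real function name, signature, and module are not shown in this excerpt and are assumed here.

```python
# Hypothetical wrapper around the template line above; the actual function name,
# signature, and module are not shown in this excerpt.
def build_math_prompt(prompt: str) -> str:
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Solve the following math problem without any tests or explanation, only one answer surrounded by '$\\boxed{{}}$'
{prompt}

### Response:"""


if __name__ == "__main__":
    # Example usage with a toy problem.
    print(build_math_prompt("What is 2 + 2?"))
```
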