Commit 41091ec

FIX Bug when dequantizing 4bit bnb weights (#2847)
Fixes some failing GPU tests in CI. A bug was introduced in #2797: `state.SCB` was accessed while dequantizing 4bit bnb weights, even though `state` is `None` in that case. This occurred, for instance, when using DoRA, which needs to dequantize the weight. The attribute access is now restricted to 8bit bnb weights.
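The failure mode can be reproduced in miniature. The snippet below is a hedged sketch, not PEFT code: `state` is a local variable standing in for the function's `state` argument, which is `None` on the 4bit path.

```python
# Before the fix, dequantize_bnb_weight accessed state.SCB before checking
# whether the weight was 4bit, so a 4bit weight (state=None) crashed here.
state = None  # what the 4bit path passes (e.g. when DoRA dequantizes a weight)

try:
    # pre-fix line was: `if state.SCB is None: ...`
    state.SCB
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'SCB'
```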
1 parent 6bf24ac commit 41091ec

File tree

1 file changed: +10 -3 lines

src/peft/utils/integrations.py

Lines changed: 10 additions & 3 deletions
@@ -88,16 +88,23 @@ def dequantize_bnb_weight(weight: torch.nn.Parameter, state=None):
     """Helper function to dequantize 4bit or 8bit bnb weights."""
     import bitsandbytes as bnb
 
-    if state.SCB is None:
-        state.SCB = weight.SCB
-
     device = weight.device
 
     cls_name = weight.__class__.__name__
     if cls_name == "Params4bit":
         dequantized = bnb.functional.dequantize_4bit(weight.data, weight.quant_state)
         return dequantized
 
+    # 8bit case
+    if state is None:
+        raise ValueError(
+            "No `state` was passed for bnb 8bit quantized weights. Please open an issue on the PEFT repository and "
+            "report the error: https://github.com/huggingface/peft/issues"
+        )
+
+    if state.SCB is None:
+        state.SCB = weight.SCB
+
     if hasattr(bnb.functional, "int8_vectorwise_dequant"):
         # Use bitsandbytes API if available (requires v0.45.0+)
         dequantized = bnb.functional.int8_vectorwise_dequant(weight.data, state.SCB)
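The corrected control flow can be sketched in a self-contained way. `FakeParams4bit` and `FakeInt8Params` below are hypothetical stand-ins for bitsandbytes' parameter classes, and the actual dequantization math in `bnb.functional` is replaced by placeholder return values; only the branch ordering mirrors the fix.

```python
class FakeParams4bit:
    """Stand-in for bnb's Params4bit: a 4bit weight carries its own quant_state."""
    def __init__(self):
        self.data = "packed-4bit-data"
        self.quant_state = "4bit-quant-state"


class FakeInt8Params:
    """Stand-in for bnb's 8bit weight: the SCB scale constants live on the weight."""
    def __init__(self):
        self.data = "packed-8bit-data"
        self.SCB = "scale-constants"


def dequantize_bnb_weight(weight, state=None):
    """Mirror of the fixed control flow (placeholders instead of bnb calls)."""
    # 4bit case: handled first, without ever touching `state` (this is the fix).
    if weight.__class__.__name__ == "FakeParams4bit":
        return ("dequantized-4bit", weight.quant_state)

    # 8bit case: only here is `state` required, and its absence fails loudly.
    if state is None:
        raise ValueError("No `state` was passed for bnb 8bit quantized weights.")
    if state.SCB is None:
        state.SCB = weight.SCB
    return ("dequantized-8bit", state.SCB)
```

With this ordering, `dequantize_bnb_weight(FakeParams4bit())` succeeds with the default `state=None`, while the 8bit path still raises a clear error if `state` is missing.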
