
need help!!! #1

@coco-XW


I encountered the following error while running the code. I can't tell where the problem is. Could you help me take a look?

Traceback (most recent call last):
  File "/SCFlow-main/scflow/trainer_module.py", line 342, in get_sample_to_vis
    fake_ims = self.unclip_sample(clip_pred, ddim_steps=40, unormalize=False)
  File "/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/SCFlow-main/scflow/trainer_module.py", line 308, in unclip_sample
    uc = self.unclip_model.get_learned_conditioning(batch_size * [negative_prompt])
  File "/SCFlow-main/scflow/ldm/models/diffusion/ddpm.py", line 665, in get_learned_conditioning
    c = self.cond_stage_model.encode(c)
  File "/SCFlow-main/scflow/ldm/modules/encoders/modules.py", line 248, in encode
    return self(text)
  File "/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/SCFlow-main/scflow/ldm/modules/encoders/modules.py", line 225, in forward
    z = self.encode_with_transformer(tokens.to(self.device))
  File "/SCFlow-main/scflow/ldm/modules/encoders/modules.py", line 232, in encode_with_transformer
    x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
  File "/SCFlow-main/scflow/ldm/modules/encoders/modules.py", line 244, in text_transformer_forward
    x = r(x, attn_mask=attn_mask)
  File "/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/python3.10/site-packages/open_clip/transformer.py", line 298, in forward
    x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
  File "/python3.10/site-packages/open_clip/transformer.py", line 283, in attention
    return self.attn(
  File "/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/python3.10/site-packages/torch/nn/modules/activation.py", line 1373, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "/python3.10/site-packages/torch/nn/functional.py", line 6264, in multi_head_attention_forward
    raise RuntimeError(
RuntimeError: The shape of the 2D attn_mask is torch.Size([77, 77]), but should be (1, 1).
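For context, the check that raises this error is in torch.nn.functional.multi_head_attention_forward: a 2D attn_mask must have shape (target_len, source_len). The expected (1, 1) therefore means the query reaching nn.MultiheadAttention had sequence length 1. One way that can happen is when a (batch, seq, dim) tensor is fed into a module built with the default batch_first=False, which reads the first dimension as the sequence length. The snippet below is only a minimal sketch that reproduces the same RuntimeError with toy tensors; it is not SCFlow code, and embed_dim/num_heads are made-up values for illustration.

import torch
import torch.nn as nn

# Minimal sketch, not SCFlow code: nn.MultiheadAttention requires a 2D
# attn_mask of shape (target_len, source_len). With the default
# batch_first=False, the first input dimension is the sequence length.
embed_dim, num_heads, seq_len = 64, 4, 77
attn = nn.MultiheadAttention(embed_dim, num_heads)  # batch_first defaults to False

# Causal mask like open_clip's text transformer: -inf above the diagonal.
mask = torch.full((seq_len, seq_len), float("-inf")).triu(1)

# Correct (seq, batch, dim) layout: the (77, 77) mask is accepted.
x_ok = torch.randn(seq_len, 1, embed_dim)
out, _ = attn(x_ok, x_ok, x_ok, attn_mask=mask)
print(out.shape)  # torch.Size([77, 1, 64])

# (batch, seq, dim) layout with batch_first=False: the query is treated as
# having sequence length 1, so the expected mask shape is (1, 1) and this
# raises the same error as in the traceback above:
# RuntimeError: The shape of the 2D attn_mask is torch.Size([77, 77]),
# but should be (1, 1).
x_bad = torch.randn(1, seq_len, embed_dim)
attn(x_bad, x_bad, x_bad, attn_mask=mask)

If that matches what is happening here, printing the shape of the tensor passed into text_transformer_forward (it is normally (77, batch, width) after the permute in encode_with_transformer) and checking the installed torch / open_clip versions might be a useful starting point, though I can't say for sure that this is the cause.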
