fix: Repair PyTorch Compatibility issue with torch.compile in flex_attention #16
base: master
Conversation
Unfortunately, after replacing the qwen2_navit.py file, the error did not go away.
@spookandrey1 I actually don't have enough information to help you: the traceback contains very little, and the other possible causes we have already discussed.
Hi, I'm moving the discussion to this PR to avoid the noise the issue can attract; perhaps we'll be able to communicate more efficiently here. Here's my latest response:
Thanks for the quick response! I checked out the branch, but I still get the same error when running. The one thing that seems different is that before, ComfyUI would report that the custom node failed to load, but now it appears to load properly. This is the traceback:
The easiest workaround for the issue that I found was to completely comment out line 40 of qwen2_navit.py. I'm trying to get the model to run and I think it'll work, because importing flex_attention is no problem; the problem is torch.compile-ing it. I'm facing extra problems getting the model to run; I'll document those in a separate issue, but I think they might be easy to fix.
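For reference, a minimal sketch of a guarded variant of that line, rather than deleting it outright. The exact contents of line 40 are an assumption reconstructed from this discussion, not the repository's actual code, and this is not necessarily what the PR's fix does:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Line 40 (assumed): flex_attention = torch.compile(flex_attention)
#
# Guarded version: if torch.compile raises at wrap time (it does so
# eagerly on some builds, e.g. Windows before compile support landed),
# fall back to the uncompiled kernel instead of crashing the import.
try:
    flex_attention = torch.compile(flex_attention)
except Exception:
    pass  # keep the eager flex_attention
```

Note that torch.compile is lazy on most platforms, so some compilation failures only surface at the first call rather than at wrap time; this guard only helps where torch.compile fails eagerly, which matches the import-time crash described here.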
These are the diffs from my local copy relative to the branch, enumerated:
@HDANILO good to hear that you solved it!
@HDANILO actually, I mixed up your two problems: the first commit in this PR solves yours, but it cannot help @spookandrey1. So I now see that issue #7 is not the same problem.