
Conversation

@64-bit (Contributor) commented Apr 9, 2023

I have added progressive output to the inference tab by converting the generate function in app.py into a Python generator that produces tokens one at a time until either the output stops changing (end of stream reached) or the maximum number of tokens has been generated.
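
For illustration, here is a minimal sketch of that generator pattern. It assumes a Hugging Face-style model and tokenizer; the model name and the `model`/`tokenizer` names below are illustrative stand-ins, not the PR's actual app.py code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model, for illustration only.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def generate(prompt: str, max_new_tokens: int = 128):
    """Yield the decoded text after each new token so the UI can stream it."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    previous_text = None
    for _ in range(max_new_tokens):
        with torch.no_grad():
            # Extend the sequence by exactly one token per iteration.
            input_ids = model.generate(
                input_ids,
                max_new_tokens=1,
                pad_token_id=tokenizer.eos_token_id,
            )
        text = tokenizer.decode(input_ids[0], skip_special_tokens=True)
        if text == previous_text:
            # The decoded output stopped changing (e.g. only an EOS token
            # was produced), so treat this as end of stream.
            break
        previous_text = text
        yield text
```

If the inference tab is built with a framework like Gradio (again an assumption), binding a generator function like this to the output component is typically enough to make the textbox update progressively as each token arrives.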

@64-bit (Contributor, Author) commented Apr 9, 2023

I think I found a bug in this; I'm going to close the PR until I can figure out what is going on.

@64-bit closed this Apr 9, 2023
@64-bit (Contributor, Author) commented Apr 9, 2023

The bug in question can be reproduced on the original repo, so it is not specific to this PR. Separately, I will look into providing at least detailed reproduction steps, if not a resolution.

@64-bit reopened this Apr 9, 2023