When implementing a user-defined task, I find response times to be extremely slow even while using bulk document submission in spacy-llm, i.e. `list(nlp.pipe(docs))`.
When using this technique, each document appears to be submitted one at a time in a queue-like structure, as opposed to sending a single bulk query to Azure and having the entire result returned at once. Is this the expected behavior? I have looked into the Azure LLM APIs and it does appear that they have bulk submission capabilities built in. I would love to update my code to speed up processing time with this implementation.
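For illustration, here is a minimal sketch of the usage pattern described above. The task choice, labels, Azure model entry, and endpoint are placeholders and may differ from the actual setup depending on the spacy-llm version:

```python
import spacy

# Minimal sketch of the pattern in question; the task, labels, registered
# Azure model name, and URL below are illustrative assumptions, not a
# confirmed configuration.
nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.NER.v2", "labels": ["PERSON", "ORG"]},
        # Assumed Azure-backed model entry; exact registered name and
        # parameters depend on the spacy-llm version in use.
        "model": {
            "@llm_models": "spacy.Azure.v1",
            "name": "gpt-4",
            "base_url": "https://<your-resource>.openai.azure.com/",
            "model_type": "chat",
        },
    },
)

docs = [
    "First document text.",
    "Second document text.",
    "Third document text.",
]

# Bulk submission as described: in practice each doc still seems to trigger
# its own request to Azure rather than one batched query for all docs.
results = list(nlp.pipe(docs))
for doc in results:
    print(doc.ents)
```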