
Conversation

diogoazevedo15
Contributor

Summary

Updated the input_to_string method in provider.py to ensure compatibility with vision models.

input_to_string:

  1. Now appends the text from messages that contain images.
  2. Also includes the base64 image string in the token count.

To be discussed: Should we include the base64 string in the token count?
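For context, vision-model chat messages typically carry content as a list of parts (text and image_url entries) rather than a plain string. The following is a minimal sketch of the idea, assuming an OpenAI-style message format; the function name matches provider.py, but the body and field names here are illustrative, not the actual implementation:

```python
def input_to_string(model_input):
    """Illustrative sketch: flatten chat input into a single string for token counting."""
    if isinstance(model_input, str):
        return model_input

    parts = []
    for message in model_input:
        content = message.get("content", "")
        if isinstance(content, str):
            parts.append(content)
        elif isinstance(content, list):
            # Vision-style messages: content is a list of text and image_url parts.
            for item in content:
                if item.get("type") == "text":
                    parts.append(item.get("text", ""))
                elif item.get("type") == "image_url":
                    # Base64 data URL; whether to count it is the open question above.
                    parts.append(item.get("image_url", {}).get("url", ""))
    return "".join(parts)
```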

diogoazevedo15 and others added 8 commits August 30, 2024 14:35
* Update azure.py llama function call parsing

1. Update the Llama parsing for Llama calls that include functions but where the functions are not used to produce the response.
2. Remove unused chunk code from provider.py

* Solve Lint issues

* Update azure.py
Updated the method input_to_string to ensure compatibility with vision models.
