Hello. I found that the prompt used to extract answers with Llama-3.1-8B-Instruct worked better (fewer -1 values returned) when modified to the slightly clearer: "You are an answer extractor. When given someone's answer to some question, you will only extract the number in their answer and will respond with just the number. If there is no exact number answer, respond with -1."
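For concreteness, here is a minimal sketch of how that revised extraction prompt could be wired up. The `chat_fn` callable and the `extract_number` helper are illustrative names, not part of this repo or VLMEvalKit; the caller is assumed to supply whatever client actually talks to Llama-3.1-8B-Instruct:

```python
# Sketch only: the model-serving details are left to a caller-supplied chat_fn.
EXTRACTION_SYSTEM_PROMPT = (
    "You are an answer extractor. When given someone's answer to some question, "
    "you will only extract the number in their answer and will respond with just "
    "the number. If there is no exact number answer, respond with -1."
)

def extract_number(model_answer: str, chat_fn) -> int:
    """Ask the extractor model (via chat_fn) to pull the numeric answer out of a
    free-form response; -1 means no exact number was found."""
    messages = [
        {"role": "system", "content": EXTRACTION_SYSTEM_PROMPT},
        {"role": "user", "content": model_answer},
    ]
    reply = chat_fn(messages).strip()
    try:
        return int(reply)
    except ValueError:
        return -1  # extractor returned something non-numeric; treat as "no answer"
```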
I also found, while trying to integrate your dataset into VLMEvalKit, that appending "\nPlease try to answer the question with short words or phrases if possible." to the question prompt leads the models used in the qwen2.py, molmo.py, and intern.py scripts to respond with counts directly, seemingly obviating the need for another model to extract the answer.
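A quick sketch of that suffix trick; the function name and signature below are illustrative, not VLMEvalKit's actual prompt-building interface:

```python
# Illustrative only: VLMEvalKit builds prompts per-model, but the idea is just
# string concatenation of a short-answer hint onto the question.
SHORT_ANSWER_SUFFIX = (
    "\nPlease try to answer the question with short words or phrases if possible."
)

def build_counting_prompt(question: str) -> str:
    """Append the short-answer hint so models (e.g. those driven by qwen2.py,
    molmo.py, and intern.py) reply with a bare count instead of a sentence."""
    return question + SHORT_ANSWER_SUFFIX
```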
Finally, is it really necessary to use different prompts for separate models in this case?
You can take a look at the PR to add your dataset to VLMEvalKit here: open-compass/VLMEvalKit#974.