[ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning
Updated Aug 8, 2025 - Python
Fine-tuning of the Gemma 2 model for the Google competition, using a dataset of Chinese poetry. The goal is to adapt the model to generate Chinese poetry in a classical style by training it on a subset of poems. The fine-tuning process leverages LoRA (Low-Rank Adaptation) for efficient model adaptation.
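As a rough illustration of what such a LoRA setup can look like with Hugging Face `transformers` and `peft` (the checkpoint name, rank, and target modules below are assumptions for the sketch, not taken from the repository):

```python
# Minimal LoRA sketch for Gemma 2 (hypothetical hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-2-9b"  # assumed checkpoint; the repo may use a different size
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# LoRA injects trainable low-rank matrices into the attention projections,
# so only a small fraction of the parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=16,                        # rank of the low-rank update matrices
    lora_alpha=32,               # scaling factor applied to the update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

Because only the adapter weights train, the memory footprint and checkpoint size stay small enough for a competition-style single-GPU run.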
This repository contains the code and benchmarks for the paper "Assessing the Impact of QLoRA Fine-Tuning on Sub-10B Parameter LLMs for Reasoning and Fact Recall." We evaluate LLMs such as Llama 3.1, Mistral, and Gemma 2 on math and biomedical tasks.
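For context, QLoRA combines 4-bit NF4 quantization of the frozen base model with LoRA adapters trained on top. A minimal sketch of that setup, assuming the standard `transformers` + `bitsandbytes` + `peft` stack (the model name and configuration values are illustrative, not the paper's actual settings):

```python
# Minimal QLoRA sketch: load the base model in 4-bit NF4, then attach LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 data type from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 despite 4-bit storage
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants too
)
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",   # assumed; the paper also evaluates Llama 3.1 and Mistral
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # cast norms/embeddings for stable training

lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
```

Keeping the base weights quantized and frozen is what lets sub-10B models fine-tune on a single consumer GPU, which is the regime the paper's benchmarks target.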
⚗️ Repository for the Gemma 2 9B instruct model