The following code snippet from this Google Colab notebook does not work for me (a reconstructed repro is below, followed by the full traceback):
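For reference, here is a minimal reproduction pieced together from the traceback below; the rest of the cell (the `messages` payload) is cut off in the Colab output, so only the failing call is shown:

```python
from transformers import pipeline

# Fails at this line, before any messages are passed in
pipe = pipeline("image-text-to-text", model="openbmb/MiniCPM-V-4_5", trust_remote_code=True)
# messages = [ { ... } ]  # remainder of the cell is truncated in the traceback
```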
ValueError Traceback (most recent call last)
/tmp/ipython-input-1300603513.py in <cell line: 0>()
2 from transformers import pipeline
3
----> 4 pipe = pipeline("image-text-to-text", model="openbmb/MiniCPM-V-4_5", trust_remote_code=True)
5 messages = [
6 {
[... 1 frame hidden ...]
/usr/local/lib/python3.12/dist-packages/transformers/pipelines/base.py in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
331 for class_name, trace in all_traceback.items():
332 error += f"while loading with {class_name}, an error is thrown:\n{trace}\n"
--> 333 raise ValueError(
334 f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
335 )
ValueError: Could not load model openbmb/MiniCPM-V-4_5 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForImageTextToText'>,). See the original errors:
while loading with AutoModelForImageTextToText, an error is thrown:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.openbmb.MiniCPM_hyphen_V_hyphen_4_5.343b490578adc7e03b687846c33cadd8a00cb80d.configuration_minicpm.MiniCPMVConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Kosmos2_5Config, Lfm2VlConfig, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, Ovis2Config, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, Qwen3VLConfig, Qwen3VLMoeConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/transformers/pipelines/base.py", line 311, in infer_framework_load_model
model = model_class.from_pretrained(model, **fp32_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.openbmb.MiniCPM_hyphen_V_hyphen_4_5.343b490578adc7e03b687846c33cadd8a00cb80d.configuration_minicpm.MiniCPMVConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Kosmos2_5Config, Lfm2VlConfig, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, Ovis2Config, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, Qwen3VLConfig, Qwen3VLMoeConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.
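The error message itself points at the cause: the repo ships a custom `MiniCPMVConfig` via `trust_remote_code`, and that config class is not in the `AutoModelForImageTextToText` mapping, so the pipeline's `infer_framework_load_model` has no class it can instantiate. A minimal workaround sketch, assuming the remote-code loading path described on the model card still applies to MiniCPM-V-4_5, is to load the model directly with `AutoModel` (which follows the repo's `auto_map`) instead of going through the pipeline:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load through the repo's own remote code (auto_map in its config.json) rather than
# the image-text-to-text pipeline, which cannot resolve the custom MiniCPMVConfig.
model_id = "openbmb/MiniCPM-V-4_5"
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits the Colab GPU; adjust as needed
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Inference then goes through the model's custom chat interface provided by the
# remote code (see the model card for the exact message format), not the pipeline API.
```

This is only a sketch of the usual MiniCPM-V loading pattern, not a fix for the pipeline path; supporting the model in the `image-text-to-text` pipeline would still require the config to be registered for `AutoModelForImageTextToText`.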