Something Error #72

@gy-xinchen

Description

I did not build the model with Docker.

m3/demo/gradio_m3.py, lines 295-297:
##############
self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
model_path, model_name
)
##############

I think load_pretrained_model is missing a parameter, so I patched the call as follows:
##############
self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
model_path, None, model_name
)
##############
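Before patching positional arguments, it may help to confirm the actual parameter order of `load_pretrained_model` in this repo, since the LLaVA-style builders take `(model_path, model_base, model_name)` while other forks drop `model_base`. A minimal sketch using `inspect.signature` (the stand-in function below assumes the LLaVA-style order; it is not the confirmed signature here):

```python
import inspect

# Hypothetical stand-in with the LLaVA-style (model_path, model_base, model_name)
# order -- an assumption, not the confirmed signature in this repo.
def load_pretrained_model(model_path, model_base, model_name):
    return model_path, model_base, model_name

# Print the real parameter order before deciding how to patch the call site.
print(list(inspect.signature(load_pretrained_model).parameters))
# -> ['model_path', 'model_base', 'model_name']
```

Running this against the repo's own `load_pretrained_model` (instead of the stand-in) shows whether `None` really belongs in the second slot, or whether the two-argument call was intentional.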

But I got an error when executing the code:
OSError: Can't load tokenizer for '/root/autodl-tmp/cache/hub/models--MONAI--Llama3-VILA-M3-8B/snapshots/df60e0276e2ae10624c86dabe909847a03b2a5cb'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/root/autodl-tmp/cache/hub/models--MONAI--Llama3-VILA-M3-8B/snapshots/df60e0276e2ae10624c86dabe909847a03b2a5cb' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.
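This OSError usually means the snapshot directory is incomplete, i.e. the tokenizer files were never downloaded into it. A quick sanity check on the directory contents (the expected-file list below is an assumption based on typical Hugging Face LlamaTokenizer snapshots, not an exhaustive rule):

```python
import os

# Tokenizer files a LlamaTokenizer load usually expects in a snapshot
# directory; this list is an assumption, not an authoritative requirement.
EXPECTED = ["tokenizer_config.json", "special_tokens_map.json"]

def missing_tokenizer_files(snapshot_dir):
    """Return the expected tokenizer files that are absent from snapshot_dir."""
    present = set(os.listdir(snapshot_dir)) if os.path.isdir(snapshot_dir) else set()
    return [name for name in EXPECTED if name not in present]
```

Pointing this at the snapshot path from the traceback shows at a glance whether the download is incomplete; if files are missing, re-downloading the model into the cache is the usual fix.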
