Support for keeping models cached in the memory! #110

@cihangoksu

Description

Hi,

I am trying to use ComfyScript to run multiple workflows through a manager class. Each workflow uses a different Flux model. Unfortunately, every time I call run_model1 or run_model2, the manager class logs "Request to load flux" and re-loads the models, which takes quite long and is exhausting. Is there a way to cache these models properly? Why does setting them in the __init__ method, as shown below, not work? Is it because I use "with Workflow()" in each method? I am using the real runtime: "from comfy_script.runtime.real import *".

Thank you very much in advance!

Cheers,
Cihan

class COMFYUIMANAGER(object):
    def __init__(self):
        super(COMFYUIMANAGER, self).__init__()
        self.model1 = UNETLoader("flux1-schnell.safetensors", "fp8_e4m3fn")
        self.model2 = UnetLoaderGGUF("flux1-dev-Q8_0.gguf")

    def run_model1(self, *args):
        with Workflow():
            # ...some code here...
            model = self.model1

    def run_model2(self, *args):
        with Workflow():
            # ...some code here...
            model = self.model2
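For reference, the behavior being asked for is essentially memoization: load each model once and return the same in-memory object on every later request. The sketch below illustrates that pattern in plain Python with a placeholder `_load` method standing in for the real loaders (`UNETLoader` / `UnetLoaderGGUF`); it does not use the actual ComfyScript API, which may manage model memory differently inside `Workflow()`.

```python
class ModelCache:
    """Minimal sketch of keeping models cached in memory (placeholder loaders,
    not the real ComfyScript API)."""

    def __init__(self):
        self._cache = {}
        self.load_count = 0  # counts how many expensive loads actually happened

    def _load(self, name):
        # Stand-in for an expensive model load such as UNETLoader(...)
        self.load_count += 1
        return {"model": name}

    def get(self, name):
        # Load only on first request; afterwards return the cached object
        if name not in self._cache:
            self._cache[name] = self._load(name)
        return self._cache[name]


cache = ModelCache()
m1 = cache.get("flux1-schnell.safetensors")
m2 = cache.get("flux1-schnell.safetensors")
assert m1 is m2             # same in-memory object reused
assert cache.load_count == 1  # the expensive load ran only once
```

Whether this works with the real runtime depends on whether the node objects returned outside a `with Workflow():` block stay valid when reused inside one, which is the point of the question above.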

Labels: documentation, enhancement, runtime
