The fully generated .dict models take 70 MB, which bloats binaries substantially, compared to 3.4 MB for the .tar.gz archive.
It would be nice to support doing this translation lazily: a modest hit to runtime performance in return for a dramatically smaller binary, plus the ability to load only the models you need, reducing memory consumption.
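A minimal sketch of the lazy approach, assuming the compressed archive is embedded in the binary (e.g. via `include_bytes!`) and expanded on first use. The `compressed_model`/`decompress` names are hypothetical placeholders, not the crate's actual API, and the decompression step is stubbed out so the example runs without extra dependencies:

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for the embedded compressed archive; a real
// build would use `include_bytes!("model.dict.tar.gz")` or similar.
fn compressed_model() -> &'static [u8] {
    b"compressed-model-bytes"
}

// Stand-in for real decompression (a gzip/tar decoder in practice);
// here it just copies the bytes so the sketch is self-contained.
fn decompress(bytes: &[u8]) -> Vec<u8> {
    bytes.to_vec()
}

// The expanded .dict model, built at most once on first access.
static MODEL: OnceLock<Vec<u8>> = OnceLock::new();

fn model() -> &'static [u8] {
    MODEL.get_or_init(|| decompress(compressed_model()))
}

fn main() {
    // First call pays the one-time decompression cost...
    let first = model();
    // ...later calls return the same cached allocation.
    let second = model();
    assert!(std::ptr::eq(first, second));
    println!("model is {} bytes", first.len());
}
```

Because only the compressed archive ships in the binary, each model's expanded form exists in memory only after something actually asks for it.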
It would also be nice to have feature flags to turn off certain models. This is less important if we add lazy loading, though, since at that point it would only shave another 1-2 MB off the binary.
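As a rough sketch of the feature-flag idea, the `Cargo.toml` could gate each model behind its own feature (the feature and model names here are illustrative, not the crate's actual ones):

```toml
[features]
# All models on by default; users opt out with `default-features = false`.
default = ["model-en", "model-de"]
model-en = []
model-de = []
```

Each model's `include_bytes!` would then sit behind the matching `#[cfg(feature = "...")]`, so disabled models are never embedded at all.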