* feat: making generate_from_raw public in openai
* feat: making generate_from_raw public in ollama
* feat: making generate_from_raw public in hf, watsonx
* feat: making generate_from_raw public in vllm
* adding warning to hf backend
* attempt at standardizing usage metrics
`mellea/backends/__init__.py` (+5 −3)
```diff
@@ -58,19 +58,21 @@ def generate_from_context(
         ...

     @abc.abstractmethod
-    def _generate_from_raw(
+    def generate_from_raw(
         self,
         actions: list[Component | CBlock],
+        ctx: Context,
         *,
         format: type[BaseModelSubclass] | None = None,
         model_options: dict | None = None,
-        generate_logs: list[GenerateLog] | None = None,
+        tool_calls: bool = False,
     ) -> list[ModelOutputThunk]:
         """Generates a model output from the provided input. Does not use context or templates.

         Args:
             actions: list of actions to generate responses for. Each action is separate.
+            ctx: context passed to generation. Currently not used in generate_from_raw
             format: A response format to used for structured outputs / constrained decoding. Note: some backends do not support this parameter. They will log warnings and continue to generate.
             model_options: Any model options to upsert into the defaults for this call.
-            generate_logs: a `GenerateLog` instance to add log information to.
+            tool_calls: Always set to false unless supported by backend.
```