
Commit fcff61b

chore: use ainvoke in samples, improve interrupt models docs
Parent: 9c18bd4

File tree

8 files changed: +176 −23 lines


docs/interrupt_models.md
Lines changed: 6 additions & 1 deletion

@@ -19,10 +19,11 @@ Upon completion of the invoked process, the current agent will automatically res
 
 #### Example:
 ```python
+from uipath.models import InvokeProcess
 process_output = interrupt(InvokeProcess(name="MyProcess", input_arguments={"arg1": "value1"}))
 ```
 
-For a practical implementation of the `InvokeProcess` model, refer to the sample usage in the [planner.py](../../samples/multi-agent-planner-researcher-coder-distributed/src/multi-agent-distributed/planner.py#L184) file. This example demonstrates how to invoke a process with dynamic input arguments, showcasing the integration of the interrupt functionality within a multi-agent system or a system where an agent integrates with RPA processes and API workflows.
+For a practical implementation of the `InvokeProcess` model, refer to the sample usage in the [planner.py](../samples/multi-agent-planner-researcher-coder-distributed/src/multi-agent-distributed/planner.py#L181) file. This example demonstrates how to invoke a process with dynamic input arguments, showcasing the integration of the interrupt functionality within a multi-agent system or a system where an agent integrates with RPA processes and API workflows.
 
 ---
 
@@ -36,6 +37,7 @@ The `WaitJob` model is used to wait for a job completion. Unlike `InvokeProcess`
 
 #### Example:
 ```python
+from uipath.models import WaitJob
 job_output = interrupt(WaitJob(job=my_job_instance))
 ```
 
@@ -57,8 +59,10 @@ For more information on UiPath apps, refer to the [UiPath Apps User Guide](https
 
 #### Example:
 ```python
+from uipath.models import CreateAction
 action_output = interrupt(CreateAction(name="AppName", title="Escalate Issue", data={"key": "value"}, app_version=1, assignee="user@example.com"))
 ```
+For a practical implementation of the `CreateAction` model, refer to the sample usage in the [ticket-classification/main.py](../samples/ticket-classification/main.py#L116) file. This example demonstrates how to create an action with dynamic input.
 
 ---
 
@@ -71,6 +75,7 @@ The `WaitAction` model is used to wait for an action to be handled. This model i
 
 #### Example:
 ```python
+from uipath.models import WaitAction
 action_output = interrupt(WaitAction(action=my_action_instance))
 ```

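All four `interrupt(...)` examples above share the same suspend-and-resume contract: the node hands the runtime a typed payload, the graph suspends while the external work (process, job, or action) runs, and the node later resumes with the output. The following is a minimal self-contained sketch of that contract using stub classes; the real code uses `langgraph`'s `interrupt` and the `uipath.models` payload classes, so every name below is illustrative only:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class InvokeProcess:
    """Stand-in for uipath.models.InvokeProcess (illustrative stub)."""
    name: str
    input_arguments: dict = field(default_factory=dict)


class GraphInterrupt(Exception):
    """Raised to suspend execution until the external work completes."""
    def __init__(self, payload: Any):
        self.payload = payload


def interrupt(payload: Any) -> Any:
    # In a real runtime this suspends the graph and later resumes the node
    # with the external result; this stub only surfaces the payload.
    raise GraphInterrupt(payload)


try:
    interrupt(InvokeProcess(name="MyProcess", input_arguments={"arg1": "value1"}))
except GraphInterrupt as gi:
    # The runtime would persist this payload, run the process, then resume.
    resumed_with = {"status": "suspended", "process": gi.payload.name}

print(resumed_with)  # {'status': 'suspended', 'process': 'MyProcess'}
```

The key design point is that the payload is data, not a function call: the runtime can serialize it, schedule the external work, and resume the graph on a different worker.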
samples/company-research-agent/graph.py
Lines changed: 2 additions & 2 deletions

@@ -43,7 +43,7 @@ class GraphOutput(BaseModel):
     response: str
 
 
-def research_node(state: GraphInput) -> GraphOutput:
+async def research_node(state: GraphInput) -> GraphOutput:
     # Format the user message with the company name
     user_message = f"""Please provide a comprehensive analysis and outreach strategy for the company: {state.company_name}. Use the TavilySearchResults tool to gather information. Include detailed research on the company's background, organizational structure, key decision-makers, and a tailored outreach strategy. Format your response using the following section headers:
@@ -58,7 +58,7 @@ def research_node(state: GraphInput) -> GraphOutput:
 
     new_state = MessagesState(messages=[{"role": "user", "content": user_message}])
 
-    result = research_agent.invoke(new_state)
+    result = await research_agent.ainvoke(new_state)
 
     return GraphOutput(response=result["messages"][-1].content)

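The change in every sample file is mechanical: the node becomes `async def`, and the blocking `invoke` call becomes `await ...ainvoke(...)`, so the node yields the event loop while the agent runs. A self-contained sketch of the same change with a stub agent (`StubAgent` is illustrative; the real samples await LangChain/LangGraph runnables):

```python
import asyncio


class StubAgent:
    """Illustrative stand-in for an agent exposing invoke/ainvoke."""

    def invoke(self, state: dict) -> dict:
        # Blocking variant: ties up the event loop for its whole duration.
        return {"messages": state["messages"] + [{"role": "assistant", "content": "done"}]}

    async def ainvoke(self, state: dict) -> dict:
        # Non-blocking variant: yields control back to the event loop.
        await asyncio.sleep(0)
        return self.invoke(state)


async def research_node(state: dict) -> str:
    # After the commit's change, the node awaits ainvoke instead of
    # calling invoke, so other coroutines can run concurrently.
    result = await StubAgent().ainvoke(state)
    return result["messages"][-1]["content"]


answer = asyncio.run(research_node({"messages": [{"role": "user", "content": "hi"}]}))
print(answer)  # done
```

Inside an async graph runtime a blocking `invoke` would stall every other in-flight node, which is why all four sample nodes were converted together.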
samples/multi-agent-planner-researcher-coder-distributed/pyproject.toml
Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ dependencies = [
     "langchain-anthropic>=0.3.8",
     "langchain-experimental>=0.3.4",
     "tavily-python>=0.5.0",
-    "uipath-langchain==0.0.88"
+    "uipath-langchain==0.0.90"
 ]
 
 [project.optional-dependencies]

samples/multi-agent-planner-researcher-coder-distributed/src/multi-agent-distributed/coder.py
Lines changed: 2 additions & 2 deletions

@@ -36,8 +36,8 @@ def python_repl_tool(
 code_agent = create_react_agent(llm, tools=[python_repl_tool])
 
 
-def code_node(state: MessagesState) -> GraphOutput:
-    result = code_agent.invoke(state)
+async def code_node(state: MessagesState) -> GraphOutput:
+    result = await code_agent.ainvoke(state)
     return GraphOutput(answer=result["messages"][-1].content)
 
 
samples/multi-agent-planner-researcher-coder-distributed/src/multi-agent-distributed/planner.py
Lines changed: 2 additions & 2 deletions

@@ -70,7 +70,7 @@ def input(state: GraphInput):
 llm = ChatAnthropic(model="claude-3-5-sonnet-latest")
 
 
-def create_plan(state: State) -> Command:
+async def create_plan(state: State) -> Command:
     """Create an execution plan based on the user's question."""
     parser = PydanticOutputParser(pydantic_object=ExecutionPlan)
 
@@ -104,7 +104,7 @@ def create_plan(state: State) -> Command:
         format_instructions=parser.get_format_instructions(),
     )
 
-    plan_response = llm.invoke(formatted_prompt)
+    plan_response = await llm.ainvoke(formatted_prompt)
 
     try:
         plan_output = parser.parse(plan_response.content)

samples/multi-agent-planner-researcher-coder-distributed/src/multi-agent-distributed/researcher.py
Lines changed: 2 additions & 2 deletions

@@ -18,8 +18,8 @@ class GraphOutput(BaseModel):
     answer: str
 
 
-def research_node(state: MessagesState) -> GraphOutput:
-    result = research_agent.invoke(state)
+async def research_node(state: MessagesState) -> GraphOutput:
+    result = await research_agent.ainvoke(state)
     return GraphOutput(answer=result["messages"][-1].content)
 
 