47 changes: 47 additions & 0 deletions examples/cotprompt/README.md
@@ -0,0 +1,47 @@
# Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

## Overview

This example implements basic chain-of-thought (CoT) prompting as described in the paper [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903) by Wei et al.
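
The example splits CoT into two LLM calls: a reasoning worker that elicits step-by-step thinking while withholding the answer, and a conclusion worker that condenses those steps into a final answer. Below is a minimal standalone sketch of that two-stage pattern, assuming the `openai` Python package and an `OPENAI_API_KEY` environment variable; the actual example runs these stages as OmAgent workers instead.

```python
# Minimal two-stage CoT sketch (assumes `pip install openai` and
# OPENAI_API_KEY set). The prompts mirror the COTReasoning and
# COTConclusion workers in this example.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

question = "Which is bigger, 9.9 or 9.11?"

# Stage 1: elicit the reasoning process, explicitly withholding the answer.
reasoning = ask(
    "Please reason step by step based on the following question and "
    f"provide the reasoning process:\n\nquestion: {question}\n\n"
    "Only provide the reasoning process, do not give the final answer."
)

# Stage 2: condense the reasoning into a concise final answer.
answer = ask(
    "Based on the following question and reasoning result, generate a "
    f"concise and clear final answer.\n\nQuestion: {question}\n"
    f"Reasoning steps: {reasoning}\n\nPlease provide the final answer:"
)
print(answer)
```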

## Running the Example

For terminal/CLI usage, both input and output happen in the terminal window:

```bash
cd examples/cotprompt
python run_cli.py
```
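
Before running, provide your OpenAI credentials. The LLM configs in `configs/llms/` read them from environment variables via `${env|...}` placeholders. A hypothetical way to set them from Python before launching (the variable names come straight from the YAML files; exporting them in your shell works just as well):

```python
# Hypothetical setup: the ${env|...} placeholders in configs/llms/*.yml
# reference these variable names. Replace the placeholder with a real key.
import os

os.environ["custom_openai_key"] = "<your-openai-api-key>"
os.environ["custom_openai_endpoint"] = "https://api.openai.com/v1"  # default OpenAI endpoint
```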

An example input and output (generated by GPT-4o-mini):

```
> Input: Which is bigger, 9.9 or 9.11?
> Reasoning output:

1. **Analyze the Question**: The question asks which of the two numbers, 9.9 or 9.11, is larger. To answer this, we must understand the place values and numerical representation of both numbers.

2. **Consider the Values**: 9.9 can be broken down into 9 plus 0.9, while 9.11 can be broken down into 9 plus 0.11. This shows that both numbers start with the same whole number part (9).

3. **Compare the Decimal Parts**: We need to compare the decimal portions: 0.9 and 0.11. The number 0.9 can also be expressed as 0.90 for comparison purposes, thus making it easier to analyze alongside 0.11.

4. **Evaluate the Decimal Comparison**: Now we compare the decimals: 0.90 (equivalent to 0.9) and 0.11. Here, we observe that 0.90 is greater than 0.11 because in the tenths place, 0.90 has a 9 compared to 0.11’s 1, which indicates a larger value.

5. **Conclude Comparison Process**: Since both numbers share the same whole number part of 9, the larger value will depend solely on the larger decimal part, which we have determined through comparison.

At this point, we have thoroughly analyzed the components of each number and identified that the key to determining which is larger lies in the comparison of their decimal parts.

> Final answer: 9.9 is bigger than 9.11.
```

## Citation
```
@article{wei2022chain,
  title={Chain-of-thought prompting elicits reasoning in large language models},
  author={Wei, Jason and Wang, Xuezhi and Schuurmans, Dale and Bosma, Maarten and Xia, Fei and Chi, Ed and Le, Quoc V and Zhou, Denny and others},
  journal={Advances in Neural Information Processing Systems},
  volume={35},
  pages={24824--24837},
  year={2022}
}
```
57 changes: 57 additions & 0 deletions examples/cotprompt/agent/cot_agent/cot_conclude.py
@@ -0,0 +1,57 @@
from omagent_core.engine.worker.base import BaseWorker
from omagent_core.utils.registry import registry
from omagent_core.models.llms.openai_gpt import OpenaiGPTLLM
from omagent_core.models.llms.schemas import Message, Content
from omagent_core.models.llms.base import BaseLLMBackend
from omagent_core.utils.logger import logging
from pathlib import Path

CURRENT_PATH = Path(__file__).parents[0]


@registry.register_worker()
class COTConclusion(BaseLLMBackend, BaseWorker):
    """Generate the final answer from the question and the reasoning steps."""

    llm: OpenaiGPTLLM

    def _run(self, user_question: str, reasoning_result: str, *args, **kwargs):
        # Build the prompt that turns the reasoning steps into a final answer
        final_answer_prompt = self.generate_final_answer_prompt(user_question, reasoning_result)

        # Call the LLM to generate the final answer
        final_answer = self.call_llm_with_prompt(final_answer_prompt)

        # Store the final answer in the workflow's short-term memory
        self.stm(self.workflow_instance_id)['final_answer'] = final_answer

        self.callback.send_answer(self.workflow_instance_id, msg=final_answer)

        return {'final_answer': final_answer}

    def generate_final_answer_prompt(self, question: str, reasoning_result: str):
        """Build the prompt used to produce the final answer."""
        return f"""
You have completed the reasoning process. Based on the following question and reasoning result, generate a concise and clear final answer.

Question: {question}
Reasoning steps: {reasoning_result}

Please provide the final answer:
"""

    def call_llm_with_prompt(self, prompt: str):
        """Call the LLM and return the generated final answer."""
        # Wrap the prompt in a single user text message
        chat_message = [Message(role="user", message_type='text', content=prompt)]

        response = self.llm.generate(chat_message)
        if response is None:
            raise ValueError("LLM inference returned None.")
        return response['choices'][0]['message']['content']
21 changes: 21 additions & 0 deletions examples/cotprompt/agent/cot_agent/cot_input_interface.py
@@ -0,0 +1,21 @@
from omagent_core.engine.worker.base import BaseWorker
from omagent_core.utils.registry import registry
from omagent_core.utils.logger import logging
from pathlib import Path

CURRENT_PATH = Path(__file__).parents[0]


@registry.register_worker()
class COTInputInterface(BaseWorker):
    """Read the user's question and store it in the workflow variables."""

    def _run(self, *args, **kwargs):
        # Read the question typed by the user
        user_input = self.input.read_input(workflow_instance_id=self.workflow_instance_id, input_prompt='Please enter your question:')

        # Extract the text content of the latest message
        question = user_input['messages'][-1]['content'][0]['data']

        # Store the question in the workflow's short-term memory
        self.stm(self.workflow_instance_id)['user_question'] = question

        return {'user_question': question}
64 changes: 64 additions & 0 deletions examples/cotprompt/agent/cot_agent/cot_reasoning.py
@@ -0,0 +1,64 @@
from omagent_core.models.llms.openai_gpt import OpenaiGPTLLM
from omagent_core.engine.worker.base import BaseWorker
from omagent_core.utils.registry import registry
from omagent_core.models.llms.base import BaseLLMBackend
from omagent_core.models.llms.schemas import Message, Content
from omagent_core.utils.logger import logging
from pathlib import Path

CURRENT_PATH = Path(__file__).parents[0]


@registry.register_worker()
class COTReasoning(BaseLLMBackend, BaseWorker):
    """Generate the intermediate reasoning steps for a question without giving the answer."""

    llm: OpenaiGPTLLM

    def _run(self, user_question: str, *args, **kwargs):
        # Build the prompt that elicits step-by-step reasoning
        reasoning_prompt = self.generate_reasoning_prompt(user_question)

        # Call the LLM to generate the reasoning steps
        reasoning_result = self.call_llm_with_prompt(reasoning_prompt)

        # Store the reasoning result in the workflow's short-term memory
        self.stm(self.workflow_instance_id)['reasoning_result'] = reasoning_result

        self.callback.send_answer(self.workflow_instance_id, msg=reasoning_result)

        return {'reasoning_result': reasoning_result}

    def generate_reasoning_prompt(self, question: str):
        """Build the prompt used to elicit the reasoning process."""
        return f"""
Please reason step by step based on the following question and provide the reasoning process:

question: {question}

Reasoning Steps:
1. Analyze the question and consider possible solutions.
2. Reason through the different aspects of the question step by step.
3. Only provide the reasoning process, do not give the final answer.

Please begin reasoning:
"""

    def call_llm_with_prompt(self, prompt: str):
        """Call the LLM and return the generated reasoning steps."""
        # Wrap the prompt in a single user text message
        chat_message = [Message(role="user", message_type='text', content=prompt)]

        response = self.llm.generate(chat_message)
        if response is None:
            raise ValueError("LLM inference returned None.")
        return response['choices'][0]['message']['content']
17 changes: 17 additions & 0 deletions examples/cotprompt/compile_container.py
@@ -0,0 +1,17 @@
# Import core modules and components
from omagent_core.utils.container import container

# Import workflow related modules
from pathlib import Path
from omagent_core.utils.registry import registry

# Set up path and import modules
CURRENT_PATH = root_path = Path(__file__).parents[0]
registry.import_module()

# Register required components
container.register_callback(callback='AppCallback')
container.register_input(input='AppInput')
container.register_stm("RedisSTM")
# Compile container config
container.compile_config(CURRENT_PATH)
7 changes: 7 additions & 0 deletions examples/cotprompt/configs/llms/gpt4o.yml
@@ -0,0 +1,7 @@
name: OpenaiGPTLLM
model_id: gpt-4o-mini
api_key: ${env| custom_openai_key, openai_api_key}
endpoint: ${env| custom_openai_endpoint, https://api.openai.com/v1}
temperature: 0
vision: true
response_format: json_object
6 changes: 6 additions & 0 deletions examples/cotprompt/configs/llms/json_res.yml
@@ -0,0 +1,6 @@
name: OpenaiGPTLLM
model_id: gpt-4o-mini
api_key: ${env| custom_openai_key, openai_api_key}
endpoint: ${env| custom_openai_endpoint, https://api.openai.com/v1}
temperature: 0
#vision: true
6 changes: 6 additions & 0 deletions examples/cotprompt/configs/llms/text_res.yml
@@ -0,0 +1,6 @@
name: OpenaiGPTLLM
model_id: gpt-4o-mini
api_key: ${env| custom_openai_key, openai_api_key}
endpoint: ${env| custom_openai_endpoint, https://api.openai.com/v1}
temperature: 0
response_format: text
10 changes: 10 additions & 0 deletions examples/cotprompt/configs/tools/all_tools.yml
@@ -0,0 +1,10 @@
llm: ${sub|text_res}
tools:
  - Calculator
  - CodeInterpreter
  - ReadFileContent
  - WriteFileContent
  - ShellTool
  - name: WebSearch
    bing_api_key: ${env|bing_api_key, null}
    llm: ${sub|text_res}
5 changes: 5 additions & 0 deletions examples/cotprompt/configs/workers/cot_conclude.yml
@@ -0,0 +1,5 @@
name: COTConclusion
llm: ${sub|json_res}
tool_manager: ${sub|all_tools}
output_parser:
  name: StrParser
5 changes: 5 additions & 0 deletions examples/cotprompt/configs/workers/cot_input_interface.yml
@@ -0,0 +1,5 @@
name: COTInputInterface
llm: ${sub|json_res}
#tool_manager: ${sub|all_tools}
output_parser:
  name: StrParser
5 changes: 5 additions & 0 deletions examples/cotprompt/configs/workers/cot_reasoning.yaml
@@ -0,0 +1,5 @@
name: COTReasoning
llm: ${sub|json_res}
#tool_manager: ${sub|all_tools}
output_parser:
  name: StrParser
84 changes: 84 additions & 0 deletions examples/cotprompt/container.yaml
@@ -0,0 +1,84 @@
conductor_config:
  name: Configuration
  base_url:
    value: http://localhost:8080
    description: The Conductor Server API endpoint
    env_var: CONDUCTOR_SERVER_URL
  auth_key:
    value: null
    description: The authorization key
    env_var: AUTH_KEY
  auth_secret:
    value: null
    description: The authorization secret
    env_var: CONDUCTOR_AUTH_SECRET
  auth_token_ttl_min:
    value: 45
    description: The authorization token refresh interval in minutes.
    env_var: AUTH_TOKEN_TTL_MIN
  debug:
    value: false
    description: Debug mode
    env_var: DEBUG
connectors:
  redis_stream_client:
    name: RedisConnector
    host:
      value: localhost
      env_var: HOST
    port:
      value: 6379
      env_var: PORT
    password:
      value: null
      env_var: PASSWORD
    username:
      value: null
      env_var: USERNAME
    db:
      value: 0
      env_var: DB
  redis_stm_client:
    name: RedisConnector
    host:
      value: localhost
      env_var: HOST
    port:
      value: 6379
      env_var: PORT
    password:
      value: null
      env_var: PASSWORD
    username:
      value: null
      env_var: USERNAME
    db:
      value: 1
      env_var: DB
components:
  RedisSTM:
    name: RedisSTM
  DefaultCallback:
    name: DefaultCallback
    bot_id:
      value: ''
      env_var: BOT_ID
    start_time:
      value: 2024-11-10_20:51:13
      env_var: START_TIME
    folder_name:
      value: ./running_logs/2024-11-10_20:51:13
      env_var: FOLDER_NAME
  AppInput:
    name: AppInput
  AppCallback:
    name: AppCallback
    bot_id:
      value: ''
      env_var: BOT_ID
    start_time:
      value: 2024-11-10_20:51:13
      env_var: START_TIME
    folder_name:
      value: ./running_logs/2024-11-10_20:51:13
      env_var: FOLDER_NAME