4 changes: 2 additions & 2 deletions pages/techniques/art.en.mdx
@@ -8,7 +8,7 @@ import ART2 from '../../img/ART2.png'
Combining CoT prompting and tools in an interleaved manner has been shown to be a strong and robust approach for addressing many tasks with LLMs. These approaches typically require hand-crafting task-specific demonstrations and carefully scripted interleaving of model generations with tool use. [Paranjape et al., (2023)](https://arxiv.org/abs/2303.09014) propose a new framework that uses a frozen LLM to automatically generate intermediate reasoning steps as a program.

ART works as follows:
- given a new task, it select demonstrations of multi-step reasoning and tool use from a task library
+ given a new task, it selects demonstrations of multi-step reasoning and tool use from a task library
- at test time, it pauses generation whenever external tools are called, and integrates their output before resuming generation
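
The pause-and-resume loop described above can be sketched in a few lines. This is a hedged illustration only, not the ART codebase: the `[tool: input]` marker convention, the `stub_llm` function, and the toy `calculator` tool are all hypothetical stand-ins for a frozen LLM and a real tool library.

```python
# Hypothetical sketch of ART-style interleaved generation and tool use.
# The frozen LLM is stubbed; it signals a tool call with "[tool: input]".
import re

TOOLS = {
    # Toy tool for illustration; a real library would register code
    # execution, search, retrieval, etc.
    "calculator": lambda expr: str(eval(expr)),
}

def stub_llm(prompt: str) -> str:
    """Stand-in for a frozen LLM: returns the next generation chunk."""
    if "Tool output:" not in prompt:
        return "Step 1: compute the product. [calculator: 37*12]"
    return "Step 2: the answer is 444."

def art_generate(task: str, max_steps: int = 5) -> str:
    prompt = task
    for _ in range(max_steps):
        chunk = stub_llm(prompt)
        prompt += "\n" + chunk
        match = re.search(r"\[(\w+): ([^\]]+)\]", chunk)
        if match:
            name, arg = match.groups()
            # Pause: run the external tool, splice its output into the
            # prompt, then resume generation on the augmented context.
            prompt += f"\nTool output: {TOOLS[name](arg)}"
        else:
            # No tool call in this chunk: generation is complete.
            return prompt
    return prompt
```

The key design point the sketch tries to show is that the LLM itself stays frozen; all task adaptation happens through the selected demonstrations and the tool outputs spliced into the prompt.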

ART encourages the model to generalize from demonstrations to decompose a new task and
@@ -22,4 +22,4 @@ ART substantially improves over few-shot prompting and automatic CoT on unseen tasks
Below is a table demonstrating ART's performance on BigBench and MMLU tasks:

<Screenshot src={ART2} alt="ART2" />
- Image Source: [Paranjape et al., (2023)](https://arxiv.org/abs/2303.09014)
+ Image Source: [Paranjape et al., (2023)](https://arxiv.org/abs/2303.09014)