Prompt Engineering

Step-Back Prompting: A New Technique for Abstraction and Reasoning in Large Language Models


Sven

October 23rd, 2023

~ 3 min read

In the field of natural language processing, techniques that enable large language models (LLMs) to perform complex reasoning tasks are in high demand. One such technique that has gained attention is Step-Back Prompting. This approach prompts LLMs to first abstract high-level concepts and first principles, from which accurate answers to the original question can then be derived. In this blog post, we will explore the concept of Step-Back Prompting, its benefits, and its potential applications.

Understanding Step-Back Prompting

Step-Back Prompting is a prompt engineering technique that works in two passes with the same LLM. First, the original question is distilled into a more generic "step-back" question; then the model answers that step-back question, and its answer is used as grounding context to arrive at the final answer. Asking the model to step back and rephrase the question at a higher level of abstraction makes the relevant concepts easier to retrieve and reason about.
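The two passes above can be sketched as a small helper function. This is a minimal illustration, not the paper's implementation: `ask` stands in for whatever callable sends a prompt to your LLM and returns its text response, and the prompt wording is a plausible assumption rather than the exact prompts used in the paper.

```python
def step_back(question, ask):
    """Answer `question` via Step-Back Prompting.

    `ask` is any callable that sends a prompt string to an LLM and
    returns the model's text response (hypothetical stand-in for
    your own client).
    """
    # Pass 1 (abstraction): derive a more generic step-back question.
    stepback_question = ask(
        "Rewrite the following question as a more generic, higher-level "
        f"question about the underlying concept or principle:\n{question}"
    )
    # Answer the step-back question to obtain broad background knowledge.
    stepback_answer = ask(stepback_question)
    # Pass 2 (reasoning): answer the original question, grounded in
    # the background produced by the abstraction step.
    return ask(
        f"Background: {stepback_answer}\n"
        f"Using the background above, answer: {question}"
    )
```

Because `ask` is just a callable, the same function works with any model client, and the three calls make the abstraction-then-reasoning structure explicit.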

Examples of Step-Back Prompting

To illustrate the concept of Step-Back Prompting, let's consider a few examples:

1. Original Question: Which position did Knox Cunningham hold from May 1955 to April 1956?

Step-Back Question: Which positions has Knox Cunningham held in his career?

2. Original Question: Who was the spouse of Anna Karina from 1968 to 1974?

Step-Back Question: Who were the spouses of Anna Karina?

3. Original Question: Which team did Thierry Audel play for from 2007 to 2008?

Step-Back Question: Which teams did Thierry Audel play for in his career?

These step-back questions allow the LLM to generate more comprehensive answers by first considering the broader context, from which the specific answer can then be read off.

Benefits and Applications of Step-Back Prompting

Step-Back Prompting offers several benefits for improving the performance of LLMs on complex reasoning tasks. By eliciting an abstraction before the answer, it gives the model a reasoning scaffold that improves performance across a range of task types, including knowledge-intensive question answering and multi-step reasoning. It lets LLMs tackle intricate problems by breaking them into more manageable components, and it can be combined with other techniques such as Retrieval Augmented Generation (RAG) to enhance its effectiveness.
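One natural way to combine the two techniques is to retrieve passages for both the original question and the step-back question, since the broader query often surfaces documents the narrow one misses. The sketch below is an assumption about how such a combination could look, not a method prescribed by the paper; `ask` and `retrieve` are hypothetical stand-ins for your LLM client and retriever.

```python
def step_back_rag(question, ask, retrieve):
    """Step-Back Prompting combined with retrieval.

    `ask(prompt)` returns an LLM's text response; `retrieve(query)`
    returns a list of relevant passages. Both are placeholders for
    whatever stack you actually use.
    """
    # Abstraction step: form the more generic step-back question.
    stepback_question = ask(
        "Rewrite the following question as a more generic, higher-level "
        f"question:\n{question}"
    )
    # Retrieve for both queries; the step-back query broadens recall.
    passages = retrieve(question) + retrieve(stepback_question)
    context = "\n".join(passages)
    # Reasoning step: answer the original question over the pooled context.
    return ask(
        f"Context:\n{context}\n\nUsing the context above, answer: {question}"
    )
```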

Performance of Step-Back Prompting

The original paper compared Step-Back Prompting against baselines including Chain-of-Thought (CoT) prompting and found strong performance across a variety of complex tasks. It showed that Step-Back Prompting corrected a substantial share of the baseline models' incorrect predictions while introducing only a small number of new errors. When combined with RAG, Step-Back Prompting improved prediction accuracy further still.

Conclusion

Step-Back Prompting is an innovative technique that allows large language models to perform abstraction and reasoning tasks more effectively. By distilling complex questions into step-back questions, LLMs can generate accurate and comprehensive answers. The versatility of Step-Back Prompting makes it a valuable tool for enhancing the capabilities of LLMs in a wide range of applications.

Paper: https://arxiv.org/pdf/2310.06117.pdf