Generate better prompts in the developer console

You can now generate production-ready prompt templates in the Anthropic Console. Describe what you want to achieve, and Claude will use prompt engineering techniques such as chain-of-thought reasoning to create an effective, precise, and reliable prompt.

This feature is designed to help users who are new to prompt engineering, as well as to save time for experienced prompt engineers. You will get the best results by providing the prompt generator with detailed information about your task and desired output formatting.

Although the generated prompts do not always produce perfect results, they often outperform hand-written prompts created by those who are new to prompt engineering. The generated prompt templates are also editable, allowing you to quickly tweak them for optimal performance.

Prompting best practices

The prompt templates generated by this new feature make use of many of our prompt engineering best practices. One such practice is role setting, where Claude is encouraged to take on the characteristics of an expert at the chosen task. In our content moderation example, the role setting looks like this:

You will be acting as a content moderator to classify chat transcripts as either approved or rejected based on a provided content moderation policy.
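In the Messages API, role setting like this typically lives in the system prompt. A minimal sketch of how such a payload might be assembled (the model name is illustrative, and no API call is made here):

```python
ROLE_PROMPT = (
    "You will be acting as a content moderator to classify chat transcripts "
    "as either approved or rejected based on a provided content moderation "
    "policy."
)

def build_request(transcript: str) -> dict:
    """Assemble a Messages API style payload with a role-setting system prompt."""
    return {
        "model": "claude-3-5-sonnet-20240620",  # illustrative model name
        "max_tokens": 1024,
        "system": ROLE_PROMPT,  # role setting goes in the system prompt
        "messages": [{"role": "user", "content": transcript}],
    }

request = build_request("User A: check out this totally legitimate offer...")
```

Keeping the role text in one constant makes it easy to iterate on the wording without touching the rest of the request.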

Another practice is chain-of-thought reasoning, in which Claude is given time and space to collect its thoughts before answering. This allows for more thorough and well-reasoned responses to complex queries. When asked to generate a prompt for product recommendations based on a customer's previous transactions, this is implemented as follows:

In a <scratchpad>, brainstorm 3 different product recommendations you could make to this customer based on their transaction history. For each potential recommendation, provide a brief rationale explaining why you think it would be a good fit for this customer.
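Because the scratchpad is wrapped in tags, an application can strip the reasoning out before showing the final answer to a user. A minimal sketch (the response text here is invented for illustration):

```python
import re

def split_scratchpad(response: str) -> tuple[str, str]:
    """Return (reasoning, answer): the <scratchpad> contents and the rest."""
    match = re.search(r"<scratchpad>(.*?)</scratchpad>", response, re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<scratchpad>.*?</scratchpad>", "", response, flags=re.DOTALL).strip()
    return reasoning, answer

response = (
    "<scratchpad>1. Trail shoes: fits their hiking purchases. "
    "2. Water filter: pairs with their outdoor gear. "
    "3. Headlamp: same category.</scratchpad>\n"
    "Based on your history, I recommend the trail shoes."
)
reasoning, answer = split_scratchpad(response)
```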

Additionally, the templates often place the “variables”—input fields where custom data can be inserted—between XML tags. This follows another key best practice: clearly delineating the different parts of the prompt so it has an unambiguous structure. When asked for a prompt that translates code to Python, we see that the longer and more ambiguous {{CODE}} variable is wrapped in XML tags, while the simple {{LANGUAGE}} variable is positioned inline.

Your task is to translate a piece of code from another programming language into Python.

Here is the code to translate:

<code>
{{CODE}}
</code>

The code is written in {{LANGUAGE}}.
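At run time, the handlebars-style placeholders are replaced with real values. A minimal rendering sketch (the substitution helper is an assumption for illustration, not the Console's implementation):

```python
import re

TEMPLATE = """Your task is to translate a piece of code from another programming language into Python.

Here is the code to translate:

<code>
{{CODE}}
</code>

The code is written in {{LANGUAGE}}."""

def render(template: str, variables: dict[str, str]) -> str:
    """Substitute {{NAME}} placeholders with the supplied values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], template)

prompt = render(TEMPLATE, {"CODE": "int x = 1;", "LANGUAGE": "C"})
```

Wrapping the multi-line {{CODE}} value in `<code>` tags means Claude can always tell where the snippet ends and the instructions resume.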

In some cases, you’ll see Claude writing example inputs and outputs to give itself clear direction around the types of answers it thinks you want. You can edit these examples to match your desired output formatting.

Behind the scenes

The prompt generator is based on a long prompt that itself uses many of the techniques already mentioned.

  • It contains numerous examples of task descriptions and prompt templates to show Claude how to go from a task description to a prompt template.
  • It encourages Claude to plan out the structure of the template it will produce before writing that template, allowing Claude time to collect its thoughts.
  • It has a strong “spine” composed of XML tags that mark the beginning and end of each section to enhance legibility.
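A prompt with an XML “spine” like this can also be parsed back into its sections programmatically. A small sketch using an invented, drastically simplified meta-prompt (the real one is far longer):

```python
import re

META_PROMPT = """<task_description>
Summarize customer feedback into three bullet points.
</task_description>
<planning_instructions>
Plan the structure of the template before writing it.
</planning_instructions>"""

def sections(prompt: str) -> dict[str, str]:
    """Map each XML-delimited section name to its contents."""
    return {
        name: body.strip()
        for name, body in re.findall(r"<(\w+)>(.*?)</\1>", prompt, re.DOTALL)
    }

parts = sections(META_PROMPT)
```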

You can see the full prompt in this Colab notebook.

Prompt templates as an evaluation tool

Variables in the templates you’ll get from the prompt generator will be in handlebars notation, as shown in the earlier content moderation example:

Here is the policy you should enforce:

<policy>
{{POLICY}}
</policy>

Here is the chat transcript to review and classify:

<transcript>
{{TRANSCRIPT}}
</transcript>

In this example, you could then upload your content policy and a range of different chat transcripts to see how Claude behaves. This process allows you to ensure that your application will respond appropriately across a variety of situations.
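In practice, this evaluation step amounts to rendering the same template against many inputs. A sketch, assuming {{POLICY}} and {{TRANSCRIPT}} placeholder names and a simple substitution helper (both are illustrative assumptions):

```python
import re

TEMPLATE = (
    "Here is the policy you should enforce:\n"
    "<policy>{{POLICY}}</policy>\n\n"
    "Here is the chat transcript to review and classify:\n"
    "<transcript>{{TRANSCRIPT}}</transcript>"
)

def render(template: str, variables: dict[str, str]) -> str:
    """Substitute {{NAME}} placeholders with the supplied values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], template)

policy = "Reject messages containing unsolicited advertising."
transcripts = [
    "User A: Hi, can you help me reset my password?",
    "User B: BUY CHEAP WATCHES NOW!!! Visit my site!!!",
]
# Each rendered prompt could then be sent to Claude and the verdicts compared.
prompts = [render(TEMPLATE, {"POLICY": policy, "TRANSCRIPT": t}) for t in transcripts]
```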

Customer spotlight: ZoomInfo

Go-to-market platform ZoomInfo uses Claude to make actionable recommendations and drive value for their customers. Their use of prompt generation helped significantly reduce the time it took to build an MVP of their RAG application, all while improving output quality.

“Anthropic’s new prompt generator feature enabled us to reach production-ready outputs much faster. It highlighted techniques I hadn’t been using to boost performance, and significantly reduced the time spent tuning our app,” said Spencer Fox, Principal Data Scientist at ZoomInfo. “We built a new RAG application and reached MVP in just a few days, reducing the time it took to refine prompts by 80%.”

Get started

To get started building production-ready prompts with Claude, visit the Anthropic Console.