Click “Prompts” on the left sidebar. This will take you to a list of the existing prompts for your org. To create a new one, click “New prompt” in the top-right corner.

  • Prompt name: The name of the prompt is used to reference it elsewhere in Console, so you should pick something that’s easily identifiable. Enter “Weather prompt” here.

  • Prompt description (optional): This description is displayed for extra context on the prompt list screen you just saw. Enter “Prompt for a weather agent” here.

  • Model provider: This is the provider for the LLM that the prompt, and therefore any agents using it, will converse with. Select “OpenAI” here.

  • Model: This is the specific LLM model that the prompt will use. Select “GPT-4o August” here.

  • Tools: Here we can give the prompt access to tools. Select “hangup” and “summary” for now, since these are standard tools that every prompt should be able to use; we’ll come back to this field later, after we’ve created our custom tools.

  • Content: This is the actual text of the prompt that will be sent to the LLM at the beginning of the conversation. Enter the following here:

You are a weather agent. You can tell the user information about the current weather in a given city, and also answer general weather-related questions.

When asked about any other topic, don't answer and instead remind the caller that you are a weather agent. Keep the tone professional, friendly, and clear. You are chatting with a user, so always respond naturally and conversationally. Do not use nonverbal cues, describe system actions, or add any other information.

Now click “Save”. We now have a prompt that explains to the LLM how it should behave as a weather agent, but it contains no instructions for actually fetching the weather information it’s supposed to provide to the user. We’ll set that up over the next couple of steps.
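If it helps to see everything in one place, here is a rough sketch of the configuration we just entered, written as a plain Python dictionary. The field names are our own shorthand for the form fields above, not an actual Console API; this prompt is created entirely through the UI.

```python
# Illustrative only: these keys mirror the form fields described above and are
# not part of any Console API. The prompt itself is created through the UI.
import json

weather_prompt = {
    "name": "Weather prompt",
    "description": "Prompt for a weather agent",
    "model_provider": "OpenAI",
    "model": "GPT-4o August",
    # Standard tools every prompt should have; custom tools are added later.
    "tools": ["hangup", "summary"],
    "content": (
        "You are a weather agent. You can tell the user information about the "
        "current weather in a given city, and also answer general "
        "weather-related questions.\n\n"
        "When asked about any other topic, don't answer and instead remind the "
        "caller that you are a weather agent. Keep the tone professional, "
        "friendly, and clear. You are chatting with a user, so always respond "
        "naturally and conversationally. Do not use nonverbal cues, describe "
        "system actions, or add any other information."
    ),
}

if __name__ == "__main__":
    # Print the configuration so it can be checked against the form fields.
    print(json.dumps(weather_prompt, indent=2))
```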

Click “Create a data source” below to continue the tutorial.

(Full prompts docs)