A prompt is a set of natural language instructions to an AI agent that detail everything the agent does, how it does those things, and what it shouldn’t do.
  • Through the prompt, you also give the agent access to tools, which are APIs that allow the agent to do useful work like schedule appointments or relay messages.
  • A prompt can be reused by multiple agents.
prompts_list.png

Creating a prompt

To create a prompt, click “New prompt” in the top right corner.
prompts_create.png
  • Name: Name of the prompt. Must be a unique name.
  • Description (optional): Description of the prompt.
  • Provider: Select the LLM provider. Syllable currently supports Azure OpenAI, Google, and OpenAI.
  • Model: Select the LLM model. (e.g., GPT-4o, Gemini 2.0 Flash Lite)
  • Version: Depending on the provider and model, select the version. Some models do not have a version, in which case, this field is disabled.
  • Seed (optional): The “seed” is a starting point for the random number generator used by the LLM during text generation. 
    • When the temperature is greater than 0 (meaning randomness is introduced), the seed ensures that this randomness is reproducible.
    • If you use the same seed, prompt, and other parameters (including temperature), you will consistently get the same output.
    • Changing the seed, while keeping other settings the same, will produce different outputs. 
    • If “seed” is not set, it defaults to “null”. If set, it must be an integer.
  • Temperature (optional): The “temperature” primarily determines the randomness and creativity of the model’s output. It controls how the model selects the next word in a sequence, influencing the overall predictability and variation of the generated text.
    • Low temperature (closer to 0) leads to more deterministic and predictable outputs, prioritizing the most probable tokens according to the model’s training data.
    • High temperature (closer to 1 or higher) increases the randomness and creativity, making the model more likely to select less probable tokens, resulting in diverse and sometimes surprising responses.
    • A temperature of 0 theoretically implies a fully deterministic output (always selecting the most probable token), but in practice, some minor variations might still occur due to factors like race conditions in the model’s execution environment. 
    • If you want to minimize creativity (and reduce the chance of hallucinations), set the temperature to 0. 
    • If “temperature” is not set, it defaults to “null”. If set, it must be a number; it does not have to be an integer.
  • Tools: Add tools and APIs to the prompt to give the agent the ability to do useful work, like scheduling appointments or relaying messages. See the system tools Syllable provides.
  • Prompt text: This is the actual text of the prompt that will be sent to the LLM at the beginning of the conversation. If you want an agent using the prompt to call the tools you added above, you should explicitly tell it how and when to call them (see below).
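Taken together, these fields map onto the request that the LLM provider ultimately receives. As a rough illustration only (this is an OpenAI-style chat-completions payload, not Syllable’s internal format, and the model name, seed, and temperature values are arbitrary examples):

```python
# Sketch of an OpenAI-style chat-completions request assembled from the
# prompt settings above. Field names follow the public OpenAI API; the
# values are arbitrary examples, not Syllable's internal format.
prompt_config = {
    "model": "gpt-4o-2024-08-06",  # provider model + version
    "seed": 12345,                 # optional; omit to leave it null
    "temperature": 0,              # 0 = most deterministic; higher = more creative
    "messages": [
        {
            "role": "system",
            # The prompt text is sent at the beginning of the conversation.
            "content": "You are a conscientious customer service agent...",
        }
    ],
    "tools": [],  # tool/API definitions added to the prompt attach here
}

# With the same seed, prompt text, and temperature, repeated requests
# should produce the same output; changing the seed changes the output.
```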
Available models

Here is the list of Syllable-supported LLMs:

Provider       Model                    Version      API Version
Azure OpenAI   gpt-4o                   2024-08-06   2024-06-01
Azure OpenAI   gpt-4o-mini              2024-07-18   2024-06-01
Google         gemini-1.5-pro           002          v1beta
Google         gemini-2.0-flash         001          v1beta
Google         gemini-2.0-flash-lite    001          v1beta
OpenAI         gpt-4o                   2024-08-06
OpenAI         gpt-4o-mini              2024-07-18
OpenAI         gpt-4.1

Writing a prompt

Writing a good prompt for a Large Language Model (LLM) to follow is the same as writing good instructions for a human to follow. It’s essentially a written communication challenge that requires the prompt writer to write clear, unambiguous, and explicit instructions. Writing a good prompt for your use case is an iterative process that requires lots of trial and error, writing and testing. 
  • Give the agent a role: Tell the agent exactly what role you want it to assume, like “You are a conscientious customer service agent who is dedicated to helping callers out with their appointment related requests.”
  • State the context: Define the situations when the agent would be used, such as people calling in to manage an appointment, or ask a question about their bill.
  • State the desired behavior: Specify how you want the agent to respond to callers, and give examples of how you want it to respond. For example, “Please use phrases like ‘Thank you,’ ‘You’re welcome,’ and ‘How can I assist you?’ during the call.” and “Keep responses short and to the point to avoid confusion.”
  • Set parameters: Include specific guidelines about error handling and edge cases, like “Do not give out medical advice or attempt to diagnose any condition, medical or otherwise.” 
  • Give the prompt access to a tool: Explicitly mention the name of the tool, its purpose, and usage conditions. For example, “Use the ScheduleNewAppointment tool to schedule new appointments for callers. Only use this tool if the day is on or between Monday-Friday AND the time is between 9am-5pm PST.” 
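A tool instruction in the prompt text works together with the tool’s definition. As a hypothetical illustration (the ScheduleNewAppointment tool and its parameters are invented for this example, in OpenAI function-calling style), a matching tool definition might look like:

```python
# Hypothetical tool definition in OpenAI function-calling style.
# The tool name matches the one referenced in the prompt text, so the
# model can connect the instruction ("Use the ScheduleNewAppointment
# tool...") to this definition.
schedule_tool = {
    "type": "function",
    "function": {
        "name": "ScheduleNewAppointment",
        "description": (
            "Schedule a new appointment for a caller. Only valid "
            "Monday-Friday, between 9am and 5pm PST."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "date": {
                    "type": "string",
                    "description": "Appointment date, YYYY-MM-DD",
                },
                "time": {
                    "type": "string",
                    "description": "Appointment time, e.g. 10:30",
                },
            },
            "required": ["date", "time"],
        },
    },
}
```

The usage conditions appear in both places: the prompt tells the agent when to call the tool, and the description reinforces the same constraint for the model.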
Here’s an example of a low-quality prompt and an example of a high-quality prompt:

Example Prompt 1

"Reply to a customer who is mad about their late order."

Low-quality prompt attributes:
  • Lack of clarity: The instruction is vague and does not specify what to include in the response.
  • Lack of conciseness: Although short, it is too brief to provide meaningful guidance.
  • Irrelevance: The prompt does not guide the LLM on how to address the customer’s feelings or the issue.
  • Ambiguousness: The term “mad” is vague, and the prompt does not specify the tone or content required.
  • Lack of explicitness: There are no explicit instructions on what elements the response should contain, leading to potential gaps in the reply.
Example Prompt 2

"As an AI customer service agent, craft a response to a customer who is upset because their order did not arrive on time. Your response should include an apology, an explanation of possible reasons for the delay, a reassurance that the issue will be addressed, and an offer for compensation or a solution. Keep the response professional, empathetic, and concise."

High-quality prompt attributes:
  • Clarity: The task is clearly defined with specific elements to include (apology, explanation, reassurance, and offer for compensation).
  • Conciseness: The prompt is direct and to the point, without unnecessary details.
  • Relevance: All instructions are related to addressing the customer’s complaint.
  • Unambiguousness: There is no room for misinterpretation about what the response should contain.
  • Explicitness: Each part of the desired response is explicitly described, ensuring the LLM understands the structure and tone needed.

Using variables in prompts

Variables enable dynamic, personalized content in your prompts. Use the {{ variable }} syntax to insert context-specific information that gets resolved at runtime. Examples of variable usage:
You are a customer service agent for {{ vars.company_name }}. 
Today's date is {{ vars.session.date }}.
Please assist the caller in {{ vars.session.language }}.
For complete variable documentation, syntax, and examples, see Variables.
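Conceptually, the {{ variable }} placeholders are filled in before the prompt text is sent to the LLM. Here is a minimal sketch of that substitution using Python’s re module (Syllable’s actual resolver may behave differently, e.g. around missing variables):

```python
import re

def resolve_variables(template: str, values: dict) -> str:
    """Replace {{ name }} placeholders with entries from `values`.

    Unknown variables are left untouched so that missing context
    is visible rather than silently dropped.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        return str(values.get(name, match.group(0)))

    # Matches {{ dotted.variable.names }} with optional inner whitespace.
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", substitute, template)

prompt = "You are a customer service agent for {{ vars.company_name }}."
print(resolve_variables(prompt, {"vars.company_name": "Acme Dental"}))
# -> You are a customer service agent for Acme Dental.
```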

The cost of a prompt

Each word and punctuation mark in your prompt counts toward the token limit. Longer prompts and responses use more tokens, increasing the cost. The more concise your instructions, the cheaper (and faster) your prompt will be to run. You can estimate the cost of your prompt with OpenAI’s tokenizer. If the LLM needs access to a lot of static information, such as the addresses and hours of your business’s locations, consider storing it in a data source instead of in your prompt; this reduces the number of tokens, and therefore the cost and latency.

Prompt-writing resources

If you’re interested in learning more about how to write effective prompts for your AI agent, check out some of these resources:

Testing a prompt

Once your prompt has been created, you can test how well it behaves by assigning it to an agent and chatting with it. See how to create agents.

Prompt versions

A prompt version refers to a specific, saved iteration or state of a prompt at a particular point in time. When changes are made to a prompt and saved, a new version is automatically created and added to the prompt’s version history, while the previous version is preserved.  This includes any changes to:
  • Prompt config (e.g. prompt name, description)
  • Tools added to or removed from the prompt
  • Prompt text and instructions
Prompt version tracking allows users to:
  • Track changes: See what modifications were made, when, and by whom.
  • Compare versions: Identify differences between various iterations of the prompt.
  • Restore previous versions: Restore the prompt to an earlier version if needed, effectively undoing unwanted changes or recovering lost content.
Each version in the history represents a snapshot of the prompt’s content and metadata at the moment it was saved or checked in, providing a chronological record of its evolution.

Viewing prompt versions
To view a prompt’s versions, go to “Prompts”, select a prompt, and click on the “Versions” tab.
prompts_versions.png
For each version, you’ll be able to view:
  • Version number: The version number is automatically incremented by the system when a change is saved.
  • Comment: The comment entered in the Save dialog; users must enter one whenever they save changes to a prompt.
  • Last modified: The date and time the version was created, along with the email of the user who made the change.
Previewing a version
To preview a prompt version, click on a version row. You will see the snapshot of the prompt at that point in time, along with its configuration, tools, and the prompt instructions.
prompts_version.png

Comparing versions
To compare versions, select a version and click “Compare”. 
  • A dialog will appear where you can select another version to compare to. 
  • You’ll be able to see the differences between the two versions: configuration settings, attached tools, and prompt text instructions.
  • You can restore either version.
prompts_compare.png

Restoring a version
To restore a version, click “Restore” on the version you’d like to restore. You’ll be able to review the configuration and prompt text before saving and going live.
  • If tools have been added or removed since this prompt version was created, make sure the references in the prompt instructions are updated or removed.
  • If a tool used in this prompt version has since been deleted, the tool will no longer be available, and you will have to remove any references to it from the prompt.
Once you’ve updated the prompt configuration and instructions, click “Restore & Save” to save the prompt and go live across all agents using this prompt. Once a version is restored, it is saved as a new version and the version number is incremented.