
AI prompting techniques

May 13, 2025

One of the foundational skills in AI engineering is writing effective prompts. Whether you're building agents, implementing RAG pipelines, or designing multi-step workflows, writing great prompts helps LLMs follow instructions and produce reliable outputs.

Types of AI prompts

There are several common prompt formats used in AI development, each offering a different level of control over the model’s output. The most basic is zero-shot prompting, where you simply ask a question without giving the model any examples.

Single-shot prompting and few-shot prompting build on that by giving the model example inputs and outputs, which improves the structure, tone, and reliability of its responses.

You can also guide the model’s behavior more directly, shaping its role or personality, with system prompts.

Zero-shot prompting

Zero-shot prompting gives the LLM maximum freedom in its response. You prompt the LLM and hope for the best.

This approach works for simple queries but offers limited control over the format and quality. It's the fastest but least reliable method.

Prompt example:

Explain how recursion works in programming.
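
If you're calling a model from code, a zero-shot prompt is just the question itself. Here's a minimal sketch using the AI SDK's generateText; the model ID is a placeholder, so use whichever model you have access to:

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Zero-shot: no examples and no system prompt, just the question.
const { text } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model ID
  prompt: "Explain how recursion works in programming.",
});

console.log(text);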

Single-shot prompting

Single-shot prompts provide one example input and output pair.

Single-shot prompting establishes a pattern the model can follow, making it more reliable, especially when the output format matters.

Prompt example:

Summarize these paragraphs in one sentence:

Input: The study examined the effects of caffeine on cognitive performance. Participants who consumed caffeine showed improved reaction times and better focus on attention-based tasks compared to the control group. However, these benefits diminished after 4-5 hours.

Output: Caffeine temporarily improved reaction times and focus in study participants, with effects lasting 4-5 hours.

Input: Recent advancements in renewable energy technology have led to significant cost reductions. Solar panel efficiency has increased while manufacturing costs have decreased by 75% over the last decade. This has made solar energy competitive with fossil fuels in many markets.

Output:
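
In code, one way to send a single-shot prompt is to pass the example as a prior user/assistant exchange, with the real input as the final message. A rough sketch with the AI SDK (the model ID is again a placeholder):

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Single-shot: one example exchange shows the model the expected format,
// then the real input follows as the final user message.
const { text } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model ID
  system: "Summarize the given paragraph in one sentence.",
  messages: [
    {
      role: "user",
      content:
        "The study examined the effects of caffeine on cognitive performance. Participants who consumed caffeine showed improved reaction times and better focus on attention-based tasks compared to the control group. However, these benefits diminished after 4-5 hours.",
    },
    {
      role: "assistant",
      content:
        "Caffeine temporarily improved reaction times and focus in study participants, with effects lasting 4-5 hours.",
    },
    {
      role: "user",
      content:
        "Recent advancements in renewable energy technology have led to significant cost reductions. Solar panel efficiency has increased while manufacturing costs have decreased by 75% over the last decade. This has made solar energy competitive with fossil fuels in many markets.",
    },
  ],
});

console.log(text);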

Few-shot prompting

Few-shot prompts give multiple examples for more precise control over the output. More examples mean more guidance, but also a longer, more expensive prompt. Choose your approach based on how much precision you need.

Prompt example:

Classify these sentences as positive, neutral, or negative:

Text: The service was quick and the staff was friendly.  
Sentiment: Positive

Text: It arrived on time but wasn't exactly what I expected.  
Sentiment: Neutral

Text: I waited an hour and the product was damaged.  
Sentiment: Negative

Text: The website was easy to navigate.  
Sentiment:
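
In application code, few-shot prompts are often assembled from a small set of labeled examples. Here's a rough sketch of that pattern with the AI SDK, building the sentiment prompt above from an array (the model ID is a placeholder):

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Few-shot: a handful of labeled examples are joined into the prompt,
// followed by the new input the model should classify.
const examples = [
  { text: "The service was quick and the staff was friendly.", sentiment: "Positive" },
  { text: "It arrived on time but wasn't exactly what I expected.", sentiment: "Neutral" },
  { text: "I waited an hour and the product was damaged.", sentiment: "Negative" },
];

const prompt = [
  "Classify these sentences as positive, neutral, or negative:",
  ...examples.map((e) => `Text: ${e.text}\nSentiment: ${e.sentiment}`),
  "Text: The website was easy to navigate.\nSentiment:",
].join("\n\n");

const { text } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model ID
  prompt,
});

console.log(text); // e.g. "Positive"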

The Seed Crystal approach

Not sure where to start? Ask the model to generate a prompt for you.

Generate a prompt for requesting a picture of a dog playing with a whale.

This gives you a solid V1 to refine. You can also ask the model how to improve it.

For best results, generate the prompt with the same model you'll be prompting (e.g., Claude for Claude, GPT-4 for GPT-4).


System prompts

When you access a model via an API, you can usually set a system prompt, which gives the model the characteristics you want it to have. It applies in addition to the specific "user prompt" that gets passed in.

You can ask the model to answer the same question as different personas, like Steve Jobs or Harry Potter, or have it act as a cat expert assistant. Here’s an example of what that looks like in Mastra:

import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

const instructions = `You are a helpful cat expert assistant. When discussing cats, you should always include an interesting cat fact.

Your main responsibilities:
1. Answer questions about cats
2. Use the catFact tool to provide verified cat facts
3. Incorporate the cat facts naturally into your responses

Always use the catFact tool at least once in your responses to ensure accuracy.`;

const getCatFact = async () => {
  const { fact } = await fetch("https://catfact.ninja/fact").then((res) =>
    res.json()
  );
  return fact;
};

const catFact = createTool({
  id: "Get cat facts",
  inputSchema: z.object({}),
  description: "Fetches cat facts",
  execute: async () => {
    console.log("using tool to fetch cat fact");
    return {
      catFact: await getCatFact(),
    };
  },
});

const catOne = new Agent({
  name: "cat-one",
  instructions,
  // Any supported model works here; gpt-4o-mini is just an example
  model: openai("gpt-4o-mini"),
  tools: {
    catFact,
  },
});

const result = await catOne.generate("Tell me a cat fact");

console.log(result.text);

System prompts are useful for tone and persona shaping, but don’t usually improve factual accuracy.

Prompt formatting tricks

AI models are sensitive to formatting. You can use this to your advantage:

  • Use CAPITALIZATION for emphasis on certain words
  • Use XML-like structure to provide a clear path for instruction-following
  • Use structured sections for clarity and detail (e.g., context, task, constraints)

Tweak and iterate—small formatting changes can yield big results.

Example: A code generation prompt

Here's a simple code generation prompt that utilizes formatting to improve the model’s output:

# CONTEXT
You are helping a developer build a React component.

# TASK
Create a responsive navigation bar component.

# REQUIREMENTS
- Use functional components with hooks
- Include mobile and desktop views
- Support dark/light modes
- Implement accessibility features

# OUTPUT FORMAT
Provide the complete code with comments explaining key parts.

Prompting for AI agents

When building AI applications, it's important to consider how you will handle prompt output as well as evaluate its performance. Mastra provides structured output and evals to make this easy for developers building their first AI agent.

Structured prompt output

By default, LLMs return unstructured text as their response. With Mastra, you can have the output of your prompts returned in a structured format instead, with the schema defined in JSON Schema or Zod.
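
Here's a rough sketch of what that looks like, reusing the catOne agent from earlier; the schema and the output option on generate are illustrative, so check the current Mastra docs for the exact API in your version:

import { z } from "zod";

// Define the shape you want back instead of free-form text.
const catFactSchema = z.object({
  fact: z.string(),
  topic: z.string(),
});

// Passing a schema asks the agent to return a typed object rather than raw text.
const response = await catOne.generate("Tell me a cat fact", {
  output: catFactSchema,
});

console.log(response.object); // e.g. { fact: "...", topic: "..." }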

Evaluating prompt performance

Mastra provides comprehensive eval capabilities. Each eval returns a normalized score between 0 and 1 that can be logged and compared, helping you measure the quality of your prompts.

Mastra supports various eval metrics for assessing agent outputs, including metrics for answer relevancy, completeness, and prompt alignment. Read more about Mastra evals.
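
As an illustration, here's a hedged sketch that scores a response with the answer relevancy metric from @mastra/evals; the import path and constructor options may differ between versions, so treat it as a starting point rather than the exact API:

import { openai } from "@ai-sdk/openai";
import { AnswerRelevancyMetric } from "@mastra/evals/llm";

// LLM-judged metric: how relevant is the output to the query? Returns a 0-1 score.
const metric = new AnswerRelevancyMetric(openai("gpt-4o-mini"));

const query = "Tell me a cat fact";
const output = "Cats sleep for around 13 to 16 hours a day.";

const result = await metric.measure(query, output);
console.log(result.score); // normalized score between 0 and 1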

Conclusion

Effective prompt engineering is clear communication.

Start simple. Iterate. Refine. Great prompts are rarely perfect on the first try.

Experimenting with prompting techniques helps you get the most out of LLMs.

For a comprehensive introduction to building AI agents, download the book Principles of Building AI Agents.
