Learn how to write clear, effective text prompts that help AI models like ChatGPT, Gemini or Claude deliver the results you actually want.
Writing a good AI prompt isn’t about magic words; it’s about clarity.
When you learn how to talk to AI models effectively, you can turn rough ideas into accurate, useful and creative results. A vague, short prompt often leads to average results, but adding context, clear instructions and format requirements can deliver exactly what you need.
This guide will help you understand the basics of text prompt engineering and give you practical examples you can start using right away. The techniques apply to all modern AI models, such as GPT-5, Gemini 2.5 and Claude 4.5.
Prompt engineering is the art of telling AI exactly what you want and providing just enough context for it to understand you.
Think of it like talking to a super-smart assistant who does exactly what you ask, but only if you phrase your request clearly.
1. Control the content: What should the AI say? This includes the topic (like a blog post or code snippet) as well as the tone, length and level of detail.
2. Control the format: How should the AI say it? Should it respond in plain text, Markdown, JSON, a table or bullet points with a specific structure?
A strong prompt doesn't just make your results better. It saves time, reduces frustration and helps you build repeatable workflows. Once you know the structure, you can apply it to nearly any use case.
There is one limitation to keep in mind, though. AI models can only “see” a limited amount of text at once; this is called the context window. If your conversation or prompt gets very long (lots of pasted text, many follow-up messages), the model may start to lose track of earlier parts.
To avoid this:
Keep prompts focused on the current task.
When a chat gets very long, start a new conversation and paste only the most important context.
For big documents, summarize or chunk them instead of pasting everything at once.
You usually won’t hit this limit in normal use, but it’s good to know why the model sometimes “forgets” things from much earlier in the chat.
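If you work with large documents programmatically, the “summarize or chunk” advice maps to a simple pattern. Here is a minimal Python sketch; the `summarize` argument is a placeholder for whatever model call you use, and chunking by characters is a rough stand-in for counting tokens:

```python
# Minimal "chunk, then summarize" sketch for documents too large for
# one prompt. summarize() is a placeholder for your LLM call;
# character counts are a rough proxy for token counts.
def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long document into pieces small enough for one prompt."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_document(text: str, summarize) -> str:
    """Summarize each chunk separately, then merge the partial summaries."""
    partials = [summarize(f"Summarize the key points:\n{chunk}")
                for chunk in chunk_text(text)]
    return summarize("Combine these partial summaries into one summary:\n"
                     + "\n".join(partials))
```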
A Note on Customization: Some tools, like ChatGPT, offer "Custom Instructions" or settings where you can provide general rules (e.g., "Always be brief," "Avoid politics"). These are helpful, but this guide focuses on prompt-by-prompt techniques that work across all models.
Each technique below explains what it is and when to use it, and includes an easy example you can copy. You don't need to use all of them at once; learn to pick the right one for your task.
1. Instruction Prompts
What it is: A clear, single-task instruction.
When to use: Anytime you need a simple, focused output.
Explain "prompt engineering” for beginners in about 120 words. Use plain language and one real-world example.
2. Role (Persona) Prompts
What it is: Ask the AI to respond from a specific perspective.
When to use: To set tone, expertise, or style.
You are a senior legal writer for non-lawyers. Summarize this contract clause in plain English (max 120 words).
or:
You are an expert Python developer writing for a team of junior engineers.
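If you use a model through an API instead of a chat window, the persona usually goes in the system message. A minimal sketch using the OpenAI Python SDK (assumed installed, with an API key configured; the model name is illustrative and the clause text is a placeholder):

```python
# Minimal sketch: a role/persona prompt as a system message.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set; the model
# name is illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a senior legal writer for non-lawyers."},
        {"role": "user",
         "content": "Summarize this contract clause in plain English "
                    "(max 120 words): ..."},  # paste the clause here
    ],
)
print(response.choices[0].message.content)
```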
3. Zero-Shot vs. Example-Based (Few-Shot) Prompts
What it is:
Zero-Shot: You give the AI instructions but no examples.
When to use: Simple tasks where matching a specific style or format is not important.
Few-Shot: You give the AI instructions plus 1–3 example outputs so it can mirror the tone, style or structure.
When to use: When you need highly consistent formatting, a specific tone or a precise task completed.
Best Practice: Use delimiters (like """ or <example>...</example>) to clearly separate your examples from your final instruction.
You’re a copywriter creating short motivational quotes.
Follow the same style as the examples below:
<example>“Don’t wait for inspiration. Start, and it will find you.” “Progress beats perfection — every single day.”
</example>Now write one about learning new skills.
Keep in mind that too many or very long examples can use up the context window and lead the model to copy your examples too literally instead of generalizing from them.
Note: Few-Shot vs. Fine-Tuning. Prompting with examples (few-shot) is not the same as "fine-tuning." Fine-tuning is a complex and expensive technical process that creates a new, custom version of a model. Few-shot prompting is just a clever way to guide the existing model in your prompt.
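If you’re calling a model through an API, few-shot examples can also be passed as prior conversation turns instead of delimiters in a single prompt. A minimal sketch with the OpenAI Python SDK (model name illustrative):

```python
# Minimal few-shot sketch: examples passed as earlier user/assistant
# turns so the model mirrors their style. Assumes the OpenAI Python
# SDK with OPENAI_API_KEY set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system", "content": "You write short motivational quotes."},
    # Few-shot examples as prior turns:
    {"role": "user", "content": "Write a quote about starting."},
    {"role": "assistant",
     "content": "Don’t wait for inspiration. Start, and it will find you."},
    {"role": "user", "content": "Write a quote about consistency."},
    {"role": "assistant",
     "content": "Progress beats perfection — every single day."},
    # The actual request:
    {"role": "user", "content": "Write one about learning new skills."},
]
response = client.chat.completions.create(model="gpt-4o-mini",
                                          messages=messages)
print(response.choices[0].message.content)
```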
4. Provide Context / Data (Context Injection)
What it is: Give the AI the exact information it needs to complete the task. This is a crucial technique to prevent hallucinations (making things up).
When to use: When you need the AI to summarize an article, answer questions about a document or use specific information.
Summarize the key points from the following article. Use only the provided text.
<article> [...pasted article text...] </article>
5. Structured Output Prompts
What it is: Make the AI return data in a fixed, predictable format.
When to use: For automations, integrations or structured summaries.
Return ONLY valid JSON with keys: "title", "summary", "key_points"

Generate 5 rows of dummy sales data in CSV format. Do not include any other text or explanations.

List three ideas, using this exact format:
Title: [Title of idea]
Explanation: [Short explanation]
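Structured output pays off most in code, where you can parse the reply directly. A minimal sketch with the OpenAI Python SDK (model name and topic are illustrative); the fallback matters because models occasionally wrap JSON in extra text:

```python
# Minimal sketch: request strict JSON, then parse and validate it.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set; the model
# name and topic are illustrative.
import json
from openai import OpenAI

client = OpenAI()
prompt = ('Return ONLY valid JSON with keys: "title", "summary", '
          '"key_points". Topic: prompt engineering for beginners.')
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
try:
    data = json.loads(response.choices[0].message.content)
    print(data["title"])
except json.JSONDecodeError:
    # Models sometimes wrap JSON in code fences or extra text;
    # strip the wrapper or retry the request.
    print("Response was not valid JSON; consider retrying.")
```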
6. Ask-Before-Answer Prompts
What it is: Encourage the AI to ask clarifying questions before it generates the full answer.
When to use: When your request might be unclear or too broad.
I need to write a marketing email for a new product. Before you draft it, ask me three clarifying questions about the product's target audience and the email's main goal.
7. Negative Prompts
What it is: Tell the AI what not to do or include.
When to use: To exclude common but unwanted themes, topics, or "fluffy" language.
Explain the advantages of generative AI over a traditional search engine. Do not cover any ads-related advantages.

Write a product description. Do not use any emojis or hashtags.
8. Self-Reflective / Iterative Prompts
What it is: Ask the AI to critique and improve its own previous response.
When to use: When an answer is good but not great, or to refine a draft step-by-step.
That's a good start. Now, analyze your response: what's good about it, what's bad, and how could it be improved? Then, provide the improved version.
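In code, this is just a second turn that keeps the first draft in the conversation history. A minimal sketch with the OpenAI Python SDK (model name and task are illustrative):

```python
# Minimal iterative-refinement sketch: draft, then ask the model to
# critique and improve its own draft. Assumes the OpenAI Python SDK
# with OPENAI_API_KEY set; the model name and task are illustrative.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user",
            "content": "Write a 50-word product blurb for a standing desk."}]

draft = client.chat.completions.create(model="gpt-4o-mini",
                                       messages=history)
history.append({"role": "assistant",
                "content": draft.choices[0].message.content})
history.append({"role": "user", "content":
                "Analyze your response: what's good about it, what's bad, "
                "and how could it be improved? Then provide the improved "
                "version."})
final = client.chat.completions.create(model="gpt-4o-mini",
                                       messages=history)
print(final.choices[0].message.content)
```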
These are great once you’re comfortable with the basics.
Reasoning Models and Built-In Chain of Thought
One common way to improve output quality is to add instructions like “think step by step”.
Some newer AI models, like OpenAI’s o-series, are reasoning models. They already perform step-by-step reasoning internally, even if you don’t ask for it.
With these models, adding such instructions can actually hurt performance because it interferes with their built-in process.
Best practice:
Reasoning models: Don’t include chain-of-thought prompts. Just state the task and how detailed the final answer should be.
Regular models: Chain-of-thought instructions can improve logical or multi-step tasks (“think step by step”, “explain your reasoning”).
There are other helpful techniques that go beyond the basic prompting concepts and can further improve your results:
| Technique | When to Use | Tip |
|---|---|---|
| Delimiters | For any complex prompt with multiple parts (instructions, examples, context). | Use clear separators like """, ---, or XML tags (e.g., <doc>...</doc>) to separate instructions from content. This vastly improves the AI's ability to understand your request. |
| Self-Consistency | For better accuracy on complex reasoning tasks. | Let the model explore multiple reasoning paths before deciding on the best one. |
| Tree-of-Thoughts / ReAct | For complex workflows or when the AI needs to use tools. | Encourage structured reasoning or the use of tools (like search or code execution) as part of its process. |
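Self-consistency in particular is easy to approximate yourself: sample several answers at a higher temperature and keep the most common one. A minimal sketch with the OpenAI Python SDK (model name and question are illustrative):

```python
# Minimal self-consistency sketch: sample multiple answers and take a
# majority vote. Assumes the OpenAI Python SDK with OPENAI_API_KEY
# set; the model name and question are illustrative.
from collections import Counter
from openai import OpenAI

client = OpenAI()
question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost? "
            "Reply with the number only.")

answers = []
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        temperature=0.8,  # some randomness, so reasoning paths differ
    )
    answers.append(response.choices[0].message.content.strip())

# Majority vote across the sampled answers.
best, votes = Counter(answers).most_common(1)[0]
print(f"{best} ({votes}/5 samples agree)")
```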
LLMs don’t calculate. They predict tokens. They often get simple math right because they’ve seen similar examples, but they can be confidently wrong on complex problems.
Some systems improve accuracy by letting the model generate and run code, but that depends on the platform.
How to reduce math errors:
Use chain-of-thought with regular models so they write out their logic.
For important numbers, verify with a real calculator.
When supported, ask the model to generate code to perform the calculation.
The bottom line: AI can explain math well and assist with reasoning, but it’s not a reliable calculator for complex or high-stakes tasks.
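As a concrete illustration of the “verify with a real calculator” advice, here is a minimal sketch that checks a model’s arithmetic locally (OpenAI Python SDK assumed; model name illustrative):

```python
# Minimal sketch: never trust a predicted number blindly; recompute
# it locally. Assumes the OpenAI Python SDK with OPENAI_API_KEY set;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "What is 17.5% of 23,480? "
                          "Reply with the number only."}],
)
model_answer = float(response.choices[0].message.content
                     .replace(",", "").replace("$", ""))

expected = 23480 * 0.175  # the real calculation: 4109.0
print(f"model: {model_answer}, calculator: {expected}, "
      f"match: {abs(model_answer - expected) < 0.01}")
```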