Author: Olaf Kopp
Reading time: 5 Minutes

Prompt Engineering Guide: Tutorial, best practices, examples


Prompt engineering is an essential skill to maximize LLM potential, providing methods to control and guide outputs without altering model parameters. Future advancements may focus on automated optimization, integration with education, and hybrid approaches like combining chain-of-thought reasoning with plan-and-solve frameworks. This domain holds promise for applications in education, software development, and broader AI collaboration.

Here is a guide to prompt engineering based on several research papers on the subject from 2023, 2024, and 2025, as well as my own prompting experience.

Guide: Best practices and examples for prompting

Based on the research papers, the best practices for prompting Large Language Models (LLMs) can be summarized into practical tips and case-specific examples. Here’s a guide:

1. Define the Goal and Context Clearly

  • Best Practice: Start prompts with a clear, task-specific description using action verbs (e.g., “Summarize,” “Analyze,” “List”).
  • Tip: Provide sufficient background context so the model understands the scenario.
  • Case Example:
    • Scenario: Drafting a marketing email.
    • Prompt: “Write a friendly and persuasive email to promote a new eco-friendly product. Include a call to action, highlight environmental benefits, and limit the email to 150 words.”
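The structure above (action verb, subject, context, constraints) can be sketched as a small prompt-assembly helper. The function name and layout are illustrative, not from any library:

```python
def build_prompt(action, subject, context="", constraints=None):
    """Assemble a prompt from an action verb, a subject, optional context, and constraints."""
    parts = [f"{action} {subject}."]
    if context:
        parts.append(f"Context: {context}")
    for constraint in constraints or []:
        parts.append(f"- {constraint}")
    return "\n".join(parts)

prompt = build_prompt(
    "Write",
    "a friendly and persuasive email to promote a new eco-friendly product",
    constraints=[
        "Include a call to action",
        "Highlight environmental benefits",
        "Limit the email to 150 words",
    ],
)
print(prompt)
```

Starting with the action verb and listing constraints as separate lines keeps every requirement visible to the model.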

2. Use Structured and Step-by-Step Instructions

  • Best Practice: Break tasks into logical steps or subtasks for better reasoning (e.g., Plan-and-Solve prompting).
  • Tip: Use phrases like “Think step by step” or “First plan the solution, then solve the problem.”
  • Case Example:
    • Scenario: Solving a math word problem.
    • Prompt: “Let’s first plan how to solve the problem step by step. Then execute the plan to calculate the total.”
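Plan-and-Solve prompting can be sketched as two turns: first ask only for a plan, then feed that plan back and ask for execution. The chat message format below is an OpenAI-style assumption, and the function is illustrative:

```python
PLAN_REQUEST = "Let's first devise a plan to solve the problem step by step. List the steps only."
SOLVE_REQUEST = "Now carry out the plan step by step and give the final answer."

def plan_and_solve(problem, plan):
    """Second turn of Plan-and-Solve: feed the model's plan back and ask for execution."""
    return [
        {"role": "user", "content": f"{problem}\n{PLAN_REQUEST}"},
        {"role": "assistant", "content": plan},
        {"role": "user", "content": SOLVE_REQUEST},
    ]

messages = plan_and_solve(
    "If apples cost 2 euros each, what do 5 apples cost in total?",
    "1. Identify the unit price. 2. Multiply by the quantity.",
)
```

Separating planning from execution gives the model a chance to commit to a structure before doing the arithmetic.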

3. Include Examples (Few-shot Prompting)

  • Best Practice: Use example queries and outputs to guide the model’s behavior (exemplar optimization).
  • Tip: Choose relevant examples and ensure they match the complexity of the task.
  • Case Example:
    • Scenario: Writing a product description.
    • Prompt:
      • Example 1: “Product: Bluetooth Headphones. Description: Lightweight, noise-cancelling headphones with up to 20 hours of battery life.”
      • Task: “Now write a description for: Portable Solar Charger.”
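A few-shot prompt is simply the worked examples followed by the new task. A minimal sketch (the helper name is illustrative):

```python
def few_shot_prompt(examples, new_product):
    """Build a few-shot prompt: worked examples first, then the new task in the same format."""
    lines = [f"Product: {product}. Description: {description}"
             for product, description in examples]
    lines.append(f"Now write a description for: {new_product}.")
    return "\n".join(lines)

examples = [
    ("Bluetooth Headphones",
     "Lightweight, noise-cancelling headphones with up to 20 hours of battery life."),
]
prompt = few_shot_prompt(examples, "Portable Solar Charger")
print(prompt)
```

Keeping the examples in exactly the format you want back makes the expected output shape unambiguous.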

4. Optimize Tone and Style with Role-Based Prompts

  • Best Practice: Ask the model to adopt a specific persona or writing style for a tailored output.
  • Tip: Define tone (e.g., professional, casual) and audience (e.g., “a tech-savvy millennial”).
  • Case Example:
    • Scenario: Writing for a younger audience.
    • Prompt: “You are a tech blogger targeting young professionals. Write an engaging blog post explaining the benefits of using a standing desk. Use relatable language and examples.”
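With chat-based APIs, the persona typically goes into the system message rather than the user message. A minimal sketch, assuming an OpenAI-style message format:

```python
def role_prompt(persona, task):
    """Chat-style message list: persona in the system message, task in the user message."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": task},
    ]

messages = role_prompt(
    "a tech blogger targeting young professionals",
    "Write an engaging blog post explaining the benefits of using a standing desk. "
    "Use relatable language and examples.",
)
```

Placing the persona in the system message keeps it in force across every turn of the conversation.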

5. Iteratively Refine Prompts

  • Best Practice: Start with a basic prompt and iteratively refine based on intermediate outputs.
  • Tip: Use placeholders or delimiters for easy adjustments.
  • Case Example:
    • Scenario: Summarizing a report.
    • First Prompt: “Summarize the attached report in 100 words.”
    • Refined Prompt: “Summarize the key points of the attached report into three main sections: Background, Findings, and Recommendations.”

6. Leverage Advanced Techniques

  • Chain-of-Thought Prompting: Guide models to solve problems using intermediate reasoning steps.
    • Scenario: “Solve this logic puzzle step by step. Start with identifying the known variables and then deduce the rest.”
  • Self-Consistency: Use multiple reasoning paths and select the most consistent answer.
    • Scenario: “Provide three different ways to solve this problem. Then choose the most logical one.”
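Self-consistency amounts to sampling several reasoning paths (at a non-zero temperature) and taking a majority vote over their final answers. A minimal sketch of the voting step:

```python
from collections import Counter

def self_consistent_answer(sampled_answers):
    """Pick the most frequent final answer across several sampled reasoning paths."""
    answer, _count = Counter(sampled_answers).most_common(1)[0]
    return answer

# e.g. the final answers extracted from three independently sampled chains of thought:
result = self_consistent_answer(["42", "42", "40"])
print(result)  # → 42
```

The vote filters out the occasional reasoning path that goes astray, at the cost of extra model calls.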

7. Adjust Model Parameters When Needed

  • Best Practice: Specify response length and output format in the prompt (e.g., tables, markdown, or bullet points), and adjust API parameters such as temperature when you control the call.
  • Tip: Tailor the response to the task.
  • Case Example:
    • Scenario: Preparing a structured report.
    • Prompt: “Summarize the findings in a table with columns: Category, Observations, and Recommendations.”
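When calling the model through an API, length and randomness are set as request parameters rather than in the prompt text. A sketch assuming common chat-completion conventions (the parameter and model names may differ per provider):

```python
def build_request(prompt, temperature=0.7, max_tokens=None):
    """Assemble a chat-completion request; parameter names follow common API conventions."""
    request = {
        "model": "gpt-4o-mini",  # placeholder model name
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    if max_tokens is not None:
        request["max_tokens"] = max_tokens
    return request

request = build_request(
    "Summarize the findings in a table with columns: "
    "Category, Observations, and Recommendations.",
    temperature=0.2,  # low temperature for a deterministic, structured report
    max_tokens=400,   # cap the response length
)
```

A low temperature suits structured, factual output; a higher one suits creative tasks.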

8. Focus on Emotional and Contextual Cues

  • Best Practice: Use emotional or persuasive language when needed (emotion-based prompting).
  • Case Example:
    • Scenario: Writing an apology letter.
    • Prompt: “Write a heartfelt apology letter for a delayed service. Express empathy and provide a solution.”

9. Explore Multimodal and Domain-Specific Applications

  • Best Practice: Combine prompts with domain-specific instructions or datasets.
  • Case Example:
    • Scenario: Generating legal advice.
    • Prompt: “As a legal assistant, summarize the implications of the attached contract clause for a non-legal audience.”

10. Use Prompt Libraries and Tools

  • Best Practice: Utilize resources like PromptPerfect or PromptHero to find optimized prompt templates.
  • Tip: Explore repositories of pre-designed prompts to save time.

Example scenario applying the best practices for prompt engineering

Here’s a concrete example of using best practices for Search Engine Optimization (SEO) tasks:

Scenario: Writing an SEO-optimized blog post for a tech startup offering cloud storage solutions.

Prompt Using Best Practices

Goal: Write an engaging, SEO-optimized blog post targeting the keyword “secure cloud storage for businesses.”

Prompt:

  • “You are an experienced SEO copywriter. Write a 500-word blog post targeting the keyword ‘secure cloud storage for businesses.’
  • Structure the post with the following sections: Introduction, Benefits, Use Cases, and Conclusion.
  • Use the keyword naturally 3-5 times throughout the content, including in the introduction and one subheading.
  • Optimize for readability using short paragraphs and bullet points where appropriate. Include a call to action encouraging readers to learn more about our cloud storage solution.”

Refinement Process

  1. First Output: The LLM generates a draft blog post.
  2. Refinement Prompt:
    • “Add a numbered list in the ‘Benefits’ section and include specific statistics or examples, such as the percentage of businesses adopting cloud storage in 2023.”
    • “Ensure the keyword ‘secure cloud storage for businesses’ appears in the conclusion.”
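Such a refinement step can be implemented by keeping a conversation history and appending the previous draft plus the new instruction. A minimal sketch, assuming a chat-style message list:

```python
def refine(history, previous_output, refinement):
    """Append the model's previous draft and a refinement instruction to the conversation."""
    return history + [
        {"role": "assistant", "content": previous_output},
        {"role": "user", "content": refinement},
    ]

history = [{"role": "user", "content": "Summarize the attached report in 100 words."}]
updated = refine(
    history,
    "(first draft of the summary...)",
    "Add a numbered list in the 'Benefits' section and include specific statistics.",
)
```

Sending the full history back lets the model revise its own draft instead of starting from scratch.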

Limitations of prompting

Limitations and imponderables such as negation, exclusion, or counting are critical considerations when crafting prompts for precise and reliable results. Many research papers emphasize handling these aspects effectively to minimize ambiguity and ensure accurate outputs.

Key Takeaway

To address limitations like negation, exclusion, and counting:

  • Be explicit and structured in prompts.
  • Use advanced techniques like CoT or iterative refinement for complex tasks.
  • Clarify imponderables by providing clear boundaries and instructions.

This approach ensures better control over LLM outputs and minimizes errors caused by ambiguity.

1. Negation

  • Why Important: LLMs can misunderstand or ignore negation if not clearly specified, leading to responses that contradict user intent.
  • Prompting Technique:
    • Be explicit in stating what should not be included.
    • Use clear language like “Do not include” or “Exclude all references to.”
  • Example:
    • Scenario: Writing about cloud computing benefits but excluding cost savings.
    • Prompt: “Write an article on the benefits of cloud computing for businesses. Do not include cost-related benefits, and focus only on scalability, security, and accessibility.”
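Because models sometimes ignore negation, it can help to verify the output programmatically after generation. A minimal sketch of such a check (the helper name is illustrative):

```python
def violates_exclusions(text, forbidden_terms):
    """Return the forbidden terms that still appear in the output (case-insensitive)."""
    lowered = text.lower()
    return [term for term in forbidden_terms if term.lower() in lowered]

draft = "Cloud computing improves scalability and reduces cost."
violations = violates_exclusions(draft, ["cost", "price", "savings"])
print(violations)  # → ['cost']
```

If the check finds violations, the draft can be sent back with a refinement prompt naming the offending terms.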

2. Exclusion

  • Why Important: Exclusion helps refine the focus of the response by filtering out irrelevant or undesired information.
  • Prompting Technique:
    • List what should be omitted using delimiters or explicit exclusions.
    • Include phrasing like “Avoid mentioning” or “Exclude from the response.”
  • Example:
    • Scenario: Writing an SEO blog on AI but excluding its application in gaming.
    • Prompt: “Write a blog post about the impact of artificial intelligence in healthcare and finance. Avoid mentioning its applications in gaming or entertainment.”

3. Counting and Quantitative Details

  • Why Important: Counting tasks require explicit instructions, as LLMs can miscalculate or overlook constraints without guidance.
  • Prompting Technique:
    • Specify the exact number of items or a range (e.g., “Provide three examples”).
    • Reinforce clarity by asking the model to verify or count its output.
  • Example:
    • Scenario: Listing advantages of cloud storage.
    • Prompt: “List exactly three advantages of using cloud storage for businesses. Do not exceed three points.”
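A simple post-check can confirm the model respected the count constraint. A minimal sketch that counts bulleted or numbered lines in a response:

```python
def count_points(text):
    """Count list items in a model response: lines starting with '-', '*', or 'N.'."""
    count = 0
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(("-", "*")) or (stripped[:1].isdigit() and "." in stripped[:3]):
            count += 1
    return count

response = "1. Scalability\n2. Security\n3. Accessibility"
print(count_points(response))  # → 3
```

If the count deviates from the requested number, re-prompt with the constraint restated.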

4. Other Imponderables (Uncertainty or Approximation)

  • Why Important: Complex or uncertain tasks may confuse the model, leading to overly generic or incomplete responses.
  • Prompting Technique:
    • Include qualifiers like “Estimate” or “If uncertain, state assumptions.”
    • Encourage the model to explain its reasoning if exact answers are unavailable.
  • Example:
    • Scenario: Asking about population trends.
    • Prompt: “Provide an approximate figure for the global population in 2050. If estimates vary, mention the source of each estimate and explain the differences.”

Research Papers

This article is based on the results of the following research papers:

About Olaf Kopp

Olaf Kopp is Co-Founder, Chief Business Development Officer (CBDO) and Head of SEO & Content at Aufgesang GmbH. He is an internationally recognized industry expert in semantic SEO, E-E-A-T, LLMO, AI and modern search engine technology, content marketing, and customer journey management. As an author, Olaf Kopp writes for national and international magazines such as Search Engine Land, t3n, Website Boosting, Hubspot, Sistrix, Oncrawl, Searchmetrics, Upload … . In 2022 he was a top contributor for Search Engine Land. His blog is one of the best-known online marketing blogs in Germany. In addition, Olaf Kopp speaks on SEO and content marketing at SMX, SERP Conf., CMCx, OMT, OMX, Campixx...

