Prompt Engineering Guide: Tutorial, best practices, examples
Prompt engineering is an essential skill for maximizing the potential of LLMs, providing methods to control and guide outputs without altering model parameters. Future advancements may focus on automated prompt optimization, tighter integration into education, and hybrid approaches such as combining chain-of-thought reasoning with plan-and-solve frameworks. The field holds promise for applications in education, software development, and broader human-AI collaboration.
This guide is based on several research papers on prompt engineering from 2023, 2024, and 2025, as well as my own experience with prompting.
Guide: Best practices and examples for prompting
Based on the research papers, the best practices for prompting Large Language Models (LLMs) can be summarized into practical tips and case-specific examples. Here’s a guide:
1. Define the Goal and Context Clearly
- Best Practice: Start prompts with a clear, task-specific description using action verbs (e.g., “Summarize,” “Analyze,” “List”).
- Tip: Provide sufficient background context so the model understands the scenario.
- Case Example:
- Scenario: Drafting a marketing email.
- Prompt: “Write a friendly and persuasive email to promote a new eco-friendly product. Include a call to action, highlight environmental benefits, and limit the email to 150 words.”
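To make this practice repeatable, the prompt can be assembled from its parts programmatically. Below is a minimal Python sketch; the goal/context/constraints structure and the placeholder product context are illustrative assumptions, not a fixed schema.

```python
# Minimal sketch: assemble a goal-oriented prompt from explicit parts.
# The structure (goal / context / constraints) is an illustrative convention.

def build_prompt(goal: str, context: str, constraints: list[str]) -> str:
    """Combine an action-verb goal, background context, and constraints into one prompt."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return f"{goal}\n\nContext: {context}\n\nConstraints:\n{constraint_lines}"

prompt = build_prompt(
    goal="Write a friendly and persuasive email to promote a new eco-friendly product.",
    context="The product is a reusable travel mug sold by a small online shop.",  # placeholder context
    constraints=[
        "Include a call to action.",
        "Highlight the environmental benefits.",
        "Limit the email to 150 words.",
    ],
)
print(prompt)
```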
2. Use Structured and Step-by-Step Instructions
- Best Practice: Break tasks into logical steps or subtasks for better reasoning (e.g., Plan-and-Solve prompting).
- Tip: Use phrases like “Think step by step” or “First plan the solution, then solve the problem.”
- Case Example:
- Scenario: Solving a math word problem.
- Prompt: “Let’s first plan how to solve the problem step by step. Then execute the plan to calculate the total.”
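A quick sketch of how a Plan-and-Solve style trigger can be prepended to any task; the exact wording of the trigger phrase below is one common formulation and can be adapted.

```python
# Minimal sketch: prepend a Plan-and-Solve style trigger so the model first devises
# a plan and then executes it. The trigger wording is one common formulation.

PLAN_AND_SOLVE_TRIGGER = (
    "Let's first understand the problem and devise a plan to solve it. "
    "Then let's carry out the plan and solve the problem step by step."
)

def plan_and_solve_prompt(task: str) -> str:
    return f"{task}\n\n{PLAN_AND_SOLVE_TRIGGER}"

print(plan_and_solve_prompt(
    "A shop sells pens for 2 euros each and notebooks for 5 euros each. "
    "What do 3 pens and 2 notebooks cost in total?"
))
```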
3. Include Examples (Few-shot Prompting)
- Best Practice: Use example queries and outputs to guide the model’s behavior (exemplar optimization).
- Tip: Choose relevant examples and ensure they match the complexity of the task.
- Case Example:
- Scenario: Writing a product description.
- Prompt:
- Example 1: “Product: Bluetooth Headphones. Description: Lightweight, noise-cancelling headphones with up to 20 hours of battery life.”
- Task: “Now write a description for: Portable Solar Charger.”
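The exemplar-plus-task pattern is easy to build mechanically. Below is a minimal sketch using the single exemplar from the case example above; in practice you would pick exemplars that match the complexity of the task.

```python
# Minimal sketch: few-shot prompting by concatenating input/output exemplars
# before the new task so the model imitates the demonstrated format.

examples = [
    ("Bluetooth Headphones",
     "Lightweight, noise-cancelling headphones with up to 20 hours of battery life."),
]

def few_shot_prompt(exemplars: list[tuple[str, str]], new_product: str) -> str:
    shots = "\n\n".join(f"Product: {name}\nDescription: {desc}" for name, desc in exemplars)
    return f"{shots}\n\nProduct: {new_product}\nDescription:"

print(few_shot_prompt(examples, "Portable Solar Charger"))
```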
4. Optimize Tone and Style with Role-Based Prompts
- Best Practice: Ask the model to adopt a specific persona or writing style for a tailored output.
- Tip: Define tone (e.g., professional, casual) and audience (e.g., “a tech-savvy millennial”).
- Case Example:
- Scenario: Writing for a younger audience.
- Prompt: “You are a tech blogger targeting young professionals. Write an engaging blog post explaining the benefits of using a standing desk. Use relatable language and examples.”
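In chat-style APIs, the persona usually goes into a system message. The sketch below assumes the OpenAI Python SDK (v1.x) and a placeholder model name purely for illustration; other providers use a very similar message structure.

```python
# Minimal sketch: role-based prompting via a system message (persona) plus a user
# message (task). Assumes the OpenAI Python SDK v1.x and an API key in OPENAI_API_KEY;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": "You are a tech blogger targeting young professionals. "
                "Write in relatable, conversational language with concrete examples."},
    {"role": "user",
     "content": "Write an engaging blog post explaining the benefits of using a standing desk."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```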
5. Iteratively Refine Prompts
- Best Practice: Start with a basic prompt and iteratively refine based on intermediate outputs.
- Tip: Use placeholders or delimiters for easy adjustments.
- Case Example:
- Scenario: Summarizing a report.
- First Prompt: “Summarize the attached report in 100 words.”
- Refined Prompt: “Summarize the key points of the attached report into three main sections: Background, Findings, and Recommendations.”
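Placeholders and delimiters make refinement mechanical: only the instruction changes between iterations while the delimited input stays fixed. A minimal sketch follows; the <report> tag delimiters are an illustrative choice.

```python
# Minimal sketch: a prompt template with a placeholder and explicit delimiters,
# tightened in a second iteration after reviewing the first output.

REPORT = "…full report text goes here…"  # placeholder input

prompt_v1 = (
    "Summarize the report between the <report> tags in 100 words.\n"
    f"<report>\n{REPORT}\n</report>"
)

prompt_v2 = (
    "Summarize the key points of the report between the <report> tags into three "
    "sections: Background, Findings, and Recommendations.\n"
    f"<report>\n{REPORT}\n</report>"
)

print(prompt_v2)
```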
6. Leverage Advanced Techniques
- Chain-of-Thought Prompting: Guide models to solve problems using intermediate reasoning steps.
- Scenario: “Solve this logic puzzle step by step. Start with identifying the known variables and then deduce the rest.”
- Self-Consistency: Use multiple reasoning paths and select the most consistent answer.
- Scenario: “Provide three different ways to solve this problem. Then choose the most logical one.”
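Self-consistency is straightforward to implement: sample several reasoning paths at a non-zero temperature and keep the most frequent final answer. The sketch below assumes the OpenAI Python SDK (v1.x), a placeholder model name, and a deliberately naive answer extraction (last line of the response).

```python
# Minimal sketch: self-consistency by sampling several reasoning paths and taking a
# majority vote over the final answers. SDK, model name, and the last-line answer
# extraction are illustrative assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()
prompt = (
    "Solve this step by step and put only the final answer on the last line:\n"
    "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
)

answers = []
for _ in range(5):  # five independent reasoning paths
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # diversity between samples is what makes the vote meaningful
    )
    text = (resp.choices[0].message.content or "").strip()
    answers.append(text.splitlines()[-1] if text else "")

best_answer, votes = Counter(answers).most_common(1)[0]
print(f"Selected answer ({votes}/5 votes): {best_answer}")
```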
7. Adjust Model Parameters When Needed
- Best Practice: Specify response length, temperature, or format (e.g., tables, markdown, or bullet points).
- Tip: Tailor the response to the task.
- Case Example:
- Scenario: Preparing a structured report.
- Prompt: “Summarize the findings in a table with columns: Category, Observations, and Recommendations.”
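Length, randomness, and output format can be controlled both through request parameters and in the prompt itself. A minimal sketch, again assuming the OpenAI Python SDK (v1.x) and a placeholder model name:

```python
# Minimal sketch: combine request parameters (temperature, max_tokens) with an explicit
# format instruction in the prompt. SDK and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Summarize the findings in a markdown table with the columns "
                   "Category, Observations, and Recommendations.",
    }],
    temperature=0.2,  # low randomness for a factual, repeatable summary
    max_tokens=400,   # cap the response length
)
print(resp.choices[0].message.content)
```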
8. Focus on Emotional and Contextual Cues
- Best Practice: Use emotional or persuasive language when needed (emotion-based prompting).
- Case Example:
- Scenario: Writing an apology letter.
- Prompt: “Write a heartfelt apology letter for a delayed service. Express empathy and provide a solution.”
9. Explore Multimodal and Domain-Specific Applications
- Best Practice: Combine prompts with domain-specific instructions or datasets.
- Case Example:
- Scenario: Generating legal advice.
- Prompt: “As a legal assistant, summarize the implications of the attached contract clause for a non-legal audience.”
10. Use Prompt Libraries and Tools
- Best Practice: Utilize resources like PromptPerfect or PromptHero to find optimized prompt templates.
- Tip: Explore repositories of pre-designed prompts to save time.
Example Scenario applying the best practices for prompt engineering
Here’s a concrete example of using best practices for Search Engine Optimization (SEO) tasks:
Scenario: Writing an SEO-optimized blog post for a tech startup offering cloud storage solutions.
Prompt Using Best Practices
Goal: Write an engaging, SEO-optimized blog post targeting the keyword “secure cloud storage for businesses.”
Prompt:
- “You are an experienced SEO copywriter. Write a 500-word blog post targeting the keyword ‘secure cloud storage for businesses.’
- Structure the post with the following sections: Introduction, Benefits, Use Cases, and Conclusion.
- Use the keyword naturally 3-5 times throughout the content, including in the introduction and one subheading.
- Optimize for readability using short paragraphs and bullet points where appropriate. Include a call to action encouraging readers to learn more about our cloud storage solution.”
Refinement Process
- First Output: The LLM generates a draft blog post.
- Refinement Prompt:
- “Add a numbered list in the ‘Benefits’ section and include specific statistics or examples, such as the percentage of businesses adopting cloud storage in 2023.”
- “Ensure the keyword ‘secure cloud storage for businesses’ appears in the conclusion.”
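This refinement loop maps naturally onto a multi-turn conversation: the first draft is kept as an assistant message and the refinement instruction is sent as a follow-up. A minimal sketch, assuming the OpenAI Python SDK (v1.x) and a placeholder model name; the prompts are shortened versions of the ones above.

```python
# Minimal sketch: two-turn refinement. The draft stays in the conversation so the
# refinement prompt can reference it. SDK and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "user",
    "content": "You are an experienced SEO copywriter. Write a 500-word blog post "
               "targeting the keyword 'secure cloud storage for businesses.' "
               "Structure it into Introduction, Benefits, Use Cases, and Conclusion.",
}]

draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "Add a numbered list to the 'Benefits' section and make sure the keyword "
               "'secure cloud storage for businesses' appears in the conclusion.",
})

revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revised.choices[0].message.content)
```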
Limitations in prompting
Limitations and imponderables such as negation, exclusion, or counting are critical considerations when crafting prompts for precise and reliable results. Many research papers emphasize handling these aspects effectively to minimize ambiguity and ensure accurate outputs.
1. Negation
- Why Important: LLMs can misunderstand or ignore negation if not clearly specified, leading to responses that contradict user intent.
- Prompting Technique:
- Be explicit in stating what should not be included.
- Use clear language like “Do not include” or “Exclude all references to.”
- Example:
- Scenario: Writing about cloud computing benefits but excluding cost savings.
- Prompt: “Write an article on the benefits of cloud computing for businesses. Do not include cost-related benefits, and focus only on scalability, security, and accessibility.”
2. Exclusion
- Why Important: Exclusion helps refine the focus of the response by filtering out irrelevant or undesired information.
- Prompting Technique:
- List what should be omitted using delimiters or explicit exclusions.
- Include phrasing like “Avoid mentioning” or “Exclude from the response.”
- Example:
- Scenario: Writing an SEO blog on AI but excluding its application in gaming.
- Prompt: “Write a blog post about the impact of artificial intelligence in healthcare and finance. Avoid mentioning its applications in gaming or entertainment.”
3. Counting and Quantitative Details
- Why Important: Counting tasks require explicit instructions, as LLMs can miscalculate or overlook constraints without guidance.
- Prompting Technique:
- Specify the exact number of items or a range (e.g., “Provide three examples”).
- Reinforce clarity by asking the model to verify or count its output.
- Example:
- Scenario: Listing advantages of cloud storage.
- Prompt: “List exactly three advantages of using cloud storage for businesses. Do not exceed three points.”
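Count constraints can also be verified programmatically and re-prompted when violated. Below is a minimal sketch; llm() is a hypothetical completion helper (wire it to your provider's SDK) and the dash-based bullet check is a deliberately naive heuristic.

```python
# Minimal sketch: verify a count constraint in the output and re-prompt once if it is
# violated. llm() is a hypothetical helper, not a real library call; the bullet check
# is a naive heuristic for illustration.

def llm(prompt: str) -> str:
    """Hypothetical completion call; replace with your provider's SDK."""
    raise NotImplementedError

def ask_with_count_check(prompt: str, expected_items: int) -> str:
    answer = llm(prompt)
    items = [line for line in answer.splitlines() if line.strip().startswith("-")]
    if len(items) != expected_items:
        answer = llm(
            f"{prompt}\n\nYour previous answer contained {len(items)} items. "
            f"Return exactly {expected_items} items as a dashed list and nothing else."
        )
    return answer

# Usage (once llm() is implemented):
# ask_with_count_check(
#     "List exactly three advantages of using cloud storage for businesses as a dashed list.",
#     expected_items=3,
# )
```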
4. Other Imponderables (Uncertainty or Approximation)
- Why Important: Complex or uncertain tasks may confuse the model, leading to overly generic or incomplete responses.
- Prompting Technique:
- Include qualifiers like “Estimate” or “If uncertain, state assumptions.”
- Encourage the model to explain its reasoning if exact answers are unavailable.
- Example:
- Scenario: Asking about population trends.
- Prompt: “Provide an approximate figure for the global population in 2050. If estimates vary, mention the source of each estimate and explain the differences.”
Research Papers
This article is based on the results of the following research papers:
- The Prompt Canvas: A Literature-Based Practitioner Guide for Creating Effective Prompts in Large Language Models
- Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization
- A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks
- Prompt Design and Engineering: Introduction and Advanced Methods
- AI literacy and its implications for prompt engineering strategies
- Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- Unleashing the Potential of Prompt Engineering in Large Language Models: A Comprehensive Review
- A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications