Is it Important to Write a ‘Good’ Prompt?

June 1, 2024
Tech

Intro

Hello, we're Enhans.

Services based on large language models (LLMs), such as ChatGPT, have not been around for long, but they are already used by many people for a wide range of purposes, from everyday conversations to solving complex problems. However, there are times when users feel frustrated because they do not receive the answers they want. The questions or inputs we use to interact with an LLM are called prompts, and as the saying goes, "It's all in how you ask": writing good prompts is extremely important.

Creating and refining prompts to achieve the desired outcomes is known as prompt engineering, a key skill for getting the most out of LLMs. The field is being actively researched: in the past two years alone, over 400 papers have been registered on promptingguide.ai, the best-known prompt engineering guide site (and that count covers prompt engineering only, not LLM research in general). This highlights how important prompt engineering has become.

[Figure: number of papers registered on promptingguide.ai, by year]

In this blog, I plan to introduce several techniques that can be easily used in chat-based LLM services like ChatGPT, even for non-developers. By utilizing the techniques introduced in this blog, we can explore how to communicate more effectively with LLMs and achieve better results. Furthermore, by learning how to craft good prompts, you, the readers, will also be able to fully leverage the powerful capabilities of LLMs. So, let's get familiar with LLMs and discover ways to achieve better results together!

Understanding the Structure of Prompts

The first step in prompt engineering is understanding the structure of prompts.
Prompts are mainly composed of two parts:
1. System Prompt and 2. User Prompt.

1. System Prompt

The system prompt sets the overall behavior and response style of the AI model. It acts as a fundamental guide for the model, playing a crucial role in determining the tone, style, and scope of the responses. For instance, if you set it to "Explain in a friendly and easy-to-understand manner within three paragraphs, and provide references for any opinions asserted," the AI model will generate answers according to this guide for all user inputs.

Generally, users cannot directly set the system prompt in ChatGPT. However, with the recent introduction of the "Custom GPT" feature, users can now easily create and customize their own system prompts. This allows users to personalize ChatGPT to better suit their needs.

This GPT is designed to assist users in writing technology blog posts about LLM and AI Agents in a way that is easy for the general public to understand.
It should provide clear, engaging explanations, avoiding overly technical jargon, and should use a friendly, conversational tone in Korean.
It should break down complex concepts into simple terms, using analogies and examples where appropriate, and offer tips on structuring the content to maintain reader interest.
It should use a more casual tone, using phrases like '해요' and '합니다' to make the content feel more approachable.
If more details or clarifications are needed, it should ask for more information or fill in the gaps as necessary.
It should ensure that drafts are easy for the general public, who may have only heard of LLM or chatbots, to understand.

In fact, I wrote this blog based on this system prompt!
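For readers who use an LLM through an API rather than the chat interface, the system prompt is typically passed alongside the user prompt as a separate message. Here is a minimal sketch of that structure, assuming an OpenAI-style chat message format (the actual API call is omitted, since only the message structure matters here):

```python
# A minimal sketch: the system prompt sets the model's overall behavior,
# while the user prompt carries the actual question.
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Pair a system prompt (behavior guide) with a user prompt (the question)."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Explain in a friendly and easy-to-understand manner within three paragraphs.",
    "What were the causes of the fall of the Roman Empire?",
)
```

The same system message is reused for every question in a conversation, which is exactly why it shapes the tone and scope of all the answers.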

2. User Prompt

In this blog, I want to delve a bit deeper into user prompts. A user prompt refers to the actual question or request that the LLM needs to process and respond to. In other words, it's the type of prompt we typically think of as a question. The more specific a user prompt is, the clearer and more accurate the response will be.

For example, a question like "What were the causes of the fall of the Roman Empire?" is a user prompt. User prompts can vary widely depending on the context of the conversation, and it is important to write them in a specific and clear manner to enable the LLM to provide precise answers.

To write better user prompts, consider the following points:

  • Specificity: The more specific the question, the more accurate the model's response will be. For example, instead of asking "What were the causes of the fall of the Roman Empire?" you could ask "What were the economic factors that contributed to the fall of the Roman Empire?" to receive a more detailed answer.
  • Clear Requirements: Clearly state the necessary information or the format of the expected answer. For example, if you ask, "Explain three economic factors that contributed to the fall of the Roman Empire," the model can provide more specific information.
  • Providing Context: When needed, clarifying the connection to previous conversations or questions can be helpful. For example, you could ask, "I'm studying the fall of the Roman Empire in my history class. Can you explain the reasons for its fall, focusing on economic factors? Oh, and my teacher oddly only recognizes the Holy Roman Empire as the Roman Empire," to help the model better understand the context and provide a clearer answer.

Easy-to-Use Prompt Techniques

Now that we understand the basics of prompts, let's explore various techniques to apply prompt engineering in practice. Here, I will introduce two techniques: Few-Shot Prompting and Chain-of-Thought (CoT). Using these techniques effectively can enhance your use of LLMs.

Few-Shot Prompting

Few-Shot Prompting is a technique where giving the LLM just a few examples enables it to solve new problems based on those examples. It is similar to how humans understand a new concept through a few examples and then solve similar problems.

For instance, suppose you want to extract tags from sentences in a health blog so they can be summarized and analyzed later. You can start with a straightforward question, as shown below, and get the following result.

User Prompt: "This yoga increases flexibility and calms the mind." Extract the tags from this sentence.
Response: The tags extracted from the given sentence are as follows:
- Yoga
- Flexibility
- Mind
- Calm

This simple method allows for tag extraction, but it may not always match the user's desired format. Now, let's use Few-Shot Prompting to attempt a more refined approach to tag extraction. By providing additional example sentences and tags, we can help the model recognize the pattern.

User Prompt: "Extract the tags from this sentence: 'This yoga increases flexibility and calms the mind.' Please refer to the following sentences and tags as examples:
- "Sentence: 'This meditation brings peace of mind and increases concentration.' -> Tags: 'peace of mind', 'increased concentration'"
- "Sentence: 'This diet aids digestion and boosts energy.' -> Tags: 'digestion aid', 'energy boost'"
- "Sentence: 'These sleep tips help with deep sleep and reduce fatigue.' -> Tags: 'deep sleep aid', 'fatigue reduction'"
- "Sentence: 'This stretching relaxes muscles and improves flexibility.' -> Tags: 'muscle relaxation', 'improved flexibility'"
- "Sentence: 'These vitamins boost immunity and help with fatigue recovery.' -> Tags: 'immunity boost', 'fatigue recovery'"
Response: "Sentence: 'This yoga increases flexibility and calms the mind.' -> Tags: 'improved flexibility', 'calm mind'"

By using Few-Shot Prompting in this way, we obtained a refined answer that matches the format of the original data, particularly utilizing the keyword 'improved flexibility'. This kind of result can be very useful for data analysis and summarization.
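If you build few-shot prompts like this often, it is easier to generate them from a list of examples than to type them by hand. Here is a minimal sketch under that assumption, reusing the example sentences from above:

```python
# Sketch: generate a few-shot tag-extraction prompt from labeled examples.
EXAMPLES = [
    ("This meditation brings peace of mind and increases concentration.",
     ["peace of mind", "increased concentration"]),
    ("This diet aids digestion and boosts energy.",
     ["digestion aid", "energy boost"]),
]

def build_few_shot_prompt(examples, new_sentence: str) -> str:
    """Format labeled examples followed by the new input, so the model
    can infer the desired tag style and output format."""
    lines = ["Extract the tags from the last sentence, following the examples:"]
    for sentence, tags in examples:
        tag_str = ", ".join(f"'{t}'" for t in tags)
        lines.append(f"Sentence: '{sentence}' -> Tags: {tag_str}")
    lines.append(f"Sentence: '{new_sentence}' -> Tags:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    EXAMPLES, "This yoga increases flexibility and calms the mind.")
```

The trailing `-> Tags:` invites the model to complete the pattern in the same format as the examples.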

Chain-of-Thought (CoT)

The Chain-of-Thought (CoT) technique is a method for solving complex problems by breaking them down into smaller steps and solving each one sequentially. This approach is similar to solving a math problem by tackling it step-by-step rather than finding the answer all at once. It is particularly useful for complex logical problems or situations that require multi-step reasoning.

In the paper "Large Language Models are Zero-Shot Reasoners", it was discovered that simply adding the phrase "Let's think step by step" to the prompt significantly improved the model's performance. This is a very simple approach but has made a significant impact by greatly enhancing the LLM's logical reasoning and problem-solving abilities.

Chain-of-Thought can also be used in conjunction with Few-Shot Prompting. By providing example sentences and demonstrating how to solve problems step-by-step, the model can learn this method and apply a similar logical flow to new problems. For instance, when solving a math problem, presenting the solution process in stages helps the model follow these steps to derive the solution for similar problems.
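The zero-shot variant of CoT is simple enough to automate: just append the trigger phrase from Kojima et al. (2022) to any prompt. A one-function sketch:

```python
# Sketch: turn any prompt into a zero-shot Chain-of-Thought prompt
# by appending the trigger phrase from Kojima et al. (2022).
def add_cot(prompt: str) -> str:
    """Append the zero-shot CoT trigger phrase to a prompt."""
    return prompt.rstrip() + "\n\nLet's think step by step."

cot_prompt = add_cot(
    "If a train travels 60 km in 45 minutes, what is its average speed in km/h?")
```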

In Conclusion

In this blog, we introduced two prompt engineering techniques: Few-Shot Prompting and Chain-of-Thought (CoT). Did you find these tips helpful for writing prompts? By using these two techniques, you can use LLMs more effectively and obtain desired results more easily. Specifically, we showed how these techniques could be applied in practice through an example of summarizing information and extracting tags from a health blog post.

There are many other prompt engineering techniques available, and by using them, we can make our valuable time more meaningful, just like the vision of Enhans. We hope this blog has been helpful to you, and we look forward to introducing more useful information and techniques through the Enhans blog in the future.

We appreciate your interest and support!

References

  1. Wei, Jason, et al. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." arXiv preprint arXiv:2201.11903 (2022).
  2. Brown, Tom B., et al. "Language Models are Few-Shot Learners." arXiv preprint arXiv:2005.14165 (2020).
  3. Kojima, Takeshi, et al. "Large Language Models are Zero-Shot Reasoners." arXiv preprint arXiv:2205.11916 (2022).
  4. Wang, Xuezhi, et al. "Self-Consistency Improves Chain of Thought Reasoning in Language Models." arXiv preprint arXiv:2203.11171 (2022).
  5. Prompting Guide: promptingguide.ai.
