Prompting is the process of giving instructions to an AI to perform a task through text.
The output depends on the prompt you use, so prompt engineering is crucial for getting optimal results.
Here is an example I found on learnprompting.org (an excellent resource for learning about the topic). If we ask GPT-3 (text-davinci-003) to perform a calculation with the following prompt:
What is 965 * 590?
We will sometimes get this output:
569,050
Which is wrong! This is where prompt engineering makes the difference.
Let's try reformulating the question:
Make sure your answer is exactly correct. What is 965*590? Make sure your answer is exactly correct
In this case GPT-3 will answer the correct number (569350).
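The model is producing the answer by prediction, not calculation, so it helps to sanity-check the arithmetic outside the model. Plain Python, nothing model-specific:

```python
# The correct product, computed directly rather than predicted by a language model.
product = 965 * 590
print(product)  # 569350
```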
Why did I need to tell the AI twice to check the result?
How does prompting differ from natural conversation with another human being? This is what prompt engineering is about.
Why learn to prompt?
Something that seems very obvious and clear when talking to a person might produce completely different results when talking to an AI.
For example, if we ask a generative model like DALL-E to "draw me a Teddy bear", it might generate something that does not fit our request, like a realistic bear.
Again with GPT-3, a simple request like "write me an email to my boss" may result in a generic, uninteresting message that doesn't convey your true intent.
We are more likely to get a decent result with something more specific and clear, like: "Write me a detailed, funny email for my boss. Make at least 2 jokes throughout".
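One lightweight way to keep prompts specific is to assemble them from explicit requirements instead of typing them ad hoc. A minimal sketch, where the helper and its wording are my own illustration, not a standard API:

```python
def build_email_prompt(recipient, tone, extras):
    """Assemble a specific prompt from explicit requirements.

    Hypothetical helper: the point is the structure (audience, tone,
    concrete constraints), not the exact wording.
    """
    parts = ["Write me a detailed, {} email for my {}.".format(tone, recipient)]
    parts.extend(extras)  # concrete constraints, e.g. joke count, length, topics
    return " ".join(parts)

prompt = build_email_prompt("boss", "funny", ["Make at least 2 jokes throughout."])
print(prompt)
# Write me a detailed, funny email for my boss. Make at least 2 jokes throughout.
```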
LLMs (models that can process and generate language) are capable of "understanding" context and semantics, but they do not have reasoning capabilities, so talking to them is different from talking to humans.
Intuitively, computing over text or graphic information does not include the "emotional connection", facial expressions or body language we normally rely on to interpret each other.
So, when prompting, it is useful to know what needs to be clarified to the model and how to do it, because there is an inevitable loss of information between what we mean and what the model understands.
Is prompt engineering a real job?
There is a lot of hype behind the "prompt engineer" position, but I believe placing it in the engineering domain can create wrong expectations.
Looking at job descriptions, working solely on prompts seems unrealistic; this is a useful skill to have from now on, but it will likely need to be paired with other skills like coding or teaching.
A prompt engineer needs to be able to recognize the best tasks to scale and "delegate" to an AI, and then refine their request until the model's output is optimal; this means having the skills to manage time and expectations, and to understand what the model can and can't do.
One of the challenges of prompt engineering is that different AI models have different strengths and weaknesses. Therefore, the same prompt may not work equally well across different models.
Prompt engineers need to be familiar with the capabilities and limitations of different models to choose the most appropriate one for a given task.
Who can become a prompt engineer?
Apparently, anyone. Practically, as I mentioned before, I still think some technical knowledge (even basic Python) would be beneficial to maximize earning potential by combining technical skills with the use of new AI technologies.
Expert knowledge is still necessary, as it is not yet clear how to measure a prompt's effectiveness for a given task.
The process of prompt engineering is not just about achieving a specific output from an AI model. It also involves analyzing the output and refining the prompts to improve the accuracy and effectiveness of the model over time. This feedback loop is crucial for continuously improving the performance of AI models.
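That feedback loop can be sketched as a simple refine-and-retry routine. Here `generate` and `score` are stand-ins for a real model call and a task-specific evaluation metric, and the refinement rule (appending a stronger instruction, as in the multiplication prompt earlier) is deliberately naive:

```python
def refine_prompt(prompt, generate, score, target=0.9, max_rounds=3):
    """Refine a prompt until the output scores above `target`.

    `generate` (prompt -> output) and `score` (output -> float) are
    placeholders, not a real API: plug in your model call and metric.
    """
    output = generate(prompt)
    for _ in range(max_rounds):
        if score(output) >= target:
            break
        # Naive refinement step: strengthen the instruction and retry.
        prompt += " Make sure your answer is exactly correct."
        output = generate(prompt)
    return prompt, output

# Toy stand-ins just to show the loop converging:
gen = lambda p: p.upper()
sc = lambda out: 1.0 if "EXACTLY CORRECT" in out else 0.5
final_prompt, final_output = refine_prompt("What is 965*590?", gen, sc)
print(final_prompt)
```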
Is prompt engineering here to stay?
I don't think this role as we know it now will be the same in the future.
The field is evolving fast, and it is not hard to imagine humans talking to a physical AI entity rather than just a chat interface, so prompt engineering will still be needed and will evolve accordingly.
Conclusions
Prompt engineering is not just about choosing the right words to prompt an AI model; it also involves designing an appropriate format and structure for the prompt. For instance, adding context to the prompt can help the AI model better understand the task at hand and produce more accurate results.
The role of a prompt engineer may not be a standalone job position at this time, but as AI continues to evolve and become more prevalent in various industries, the demand for individuals who can effectively prompt AI models is likely to grow.
On the other hand, as AI models become more advanced and capable of understanding and generating natural language, the need for prompt engineering may diminish. However I think at present prompt engineering remains a useful and interesting skill to explore, together with other core skills.
If you are interested in great prompting examples, check out this GitHub repo!
These were my 2 cents on prompt engineering, what's your view instead? Let me know in the comments!