As soon as ChatGPT was released and exploded in popularity, people began talking about how to best phrase requests to get the answers they’re actually looking for. “Generate an itinerary for a trip to London” will yield different results than “Generate an itinerary for a 6-day trip to London, with a focus on museums and places of interest easily accessible from the Tube.”
People recognized that the quality of the output was tied to how clear and specific the input was.
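The difference between a vague and a specific request can be made concrete. The sketch below is a hypothetical helper (the function name and parameters are invented for illustration, not part of any real API) that builds an itinerary prompt, appending each constraint only when the user supplies it:

```python
# Hypothetical helper illustrating how added specificity changes a prompt.
# The parameters (days, focus, transit) are illustrative only.

def build_itinerary_prompt(city, days=None, focus=None, transit=None):
    """Compose an itinerary request, adding each constraint only if given."""
    if days:
        parts = [f"Generate an itinerary for a {days}-day trip to {city}."]
    else:
        parts = [f"Generate an itinerary for a trip to {city}."]
    if focus:
        parts.append(f"Focus on {focus}.")
    if transit:
        parts.append(f"Prefer places easily accessible from {transit}.")
    return " ".join(parts)

# The same helper produces both the vague and the specific request:
vague = build_itinerary_prompt("London")
specific = build_itinerary_prompt(
    "London", days=6, focus="museums and places of interest", transit="the Tube"
)
```

The point is not the code itself but the habit it encodes: every constraint you can state explicitly is one the model no longer has to guess.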
Thus, the field of prompt engineering was born. According to Kim Herrington, senior analyst at Forrester, in the simplest terms, prompt engineering is “about helping people learn to better articulate what it is that they want back from the computer.”
She likened it to teaching people how to use a search engine for the first time. “When you are interacting with a search engine, you are essentially taking your human questions and putting them into a computer with the intent of getting a result back of something that can help you with your day or help you to perform an action or gather information. That’s really what prompt engineering training is about,” she explained.
In a November episode of Forrester’s podcast, “What It Means,” Herrington shared the prediction that “60% of employees will get prompt engineering training in 2024.”
“In order for us to capitalize on AI, not only are you going to have to fund your AI developments, but you’re also going to have to budget for AI search training and creation of those different prompts, as well as budget for data communicators to evangelize the AI tooling and act as analytics translators to help people adopt those new technologies that you’re offering,” she said on the podcast.
Herrington says that having a data literacy program in place is an important precursor to this type of training. People need to know where to go to find the information they need, and also have support from leadership in upskilling efforts.
In addition to companies jumping on the prompt engineering bandwagon, universities are also starting to think about factoring it into their curriculums, explained Greg Benson, chief scientist at SnapLogic and professor of computer science at the University of San Francisco.
He doesn’t necessarily envision a future where people earn degrees specifically in prompt engineering; rather, he views it as another tool people need to be proficient with. He likened it to Excel: nearly everyone uses it, so USF offers courses where students can build Excel knowledge and skills. Some are standalone courses, while others are folded into courses required for a particular major, like business.
“I think where my head’s at now,” Benson said, “is that [prompt engineering] is more like a tool, and we’ll initially see courses that have a progression of, okay, what’s the landscape? What are the different sorts of chat UIs that you can use? But then going deeper, how do you structure prompts? And there’s different ways that you can lead the LLM to arrive at your answer. Then it gets more complicated, like how do you incorporate your own data? How do you bring that in to give examples so that you can basically teach it about your domain and then get it to give generative responses that are either maybe summaries or synthesis of information or even data analysis? And then it gets even more interesting when you start talking about fine tuning and that goes beyond prompt engineering. So I could see a course that has that progression.”
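The "give examples so that you can basically teach it about your domain" step Benson describes is commonly called few-shot prompting. A minimal sketch, assuming an invented support-ticket classification task (the example data and function are hypothetical, chosen only to show the pattern of prepending labeled examples before the real query):

```python
# Sketch of few-shot prompting: prepend labeled domain examples so the
# model continues the pattern. All example data here is invented.

FEW_SHOT_EXAMPLES = [
    ("Ticket: App crashes on login.", "Category: bug"),
    ("Ticket: Please add dark mode.", "Category: feature-request"),
]

def few_shot_prompt(ticket, examples=FEW_SHOT_EXAMPLES):
    """Build a prompt that teaches the model a labeling task by example."""
    lines = ["Classify each support ticket."]
    for question, answer in examples:
        lines.append(question)
        lines.append(answer)
    # The unanswered query comes last; the model completes the pattern.
    lines.append(f"Ticket: {ticket}")
    lines.append("Category:")
    return "\n".join(lines)

prompt = few_shot_prompt("The export button does nothing.")
```

The resulting string would be sent to whatever chat or completion API is in use; the technique itself is model-agnostic, which is why Benson treats it as a teachable skill rather than a product feature.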
On the other hand, Arthur Hicken, chief evangelist at Parasoft, cautions people against investing too heavily in prompt engineering right now. He predicts that the next evolution of LLMs will understand, in a human way, what we’re asking of them, which will eliminate the need for prompt engineering.
“The wizard who has the skills right now can command the salary they want, but I wouldn’t hire the wizard,” he said. “This isn’t black magic; it’s understanding the domain you’re in and asking the question to get what you want. And then once someone knows how to ask those questions, sharing it with the team. Not ‘here’s the code I generated,’ but ‘here’s how we generated this kind of code,’ and explaining to the team in peer reviews or standups or whatever. This is how we’re getting there.”
He went on to explain that companies shouldn’t want a prompt engineering wizard; they should want a team of people who understand how to work with AI together.
“I think of it as team knowledge, institutional knowledge. And as a career choice, don’t bet your career on this obscure skill that will not exist,” he explained. “And I say this as a person who came from the printing industry 30 years ago. I have this very specialized, obscure skill that I can do an analog translation of an image in color, so that it can print. This is not a useful skill. It was an extremely useful skill 30 years ago. But today, it has no value, right? Everybody can pop open a scanner, pop open a camera, adjust the image for what they want and send it on its way. But it used to be very, very hard and took all kinds of specialized knowledge.”