Prompt engineering and you: How to prepare for the work of the future
Priyanka Vergadia
Head of North America Cloud Developer Advocacy
Generative AI requires a new way of thinking, not just a new way of working. Here’s where workers can start, and how organizations can help them.
With most enterprises exploring what generative AI and large language models (LLMs) have to offer, the potential of these technologies is already showing up in case studies and real-world usage. Top executives are grappling with critical questions about not just which business units could gain the most from these technologies, but also how to deliver the greatest benefit to employees.
A recent paper published by MIT, BCG, UPenn Wharton, and other institutions, for example, shows the marked performance benefits employees gain when using generative AI. While the highest performers saw a 17% increase in productivity, lower-performing workers gained a whopping 43% increase.
The full impact of LLMs on worker productivity is still unfolding, but it’s clear that these disruptive technologies will play a profound role in revolutionizing industries, and workers, at every level. Success will in many ways depend on workers’ understanding of generative AI and how they incorporate these technologies into their workflows. Even when using a gen AI assistant or embedded capabilities in enterprise productivity software, organizations may sometimes find the results they get are more ordinary than extraordinary.
As a result, prompt engineering — the technique used to refine and guide models to produce optimal results — is becoming a critical skill to develop and hone as more companies look to integrate this technology into every aspect of their business.
So, why is this process so important for working effectively with an LLM? Because the people creating prompts define the line between output that’s standard versus outstanding — it’s their words and ideas that generate your results. Here’s how you can help them get started.
The power of the prompt
Getting the most out of an LLM means understanding how to work with it to achieve your desired output.
Similar to how a search query returns specific results from a search engine, generative AI prompts guide the response you get back from a model. The words and phrases you choose, the prompt’s structure, and the context you provide act as instructions and guidance that shape the output. If a prompt is inaccurate, poorly designed, or vague, the output you generate will likely be, too.
Put simply, LLMs don’t know what they don’t know.
For instance, Google's Pathways Language Model (PaLM) 2 was trained on billions of pages of text, webpages, academic papers, and more to create a model that generates text and code in natural language. The model itself, though, does not have any conscious understanding or awareness of this information and how to apply it.
Consider the following prompt:
"Write an explanation of string theory."
This prompt will generate a response for you, but it offers no guardrails to steer the model’s behavior. It doesn’t specify what task you are trying to accomplish or include additional instructions about the desired outcome. It’s a bit like a colleague asking you to complete a project without providing any details about the scope or specific deliverables.
As in all things, context matters when prompt engineering. You aren't just a producer but a director — the more specific and pointed you can be, the better for your "actor."
An LLM undertaking the string theory query doesn’t know if the task is to create a short paragraph summarizing the definition of string theory, a video script explaining the fundamental concepts, or a complete educational curriculum with several courses outlining the entire framework.
A more effective prompt might look something like this:
“Write a 300-word blog post explaining the basic concepts of string theory to students aged 12-14, studying physics for the first time. Frame your answer using natural language and easy-to-understand sentences.”
Providing details about context, audience, tone, voice, and format gives the model a frame of reference to weigh its decisions against when generating responses. In some cases, you might even provide representative examples or include additional information about your customers, market, and industry to help LLMs understand your ideal prospects when creating content.
Prompts act as blueprints that show a model how its capabilities can best meet your specific business and audience requirements. The key lies in crafting the right prompts to harness the model’s capabilities and elicit high-quality, relevant, and accurate results.
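To make this concrete, here is a minimal sketch of what a context-rich prompt can look like in code, assuming the Vertex AI Python SDK and a PaLM 2 text model. The template fields, model name, and parameter values are illustrative choices, not a prescribed recipe:

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project and region; substitute your own.
vertexai.init(project="your-project-id", location="us-central1")

# Bake the context, audience, format, and tone into a reusable template.
PROMPT_TEMPLATE = (
    "Write a {length}-word {format} explaining {topic} to {audience}. "
    "Use a {tone} tone and easy-to-understand sentences."
)

prompt = PROMPT_TEMPLATE.format(
    length=300,
    format="blog post",
    topic="the basic concepts of string theory",
    audience="students aged 12-14 studying physics for the first time",
    tone="friendly, natural",
)

# "text-bison" is one of the PaLM 2 text models served by Vertex AI;
# the temperature and output length here are illustrative settings.
model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(prompt, temperature=0.2, max_output_tokens=512)
print(response.text)
```

Parameterizing the prompt this way also makes it easy to reuse the same blueprint across different topics, audiences, and formats without rewriting the instructions each time.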
A cycle of prompt responses
Prompt engineering involves more than your initial prompt; your responses also play a crucial role. It’s a symbiotic relationship between input and output. Your initial prompt sets the direction, and the generated results allow you to provide feedback to fine-tune subsequent prompts.
Responses are like a mirror, reflecting the strengths and weaknesses of your prompt and training data. “Incorrect” results can often illuminate what you need to improve, providing useful clues about where you lack clarity or require more information to produce the response you want.
Using our string theory prompt from above, you might evaluate how the output can change when you add some additional context, such as:
- A persona the LLM should simulate when generating responses
- Formatting information, such as subheadings, bullet lists, and calls-to-action
- Instructions to add quotes, examples, or anecdotes
- Specific details about information to exclude from the content
Each time you enter an input, you can analyze the generated results against your expectations and revise accordingly, repeating the process until you are satisfied with the outcome. This continuous interaction loop allows you to test out different approaches, enabling you to identify how specific changes in structure and phrasing guide the quality, accuracy, and precision of the model’s responses.
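As a rough sketch of that loop, the snippet below layers the kinds of context listed above onto the base prompt one refinement at a time, regenerating and reviewing after each change. The wrapper function, model name, and refinement strings are illustrative assumptions:

```python
from vertexai.language_models import TextGenerationModel

# Assumes vertexai.init(...) has been called as in the earlier sketch.
model = TextGenerationModel.from_pretrained("text-bison")  # or your model of choice

def generate(prompt: str) -> str:
    """Thin wrapper around the model call so the loop below stays readable."""
    return model.predict(prompt, temperature=0.2, max_output_tokens=512).text

base_prompt = (
    "Write a 300-word blog post explaining the basic concepts of string theory "
    "to students aged 12-14, studying physics for the first time."
)

# Each refinement adds one of the kinds of context listed above.
refinements = [
    "Write as a patient physics teacher speaking directly to the class.",              # persona
    "Structure the post with subheadings, a short bullet list, and a call-to-action.", # formatting
    "Include one everyday analogy, such as a vibrating guitar string.",                # examples
    "Do not include equations or advanced mathematics.",                               # exclusions
]

prompt = base_prompt
for refinement in refinements:
    draft = generate(prompt)
    print(f"--- Prompt ---\n{prompt}\n--- Draft ---\n{draft}\n")
    # In practice, you would review the draft here and decide whether the next
    # refinement is needed, or whether a different change would help more.
    prompt = f"{prompt} {refinement}"
```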
Prompt engineering is as much a creative art as it is a science, but the foundation of success comes from being able to recognize the technical limitations of large language models so you can maximize performance and accuracy. Different models use different architectures, training methodologies, and datasets; what works for one may not be effective with another. Experimenting with your queries can not only help you recognize how different models make decisions, but also give you more control to mitigate unwanted biases and hallucinations.
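One simple way to build that intuition is to run the same prompt under different settings, or against different models, and compare the outputs side by side. The sketch below varies only the sampling temperature; the model name and values are illustrative:

```python
from vertexai.language_models import TextGenerationModel

# Assumes vertexai.init(...) has been called as in the earlier sketches.
model = TextGenerationModel.from_pretrained("text-bison")

prompt = (
    "Write a 300-word blog post explaining the basic concepts of string theory "
    "to students aged 12-14, studying physics for the first time."
)

# Lower temperatures tend to produce more focused, repeatable text;
# higher temperatures produce more varied, and sometimes less grounded, text.
for temperature in (0.0, 0.4, 0.8):
    response = model.predict(prompt, temperature=temperature, max_output_tokens=512)
    print(f"--- temperature={temperature} ---\n{response.text}\n")
```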
Prompting your way to a more productive workforce
It’s only the beginning, but already generative AI is helping shoulder many repetitive tasks that take time and energy away from more meaningful, valuable work.
Marketers are discovering how to draft contextually relevant and engaging content in minutes. Developers are supercharging their productivity with generative AI assistants that provide code generation, code completion, debugging, and more. Contact center agents are reducing their handling times with support from generative AI interfaces and generated summaries, making their lives easier and improving the experiences of the customers they serve.
As more organizations continue to embrace generative AI, it’s more important than ever to ensure your workforce has the necessary skills and knowledge to leverage these models and technologies to their full potential. Investing in training programs or exploring free resources, such as the generative AI courses on Google Cloud Skills Boost, can help employees learn how to design and fine-tune their inputs while avoiding common pitfalls like biased or nonsensical outputs.
When employees pair generative AI with their insights and industry expertise, and infuse it with your organizational data, they amplify their capabilities, no matter their previous level of performance. However, simply having the tools doesn’t guarantee uniform benefits across all employees or departments.
Rather, investing in training and free courses like Google Cloud’s generative AI learning path will yield the greatest returns. The value of highly trained employees extends beyond their personal productivity as prompters; it resonates across the whole business, driving success and growth in the generative AI age.
Opening image created with Midjourney, running on Google Cloud, using the prompt: a cartoonish group of mixed gender, mixed race engineers surrounded by thoughts and ideas. (Not as easy as it looks, either! This took numerous tries and a good hour of tweaking to get right.)