Let’s clear the air: if you think “prompt engineering” is some sci-fi skill set reserved for hoodie-clad coders, think again. If you’ve ever drafted a targeted RFP, cross-examined a hostile witness, or wrangled three partners’ feedback into a single deck by morning—you’re already halfway to mastering this dark art.
Prompt engineering is where the human mind meets machine logic. It’s not about tricking the model—it’s about briefing it. Persuasively. Precisely. Repeatedly.
Part one broke down What Is Generative AI and Why Legal Should Care; this post goes a level deeper. It's your behind-the-curtain look at how legal professionals can stop dabbling with AI—and start directing it like a pro.
Here’s the secret sauce: prompting isn’t about knowing Python. It’s about knowing your audience—and when your audience is a trillion-parameter large language model (LLM), what you say (and how you say it) dictates whether you get brilliance... or boilerplate.
Prompting isn’t a matter of asking nicely. It’s legal strategy in miniature. If you can write an issue spotter, you can prompt like a boss. The core challenge? We lawyers often either overcook the ask or underserve the context.
I’ve seen prompts that are so vague they might as well say, “Do law stuff.” Others are twelve paragraphs of exposition with no clear destination. And some just forget that generative AI, no matter how advanced, can’t divine jurisdictional nuances from thin air. You wouldn’t send a summer associate into court with zero prep—why would you treat your digital wingman any differently?
In legal, our prompts tend to flail in one of three ways: too vague to act on, too bloated to follow, or too starved of context to get the law right.
It’s not that AI doesn’t work. It’s that we don’t always give it a fair shake. And that’s where prompt engineering comes in.
Forget manners. Think mandates. Good prompts are crisp, layered, and contextual. They guide the model toward a clear outcome—structured like a mini legal brief.
You’re not just tossing questions into the void. You’re issuing directives to a machine that’s read more legal documents than any of us ever will (and still sometimes hallucinates like it’s had three martinis and no sleep).
So how do you brief your AI associate properly?
Prompt engineering isn’t Mad Libs. It’s jazz. There’s rhythm, variation, and a need to read the room—or in this case, the model. A well-crafted prompt doesn’t just say, "write a memo." It sets the scene, the tone, and the stakes.
1. Set the Role
Tell the robot who it’s pretending to be.
Example: “You’re a senior litigation associate at an AmLaw 50 firm with a mean streak in antitrust and a caffeine dependency.”
2. Define the Audience
No one wants boilerplate. Say who you're talking to.
Example: “Write for LegalTech folks and in-house counsel who’ve seen a few prompts—and a few trainwrecks.”
3. Define the Task
If you’re vague, you’ll get vague. Be surgical.
Example: “Draft a memo flagging key risk areas in the attached merger agreement.”
4. Break into Steps
Structure = sanity. Give it scaffolding.
Example: “Give me 6 sections, each with clear headers, bullets, and one spicy takeaway.”
5. Add Context
The model’s good, but it’s not psychic. Feed it the who, what, and why.
Example: “It’s for a GC who hates legalese, loves visuals, and wants it yesterday.”
6. Set Format + Tone
This is where you keep it on-brand.
Example: “Use clean formatting, punchy phrasing, and channel a slightly irreverent explainer—think LegalTech meets Vogue.”
7. Iterate. Don’t Abdicate.
The first draft is a starting line, not the finish.
Example: “Now rework it as a Westworld metaphor. Then simplify it into a client email. Let’s see how many gears this thing’s got.”
Imagine briefing a GenAI model as if it were your sharpest junior associate: "You're a second-year litigation associate at an AmLaw 50 firm. Draft a 600-word summary of antitrust risk in the attached merger agreement. Target audience is the GC, so keep it strategic, not doctrinal. Use clear headers and a confident tone." That’s a brief, not a request.
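If you want to see that same brief on the wire, here is a minimal sketch using the OpenAI Python SDK (the model choice, placeholder agreement text, and exact wording are illustrative assumptions, not a prescribed setup):

```python
# Minimal sketch: the "brief, not a request" above, issued through the
# OpenAI Python SDK. Model choice and the placeholder agreement text are
# assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

role = "You are a second-year litigation associate at an AmLaw 50 firm."
task = (
    "Draft a 600-word summary of antitrust risk in the merger agreement below. "
    "Target audience is the GC, so keep it strategic, not doctrinal. "
    "Use clear headers and a confident tone."
)
merger_agreement_text = "..."  # paste or load the agreement text here

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": role},  # set the role
        {"role": "user", "content": f"{task}\n\n{merger_agreement_text}"},  # task, audience, format, context
    ],
)

print(response.choices[0].message.content)
```

Same layers as the seven steps above: role first, then task, audience, and format, plus the context the model can’t divine on its own.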
The magic is in the layering. Maybe it’s your GC’s no-nonsense style. Maybe it’s the slightly cheeky edge you bring to client trainings. The more direction you give, the better your LLM behaves.
Like crafting a killer argument, each pass refines. Each nudge coaxes the model into alignment.
Here’s the twist no one talks about: lawyers make exceptional prompt engineers. Not because we love tech (many of us don’t), but because we already think this way. Structured communication? Check. Audience calibration? Check. Relentless revision until it sings? Triple check.
Crafting a good prompt is like writing a killer closing: you’re guiding the listener—or in this case, the model—to a conclusion without scripting every word. You’re also adapting in real time. The same instincts that make you thrive in a depo make you formidable at prompting.
We think structurally. We qualify everything. We obsess over audience and tone. All of which map beautifully onto building world-class prompts. If you’ve ever rewritten a clause six ways for six different stakeholders—you’re built for this.
The trick is unlearning the instinct to be passive with tech. You’re not a user. You’re the director. The robot is just the talent. To learn more, feel free to download a full book on prompting here.
This isn’t theory. Prompting already powers legal workflows embedded in your day-to-day. You’re just not calling it that.
Let’s get concrete. You’re in Reveal, knee-deep in Slack threads, audio transcriptions, and metadata rabbit holes. When you use natural language to say, "Show me communications between Custodian A and B referencing Project Titan before Feb 2021," you’re already prompting.
These aren’t isolated features. They’re reflections of a broader shift: legal tech is moving from point-and-click to prompt-and-partner. Whether you’re searching ESI or prepping a first-pass deposition outline, the question isn’t whether to use prompts—it’s whether you’ll use them well.
You’re telling the system who to look at, what matters, and when it happened. Prompt engineering is what turns those generic tools into bespoke, context-aware superpowers.
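Under the hood, an ask like the Project Titan query above boils down to a handful of structured slots. Here is a purely hypothetical sketch (this is not Reveal’s API; every field name is invented for illustration) of what a well-formed prompt is really conveying:

```python
# Hypothetical illustration only: the structured slots a natural-language ask
# like the Project Titan query is really filling. NOT Reveal's API; every
# field name here is invented for the sketch.
from datetime import date

query = {
    "custodians": ["Custodian A", "Custodian B"],        # who
    "keywords": ["Project Titan"],                        # what
    "date_range": {"before": date(2021, 2, 1)},           # when
    "sources": ["slack", "email", "audio_transcripts"],   # where to look
}

# Good prompting means filling these slots yourself instead of making
# the system guess.
missing = [field for field, value in query.items() if not value]
print("Under-specified fields:", missing or "none")
```

The point isn’t the syntax. It’s that every slot you leave empty is a guess you’re asking the model to make.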
Think of GenAI not as a vending machine, but a conversation partner. You ask, it answers. You tweak; it refines. Prompting isn’t a one-and-done act—it’s improv. If you’re not getting gold on the first go, it doesn’t mean the tech failed. It means the brief needs work.
Start small. Refine tone. Add legal specifics. Break down complex asks into parts. And above all, treat each back-and-forth as a rehearsal for better collaboration.
The next phase of GenAI isn’t just getting outputs. It’s shaping relationships. When I prompt GPT-4o to “summarize this blog in my voice,” it does a decent job. When I follow with “Now punch it up like I’m presenting at Legalweek with a splash of snark,” it starts to understand my brand.
The best prompts aren’t solo commands—they’re dialogue. You steer. The model adapts. You redirect. It refines. It’s improv, but for ideas.
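In code, that dialogue is a running conversation, not a series of fresh starts. A minimal sketch, assuming the OpenAI Python SDK and reusing the Legalweek follow-up from the example above:

```python
# Minimal sketch of iterative prompting: keep the conversation history and
# redirect, rather than re-prompting from scratch. Model choice and the
# placeholder blog text are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Summarize this blog in my voice:\n\n<blog text here>"},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
draft = first.choices[0].message.content

# Feed the draft back in, then steer.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Now punch it up like I'm presenting at Legalweek, with a splash of snark."},
]
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```

Each turn carries the prior draft forward, which is what lets the model start to understand your brand within the session.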
The faster you can translate messy thoughts into crisp prompts, the faster you can prototype briefs, build workflows, and win cases. This isn’t fluff—it’s leverage.
Want to run a new client strategy workshop? Prompt a draft agenda in your brand voice.
Need to build a data map across global custodians? Prompt a first-pass schema, then refine it.
Trying to train junior associates on privilege review? Prompt example threads for them to dissect. (A minimal sketch of building one of these layered prompts follows below.)
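Here is a small, hypothetical helper (the function name and defaults are illustrative, not a product feature) for translating those kinds of asks into layered prompts:

```python
# Hypothetical helper for assembling a layered prompt from the same
# ingredients as the seven steps earlier: role, task, audience, format, context.
def build_prompt(role: str, task: str, audience: str, fmt: str, context: str = "") -> str:
    """Assemble a layered prompt in a fixed, predictable order."""
    parts = [
        f"You are {role}.",
        f"Task: {task}",
        f"Audience: {audience}",
        f"Format: {fmt}",
    ]
    if context:
        parts.append(f"Context:\n{context}")
    return "\n".join(parts)

# Example: the client strategy workshop agenda from the list above.
print(build_prompt(
    role="a senior legal operations consultant",
    task="Draft an agenda for a new client strategy workshop in our brand voice.",
    audience="Partners plus the client's in-house legal team.",
    fmt="One page, timed agenda blocks, one discussion question per block.",
))
```

Swap the arguments and the same scaffolding covers the data map schema and the privilege review training threads.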
We’re not replacing judgment. We’re amplifying it—with a side of speed.
Prompt engineering is not some fleeting trend. It’s a core skill for legal professionals navigating the GenAI era.
You don’t have to be perfect. You just have to be deliberate. Iterate. Refine. Rinse. Repeat. That’s how you turn generic AI into something game-changing for your practice.
And if you’re still unsure where to start? Try prompting your AI with:
“Summarize this article in Cat Casey’s voice, with just the right blend of cheek and legal nerdiness.”
Chances are, if it sounds like me—it’s working.
Want to learn even more about how GenAI is reshaping the eDiscovery game? Check out this great eBook, The Ultimate Guide to GenAI for eDiscovery.