What happens when your enablement team is also your most demanding customer?
Stacey Justice hates the word "enablement."
It means something different at every company—sometimes even to every person within it. She's tried to rename her function many times, but hasn't found anything better yet.
This is, of course, an ironic problem for someone who's VP of GTM Enablement at Gong—maker of one of the most beloved enablement tools. (Literally every single one of our recent guests has mentioned it as their favorite tool.)
So we asked her to break down exactly how she uses Gong for enablement at Gong—including the specific features her team relies on most, and the framework she's built around them.
In the first six months of her AI-powered onboarding initiative alone, new-hire ARR increased by 53%, and commercial ramp time dropped by 2.7 months.
Gong's go-to-market org spans just over 1,000 people across five North American hub offices, a fast-growing EMEA presence, and a new Singapore operation. Stacey's team supports all of them—every customer-facing role from SDRs to support.
What makes her job structurally different from almost every other enablement leader's is that she trains reps to sell a product she uses every day to run her own programs.
The pros & cons of being customer zero
Gong's sales team is customer zero for every new product Gong rolls out. That means reps develop genuine product confidence that a training deck can't manufacture.
But it also means absorbing field feedback before the product is fully baked, and running without the implementation playbook that Gong's Services team would hand to a paying customer.
"We get in, and we use the product early," Stacey says. "Alpha code, beta code. It might mean the product's not ready." But she says the field has learned to live with that.
The upside is that Stacey's team gets to test-drive the product, document what works, and set the standard for their customers to follow.
This shows up in moments like the launch of Gong Enable earlier this year. Stacey got to train her reps on how to use it by running the training in Gong.
So they’re not just teaching reps to sell the product—they're modeling best practices for using it. That drives the confidence that conventional training can't replicate.
As for the rest of their tech stack, they keep it pretty simple. Content lives in Google with Glean on top. And their digital sales room integrates directly into Gong.
"Any vendors I talk to, I tell them I'm probably the hardest person to sell to," Stacey says. Everything has to fit into the Gong workflow, or it doesn't make the cut.
"We are Gong-first in everything that we do—and I think that's absolutely led us into some of the performance and stickiness that we've had with it."
The enablement flywheel
When Stacey joined Gong two years ago, enablement was siloed. Different segments ran different programs. Training was good but leader-led, which meant little consistency across the org.
Her fix was to restructure the function around outcomes—win rate, ramp time, productivity, expansion—rather than training completion, and build a three-stage operating model to get there.
She calls it the enablement flywheel:
- Detect: Understand what's actually broken in the field
- Prescribe: Build content to address it
- Validate: Confirm whether it worked
Stacey says most enablement programs skip the first step and never get to the third. Here’s how that model plays out in practice:
Detect
Before building any content, Stacey uses Gong's AI theme spotter to understand what's actually happening in the field.
- What objections are coming up?
- Which competitors are appearing in deals?
- Where are reps losing?
The system surfaces the answers and—critically—quantifies them.
"You can use AI theme spotter, go in, ask: ‘What are the biggest objections that are happening?’ and it will tell you how many opportunities it's impacting, and how much ARR it's impacting," she says.
Prescribe
Next, build targeted content from that field data—not from assumptions. This is where AI Builder has become Stacey's most-used Gong feature.
"To me, that's so fundamental in terms of actually connecting to the field. You're pulling insight from what's actually happening in the conversations and being able to create content, create job aids, create lessons—whatever you need."
Feed it a prompt: "create a battle card against this competitor" or "write discovery questions for an enterprise AE selling Gong Enable"—and it generates a starting point built from actual customer conversations.
"It's probably accelerated the workflow of my team by weeks," she says. "You're not starting with a blank page."
Validate
Next, confirm whether the training actually changed behavior. This is where most enablement programs fall short.
Stacey uses automated AI scorecards to grade calls against the skills they trained on. That data is automatically routed back to enablement.
"I'm getting all this visibility to understand: is this being adopted? Are they actually doing what we asked them to do?"
The signal runs both ways: real field data shapes what gets built, and what gets built gets validated against real field behavior. That’s the flywheel.
The case for AI role play
The hype around AI role play is real—but there’s still lots of skepticism. Is it too easy to game? Will an enterprise AE actually use it? Or is it just for training junior SDRs to handle basic objections?
Stacey's answer: AI role play only fails when the scenarios are made up. Most AI role-play tools are built on fictional prompts—a trainer's best guess of what a tough buyer sounds like. Gong Enable instead creates role-play scenarios from actual recorded calls.
Stacey gives a concrete example: an enterprise team struggling to negotiate with procurement teams. She goes into Gong, pulls every call where her team has faced a tough procurement persona, and builds the role play from that.
"The credibility goes up completely," she says, "because it's literally what's happening inside of their accounts."
A senior AE can't dismiss it as unrealistic, because it isn't. It's a distillation of the hardest versions of a conversation they've already had.
"My approach to enablement has always been: how do you connect theory to reality? What's been missing is being connected to exactly what's happening. It's not me creating a mock scenario of ‘I think this is what's happening in the field.’ It's actually pulling from those conversations they're already having."
The role play is just the first gate. In certifications, Stacey requires reps to submit two actual customer calls showing the behavior they just practiced. Managers score those calls using an AI-assisted scorecard.
The simulation gets them field-ready, then the real calls prove they're actually doing it.
Train with mountains, rocks, and pebbles
Stacey's training model is built around continuous reinforcement, with programs broken into three tiers that can be deployed at any time of year—because skills don't decay on a schedule, and Gong's reps don't all join at once either.
The mountain is the anchor. Each quarter, Stacey's team picks one major skill for the entire GTM org to focus on—Q2 might be negotiation, Q3 something else. It's a full program: certifications, AI role plays in Gong Enable, and live components. About 80% of the content is consistent across segments; the rest gets customized.
But a rep who joins in Q4 missed the Q2 negotiation mountain. So everything in the mountain also gets broken into rocks and pebbles—micro-learning versions built in AI Builder that a manager can find in Gong and assign in a one-on-one, any time of year. A rep caving on discounts in October gets the negotiation module that week, not next Q2.
Once a skill is trained, Stacey sets up automated AI call reviewers attached to the relevant pipeline stage in Gong. If stage six involves a procurement discussion, every call at that stage is automatically scored against the negotiation rubric her team built. She lets it run for a quarter, pulls the results, and uses the gaps to design the next targeted push—which feeds back into the flywheel.
"When [managers] sit down and have performance conversations [with reps], they can say, 'you're really caving on all those discounts—we need you to go in and actually practice this,' and that content is there in a format that matches what we're driving."
Coach managers, not just reps
Stacey treats managers as a separate audience—with their own skill gaps, curriculum, and scorecards built in Gong. Because if managers aren't coaching the skills Stacey's team is training, the flywheel stops.
Gong's leadership enablement program is built around 13 specific manager activities: forecast inspection, pipeline inspection, one-on-ones, deal reviews, and more.
For each activity, Stacey's team has identified the required skills, built microlearning modules in AI Builder, and set up automated scorecards. Managers self-enroll. Their calls get reviewed the same way rep calls do.
It won’t surprise you to learn that Gong records internal meetings, too. Before a performance review, Stacey pulls up her recent one-on-ones with that person in Gong, feeds them into AI Builder, and asks it to surface the most important thing to focus on in that conversation. The same tool her reps use to prep for procurement negotiations, her managers use to prep for performance reviews.
"We've built our leadership enablement program into this Gong workflow. Let's say it's pipeline inspection—we've identified that activity and the skills required within it. We built micro learning so that second and third-line leaders can use that to drive self-enrollment and improve based on the calls that we've had."
To sell AI, don’t lead with AI
Gong is selling AI products into an enterprise market that is simultaneously excited about AI and increasingly skeptical of the category.
The competitive backdrop doesn't help: Highspot and Seismic announced a merger in February 2026, Clari and Salesloft combined before that—the revenue AI category is consolidating fast, which means buyers are fielding more pitches from more vendors making bigger claims.
Stacey's directive to her reps is simple: don't lead with AI. Lead with outcomes. "AI is producing a lot of hype—'oh my gosh, it's AI.' But it doesn't matter that it's AI unless you do something with it," she says.
Their training centers on four specific business outcomes: ramp time, win rate, forecast accuracy, and rep productivity. Every AI feature gets connected to one of those. The conversation isn't "here's what Gong's AI can do." It's "here's how your ramp time changes when new hires are doing AI role play against real customer calls before they're fully ramped."
The numbers back that framing up. In Gong's own commercial segment, the AI-powered onboarding program Stacey built cut time to ramp by 2.7 months and increased ARR sold by new hires by 53% in the first six months.
When she used Gong's initiative boards to assess the effectiveness of new messaging, win rates doubled for the reps who adopted it.
For the record, Stacey doesn’t think AI will replace enablement jobs. She believes the tools are finally making it possible to do what enablement was always supposed to do, but couldn’t before: build content from real field data, validate behavior change at scale, and close the gap between training and execution.
"What it's finally doing is getting enablement to a place where you're much closer to the field, you're creating content that is actually going to impact the business. I think it's what's been expected for enablement to do. We just didn't have the tools at our fingertips to do it."
Whatever you call it, it's working
Stacey Justice still doesn't have a better word than "enablement." Transformation, productivity, acceleration—she's tried them all. Nothing sticks.
But the word problem is downstream of the real one. Enablement has struggled to prove its value because the tools it had—LMS courses, slide decks, manually curated battle cards—were too far removed from what actually happens in the field. The function sat adjacent to execution rather than inside it. It built things and hoped they transferred.
What Stacey has built at Gong is a program that almost entirely closes that gap. The content comes from real calls. The practice scenarios come from real calls. The validation comes from real calls. If reps aren't doing what was trained, the scorecards show it within a quarter. If a new objection is emerging in the field, the theme spotter finds it before it becomes a pattern.
Whatever you want to call it, it's not what most people mean by "enablement."
Watch the full episode
Watch Alex and Stacey's full conversation on Grow & Tell, Dock's podcast for revenue leaders.



