Here's our take on the AI tools gaining the most traction, where the real opportunity lies, and why your data layer matters more than any product you'll buy.
Most enablement leaders we talk to—through Dock’s sales process and through our show—are equal parts curious and overwhelmed when it comes to AI.
Nearly every team has gotten the “we need to be doing more with AI” talk from the CEO. And sure, a few folks like Justin Driesse and Ryan Vanshur are doing impressive things. But most teams are still figuring out where to start.
We've been building AI into Dock for the last two years, which gives us an unusually close view of what's actually working and what isn't.
Here's our honest take on the current state: where AI is gaining real traction in sales, why most enablement teams are less behind than they think, and where to actually focus your energy first.
Everyone in enablement feels behind on AI
There are two competing feelings most enablement leaders have about AI right now.
The first is that you're hopelessly behind. Everyone on LinkedIn is running agents and closing deals in their sleep, and somehow you're still figuring out your Gong integration.
The second kicks in when you actually open a tool: underwhelm. It hallucinated. It gave you something generic. It didn't know your product. You closed the tab.
The AI hype is genuinely overstated. When we talk to our customers—including some of the most forward-thinking enablement people we know—AI adoption is still pretty patchy.
Most teams have an AI-powered call recorder and maybe a ChatGPT subscription. The sophisticated multi-agent workflows you read about are the exception, not the baseline.
But the underwhelm is also a trap—specifically, the trap of judging today's tools by an experience you had six months or two years ago.
Alex was recently at a wedding, and someone mentioned suffering from a mysterious illness. He suggested asking ChatGPT. Her response was, "But it'll just tell me I have cancer." That’s the old WebMD joke, of course—a concern that made sense in 2012, but has nothing to do with how these tools actually work today.
In other words, things change so quickly that it’s hard to have set opinions on anything.
We're wired for linear thinking—we try something, form an opinion, and move on. But this isn't linear, and stale impressions compound fast.
Not only are people getting better at using these tools, but the models themselves get better every day. Anyone who tried AI for coding a year ago and gave up has no idea what Cursor or Claude Code can do today.
"If you sort of rely on that past experience—'that's how it used to work'—it's just not true. Technology is evolving so fast. And we, as humans, are bad at understanding exponential change." — Alex Kracov
Our advice: keep an open mind, and keep testing. Dismissing AI altogether is the one move that guarantees you will actually fall behind.
Where AI is actually gaining traction in sales
From our conversations with enablement leaders, a few use cases keep surfacing as the ones that have already taken real hold.
1. Outbound personalization was the first major use case—and still the most visible. Tools like Unify are using AI to sift through signals, personalize messaging, and automate triggers. It works, although buyers are already getting good at detecting AI-generated outreach. The window where "AI-personalized" reads like a thoughtful human email is narrowing (and maybe already gone).
2. Account research is one of the more quietly impactful use cases. The most advanced teams we've talked to have AEs getting AI-generated deal briefs before every meeting—a structured summary of where the account stands, what the business goals are, and what happened in the last call. What used to be 45 minutes of prep is now a five-minute review.
3. Handoff notes are a close cousin. Sales-to-CS transitions have always been a process tax—salespeople writing up context they already have in their heads while CS waits. AI can generate a solid first draft from call recordings and CRM data, which, at a minimum, cuts that time in half.
4. CRM updating is unglamorous but high-impact. Nobody wants to update Salesforce. AI can handle a significant chunk of that automatically, which means cleaner data and less time reps spend on admin they were probably skipping anyway.
5. Customer-facing content generation is where tools like Dock and Gamma are finding a useful niche. Take a Gong call and generate a business case, slide deck, executive summary, or customer success plan. The collateral that used to take a rep an hour to pull together can now be a ten-minute edit.
6. Conversational intelligence is the most dominant AI use case on the enablement side. Gong comes up in almost every conversation. Before it existed, visibility into what reps were actually doing on calls was essentially zero.
"It was really hard to do enablement if you had 1,000 sellers on calls and you had no insight into what was actually happening on those calls. When I started my career at Yelp, I would have managers barging into my call—that was how you did it."
The combination of call data and AI-generated summaries has made coaching at scale genuinely possible.
Conversational intelligence is also what's powering the category generating the most buzz in enablement right now: AI role-play.
What about AI role play?
AI role play was the hottest topic at the Sales Enablement Collective conference in New York, which Alex attended a few weeks ago. The jury's still out on whether it's actually effective.
Alex’s take is that AI role play is strongest for junior-rep onboarding and BDR cold calling, where repetition and scripted motions are the norm. Our recent Grow & Tell guests are conflicted:
- Morgan Kassel at Vanta told us that they use HyperBound for new SDRs. But for senior reps in complex, relationship-driven enterprise sales, it's a harder sell. Most of them would rather learn from human coaching via Gong call reviews.
- Sheevaun Thatcher at Demandbase, by contrast, runs everyone through AI role-play regardless of tenure. She says it’s the only way to beat the forgetting curve.
- Justin Driesse at Legora remains skeptical—he prefers manager-led scenario practice built from real training transcripts over anything that feels like a scripted bot.
Alex also flagged that he’s already heard from enablement leaders that some reps focus only on figuring out how to pass the role-play rather than actually trying to get better.
The category gets genuinely interesting when the role-play scenarios are grounded in real data. Stacey Justice at Gong doesn't buy into the skepticism about enterprise reps—because Gong Enable builds role-play personas from your actual call data, not generic prompts.
"The ability to go in and set it up so that the persona's going to be procurement—and we're going to use all of the insight and data from all the calls that we've had with procurement professionals. It's not Stacey over in the corner creating a mock scenario of what I think is happening in the field. It's actually pulling from those conversations they're already having." — Stacey Justice, Vice President of GTM Enablement, Gong
The category will keep improving. But right now, Alex’s advice is to lead with onboarding and BDRs, make sure your scenarios are grounded in real call data, and measure behavior change rather than completion rates.
Where to start with AI: outcomes and data before tools
When a CEO hands an enablement team $20,000 and says, "We need to be doing more with AI," the path of least resistance is buying something that looks like AI.
That’s what makes AI role play such an easy choice. It's visible, it sounds impressive in a board update, and it checks the box.
The better place to start, though, is leading with the business outcome you're actually trying to achieve—close rates, time-to-productivity, handoff quality—and aligning the AI investment there.
But before any of that, there's a more fundamental step most teams skip: building the data layer.
As a heavy AI user myself, I wish I'd gotten this advice two years ago. When ChatGPT came out, I rushed to build all kinds of marketing workflows: a custom GPT writer trained on Dock's style, automated content pipelines, various tools stitched together. Everything worked fine—but it was all quickly made obsolete by newer models and tools.
What I should have been doing instead was building clean source data that could be used by AI tools now and in the future:
- Messaging documentation
- Writing style guides
- A structured competitor research database
- Up-to-date product information
- A database of podcast transcripts and video clips
Now that I have that foundation, every new AI tool I try works dramatically better from day one. I'm not starting from scratch each time something improves—I already have the raw material to feed it.
It's the same issue Alex sees again and again with Dock customers:
"One of the most fundamental things with AI and getting all these agentic workflows working is the data. None of this works that well without having a really strong data pipeline. AI could hallucinate or give you wrong answers. If you're trying to set up AI role plays and it doesn't have good data to back up real-life scenarios, it's going to be bad. Data is the fundamental fuel."
The question is which team owns what. Alex's view: RevOps should own the CRM data—win/loss rates, pipeline infrastructure, all of that. Enablement (in partnership with Product Marketing) should own call data, product info, client-facing collateral, battle cards, business cases, decks, and training content. The lines will always be a little blurry—that's fine. The bigger risk is both teams assuming the other is handling it.
The good news: the entry point is simpler than most teams expect. If your team doesn't have a call recorder yet, start there. It's the highest-ROI data source for enablement AI, and every major platform—Gong, Fathom, Chorus—now has native AI features built on top of it.
One pattern worth stealing from the teams doing this well: VPs of Enablement don't have time to experiment with every new tool—but the good ones give their team members access and explicit permission to surface ideas. The experimentation has to be distributed. The prioritization stays centralized.
What selling an AI product taught us about buying one
Alex and I have skin in this game at Dock. We build AI features, we sell them, and we learn something every time a deal closes or doesn't.
A few things stand out from the past year of selling the AI-powered version of Dock.
1. AI products all look the same from the outside
Every tool’s interface is similar, everyone claims their model is better, and you can't tell the difference in a 30-minute demo.
Des Traynor at Intercom calls these "iceberg products." What matters is what's below the surface, and the only way to find out is to give the product your actual data. Which leads directly to the second problem.
2. Pilots are key for AI products
Most buyers aren't willing to upload their data to an unapproved vendor. Which means you can't prove your AI works in a demo.
The only real answer is a structured pilot—small enough that security concerns are manageable, long enough that the AI has enough data to demonstrate value. It's a process tax on both sides, but there's no shortcut.
3. Usage-based pricing is a tough sell
Usage-based or outcome-based pricing is really popular for AI products—you pay for the work AI does, not just the seats.
But most enterprise buyers—especially sales and enablement teams—don't want flexible line items. CFOs don’t want to hand every AE in an organization a token budget.
Dock ended up going with flat-rate pricing for now, and it's made the sales conversation dramatically simpler.
What this all adds up to
In summary, we're past the chaos phase but nowhere near the ceiling.
The tools that work, work—conversational intelligence, AI-generated content, and account research automation.
The tools that are still maturing (role play, agentic workflows, unified data layers) are worth watching and preparing for, but not worth betting your quarter on.
The biggest issue is the framing of the question "What AI tool should we buy?" That question tends to produce checkboxes. The better question is what business outcome you're trying to move, and whether you have the data foundation to let AI actually help with it.
Watch the full episode
Watch Alex and Eric's full conversation on Grow & Tell, Dock's podcast for revenue leaders.