AI is on, but the value is inconsistent.
Across GTM teams, AI is everywhere. Copilots draft messages, models prioritize accounts, systems suggest next best actions, and forecasts increasingly rely on machine-driven insight. And yet, a familiar refrain keeps surfacing in conversations with operators and executives alike: sometimes it works, sometimes it doesn’t.
This is often framed as an AI maturity problem. In practice, it looks more like an organizational readiness problem.
After a year of pilots, labs, and real-world experimentation, the models themselves are no longer the primary constraint. They are improving quickly, shaped by widespread usage and constant iteration. The shift we are now seeing is not about whether AI works. It is about whether organizations are prepared to work with it at scale.
Why this is happening now
A widely cited MIT study suggests that roughly 95 percent of AI initiatives are “failing.” That number is easy to misinterpret.
Most of these efforts are not true failures. They are experiments. Narrow use cases. Sandboxes designed to learn quickly and limit downside. In that sense, a high failure rate is not only expected but healthy. Failing fast is how teams figure out what is viable and what is not.
The more useful observation hidden in that statistic is this: many organizations are deploying AI faster than they are onboarding it.
For much of the past year, AI has been treated as background optimization. It helped speed up tasks, summarize information, and reduce friction at the margins. That approach made sense. But as AI begins to influence prioritization, recommendations, and decisions, the requirements change. The next phase is less about turning AI on and more about preparing the organization around it.
The reframe
AI onboarding is not about teaching people how to use AI.
It is about teaching AI how your organization actually works.
This is where the employee onboarding analogy becomes useful.
When you hire a new employee, you do not give them access to everything on day one and expect them to perform at full capacity. You onboard them. You provide context before responsibility. You define workflows. You clarify what good looks like. You limit access until trust is established. You allow time for ramp.
AI now requires the same discipline.
Much of what teams casually label as “AI slop” is not a failure of intelligence. It is a failure of onboarding. Slop is what happens when AI has access but lacks orientation. When it is given data without definitions, autonomy without guardrails, and tasks without understanding how success is measured.
What “AI onboarding” actually means
In practice, AI onboarding looks far less like training sessions and far more like foundational setup:
- Curated data, not total access – More data does not equal better output. Context and relevance matter more than volume.
- Explicit workflows before automation – AI cannot improve a process that has never been clearly defined.
- Permissions, governance, and constraints – Trust is built when AI understands what it is allowed to do and why.
- Operational artifacts, not just prompts – Mission, values, KPIs, OKRs, definitions, and handoffs are not abstract concepts. They are inputs.
- Narrow execution with broad context – Just as you would not expect a new hire to run sales, product, and marketing on day one, AI should start with focused responsibilities. But it still needs awareness of how those responsibilities connect across the GTM system.
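The checklist above can be made concrete with a small sketch. This is a hypothetical illustration, not any platform's real API: the class name, field names, and action strings are all invented here to show how curated sources, defined workflows, permission scopes, shared KPI definitions, and a ramp stage might be encoded as explicit inputs rather than left implicit.

```python
from dataclasses import dataclass


@dataclass
class OnboardingProfile:
    """Hypothetical operational context granted to an AI agent before autonomy."""
    curated_sources: list[str]   # explicit, relevant data, not blanket access
    workflow: dict[str, str]     # named steps, defined before anything is automated
    allowed_actions: set[str]    # governance: what the agent is permitted to do
    kpi_definitions: dict[str, str]  # a shared definition of what "good" means
    ramp_stage: int = 1          # trust expands with demonstrated results

    def can_execute(self, action: str) -> bool:
        # Narrow execution: the agent acts only within granted permissions.
        return action in self.allowed_actions

    def promote(self) -> None:
        # Ramp: broaden responsibility only after review, as with a new hire.
        self.ramp_stage += 1


profile = OnboardingProfile(
    curated_sources=["crm_opportunities", "icp_definitions"],
    workflow={
        "1_research": "summarize account context",
        "2_draft": "propose outbound message for human review",
    },
    allowed_actions={"summarize", "draft"},
    kpi_definitions={"qualified_pipeline": "stage 2+ opportunities matching ICP"},
)

profile.can_execute("draft")       # True: within the granted scope
profile.can_execute("send_email")  # False: autonomy not yet earned
```

The point of the sketch is the shape, not the code: every element the checklist names becomes a declared input that can be reviewed, versioned, and widened deliberately as trust is built.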
When these elements are missing, AI rarely fails loudly. Instead, it produces noise. Everything looks average. Outputs feel plausible but generic. Recommendations are ignored. Over time, trust erodes quietly, not because the system is wrong, but because it never feels meaningfully right.
Where this shows up in the market
We are seeing this pattern emerge across GTM platforms as AI moves closer to daily execution and decision-making.
In conversations with companies like Apollo, the emphasis is not on replacing human judgment, but on guiding it. AI acts as a structured copilot, providing an initial frame while keeping humans in the loop. That approach implicitly assumes onboarding. Without clean data, defined sequences, and shared definitions of effective outbound, the copilot has nothing solid to work from.
A similar dynamic appears in revenue platforms such as Clari, where AI-driven insights depend heavily on normalization, permissions, and trust. Forecast intelligence only becomes valuable when the system understands how the organization defines pipeline, risk, and accountability.
The same tension shows up in execution and prioritization layers. Sales engagement platforms like Outreach increasingly rely on AI to guide rep behavior. Account-based platforms such as 6sense use AI to influence which accounts matter and when. In both cases, the technology assumes a level of GTM clarity and alignment that many organizations have not fully documented.
Platform gravity matters as well. As AI becomes embedded inside systems of record like HubSpot, it inherits whatever operational clarity or ambiguity already exists. AI does not fix foundational issues. It amplifies them.
What is getting harder
As AI becomes more central to GTM execution, several things are becoming harder to ignore:
- Treating AI adoption as a feature rollout rather than an operational shift
- Measuring usage instead of impact
- Assuming trust will emerge automatically over time
- Viewing onboarding as a one-time event rather than a continuous process
These challenges are not new. What is new is where AI sits in the stack. It no longer lives at the edges. It increasingly touches judgment, prioritization, and outcomes.
The deeper implication
AI may be the first “employee” that forces companies to confront how undocumented their operations really are.
Human employees routinely compensate for ambiguity. They fill gaps with judgment, lived context, and informal knowledge. AI attempts to do the same, but it does so poorly and with misplaced confidence. When AI produces noise or slop, it is often exposing places where workflows were never made explicit, definitions were never aligned, and trust was assumed rather than built.
This is why many AI initiatives feel underwhelming. Not because the technology is immature, but because the organization has not yet done the work of onboarding it.
Executive takeaway
The next phase of AI adoption in GTM will not be won by better models alone. It will be won by companies that treat AI like a real hire.
Onboard it.
Teach it how your business actually works.
Constrain it before you scale it.
Let it ramp.
The organizations that do this will stop talking about AI adoption as an experiment and start experiencing it as leverage.