
The Human Cost of Agentic AI: Why Workforce Resistance Is the Problem Leadership Is Not Solving

April 28, 2026

Agentic AI has crossed a threshold. It is no longer the subject of pilot programmes and proof-of-concept presentations. Across industries, from manufacturing floors to financial services back offices, systems capable of autonomous decision-making and action are being woven into daily operations. The technical integration is accelerating. The human one is not.

For CEOs, CHROs, and board members overseeing that transition, a specific and uncomfortable pattern is forming. Leadership teams are pushing forward with implementation while a significant share of their workforce is actively opposed to it. The gap between those two realities is not a communications problem. It is an organisational risk, and most companies are not yet treating it as one.

What Resistance Actually Looks Like

Workforce anxiety about AI is not new, but agentic AI has sharpened it. Earlier generations of automation displaced specific, bounded tasks. Agentic systems are different. They reason, they act, they sequence decisions across workflows without human input at each step. For employees watching that capability arrive in their organisation, the question is no longer whether their role will change. It is whether it will exist.

Survey data circulating in boardrooms reflects the scale of that concern. Roughly seven in ten workers report fearing that AI will eliminate their jobs entirely. Nearly half of CEOs acknowledge they are already encountering active resistance from staff, yet are proceeding with implementation on their original timelines regardless.

That combination, widespread employee fear and leadership choosing to absorb rather than address it, produces a specific set of consequences. Productivity gains fail to materialise because the people expected to work alongside new systems are disengaged from them. Middle management, caught between pressure from above and friction from below, becomes a bottleneck rather than a bridge. Trust erodes in ways that do not show up immediately in operational metrics but surface later in attrition, quiet non-compliance, and the eventual failure of systems that were never properly adopted.

What makes this moment different from previous technology transitions is the speed at which agentic AI is moving and the breadth of roles it touches. Previous waves of automation were largely confined to manual, repetitive, or rules-based work. Agentic AI is moving into knowledge work, into roles that require judgment, into functions that employees have long considered distinctly human. The psychological stakes are higher, and the window for building genuine organisational readiness is shorter than many leadership teams appear to appreciate.

The Communication Instinct and Why It Falls Short

The default response from leadership when facing internal resistance is to increase communication. Town halls are scheduled. Internal messaging is refined. Reassurances are issued about the company's commitment to its people. In most cases, that response addresses the wrong problem.

Employee anxiety about agentic AI is not primarily driven by a lack of information. It is driven by a lack of certainty. Workers want to know, with specificity, what will happen to their role, their function, and their income. When leadership cannot provide that certainty, and in the early stages of a large-scale AI transition they rarely can, the communication vacuum does not stay empty. It fills with rumour, speculation, and worst-case assumptions that are often more damaging than the reality.

The more productive question for leadership is not how much to communicate, but at what point in the implementation process employees are being brought into the conversation. There is a meaningful difference between telling people what is happening to them and involving them in decisions about how it happens. The latter builds a degree of ownership that the former cannot.

In several European markets, this is not simply a cultural preference. The EU AI Act and the works council frameworks operating in Germany, France, the Netherlands, and other jurisdictions create genuine legal obligations around transparency and consultation when AI systems are introduced in ways that affect working conditions. Forward-thinking organisations are treating those requirements not as compliance thresholds to clear but as structural prompts for the kind of early, honest dialogue that builds durable trust. The organisations handling this transition well are, in most cases, the ones that started the conversation before the systems were already in place.

Investment Asymmetry: Technology Versus People

Across the industry, investment in agentic AI infrastructure is accelerating. Spending on the underlying technology, the models, the integration layers, the data architecture, continues to grow. Investment in the people expected to work alongside that technology is moving at a different pace.

Close to seven in ten senior decision-makers report plans to increase AI investment across their organisations in the near term. The harder question, one that boards and leadership teams are not asking with sufficient consistency, is what proportion of that investment is directed at workforce capability rather than system capability.

Reskilling and upskilling appear regularly in strategic documents and investor presentations. The operational reality is frequently different. Programmes are announced and underfunded. Training is delivered once and not reinforced. The assumption, sometimes implicit, sometimes explicit, is that employees will adapt because they have to.

Early data on agentic AI suggests that significant productivity gains are achievable, in some analyses approaching 40% in relevant functions. But those figures are built on a specific assumption: that the workforce understands the tools, is willing to use them, and has been prepared to integrate them into their work in a meaningful way. When that preparation is absent, the productivity case collapses. The technology performs. The adoption does not.

The more complex dimension of workforce preparation is that upskilling for an AI-augmented environment is not reducible to technical training. It requires something harder: helping people understand what their role is for in a world where a meaningful share of their current tasks are handled by autonomous systems. That is a question of professional identity, not just capability. It asks workers to reimagine where their value sits, what they bring that AI cannot replicate, and how their sense of purpose at work survives a significant redistribution of what they actually do each day. Few organisations are approaching that challenge with the depth it requires.

The Architecture Question Leadership Is Deferring

Beneath the communications challenge and the investment gap sits a more fundamental question that most leadership teams have not yet brought clearly enough to the surface. What does the workforce of the future actually look like, and are current decisions building the right foundations for it?

The emerging consensus among organisations that have moved furthest in agentic AI deployment points in a consistent direction. The future operating model is not humans on one side and AI on the other, with the boundary between them being negotiated over time. It is a continuous integration of both, with agents and people working together as constituent parts of the same workforce, each handling what they are suited for.

In practical terms, that means agents absorbing data processing, routine customer interactions, structured decision sequences, and the administrative layers that currently consume significant portions of knowledge worker time. It means humans concentrating on judgment, relationships, oversight, and the contextual reasoning that autonomous systems still handle poorly. Neither displacing the other, but the division of labour shifting substantially and continuously as the technology develops.

That future is closer than most organisations are positioned for. And one of the most consequential decisions a leadership team will make in the next few years is not which AI tools to procure, but what architecture to build around them.

The organisations that approach this as a design question, asking what a workflow looks like when an agent handles the first layer and a human takes the second, how accountability is assigned when an agent makes a consequential decision, how customer data moves through integrated systems without degrading the interaction, will build operating models that work. The organisations treating it as a series of individual tool procurement decisions, made incrementally without a coherent view of the whole, will find themselves rebuilding at significant cost once the fragmentation becomes visible.

The cost of getting the architecture wrong is not only financial. It is the human disruption of asking people to adapt repeatedly to systems that were not designed with their work in mind. That disruption compounds resistance rather than reducing it, and it is largely avoidable with earlier, more deliberate design thinking.

When People First Means Something

It is common for organisations undergoing large-scale technology change to describe themselves as people-first. It is considerably less common for that description to hold under pressure. When efficiency targets are being tracked quarterly and board-level expectations around AI returns are already set, the conditions for genuine people-first decision-making are demanding.

The practical test is not whether people-first language appears in the strategy. It is whether employees raising concerns about AI are being heard on their own terms or managed toward a predetermined conclusion. It is whether the workers most directly affected by automation have a meaningful voice in how it is implemented. It is whether the fear that employees carry is being addressed with specific, honest answers or absorbed into optimistic framing that serves the organisation's narrative more than the individual's reality.

Leaders who close the gap between what they say about their people and how decisions are actually made will build a more durable implementation. Those who do not will find that the resistance they are absorbing now becomes a more serious organisational problem later, one that is harder to address once the architecture is already set and the workforce has already drawn its conclusions.

Internal Advocacy as an Underused Asset

One pattern that distinguishes organisations navigating this transition well from those that are not is the presence and visibility of internal advocates: employees who were initially sceptical of agentic AI and became genuine supporters based on their own experience of using it.

Research on learning and behaviour change within organisations consistently shows that peer influence outperforms top-down messaging by a substantial margin. Employees report learning significantly more from colleagues than from management communication. In the context of AI adoption, that asymmetry matters. An employee who watched a peer stop spending three hours a day on a task they disliked, because an agent now handles it, is persuaded in a way no internal communications campaign can match.

The practical implication for leadership is specific. Identifying employees whose experience with agentic AI has been genuinely positive, giving them visibility, supporting them in sharing what changed for them, and creating conditions for that experience to spread is one of the highest-leverage activities available during an implementation. It costs relatively little and operates through the channel that employees actually trust.

The wins that generate this kind of advocacy tend to be quiet ones. Not large-scale transformation announcements, but the specific, granular elimination of work that people found tedious, draining, or low in meaning. When an agent removes the administrative overhead from a role, the person in that role tends to experience the technology as something working for them rather than against them. That shift in perception, once it takes hold at scale, changes the internal conversation in ways that leadership communication alone cannot.

What Boards and Senior Leadership Need to Be Asking

For CEOs, CHROs, and board members, the questions that matter most right now are not primarily technical. The technology is advancing on its own trajectory. The human and organisational questions are the ones where leadership choices will determine outcomes.

Several frames are worth bringing into regular board and executive team discussion:

On communication and trust. At what point in the AI implementation process are employees being brought into the conversation, and is that early enough to build genuine ownership rather than managed compliance?

On investment balance. What proportion of total AI investment is directed at workforce capability, and does that proportion reflect the actual dependency of technology returns on human adoption?

On architecture. Is the organisation building a coherent human-agent operating model, or accumulating individual tool decisions that will require expensive restructuring later?

On accountability. When an agentic system makes a consequential decision, where does accountability sit, and has that question been answered clearly enough to satisfy both regulators and the workforce?

On people-first. When employees raise concerns about AI, is leadership listening to address those concerns or to manage them? The distinction matters more than most organisations currently acknowledge.

The Transition Is Still Forming

Agentic AI will continue to move into core business operations regardless of how well or poorly the human transition is managed. The technology's trajectory is not contingent on workforce readiness. But the returns from that technology, the productivity gains, the operating model improvements, the competitive advantage that boards are expecting, are entirely contingent on it.

The organisations that come through this period in the strongest position will not necessarily be those that moved fastest or deployed most aggressively. They will be the ones that took the human side of the transition as seriously as the technical side, asked difficult questions before they had comfortable answers, and built the internal conditions for genuine adoption rather than surface compliance.

The resistance forming in workforces across Europe and beyond is not an obstacle to be managed around. It is a signal worth reading carefully. Leadership teams that read it accurately, and respond with the specificity and honesty it requires, are building something more durable than a technology implementation. They are building the organisational capacity to keep adapting as the technology continues to develop, which is the only form of readiness that will hold.
