The Psychology Behind Why we still Struggle with Cyberattack Response

Interview with Dan Potter of Immersive

Preparation does not guarantee performance. In this interview, Dan Potter, Senior Director of Operational Resilience at Immersive, explains why confidence can mask vulnerability, how human factors shape incident outcomes, and what leaders can do to build resilience that holds up when tested most.  

Cyberattacks have become a familiar feature of modern business, and most organisations have planned accordingly. Yet even with the most comprehensive response plan in place, many organisations struggle to manage when a cyber incident unfolds in real time. 

From frontline cyber practitioners to senior leadership, performance can quickly deteriorate under pressure, leading to muddled responses, slow reactions, and wrong choices.  

Why do organisations struggle to respond effectively to cyberattacks, even when they believe they are prepared? 

Many organisations genuinely believe they are prepared because they’ve invested time, money, and attention into cybersecurity. And in most areas of the business, that should be enough. But in dealing with a cyber incident, you can’t afford to build your confidence based on activity – it must have a foundation in experience too.   

As a result, we consistently see a gap between perceived readiness and actual performance. In Immersive’s recent research, almost all (91%) business leaders expressed confidence in their organisation’s ability to handle a major cyber incident, and most (71%) believe they have a mature cyber readiness programme.  

Yet when teams were placed into realistic, high-pressure simulations, performance told a very different story, and confidence dropped to around 60% in Immersive’s crisis sims.

That disconnect exists because confidence is reinforced by metrics like completed training and documented plans – none of which reflect how people behave under genuine stress.

Most people are not naturally good at handling the adverse conditions created by a cyber incident. There’s a high perceived risk and impact of failure, combined with significant uncertainty and incomplete information, all wrapped up in intense time pressure. Decision-makers often must make snap choices, but overreacting and taking a server offline unnecessarily could cause more harm than good. 

So, people become more cautious and reactive, and more likely to rely on familiar patterns, even when those patterns are no longer appropriate. 

Immersive’s simulations show that decision-making often deteriorates during an incident. What is happening psychologically when teams are placed under that kind of pressure? 

When people are under intense pressure, their brains start to behave very differently from how they do in calm, analytical settings. The combination of urgency, uncertainty, and fear of failure is incredibly disruptive to good judgement. Average decision-making accuracy in Immersive’s crisis sims is just 22%. 

One of the first things that happens is cognitive narrowing, which is when people focus on a smaller slice of information and lose sight of the wider picture. It’s a common threat response and can be useful in simple emergencies, but in complex cyber incidents, it often leads to tunnel vision. Teams fixate on technical details, wait for more certainty, or defer decisions upwards rather than stepping back to coordinate a response. 

There is also a strong emotional component. Cyber incidents trigger anxiety around loss, reputation, and personal accountability. When people feel that risk, they become more cautious and less willing to act decisively. Ironically, that hesitation can slow containment and increase impact. 

Another important factor is expectation. Teams tend to perform best when events unfold in ways they recognise. But serious incidents rarely unfold predictably, and when events diverge from rehearsed responses, confidence starts to plummet.

So, it’s not simply a case of putting in more hours of study and practice – knowledge and preparation are often meaningless without the experience to back them up.

Why does lack of coordination matter more than lack of technical knowledge during a cyber incident? 

Cyber incidents are often thought of as technical events, but they’re really fast-moving crises that affect the whole business. That means they demand coordinated action across multiple teams simultaneously. So, when things break down, it’s often not because individuals don’t know what to do, but because people are unsure who should act, when, and with what authority.  

When things break down, it’s often not because individuals don’t know what to do, but because people are unsure who should act, when, and with what authority.  

In high-pressure situations, humans look for clear structure. When roles, escalation paths, or decision rights are ambiguous, people hesitate. They wait for reassurance, seek permission, or focus narrowly on their own responsibilities. That behaviour is completely natural, but it creates delays and bottlenecks when time matters most.  

What we often see in simulations is that technically strong teams slow down because they are trying to coordinate decisions with legal, communications, or leadership teams that have never practised working together under pressure. Each group is operating with different priorities, language, and risk thresholds. If teams haven’t practised those interactions in realistic conditions, even small misalignments can cascade into significant delays. 

However, in Immersive’s exercises, less than half (41%) of organisations typically include departments like legal, executive, and communications, which means this critical cross-departmental teamwork is never put to the test. Despite this, 90% still told us they felt their cross-functional communication was effective.

What should leaders do differently if they want to improve decision-making and resilience during cyber incidents? 

The most important shift leaders need to make is to stop treating cyber readiness as a compliance exercise and start treating it as a human capability. Policies and plans are important, but they don’t tell you how people will behave when they are tired, stressed, and forced to make decisions with incomplete information. 

Real resilience is built through exposure and practice, not reassurance, yet less than half (46%) of organisations currently use performance-based metrics to assess readiness.  

Leaders should ensure teams are regularly placed into realistic scenarios that reflect the ambiguity, time pressure, and cross-functional tension of a real incident.  

That means simulating difficult decisions in unfamiliar situations, not just the expected technical responses. If people have never had to make trade-offs under pressure, they will struggle when those moments arrive for real.  

Finally, this approach needs to encompass the whole organisation, not just IT and security personnel. Non-technical teams need to be involved too, including leadership. 

When executives experience the discomfort of making time-sensitive decisions in a simulated crisis, it changes how they think about risk, investment, and preparedness. Resilience improves when decision-making is practised at every level, not assumed at the top. 

Executive Profile

Dan Potter joined Immersive in 2022. He previously worked at Citi for over 15 years, gaining significant expertise in the design, delivery and management of resilience-related disciplines including crisis management, business continuity and disaster recovery (including cyber), third-party resilience and exercising.

How Business Leaders Can Rescue and Redefine AI Success

By Keith Schlosser

AI ambition has outpaced enterprise readiness. In this article, Keith Schlosser explains why many business-led AI pilots are faltering and what CIOs must do next. You will learn how to replace fragmented experimentation with governed platforms, stronger architecture, and a structured recovery framework that turns unstable initiatives into scalable advantage.

Forrester’s Predictions 2026: Tech Leadership report says one in four CIOs will be asked to step in and fix failed, business-led AI projects. It’s not a hypothetical; it’s already happening.

Across industries, teams launched AI pilots without the enterprise backbone to sustain them. Many organizations didn’t have robust architectures, data governance, or even security boundaries in place before spinning up a dozen experiments. Some projects included IT input, but many didn’t—and as a result, the technical work never happened or was incomplete. What looked like innovation on paper quickly became a tangle of shadow integrations, brittle prompts, and ungoverned agents.

Now those projects are landing on the CIO’s desk with a familiar mandate: make it work—and make it safe.

From Chaos to Architecture

This isn’t the first time technology spread faster than its scaffolding. In the 1990s, departments rushed to deploy their own CRM systems. Pockets of value appeared, but the enterprise became fragmented and risky until IT stepped in to standardize and scale. The same pattern is playing out with AI.

CIOs are well positioned to stabilize what others started. The job now is to replace scattered experimentation with an architecture that provides context, control, and transparency across every AI initiative.

Why Platforms Are the Turning Point

When early AI pilots launched, there simply weren’t platforms to build on. Every team had to wire together its own stack—data pipelines, connectors, governance layers—from scratch. It was the only way to experiment, but it wasn’t sustainable.

That era is over. Purpose-built agentic AI platforms now exist to handle the heavy lifting: multi-model orchestration, observability, document preparation, and security. They let IT regain control of fragmented efforts without starting from zero.

As Eric Barroca, CEO of Vertesia, recently wrote, “Wiring stacks together isn’t innovation—it’s plumbing.” Platforms like this give CIOs the foundation for the turnaround. They’re designed to wrap existing AI efforts with guardrails—central security, evaluation harnesses, and orchestration—so CIOs can skip the plumbing and focus on what matters: getting business outcomes from the systems already in motion.

This isn’t about slowing innovation. It’s about putting it on rails. The new job of IT leadership is to bring discipline to what’s already out there using the capabilities modern platforms provide:

  • Governance at scale – Centralize security, authentication, and observability across every agent and model.
  • Multi-model orchestration – The ability to use, compare, and switch across models (open or proprietary) as cost, speed, or performance shift (see the sketch after this list).
  • Document and content preparation – Structuring long-form, multimodal content into retrievable knowledge for more accurate results from LLMs.
  • Context preservation – Ensure systems can retain and apply business knowledge securely, so results are grounded in enterprise reality.
  • Workflow integration – Agents that span documents, APIs, and systems to complete multi-step work.
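As a rough illustration of the first two capabilities, here is a minimal, self-contained Python sketch of a governed model router: it picks the cheapest model allowed under a cost ceiling, fails over on errors, and writes an audit entry for every call. The model names, per-call costs, and stub functions are invented for illustration; a purpose-built platform would supply this plumbing, plus real authentication and observability, out of the box.

```python
import logging
import time
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("ai-audit")  # central audit trail for every model call

@dataclass
class ModelRoute:
    name: str                    # hypothetical model label, e.g. "hosted-large"
    cost_per_call: float         # rough unit cost the router budgets against
    call: Callable[[str], str]   # the actual client call, stubbed below

def cheap_model(prompt: str) -> str:
    return f"[cheap answer to: {prompt[:40]}]"

def strong_model(prompt: str) -> str:
    return f"[strong answer to: {prompt[:40]}]"

ROUTES = [
    ModelRoute("local-small", cost_per_call=0.001, call=cheap_model),
    ModelRoute("hosted-large", cost_per_call=0.03, call=strong_model),
]

def governed_call(prompt: str, max_cost: float) -> str:
    """Route a prompt to the cheapest permitted model, with audit logging
    and automatic failover: 'governance at scale' in miniature."""
    for route in sorted(ROUTES, key=lambda r: r.cost_per_call):
        if route.cost_per_call > max_cost:
            continue  # enforce the cost ceiling
        start = time.monotonic()
        try:
            answer = route.call(prompt)
            audit.info("model=%s cost=%.4f latency=%.3fs ok",
                       route.name, route.cost_per_call, time.monotonic() - start)
            return answer
        except Exception as exc:
            audit.warning("model=%s failed (%s), trying next route", route.name, exc)
    raise RuntimeError("no model satisfied the cost ceiling")

print(governed_call("Summarise this contract clause.", max_cost=0.05))
```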

A Practical Six-Step Rescue Framework

Most rescue efforts start out messy. Inherited agents behave inconsistently, data pipelines are brittle, and no one knows what’s in production. The framework below gives CIOs a structured way to re-establish order and move from firefighting to sustained control.

While these steps can be executed manually, modern AI platforms make much of the groundwork—monitoring, orchestration, and evaluation—faster and safer to implement.

  1. Triage – Benchmark every existing agent’s accuracy, cost, and reliability (a minimal sketch follows this list).
  2. Govern – Eliminate shadow projects, define access controls, and enforce audit trails.
  3. Re-ground – Improve retrieval pipelines and tool constraints to stabilize outputs.
  4. Route – Add model rotation and A/B testing to balance speed, cost, and compliance.
  5. Observe – Monitor all agent actions, outputs, and applications across all departments.
  6. Scale – Template what works and promote it safely from pilot to production.
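To make the triage step concrete, here is a toy benchmark harness in Python. It assumes inherited agents can be wrapped as plain callables and that a small set of labelled evaluation cases exists; the sample agent, the cases, the per-call cost, and the 80% accuracy floor are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    expected: str

# A stand-in for an inherited agent; in practice this would wrap a real agent call.
def invoice_agent(prompt: str) -> str:
    return "approve" if "under 10k" in prompt else "escalate"

def triage(agent, name, cases, cost_per_call, accuracy_floor=0.8):
    """Step 1 (Triage): benchmark an inherited agent's accuracy and cost,
    and flag it for re-grounding if it falls below the floor."""
    correct = sum(agent(c.prompt) == c.expected for c in cases)
    accuracy = correct / len(cases)
    total_cost = cost_per_call * len(cases)
    verdict = "keep" if accuracy >= accuracy_floor else "re-ground"
    print(f"{name}: accuracy={accuracy:.0%} eval_cost=${total_cost:.2f} -> {verdict}")
    return verdict

cases = [
    EvalCase("invoice under 10k from approved vendor", "approve"),
    EvalCase("invoice over 10k from new vendor", "escalate"),
]
triage(invoice_agent, "invoice_agent", cases, cost_per_call=0.02)
```

The same harness, run across every agent in the inventory, turns “no one knows what’s in production” into a ranked list of what to keep, fix, or retire.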

What Comes Next

The fix doesn’t require ripping and replacing every project that went sideways. It requires giving IT the tools, structure, and authority to govern what’s already been built—and the clarity to advise what should continue.

AI doesn’t fail because the models are bad; it fails because the systems around them aren’t ready—and because teams are fragmented, each working on their own siloed initiatives. Now, CIOs have both the technology and the mandate to correct that.

This is more than rescue work. It’s a strategic opening for IT to reset the enterprise AI agenda—moving from scattered, business-led pilots to a governed, outcome-driven platform model. The CIOs who seize that moment won’t just stabilize AI in their organizations—they’ll define how it’s run for the next decade.

About the Author

Keith Schlosser is a longtime technology and insurance executive who has led enterprise transformation from the inside, including serving as Group CIO at Axis Capital, EVP CIO for Chubb International, and VP – CIO International for Travelers Insurance. He has guided large teams through modernization, data strategy, and early AI adoption across complex, regulated environments, and currently serves as an advisor for innovative companies such as Dune Security and Vertesia, developer of a unified, low-code platform for building, deploying, and operating enterprise-grade generative AI applications.

Redefining Leadership in the Age of AI: What Skills Will Future Leaders Need?

By Vedika Lal, Zuzanna Staniszewska, and Géraldine Galindo

Artificial intelligence is transcending its original role as a tool designed to support humans and is increasingly performing leadership functions. What skills and competencies, then, will be demanded of human leaders in this brave new world?

“Our future is one where AI assumes leadership roles.” When Van Quaquebeke and Gerpott, pioneering researchers in leadership, wrote this in 2023, it sounded like provocation. Two years later, it sounds more like description. Across industries, AI no longer just supports leaders, it acts like one.

Rapid technological progress and advances in artificial intelligence and big data are supporting humans in analysing and interpreting large volumes of complex data, such as medical or corporate performance data1. As such, these technologies are transcending their original role as tools designed to support humans in everyday tasks, both inside and outside the workplace, and are increasingly assuming more intricate roles in which they can perform leadership functions.

AI has begun to replace and displace humans across a range of standardised managerial functions.

More specifically, AI has begun, and is likely to continue, to replace and displace humans across a range of standardised managerial functions, such as allocating tasks and resources, planning shifts, appraising performance, analysing team dynamics, determining compensation, and even making selection, promotion, and retention decisions. These changes are prompting a rethinking of how jobs and skills are managed.

Different forms of leadership

Leadership issues are at the forefront of these changes – both in responding to them and in driving them forward – as increasing AI integration begins to demonstrate various forms of leadership.

Van Quaquebeke and Gerpott offer a useful distinction: AI can support, augment, or even substitute for human leadership.

Already, it is doing all three:

  • Through task allocation and real-time feedback or guidance, AI performs task-oriented leadership
  • Through sentiment monitoring and behavioural nudging, it enacts relationship-oriented leadership
  • Through the generation of strategic narratives, persuasive messages, and personalised communication, it performs change-oriented leadership3.

Thus, while the leader might be human, the logic shaping those decisions is becoming increasingly algorithmic. This suggests that organisational values are increasingly being embedded in algorithms, and that AI has become capable of performing functions once considered resistant to automation4.

What, then, is left for human leaders?

With several leadership functions increasingly being taken over by AI, it is highly likely that organisations will require fewer human leaders, particularly at lower and middle management levels.

This does not imply that human leaders will no longer be needed at all. Instead, it suggests that the leaders we will need are different in nature – leaders who understand not only how humans function, but also how AI operates2.

As Van Quaquebeke and Gerpott put it, “They won’t be leading the humans within an organisation but leading the machines that lead the humans.”

Ethical stewardship – the future of leadership?

Human leaders will need to possess AI literacy to guide, prompt, and supervise AI-driven leadership systems effectively5. A first step, then, is identifying ways to assess objectively whether human leaders understand the fundamentals of AI and its potential benefits and harms for employees6.

That is, leaders will increasingly focus on governing the systems that lead people, deciding the limits of optimisation, setting boundaries, and making ethical trade-offs.

Leadership is likely to be redefined as ethical stewardship.

As leadership becomes more automated, leadership systems will not only prioritise organisational goals such as system performance, but will also become more human-centred, with a stronger orientation toward employee interests, including well-being7. In such a landscape, empathy, moral imagination, and creativity become especially valuable, as these are qualities that machines cannot yet directly replicate. These are the very characteristics that will matter most as organisational procedures become increasingly complex7.

What skills will future leaders need?

Drawing from this, we can predict that human leaders will need to become especially skilled at recognising, explaining, and shaping the complex patterns that emerge when humans and AI work together. Additionally, as AI tools increasingly diffuse and redistribute decision-making processes, human leadership will shift toward balancing and orchestrating autonomy in relation to algorithms, feedback systems, and attentional demands.

At its core, however, the most important task for human leaders will be to assert and defend ethical judgement in relation to machine-driven systems7. As these systems become more deeply integrated, it also becomes more crucial than ever for human leaders to articulate and uphold ethical standards against the backdrop of increasingly powerful algorithms2.

Enabling algorithms as our moral agents could bring unintended consequences, especially if the data they are trained on reproduces existing biases. If human leaders understand even the basic functioning of AI systems, including how data or developers may misuse such systems to reproduce existing biases and values, they will be better positioned to meaningfully guide and shape their work environments2.

This gap between calculation and care is where human leadership still belongs.

The task ahead isn’t to defend old hierarchies or to fear automation, but to redefine leadership as a form of ethical design. It’s about ensuring that our systems, however intelligent, remain accountable to the people they serve.

More specifically, this will require human leaders to develop a kind of digital backbone, so to speak, that enables them to remain firm when technologies generate ethically questionable recommendations, such as disproportionately targeting certain groups for dismissal based on performance metrics or promoting constant AI integration without reflection on what it means for people and its broader implications8.

Rather than primarily motivating employees or instilling inspiration, human leaders will need to determine:

  • which goals AI systems should optimise
  • which organisational values must be preserved
  • where and when automation should be curtailed, even if doing so reduces process efficiency.

This would mean being accountable for even those decision-making systems that are otherwise not so transparent, and overcoming the urge to hide behind the algorithm’s decision-making.

The leadership role will change

To revisit the question posed above – what is left for human leaders? – we can expect a shift from telling people what to do toward directing the systems that shape how work is performed.

Human leaders will need to become especially skilled at recognising, explaining, and shaping the complex patterns that emerge when humans and AI work together.

As AI increasingly mediates decisions related to hiring, performance, development, and support, human leaders will become accountable for what these systems normalise. They will be responsible for ensuring that empathy is not reduced to a metric, fairness is not distorted by biased data, and diverse, often marginalised employee groups are not inadvertently excluded by the logics of optimisation5.

This future, too, has risks. It could privilege an elite of AI-literate experts and widen the gap between those who understand the systems and those who are governed by them.

It only works if it remains grounded in empathy and inclusion, and if it listens as much as it codes. Most critically, leaders must ensure that AI does not reproduce a single model of the ideal worker, one that is implicitly aligned with consistent availability, heteronormativity, and dominant career trajectories, at the expense of differences3.

Inclusive leadership, in this context, lies in the ongoing ethical work of keeping algorithmic systems receptive to diversity, complexity, and plurality in organisational life.

Leadership will therefore become less about the forms of presence or persuasion leaders once embodied, and more about the ethical boundaries they set, grounded in safeguarding inclusion and taking responsibility for how algorithms shape everyday work.

About the Authors 

Vedika Lal

Vedika Lal is a postdoctoral researcher at the Institute for Leadership and Inclusive Management, attached to the Work & Human Relations department of ESCP Business School.

 

Zuzanna Staniszewska (PhD) is an assistant professor at Kozminski University in Warsaw and a research associate and visiting scholar in the Work and Human Relations Department at ESCP Business School in Paris.

Géraldine Galindo is full professor in the Work & Human Relations department on the Paris campus and director of the Institute for Leadership and Inclusive Management at ESCP Business School.

 

Endnotes
1. McKinsey & Company. “The State of AI”. QuantumBlack Insights. Available at: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
2. Van Quaquebeke, N., & Gerpott, F. H. (2023). “The now, new, and next of digital leadership: How Artificial Intelligence (AI) will take over and change leadership as we know it”. Journal of Leadership & Organizational Studies, 30(3), 265–75. https://doi.org/10.1177/15480518231181731
3. Kark, R., & Buengeler, C. (2024). “Wo∼Men and leadership: Re-thinking the state of research on gender and leadership through waves of feminist thinking”. Journal of Leadership & Organizational Studies, 31(3), 245–66. https://doi.org/10.1177/15480518241257105
4. “L’IA et le leadership: une révolution en marche”. Les Échos. Available at: https://www.lesechos.fr/idees-debats/leadership-management/lia-et-le-leadership-une-revolution-en-marche-2182543
5. Sposato, M. (2024). “Leadership training and development in the age of artificial intelligence”. Development and Learning in Organizations: An International Journal, 38(4), 4–7. https://doi.org/10.1108/dlo-12-2023-0256
6. Wang, B., Rau, P.-L. P., & Yuan, T. (2023). “Measuring user competence in using artificial intelligence: validity and reliability of artificial intelligence literacy scale”. Behaviour & Information Technology, 42(9), 1324–37. https://doi.org/10.1080/0144929x.2022.2072768
7. Bastian, R. (2025). “Why Empathy Is More Important Than Control for Leaders in an AI-Driven Future”. Forbes. Available at: https://www.forbes.com/sites/rebekahbastian/2025/04/28/why-empathy-is-more-important-than-control-for-leaders-in-an-ai-driven-future/
8. De Cremer, D., Narayanan, D., Deppeler, A., Nagpal, M., & McGuire, J. (2022). “The road to a human-centred digital society: opportunities, challenges and responsibilities for humans in the age of machines”. AI and Ethics, 2(4), 579–83. https://doi.org/10.1007/s43681-021-00116-6

What an MBA Teaches that AI Never Will in 2026

AI is changing organisations across different economies, and MBAs are no exception. Here’s why an MBA still provides value and ROI – and who should reconsider an MBA in the age of intelligent machines.

In 2026, you can walk into just about any business school admissions event, and you’ll hear the same anxious question: “With AI automating so much of business, is an MBA still worth it?” It’s a fair concern, as AI now drafts financial models, generates market research, and even simulates customer conversations.

This question seems to assume that the primary value of an MBA lies in technical or analytical output. The deeper issue is not what AI can do, but what remains distinctly human inside modern organisations. While intelligent systems take over more structured tasks, the comparative advantage of human professionals shifts toward the capabilities machines struggle to replicate.

Emotional Intelligence: Why Soft Skills Matter in the Age of AI

Perhaps nowhere is the gap between AI and human capability more pronounced than in emotional intelligence. While AI can now simulate empathetic responses and even monitor team sentiment in workplace communications, it remains fundamentally incapable of genuine human connection or actual empathy.

A 2025 Workday study revealed a disconnect: 82% of individual contributors believe employees will increasingly crave human connection as AI becomes more integrated into work, but only 65% of managers share that view. This gap suggests many leaders may be underestimating the emotional and relational impact of AI on their teams.

MBA programs develop leadership, emotional intelligence, and interpersonal skills through intensive group work, leadership simulations, and real-world consulting projects where students must navigate team dynamics, resolve conflicts, and motivate diverse groups toward common goals.

Consider the reality of leading through organizational change: a merger, restructuring, or strategic pivot. AI can model the financial implications and even predict employee attrition risks. But when it’s time to stand in front of anxious employees and communicate a difficult decision while maintaining trust, purpose, and morale? That requires emotional intelligence that no algorithm can provide.

As one Fast Company analysis put it: AI can handle the “what” and “how” of work, but only real leaders can handle the “why”. In the age of AI, softer skills might matter more than ever.

AI Falls Short with Strategic Thinking

Beyond emotional intelligence, MBA programmes are designed to cultivate strategic judgment in complex, real-world situations, an area where AI still shows clear limitations. In a strategy exercise reported in September 2024 and attributed to the WU Executive Academy, 21 MBA students were asked to tackle an entrepreneurial challenge alongside ChatGPT.

The task centred on a small company with a strong innovation but no effective means of protecting it from larger competitors. Participants were required to develop viable strategic responses under real-world constraints. While the MBA students submitted their solutions once, ChatGPT was reportedly given multiple attempts to refine its answers.

According to the evaluation described in the article, the outcome was unambiguous. All 21 MBA students outperformed ChatGPT across every assessment criterion used. The students’ strategies were judged to be more situation-specific, more realistic, and better grounded in practical business judgment. In contrast, ChatGPT’s responses tended to remain generic, lacking the contextual sensitivity required to address the company’s strategic dilemma effectively.

While the exercise took place a few years ago, critical, creative, and strategic thinking remain domains where human training and experience still seem to outperform current AI systems.

On top of that, an MBA still carries a brand value that AI is unlikely to replace any time soon. The degree continues to act as a widely recognised proxy for leadership potential, analytical discipline, and professional networks.

Who Benefits From an MBA in the Age of AI

While an MBA remains the flagship credential for strategy, leadership, organisational skills and emotional intelligence in a business context, it is not universally valuable or profitable in an AI-driven economy.

An MBA can often be a weak investment for individuals seeking:

  • Pure technical execution roles
  • Narrow functional specialisation
  • Rapid credentialing for short-term employability

An MBA does remain a strong investment for individuals aiming to:

  • Lead complex organisations
  • Make high-stakes decisions under uncertainty
  • Integrate technology, people, purpose, and strategy
  • Assume responsibility for outcomes rather than outputs

AI reduces the scarcity of information and execution capacity. It increases the need for human judgment, strategy and emotional leadership. The MBA’s relevance in 2026 lies entirely in serving the latter.

Does AI Render Technical MBAs Unnecessary?

If you’re in a technical field and considering an MBA, the question of whether it is worth it can seem even more pressing. If AI can write code and analyze data faster than humans, why invest six figures in a technical business education?

The concerns are obviously not unfounded. Entry-level hiring at the 15 biggest tech firms fell 25 percent from 2023 to 2024, including in roles relevant to many technical MBA graduates. And AI’s capabilities are compounding: roughly every seven months, the length of task it can complete doubles. On a coding project, AI can do in minutes what used to take an hour.

A technical MBA doesn’t primarily train graduates to write better code – AI already does that better and definitely faster. Instead, these programs develop the ability to bridge technological capability and business value. A technical MBA is still valuable if your goal is not to compete with AI on execution, but to operate with it and above it. In an AI-intensive economy, advantage shifts from doing the work to deciding, directing, and integrating the work. And that is the layer where good technical MBAs operate.

Microcredentials and Continuous Upskilling

People looking for short-term employability might be better off upskilling with microcredentials. Certificates in STEM fields like business analytics, data, and AI are a growing trend within business education. This is happening as the internet and online teaching complement classic degree structures with more flexible, modular, and personalized learning paths, at a fraction of the (time and opportunity) cost.

Microcredentials aren’t peripheral offerings. According to Coursera’s 2025 Micro-Credentials Impact Report, 96% of employers agree that microcredentials strengthen a candidate’s job application, while 94% of students say microcredentials fast-track skill development. Perhaps most tellingly, 87% of employers have hired at least one candidate with a microcredential in the past year.

What makes microcredentials so attractive is their alignment with the velocity of technological change. They allow professionals to update specific competencies without committing to multi-year programmes, making them particularly suited for rapidly evolving domains such as AI tooling, data infrastructure, automation workflows, and applied analytics.

However, microcredentials primarily address what you know and what you can do. They are less effective at developing how you think, how you decide, and how you lead. This distinction is critical when comparing them to MBA education.

MBAs have an answer to that, however. MBA programmes are inherently designed around continuous adaptation – decision-making models, strategic reasoning, and leadership capacity remain relevant even as technologies change. At the same time, an increasing number of MBAs are offering continuous upskilling, rather than “only” a one-time diploma.

Is an MBA Worth It in the Age of AI?

Whether an MBA is worth it in 2026, in an increasingly AI-driven economy, will always be case-dependent.

AI is steadily eroding the premium once attached to routine analytical work and certain forms of technical execution. Tasks that previously required specialised training or significant time investment are becoming faster, cheaper, and increasingly automated. As a result, labour market dynamics are shifting, redistributing where human contribution creates the most value.

In this environment, the practical justification for an MBA changes. Its significance lies not in safeguarding against automation, but in cultivating capabilities that gain importance as intelligent systems become synthetic colleagues. Structured decision-making, economic reasoning, organisational navigation, and leadership under uncertainty remain fundamentally human responsibilities.

This article was originally published in ThinkMBA 16 February 2026. It can be accessed here: https://think-mba.com/what-an-mba-teaches-that-ai-never-will-in-2026/

From Trackers to Intelligent Companions: The Future of Parenting Apps

By Dmitry Rumbeshta

This article explores how parenting apps are evolving from simple tracking tools into intelligent companions. It examines generational shifts, information overload, and changing expectations of technology, arguing that modern parents need contextual, personalized guidance that reduces anxiety and helps them understand what matters in the moment.

The first digital tools for parents appeared long before mobile apps. These were large websites and forums – libraries of articles, expert opinions, and real conversations between parents. For many, they genuinely helped. When you have your first child, being able to read about others’ experiences and basic guidance can be a lifeline.

Many of these platforms still exist, offering depth and community. But their role has shifted over time. Competition for search traffic gradually replaced clarity and usefulness. SEO rules started to dictate what and how content was written, leaving parents with more text but less real understanding.

With the arrival of mobile apps, the focus narrowed even further. Most parenting apps became trackers. Sleep, feeds, and diapers could all be logged. These tools answered the question “what happened?” but rarely “what does this mean, and what should I do?”

That approach made sense once. But today, it’s no longer enough.

A generational shift in expectations

We are entering a generational shift in parenthood. The majority of new parents now come from the late-millennial and Gen Z cohorts rather than from boomers and early millennials. This shift is not just demographic – it fundamentally changes expectations around digital products, quality, and relevance.

In the U.S., the average age for first-time mothers has risen from about 21.4 years in 1970 to roughly 27.5 years in 2023. The average age of all mothers giving birth is now close to 30. These numbers reflect longer education, career priorities, financial pressures and deliberate life planning.

Today’s new parents are mostly late Millennials and older Gen Z. Pew Research Center notes that Millennials (born roughly 1981-1996) have already changed family life: they were less likely to live with a spouse and child than Gen X was at the same age. Now Gen Z is entering its late twenties and early thirties and starting families as well.

These generations don’t just become parents later, they parent differently. They expect support that is relevant, contextual, and emotionally intelligent. Previous generations were more tolerant of basic trackers: logging feeds, naps, and diapers provided a sense of control. Even early Millennials, who grew up with digital tools, often felt that simple tracking was “good enough,” even if understanding what the data meant was left entirely to them. Today, that tolerance is fading: parents want guidance, not just records.

The real problem is not a lack of information

Modern parents are overwhelmed. Articles, reels and expert advice compete for attention. Cognitive science confirms what many parents already feel: more information doesn’t increase confidence. It increases anxiety and decision fatigue.

Parenting amplifies this effect. When you haven’t slept for days and can’t understand what’s happening with your baby, you don’t need an encyclopedia. You need answers to a few essential questions:

  • What is happening right now?
  • Is this normal for this stage?
  • What should I expect next?

Most apps still only answer the first question. They log data (sleep, feeds, diapers) but interpretation is left to the parent. Data without context is just another task at the end of an exhausting day.

Expectations of technology have changed

Outside parenting, user expectations have shifted for years. According to McKinsey, 71% of consumers expect personalized experiences, and 76% get frustrated when products don’t adapt to them. We expect technology to understand who we are, what we’ve done before, and what we need in the moment. When you become a parent, that expectation grows stronger.

Generic advice no longer works because it ignores what truly matters: the child’s age, the parent’s experience, the family situation, and the moment at hand. Trust develops when guidance feels personal, not broadcast to everyone.

Research shows that users today expect systems to be adaptive, contextual, and responsive to their individual needs. “Smart” no longer means more features, it means guidance that prioritizes what matters and adjusts in real time.

From trackers to intelligent companions

In high-stress moments, too much information increases anxiety and makes action harder. Studies show that information overload reduces satisfaction and can lead users to disengage entirely. Put simply, what parents really need is guidance they can trust.

This helps explain why content-heavy platforms are struggling.

Over time, many large parenting sites have been shaped more by SEO than by real parental needs. Google’s own research shows that algorithm-optimized content often fails to answer real questions, favoring long and generic articles over clear guidance. Much online parenting content now prioritizes breadth over needs-based support, leaving parents scrolling through every possible scenario when what they need is help with their immediate concern.

As these platforms lose relevance, conversational AI has rushed in. Chatbots promise speed and personalization, but parenting is not a neutral domain. Child development is closely tied to health, and parents are right to be cautious. Simply adding a chatbot to generic content can even mislead. Large language models can hallucinate, oversimplify, or offer advice without developmental grounding, and handing over care decisions to a generic AI system can deepen the problem.

This tension, between overwhelming content and imperfect automation, highlights the need for a new approach.

A necessary evolution

The future of parenting apps is not about knowing everything about your child. It’s about helping parents understand enough to feel confident in the moment.

The next generation of apps will not just log life, they will interpret it. They will help prioritize, explain what matters now, and provide guidance. They will respond to context rather than surfacing the most clickable content.
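As a toy sketch of the difference between logging and interpreting, consider a rule that turns a day’s tracker data into a contextual message: what is happening, whether it is typical for this stage, and what to expect next. The age bands and thresholds below are invented purely for illustration; a real companion app would ground them in pediatric guidance and the family’s own history, and nothing here is medical advice.

```python
from dataclasses import dataclass

@dataclass
class DayLog:
    age_months: int     # baby's age
    night_wakings: int  # wakings logged last night

def interpret(log: DayLog) -> str:
    """Turn raw tracker data into context: what's happening, whether it's
    typical for this stage, and what to expect next. Thresholds illustrative."""
    if log.age_months <= 5 and log.night_wakings >= 3:
        return ("Frequent night wakings are common at this stage; "
                "many babies consolidate sleep over the coming weeks.")
    if 6 <= log.age_months <= 12 and log.night_wakings >= 3:
        return ("Wakings have picked up; around this age that often tracks "
                "with developmental leaps. It may help to review the nap schedule.")
    return "Today's pattern looks typical for this stage."

print(interpret(DayLog(age_months=4, night_wakings=4)))
```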

It’s a shift from tools to companions, from data to clarity, and from anxiety to reassurance. And that is exactly the support modern parents are looking for today.

About the Author

Dmitry Rumbeshta is the co-founder and CEO of Sprouty, a parenting app used by over 2 million families worldwide. A parent himself, he focuses on building ethical, data-driven tools that help parents reduce anxiety and feel more confident during early childhood.

AI at Machine Speed: The Cyber Risks that Will Define 2026

By Matthew Geyman

AI is accelerating an asymmetric cyber threat landscape. Attackers need only one opening, while defenders require organisation-wide, machine-speed readiness. Resilience must be embedded in culture, with cybersecurity treated as a board-level priority. AI brings both opportunity and heightened risk. Intersys Managing Director Matthew Geyman takes a deep dive.

Cybersecurity has always been a cat-and-mouse game, but as we move into 2026, the rules of that game are being rewritten by artificial intelligence. Threat actors are no longer just human adversaries working with limited time and resources. They are now increasingly supported by systems that can automate reconnaissance, tailor attacks with frightening precision, and operate at machine speed.

Regulators are paying close attention. In its latest supervisory priorities, for instance, the UK Prudential Regulation Authority (PRA) makes clear that cyber risk remains elevated and that firms need robust capabilities both to prevent breaches and to detect and respond to incidents, recovering critical services within their impact tolerances. Operational resilience must be woven into the underlying risk culture, while advances in AI are seen as both an opportunity and a source of novel risks, amplifying issues like inaccurate data, third-party reliance, and cyber threats. That framing is exactly right: AI is not simply adding another layer of complexity to cybersecurity, it is fundamentally changing the threat landscape itself.

The rise of hyper-personalised social engineering

For years, phishing was largely a numbers game: send enough credible generic emails and someone will click. AI has turned that blunt instrument into a scalpel. Attackers can now generate highly convincing, context-rich messages tailored to individuals, drawing on scraped data from social media, breached datasets and even corporate disclosures. The result is hyper-personalised social engineering that feels authentic, timely, and almost impossible to distinguish from legitimate communication.

Deepfake audio and video add another dimension. Fraudulent “CEO calls” or synthetic customer requests are becoming more sophisticated, eroding trust in the most basic verification mechanisms organisations rely on. In 2026, the biggest danger may not be the obviously malicious email, but the perfectly plausible one.

Automated attack chains

AI is also accelerating the industrialisation of cybercrime. We are moving rapidly towards automated attack chains: systems that can identify vulnerabilities, exploit them, escalate privileges, and move laterally across networks with minimal human input.

The implication is stark. Defenders are still operating with models built around human-paced threats: detection rules, manual triage, and delayed patch cycles. Meanwhile, attackers are compressing the timeline from intrusion to impact from days to minutes. Traditional security operations centres were not designed for adversaries that never sleep, never slow down, and can adapt in real time.

Where businesses are most exposed

The organisations most at risk in 2026 are not necessarily those with the weakest security budgets. They are the ones where complexity, legacy infrastructure, and third-party dependency collide. The PRA explicitly highlights the obsolescence of legacy technology as a resilience issue, particularly as firms undergo transformation programmes and adopt cloud-based solutions.

This is a critical point. Many firms are trying to modernise while simultaneously keeping critical services running. But they are grappling with legacy systems that cannot be easily patched, or cloud migrations introducing new misconfigurations. At the same time, outsourced providers are expanding the attack surface, while AI tools are being adopted faster than governance frameworks can keep up.

The weakest link is rarely the technology itself. It is the unmanaged interaction between systems, suppliers, and decision-making structures. Indeed, perhaps the most dangerous aspect of AI-driven cyber risk is that it is still being underestimated.

Many boards and senior leaders view AI as a productivity tool rather than a threat multiplier. But the PRA is clear that advanced technologies present novel risks, amplifying existing issues such as inaccurate data, reliance on third-party providers and cyber risks.

In other words, AI does not create entirely new categories of risk; it supercharges the ones firms already struggle with. Poor data governance becomes more damaging when AI models depend on that data. Third-party reliance becomes more dangerous when vendors embed opaque AI capabilities into core services. Cyber threats become harder to detect when malicious activity blends into automated noise.

Practical steps organisations must take now

So what does staying ahead look like in 2026? First, organisations need to stop thinking purely in terms of prevention. Breaches are inevitable; resilience is the differentiator. Firms must be able to detect attacks quickly, respond effectively, and recover critical services within defined tolerances.

Second, operational resilience must be tested realistically. That means severe but plausible scenarios, including those involving third-party disruption. Too many firms still treat resilience as a compliance exercise rather than a strategic discipline. The ‘Zero Trust’ principle of ‘Assume Breach’ is a clarion call to review operational resilience and recovery frameworks.

Third, AI governance cannot be an afterthought. Businesses adopting AI must ask, and record answers to, the following questions (one way to enforce them is sketched after this list):

  • What data is this model trained on?
  • What decisions does it influence?
  • What happens if it is manipulated or produces errors?
  • Who is accountable?
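One lightweight way to keep those questions from being skipped is to encode them as a registration record that gates deployment. The sketch below is illustrative, not a real MLOps tool: the field names simply mirror the four questions, and the example values are invented.

```python
from dataclasses import dataclass, fields

@dataclass
class ModelRegistration:
    """Governance record an AI system must carry before it ships."""
    training_data: str         # what data is this model trained on?
    decisions_influenced: str  # what decisions does it influence?
    failure_plan: str          # what happens if it is manipulated or errs?
    accountable_owner: str     # who is accountable?

def deployment_gate(reg: ModelRegistration) -> bool:
    """Block deployment while any governance question is unanswered."""
    missing = [f.name for f in fields(reg) if not getattr(reg, f.name).strip()]
    if missing:
        print("blocked; unanswered governance questions:", ", ".join(missing))
        return False
    print(f"cleared for deployment; accountable owner: {reg.accountable_owner}")
    return True

deployment_gate(ModelRegistration(
    training_data="internal claims data, 2019-2024",
    decisions_influenced="triage of incoming customer claims",
    failure_plan="fall back to manual review; weekly drift checks",
    accountable_owner="Head of Claims Operations",
))
```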

Finally, cyber defence itself must become more automated. Human-only response models will not scale against machine-speed adversaries. Security teams need AI-assisted monitoring, faster containment playbooks, and crisis rehearsals that assume acceleration, not stability.
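To show what a “faster containment playbook” can look like in miniature, the sketch below isolates hosts automatically above one anomaly threshold and queues the rest for analyst review. The hostnames, scores, and thresholds are hypothetical, and the isolate step stubs out what would be an EDR or firewall API call in production.

```python
import time
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    anomaly_score: float  # 0..1, produced upstream by a detector (stubbed here)

def isolate(host: str) -> None:
    # In production this would call the EDR or firewall API; stubbed for the sketch.
    print(f"[{time.strftime('%H:%M:%S')}] containment: isolated {host}")

def playbook(alerts, auto_threshold=0.9, review_threshold=0.6):
    """Machine-speed triage: auto-isolate above one threshold, queue for
    human review above another, log everything else. Thresholds illustrative."""
    for alert in alerts:
        if alert.anomaly_score >= auto_threshold:
            isolate(alert.host)
        elif alert.anomaly_score >= review_threshold:
            print(f"review queue: {alert.host} (score {alert.anomaly_score:.2f})")
        else:
            print(f"logged only: {alert.host} (score {alert.anomaly_score:.2f})")

playbook([Alert("fin-db-02", 0.95), Alert("hr-laptop-17", 0.70), Alert("dev-vm-03", 0.20)])
```

The point is not the thresholds themselves but the shape: detection, decision, and first containment action happen in code, at machine speed, with humans pulled in only where judgement is required.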

We are in the middle of a convergence: regulators demanding stronger resilience, organisations racing to innovate with AI, and threat actors exploiting the same tools with fewer constraints.

The pace of the cat-and-mouse game is accelerating, and the game itself is asymmetric. Attackers need only one AI-enabled opening. Defenders need machine-speed readiness across the entire organisation. Organisations must recognise that resilience must be a truly embedded part of their culture. AI is both an opportunity and a risk, and cybersecurity is a board-level strategic priority, not an IT problem that can be patched later.

About the Author

Matthew Geyman, Managing Director, began his career in London’s insurance market before founding Intersys in 1996 to deliver innovative IT solutions with integrity. Combining operational expertise and strategic vision, he leads a customer-focused organisation while pursuing his passion for emerging technology to improve business through IT.

Gartner: Boardroom Strategies for Dominating AI Investments, Risks, and Value

By Tina Nunno

Boards increasingly see artificial intelligence as central to future shareholder value. Yet a growing gap is emerging between board ambition and operational reality. This article examines how AI is reshaping board governance, why traditional reporting falls short, and how executives can structure AI discussions around value, risk and strategic impact.

Artificial intelligence (AI) has moved decisively into the boardroom. For many directors, it now represents the most important investment theme shaping future competitiveness, resilience and shareholder value. According to Gartner’s 2026 Board of Directors Survey, 57% of board members rank AI as a top-three investment priority for the next two years, ahead of M&A, workforce investment and cybersecurity.

Yet despite this enthusiasm, conversations about AI are becoming more strained rather than more effective. Executives are pressed for faster progress, clearer returns and bolder ambition, often before organisations have resolved foundational challenges around data, skills, governance and risk. The result is a growing disconnect: AI features prominently on board agendas but remains far less mature than the expectations being placed upon it.

This disconnect increasingly reflects governance and oversight challenges, rather than limitations of the technology alone.

Why AI is now a governance issue, not a technology one

Historically, boards treated technology oversight as a delegated responsibility, owned primarily by the chief information officer (CIO) or chief technology officer (CTO), with oversight assigned to the audit, risk or technology committee. AI has fundamentally altered that model. Its implications cut across strategy, capital allocation, workforce design, risk management and corporate reputation, placing it squarely within the board’s fiduciary remit.

Directors increasingly view technological disruption, innovation failure, cybersecurity exposure, and data risk as among the most significant external threats to shareholder value. At the same time, one in four directors see inadequate technology as a major internal risk, limiting an organisation’s ability to scale, innovate and manage volatility.

In this context, AI is no longer “just another IT initiative.” It has become a cornerstone of modern board governance, forcing directors to engage directly with questions of feasibility, prioritisation and return on investment. This governance challenge is compounded by the fact that boards themselves are rarely aligned on what AI should deliver, or how quickly.

The AI divide inside the boardroom

One of the most overlooked challenges in AI governance is that boards are not aligned internally on what AI should deliver, or how fast.

Gartner identifies three broad categories of non-executive directors (NEDs) based on their AI posture:

  • Pioneers actively push for AI-led growth, differentiation and competitive advantage.
  • Pacers take a pragmatic stance, seeking proof of value while managing financial and cyber risk.
  • Protectors are sceptical, prioritising stability, cost control and risk minimisation over experimentation.

These differences matter. They shape how progress is interpreted, which questions are asked and how trade-offs are evaluated. When executives fail to recognise and navigate this divide, AI discussions can quickly become circular, defensive or overly technical, satisfying neither directors nor management.

Why traditional IT reporting no longer works for AI

Board dissatisfaction with AI reporting is increasingly visible. Directors consistently call for more meaningful discussion, yet are often presented with longer prereads, denser updates and presentations that emphasise activity over insight. Preparation demands on executives are routinely underestimated, while the pace of AI development continues to outstrip traditional reporting cycles.

AI’s inherent uncertainty compounds the issue. Dashboards and static metrics struggle to capture experimentation, learning curves and shifting risk profiles. When expectations evolve faster than reporting frameworks, frustration replaces confidence – particularly for boards already divided on AI’s value.

Reframing AI as a comprehensive investment portfolio

One of the most effective ways to reset board-level conversations is to treat AI as a comprehensive investment portfolio rather than a single programme or capability. Not all AI initiatives serve the same purpose, operate on the same timelines or carry the same risk, nor should they be evaluated through the same lens.

By positioning themselves as stewards of an AI portfolio, executives can better balance competing priorities across revenue growth, cost optimisation and risk management. This framing helps boards view AI initiatives with different timelines, risks and expected outcomes, supporting informed discussions about progress.
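As a rough illustration of what that stewardship might look like in practice, the sketch below models a portfolio in which each initiative carries its own value driver, horizon and risk. It is a hypothetical structure, not a Gartner tool; the categories simply mirror the revenue, cost and risk framing above, and every figure is invented.

```python
from dataclasses import dataclass
from enum import Enum

class ValueDriver(Enum):
    REVENUE_GROWTH = "revenue growth"
    COST_OPTIMISATION = "cost optimisation"
    RISK_MANAGEMENT = "risk management"

@dataclass
class AIInitiative:
    name: str
    driver: ValueDriver
    horizon_months: int               # expected time to measurable impact
    risk: str                         # e.g. "low", "medium", "high"
    expected_annual_value_eur: float  # hypothetical planning estimate

def portfolio_summary(initiatives: list[AIInitiative]) -> dict[str, float]:
    """Group expected value by driver so the board sees balance, not one number."""
    summary: dict[str, float] = {}
    for i in initiatives:
        summary[i.driver.value] = summary.get(i.driver.value, 0.0) + i.expected_annual_value_eur
    return summary

portfolio = [
    AIInitiative("Sales copilot rollout", ValueDriver.REVENUE_GROWTH, 12, "medium", 2_000_000),
    AIInitiative("Invoice automation", ValueDriver.COST_OPTIMISATION, 6, "low", 800_000),
    AIInitiative("Fraud-detection models", ValueDriver.RISK_MANAGEMENT, 18, "high", 1_500_000),
]
print(portfolio_summary(portfolio))
```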

Making AI value legible to the board

Across boardrooms, the message from directors is remarkably consistent: connect AI to financial outcomes. Boards do not expect complete certainty, but they do expect transparency. Effective AI discussions move beyond technical capability to articulate how initiatives affect revenue growth, cost structures, resilience and risk exposure.

Whether AI is positioned as a source of innovation, competitive advantage, efficiency or protection, the underlying question remains the same: how does this investment affect the income statement, balance sheet or cash flow, and over what timeframe? Which line items will be impacted and when? Clear articulation of trade-offs, timing and uncertainty is often more valuable to boards than confident projections that overstate near-term returns.
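As a hedged, worked illustration of that timing question, the sketch below discounts an initiative’s assumed savings against its build cost. Every number is hypothetical; the point is that the honest answer is a cash-flow profile over time, not a single headline figure.

```python
def npv(cash_flows: list[float], discount_rate: float) -> float:
    """Net present value of yearly cash flows, with year 0 first."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical initiative: 1.2m EUR build cost in year 0, then
# 0.5m EUR/year of opex savings hitting the cost line in years 1-4.
flows = [-1_200_000, 500_000, 500_000, 500_000, 500_000]
print(f"NPV at a 10% discount rate: {npv(flows, 0.10):,.0f} EUR")
# Roughly +385k EUR overall, but most of the value lands in years 2-4;
# that timing is exactly what boards want made explicit.
```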

The BOARD test for AI conversations

To sharpen AI discussions, executives benefit from a simple but disciplined BOARD communication approach: being brief, open, accurate, relevant and diplomatic. Applied consistently, this mindset helps shift board conversations away from hype and toward governance maturity. It also reflects a growing reality: some directors are already using AI to challenge assumptions and inform decisions, while others are still building confidence. Meeting directors where they are is no longer optional.

From AI hype to AI stewardship

The next phase of AI adoption will not be defined by who experiments fastest, but by who governs best. Boards are right to focus on AI’s strategic importance, but ambition must be matched with realism, structure and shared understanding of both risks and opportunities.

The organisations most likely to succeed will be those that reframe AI not as a promise, but as a managed portfolio of bets, governed with the same discipline applied to capital, risk and talent. AI governance maturity is increasingly becoming a signal of overall leadership quality and strategic discipline.

For boards, that shift begins not with new dashboards or tools, but with better, more holistic conversations regarding potential portfolios of AI value.

About the Author

Tina Nunno is a Managing Vice President and Gartner Fellow in Gartner’s Artificial Intelligence Practice. A recognised thought leader on AI business value, board engagement and executive leadership, she advises senior leaders globally, is a frequent keynote speaker, and coaches executives on AI governance, strategic communication and shareholder value creation.

AI Empowerment Will Separate Winners From Users in 2026 https://www.europeanbusinessreview.com/ai-empowerment-will-separate-winners-from-users-in-2026/ https://www.europeanbusinessreview.com/ai-empowerment-will-separate-winners-from-users-in-2026/#respond Sun, 25 Jan 2026 14:07:49 +0000 https://www.europeanbusinessreview.com/?p=242557 By Fahed Bizzari AI will not equalise competition in 2026. As models commoditise, advantage shifts to organisations that redesign workflows, embed governance, and build repeatable AI capability. Reliable execution, not […]


By Fahed Bizzari

AI will not equalise competition in 2026. As models commoditise, advantage shifts to organisations that redesign workflows, embed governance, and build repeatable AI capability. Reliable execution, not deployment volume, drives EBIT. Leaders who own cross-functional standards compound gains; others experience quiet re-ranking by customers and markets over time.

A comforting idea is spreading in executive conversations: if everyone has access to the same AI models, any advantage should evaporate.

That comfort is misplaced.

Commoditisation rarely removes advantage. It changes where advantage lives. When a capability becomes widely available, customers stop rewarding novelty. They start rewarding reliability.

That “quiet re-ranking” doesn’t happen in a headline moment. It happens through small, repeated differences in response quality, turnaround and follow-through.

And those micro-differences do translate into market outcomes. Faster, clearer responses shorten buying cycles. Consistent delivery reduces friction and escalations. Fewer errors reduce rework and client anxiety. Over time, that shifts conversion, renewal and price tolerance, even if nobody describes it as “AI advantage.”

The EBIT lever is workflow redesign, not deployment

Most organisations still talk about AI progress in the language of rollout. Seats enabled. Platforms approved. Usage rising.

That may be necessary, but it’s not the source of advantage, because rollout measures activity, not operating change.

If you want a hard signal on what drives enterprise impact, look at whether work has been redesigned around AI.

McKinsey’s 2025 research is unusually clear on what matters most: “Out of 25 attributes tested … the redesign of workflows has the biggest effect on an organization’s ability to see EBIT impact from its use of gen AI.”

EBIT (earnings before interest and taxes) is a blunt proxy, but it’s useful here because it forces the conversation away from anecdotes and toward operating reality.

In practice, workflow redesign means changing the default path by which work moves from input to output. AI is placed deliberately, not sprinkled opportunistically. Quality gates become explicit. Verification is designed into the flow, not left to individual caution. Exceptions are anticipated, not discovered in front of a client.

The result is that AI stops being a productivity perk for individuals and starts becoming a dependable organisational advantage.
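A minimal sketch of what an explicit quality gate can look like in code follows. The confidence threshold, the draft_with_ai stub and the escalation path are all hypothetical placeholders for whatever model and review process an organisation actually uses.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed gate; in practice set per workflow

@dataclass
class Draft:
    text: str
    confidence: float  # a score from the model or a separate checker

def draft_with_ai(request: str) -> Draft:
    """Stub for the AI step; any model call could sit behind this."""
    return Draft(text=f"Draft response to: {request}", confidence=0.78)

def escalate_to_human(draft: Draft) -> str:
    """Anticipated exception path: a person reviews before anything ships."""
    return f"[HUMAN REVIEW QUEUED] {draft.text}"

def handle(request: str) -> str:
    draft = draft_with_ai(request)
    # The quality gate is designed into the flow, not left to individual caution.
    if draft.confidence < CONFIDENCE_THRESHOLD:
        return escalate_to_human(draft)
    return draft.text

print(handle("Customer asks about contract renewal terms"))
```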

The non-commoditised asset is organisational AI capability

If tools are increasingly shared, why do results still diverge? Because what separates organisations is not access. It is whether they have built the capability to use AI reliably in real work.

In a widely cited academic framing, Mikalef and Gupta (2021) treat AI capability as a measurable organisational construct and examine its relationship with creativity and firm performance. That matters because it keeps leaders from collapsing the question into “Which platform?” or “Which model?” and points them toward the actual competitive asset.

AI capability shows up as repeatability under real conditions: unclear context, delivery pressure, cross-team handoffs, sensitive information and edge cases where the model is least reliable.

Two firms can use similar tools and still feel very different. One is crisp and coherent across teams. The other is uneven: strong in pockets and brittle under pressure.

Commoditisation doesn’t erase that gap. It often makes it easier to see, because “good enough drafting” becomes normal and customers start noticing who stays consistent across touchpoints.

Leadership ownership decides whether capability compounds or stays trapped

There is a common mistake at this stage. Organisations treat AI as an IT deployment.

IT involvement is essential for safe access, identity controls and platform decisions. But advantage does not emerge from access alone. Advantage emerges from changed work: how tasks are performed, checked, approved and reinforced through management.

Deloitte’s executive research makes the ownership point bluntly: AI efforts succeed when ownership sits with a cross-functional leadership group rather than being treated as a technology programme owned by IT alone, and executive alignment is often the limiter.

This is where “quiet re-ranking” becomes a leadership problem.

AI touches marketing, sales, legal, procurement, operations and leadership decision-making. If ownership is concentrated in one function, standards fragment. Teams improvise local defaults. Managers reinforce inconsistent norms. Learning stays trapped in pockets.

Cross-functional ownership doesn’t mean more committees. It means clear leadership decisions about priorities, standards and reinforcement so capability spreads and holds.

Vendors can ship ingredients. They cannot ship the recipe.

The sceptic’s objection returns: “Even if this matters now, vendors will productise best practice and close the gap.”

Vendors can productise features, templates and guardrails. They can make tools easier to use. They cannot install the operating discipline inside your organisation.

As Deloitte’s Bill Briggs put it in 2025, organisations are obsessing over the “ingredients” while ignoring the “recipe,” which includes the culture, workflow and training required to make the technology work.

That “recipe” is what leadership ownership is for. It is how the organisation behaves day to day:

  • Do people know when to trust and when to verify?
  • Is disclosure normal, or politically risky?
  • Do managers model good practice, or treat AI as “something others do”?
  • Is learning shared, or hoarded?
  • Do standards hold under delivery pressure?

Vendors can support parts of this. They cannot substitute for leadership, management habits and workflow discipline. That is why commoditisation does not automatically equalise outcomes.

A decision rule that prevents quiet re-ranking

If commoditisation shifts advantage into operating capability, the leadership move becomes simpler. Stop asking, “Where can we use AI?” Start asking where AI-assisted work must become reliable.

A practical decision rule is four questions:

  1. Which three workflows most shape customer experience or margin? Pick them explicitly. McKinsey’s 2025 work on enterprise impact is a useful forcing function here: focus on redesigning the few workflows that matter most.
  2. In those workflows, what does “good AI-assisted work” mean? Define it in behavioural terms: what must be verified, where ownership sits and what triggers escalation, because workflow redesign only works when quality gates are explicit.
  3. How will people learn this in real work, not just in training? The “recipe” Briggs describes (culture, workflow and training) only becomes real when practice is reinforced in the flow of delivery.
  4. Who owns it cross-functionally, so standards don’t fragment into pockets? Deloitte’s 2025 framing is the simplest reminder: cross-functional ownership is what prevents AI from becoming fragmented local practice.

Commoditisation is coming. The question is what it reveals.

Will it flatten everyone to the same baseline?

Or will it expose who has built real operating capability – and who has not?

About the Author

Fahed Bizzari brings deep, practical experience to AI empowerment, having advised organisations including L’Oréal, Fugro and Dubai Police. He is founder of the Institute for AI Empowerment and upcoming author of The AI Empowered Organisation, and a frequent keynote speaker across Europe, the Middle East and Asia.

The Next Wave of AI: from Generation to Agency https://www.europeanbusinessreview.com/the-next-wave-of-ai-from-generation-to-agency/ https://www.europeanbusinessreview.com/the-next-wave-of-ai-from-generation-to-agency/#respond Fri, 23 Jan 2026 15:24:46 +0000 https://www.europeanbusinessreview.com/?p=242526 By Jacques Bughin If you’re quietly patting yourself on the back for finally getting a grip on AI and its impact on your organization, here’s a reality check. It turns […]


By Jacques Bughin

If you’re quietly patting yourself on the back for finally getting a grip on AI and its impact on your organization, here’s a reality check. It turns out there is more to AI than GenAI. Agentic AI is coming our way – and this time, it’s REALLY big!

The last two years have been dominated by generative AI, LLMs that produce text, code, and images at unprecedented speed. But the frontier has shifted. A new wave is forming, one that is less about generating content and far more about taking action. This wave is agentic AI,1 systems that plan, decide, execute, coordinate with other agents, and interface with real-world tools, software, or machines. And, unlike the previous transitions, this one fundamentally reshapes entire industries, labor markets, and the competitive landscape of AI firms.


To understand why, one must look beyond the hype and examine what the emerging players are actually doing. Across the world, we see the foundational pieces of an agentic economy being assembled. Some companies—Moveworks,2 ServiceNow, OpenAI, Anthropic, CrewAI, LangGraph—are building the orchestration and multi-agent fabric. Others—Alibaba DingTalk, Tencent, ByteDance, Baidu—are deploying agents at societal scale. In Europe, Siemens and ABB are embedding agents inside factories, robots, and supply chains. Yet, while the surface impression is progress, the deeper truth is that the global market is still mono-agent, doing tool calls rather than cooperation. True multi-agent systems, agents coordinating as teams, are only in their infancy.

However, the direction of travel is now known; we are entering a world where most workflow, coordination, planning, and even knowledge work will be executed by agentic systems, not by generative models. And this shift will be bigger and more transformative than the GenAI wave for three reasons:

  1. agency automates tasks, not content;
  2. agency scales labor;
  3. agency restructures firms, workflows, and entire industries.

GenAI revolutionized output creation. Agentic AI is the wave that will revolutionize the entire concept of work.


1. The emerging agentic market: still mono-agent, but crystallizing fast

Although the industry uses the language of “multi-agent AI,” today’s systems are, bluntly, one-agent wrappers around LLMs. Moveworks, the leader in enterprise AI service management, provides a single enterprise assistant that resolves tickets, completes HR workflows, and updates internal systems. It behaves like a highly competent internal employee: it resets passwords, rewrites policies, updates CRM fields, books travel, and links to Jira or Workday. But all this flows from a single agent orchestrating many tool calls. It is not yet coordinating with other autonomous agents; rather, it is acting as a “meta-employee” for the enterprise.

The same is true for Microsoft’s Autogen-based internal systems, and for Google’s Gemini Code Assist. They are not yet multi-agent societies; they are intelligent single execution loops with planning sequences.

Only a few players push into real multi-agent autonomy. CrewAI is an open-source Python library that allows the orchestration of multiple AI agents as a real project team. Instead of settling for a single generalist assistant, one can create a squad of specialized AIs, each with a role, a mission, and the ability to communicate with its colleagues. The agents are powered by LLMs such as GPT-4o or Claude. Each agent acts within its field, collaborates with the others, and contributes to advancing the mission. Everything is coordinated by a manager agent who orchestrates the team. What makes CrewAI so powerful is its role-based agents, its shared contextual memory system, and its ability to handle complex inter-agent conversations.

LangGraph is another case in point: an orchestration framework designed for building multi-agent AI systems with large language models. It allows developers to create complex, dynamic workflows as graph structures in which multiple AI agents interact, collaborate, and maintain context and state over long-running tasks. LangGraph excels at managing multi-agent communications with fine-grained control over application flow, enabling reliable, customizable, and scalable agentic AI applications, including conversational agents, task automation, and decision-making systems.
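To make the role-based pattern concrete, here is a minimal sketch in the style of CrewAI’s documented Agent/Task/Crew pattern. Exact signatures vary between library versions, and the roles, goals and task descriptions below are invented for illustration.

```python
# pip install crewai  (sketch based on CrewAI's documented pattern;
# signatures may differ across versions)
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Market Analyst",
    goal="Summarise competitor moves in the enterprise AI market",
    backstory="A specialist in technology market research.",
)
writer = Agent(
    role="Report Writer",
    goal="Turn the analyst's findings into a one-page executive brief",
    backstory="An editor who writes for time-poor executives.",
)

research = Task(
    description="List the three most significant competitor moves this quarter.",
    expected_output="A bullet list with a one-line rationale per move.",
    agent=analyst,
)
brief = Task(
    description="Write an executive brief from the research output.",
    expected_output="A one-page brief in plain business English.",
    agent=writer,
)

crew = Crew(agents=[analyst, writer], tasks=[research, brief])
result = crew.kickoff()  # the orchestrator runs the tasks in sequence
print(result)
```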

But even here, most use is experimental. Agents cooperate to write reports, run simulations, or perform market analysis, not to autonomously run supply chains or operate financial systems.

China is the exception. Its industrial platforms—DingTalk, Tencent’s scenario platforms, Baidu’s Apollo, ByteDance’s commerce systems—use multi-agent structures, because the underlying digital ecosystems are unified. DingTalk agents negotiate task assignments and approval flows. ByteDance’s HiAgent allows pricing, logistics, advertising, and inventory agents to coordinate asynchronously. Baidu Apollo’s self-driving system is multi-agent by necessity, allowing vehicles to learn from collective driving experiences and scenario data. This distributed multi-agent structure enables scalability and fleet-level optimization, supporting real-time scenario simulation, validation, and model updates that enhance safety and performance.

The necessity for this multi-agent approach stems from the complexity and safety-critical demands of autonomous driving. No single monolithic model can efficiently and reliably handle the wide range of subtasks required in diverse driving environments. Instead, modular agents specializing in perception, mapping, prediction, and planning enable parallel processing, robustness, and modular upgrades. Real-time interaction of these agents ensures continuous adaptation to changing conditions, while fleet-wide coordination facilitates system improvements on a large scale.


2. Why agentic AI is the next wave

(and bigger than generative AI)

To understand why agentic AI will overshadow generative AI, we must look at what it fundamentally changes. Generative AI produces content. That is powerful; coding assistants like Cursor or ChatGPT can generate boilerplate, transform legacy systems, and help developers. But content generation has natural limits: content is the output of tasks, not the tasks themselves.

Agentic AI flips this relationship. It automates the task, not the output. Instead of generating an email, the agent reads the email, opens Salesforce, retrieves context, drafts a response, updates the opportunity, books a meeting, and files a ticket. Instead of summarizing policy documents, the agent updates compliance workflows, sends approval requests, writes the audit trail, and coordinates with five other internal systems.
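A compressed sketch of that difference follows: a single agent working through a tool plan until the task, not just the content, is complete. The tool registry and fixed plan are hypothetical stand-ins; in a real system an LLM planner would choose each step and the tools would call live APIs.

```python
# Hypothetical tool registry; real tools would call a CRM, a calendar,
# or a ticketing API rather than returning canned strings.
TOOLS = {
    "read_email":   lambda ctx: {**ctx, "email": "Client asks to move the demo"},
    "fetch_crm":    lambda ctx: {**ctx, "account": "ACME, opportunity stage 3"},
    "draft_reply":  lambda ctx: {**ctx, "reply": "Proposing Thursday 10:00"},
    "book_meeting": lambda ctx: {**ctx, "meeting": "Thursday 10:00 booked"},
}

def run_agent(goal: str) -> dict:
    ctx: dict = {"goal": goal}
    plan = ["read_email", "fetch_crm", "draft_reply", "book_meeting"]
    for action in plan:           # the plan is hard-coded here; in practice
        ctx = TOOLS[action](ctx)  # an LLM would pick each action dynamically
    return ctx

print(run_agent("Handle the client's rescheduling request"))
```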

An agent is not a “smart model”; it is a worker. And when workers scale, so does value creation. This shift has three systemic implications:

First, agentic AI attacks the coordination costs of the firm, the deepest cost structure identified by Coase. If an agent can schedule meetings, allocate resources, file claims, run ETL pipelines, reconcile invoices, and coordinate inventory, the firm transforms from a hierarchy of labor into a network of autonomous processes. Productivity is no longer linked to headcount.

Second, agentic AI enables stacked multipliers: one agent helps sales; ten agents run an entire sales pipeline; fifty agents automate a supply chain. Generative models do not scale this way.

Third, agentic AI displaces GenAI-native companies because generation is commoditized. Once agents automate tasks directly, the value shifts from content to execution. Cursor helped developers write code; agentic systems automate the entire issue lifecycle: triage → fix → test → deploy → notify stakeholders. The more agentic systems mature, the weaker the standalone GenAI-only products become.

3. Why agentic AI will reshape employment

(more than GenAI ever could)

Predictive AI affected forecasters and analysts. GenAI affected writers, coders, and creatives. Agentic AI affects everyone, because it automates the workflows that make up jobs.


Moveworks and ServiceNow already demonstrate that a single agent can absorb 40–70 percent of IT and HR tickets. In major companies, this is the equivalent of replacing dozens of support staff. ByteDance HiAgent coordinates advertising, logistics, and customer support, reducing labor requirements across multiple domains simultaneously. DingTalk agents in China automate HR, finance, and purchasing workflows for millions of SMEs. Unlike GenAI, which “augments,” agentic AI executes. It can read emails, log into systems, reason over multi-step workflows, call APIs, and make decisions.

Moreover, multi-agent systems will automate coordination, the highest-level human activity in firms: resource allocation, scheduling, negotiation, prioritization. This is why the shift is more profound than the move to GenAI. GenAI replaced creation, but agentic AI replaces coordination, which is what managers, administrators, and entire corporate functions are paid to do.

EARLY EVIDENCE: AGENTIC AI’S IMPACT ON WORK

Moveworks

  • Used by >200 enterprises (DocuSign, Slack, Palo Alto Networks)
  • 40 percent of all IT issues solved end to end by agents
  • Up to 70 percent in the most automated deployments
  • Equivalent to replacing 20–50 support staff in a 10,000-employee corporation

ServiceNow Agent Workspace & Now Assist

  • For Fortune 500 clients, GenAI+agents reduce 30–50 percent of service-desk workload.
  • One major European bank automated 2.4 million annual tickets, reducing staffing needs by the equivalent of 600–900 FTEs.
  • Toyota, Deloitte, and Target report double-digit reductions in manual case handling.

ByteDance HiAgent

  • One agent replaces 8–12 human operators in e-commerce operations.
  • Labor requirements in trial teams fell by 38–52 percent.

Alibaba DingTalk Agents

  • Used by >20 million SMEs in China.
  • SMEs reduce administrative staffing by 30–60 percent after agent deployment.
  • HR teams shrink from 5–7 staff to 1–3 in typical 200–500 employee firms.

4. Agentic AI may oblige GenAI-only startups to reinvent themselves

The GenAI SaaS wave (2020–23) produced an explosion of startups offering “smart content.” But the economics of agentic AI destroy that value proposition. A GenAI-only product generates a document, a query, or a piece of code. An agentic AI system reads the requirement, executes the task, interacts with systems, and completes the process.

Cursor is already facing this reality. Although it is a brilliant coding assistant, agentic systems like Devin or GPT-based Code Agents can automate entire tickets end to end, making a coding editor assistant insufficient. Jasper and Copy.ai have declined sharply in usage because marketing agents can now plan campaigns, test variants, analyze CTR, adjust budgets, and post on social media, not just generate copy. The more agentic AI improves, the more GenAI-only tools lose relevance. Why use a coding assistant when an agent can build, test, deploy, and monitor features? Why use a customer-service chatbot when an agent can resolve the case?

GenAI tools focused on “generation” become components, not products. Agentic AI is not a new product category; it is a platform shift that absorbs generation entirely.

As an example, consider Moveworks. It represents the first generation of enterprise agentic platforms, a single agent with deep enterprise integration and thousands of pretrained workflows. Its competitive strength lies in the density of integrations, not the intelligence of the model. It is a mono-agent that behaves like an entire tier-1 support team. This is why ServiceNow acquired Moveworks: it fits into a broader agentic vision where each enterprise function gets an autonomous system.

CrewAI and LangGraph represent the second generation—multi-agent orchestration frameworks, where different agents assume different roles, negotiate tasks, and pass control. These frameworks are early, messy, and experimental, but they are the seeds of a future where enterprises run dozens or hundreds of cooperating agents across departments. In China, DingTalk and ByteDance are already moving toward multi-agent ecosystems with specialized agents that cooperate across logistics, finance, inventory, marketing, and HR. In many ways, China is executing the true multi-agent vision earlier, because its digital ecosystems are unified.

Conclusion: The age of agency will restructure the economy; be ready for it

The next wave of AI is not about models but about actions. It is not about intelligence but about coordination. It is not about content but about workflows. Agentic AI will reshape firms, collapse coordination costs, create new digital labor forces, disrupt GenAI-only companies, and permanently alter labor markets.

Mono-agent systems will dominate in the short term, but multi-agent cooperation will define the long-term landscape. China is ahead in deployment, the US in frameworks, and Europe in industrial integration, if it can lower its cost of deployment. Agentic AI is not the next step after GenAI. It is likely a replacement for it. The era of autonomous work beyond simple robotics has begun.

Managers must shift from supervising workflows to owning outcomes, because agentic AI automates the coordination tasks that once defined managerial work. Their role becomes that of system architect, not task allocator, designing which workflows agents execute, setting guardrails, and auditing AI decisions. They must manage constraints, not steps: accuracy thresholds, compliance logic, escalation paths, and risk boundaries. Data stewardship becomes central, since agentic AI’s performance depends on clean data flows, standardized processes, and interoperable systems. Metrics move from micro-monitoring humans to macro-monitoring system productivity, error vectors, and escalation patterns. Managers must redeploy humans into high-judgment roles: exception handling, negotiation, creativity, and cross-functional sense-making. They must also master AI risk management through audits, drift monitoring, red-teaming, and scenario testing.

Ultimately, managers evolve into hybrid orchestrators of humans and agents, responsible for strategic alignment, workflow design, constraint definition, and organizational learning. The quantity of managerial labor declines, but the strategic intensity of what remains increases sharply.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

Moral Engineering: When AI Decides Where It’s Safe to Walk https://www.europeanbusinessreview.com/moral-engineering-when-ai-decides-where-its-safe-to-walk/ https://www.europeanbusinessreview.com/moral-engineering-when-ai-decides-where-its-safe-to-walk/#respond Sun, 18 Jan 2026 16:54:59 +0000 https://www.europeanbusinessreview.com/?p=241816 By Vladimir Spinko While AI is increasingly deployed in humanitarian, security, and disaster-response domains, the real challenge lies not in detection accuracy but in moral decision thresholds. This article examines […]


By Vladimir Spinko

While AI is increasingly deployed in humanitarian, security, and disaster-response domains, the real challenge lies not in detection accuracy but in moral decision thresholds. This article examines how probabilistic AI systems translate uncertainty into life-critical actions, exposing hidden biases, accountability gaps, and the necessity of human oversight in high-risk environments.

Currently, most drone systems, whether used in humanitarian demining, disaster mapping or security surveillance, act primarily as monitoring tools. Their role is to gather sensor data (visual, thermal, radar, LiDAR, SAR, etc.) and present it to a human operator, who interprets the scene and makes decisions. The system’s value lies in extending human perception and reducing direct risk to operators; risk assessment, judgment, and decision-making remain human tasks.

AI systems deployed in high-risk humanitarian and security environments do not “know” that an area is safe. They operate on probabilistic inference. A drone-mounted ground-penetrating radar (GPR), thermal imager, or synthetic aperture radar (SAR) does not produce a binary output; it generates confidence intervals. A former minefield is not “clear”; it is classified as, for example, 98.2% likely free of unexploded ordnance (UXO) based on sensor fusion and historical priors.

Yet the ethical problem begins at the threshold: who decides whether 99% confidence is sufficient, or whether 99.9% is required? In humanitarian demining, the difference between these two numbers is not philosophical; it is operational. At 99% confidence, one out of every 100 “cleared” zones may still contain a lethal device. At 99.9%, that drops to one in 1,000, but the cost is non-linear. Survey time increases, sensor passes multiply, and operational budgets inflate rapidly.
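A back-of-envelope sketch of that non-linearity follows. All figures are hypothetical, including the assumption that reaching 99.9% requires four sensor passes instead of one; the point is only that residual risk falls tenfold while the survey bill multiplies.

```python
# Hypothetical figures: residual risk vs. survey cost at two thresholds,
# assuming the higher threshold requires four sensor passes instead of one.
zones = 1_000

for confidence, passes in [(0.99, 1), (0.999, 4)]:
    expected_missed = zones * (1 - confidence)  # zones that may still hold UXO
    survey_cost = zones * passes                # cost in arbitrary units
    print(f"threshold {confidence:.1%}: ~{expected_missed:.0f} hazardous "
          f"zones missed, survey cost {survey_cost} units")
# Moving from 99% to 99.9% cuts expected misses from ~10 to ~1,
# but under these assumptions it quadruples the survey cost.
```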

The Black Box in the Field: Demining is context-dependent

The boundary between passive monitoring and autonomous action is increasingly blurred in security and demining operations. Traditional drone systems collect sensor data (visual, thermal, radar, LiDAR, SAR) for human operators to interpret, extending perception and reducing direct risk. Emerging applications, however, are moving toward systems that not only detect threats but reason about them and initiate action.

In security contexts, this shift is already apparent. In early November 2025, unauthorized drone incursions over European airspace forced temporary closures of Brussels and Liège airports, leading to dozens of cancelled or diverted flights and hundreds of stranded passengers. These incidents illustrate how sensor data can quickly escalate into high-stakes operational decisions, including airspace shutdowns, flight diversions, and emergency deployment of security personnel. In future deployments, AI may be tasked with evaluating whether a drone is hostile and determining the optimal response, effectively performing real-time risk assessment and action selection.

A similar evolution can be envisioned in demining. AI-enabled systems might integrate radar signatures, terrain models, and historical minefield data to compute probabilities that a given route contains unexploded ordnance. Based on these calculations, the system could recommend rerouting or flag high-risk zones, translating probabilistic assessments into operational decisions. In both domains, this represents a move from “what is” (detection of an object or anomaly) to “what should be done,” embedding normative judgments within machine reasoning.
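To make “computing probabilities from radar signatures and historical priors” concrete, here is a minimal Bayesian update sketch. The 2% prior and the detection and false-alarm rates are invented for illustration; real systems fuse many sensor channels, but the arithmetic has the same shape, and the probability never reaches exactly zero.

```python
def posterior_uxo(prior: float, p_signal_given_uxo: float,
                  p_signal_given_clear: float, signal: bool) -> float:
    """Bayes' rule: update P(UXO present) after one sensor reading."""
    p_uxo = p_signal_given_uxo if signal else 1 - p_signal_given_uxo
    p_clear = p_signal_given_clear if signal else 1 - p_signal_given_clear
    evidence = p_uxo * prior + p_clear * (1 - prior)
    return p_uxo * prior / evidence

# Assumed numbers: 2% prior from historical minefield records;
# the sensor fires on 90% of real UXO and 5% of clear ground.
p = 0.02
for reading in [True, False, False]:  # one hit, then two clear passes
    p = posterior_uxo(p, 0.90, 0.05, reading)
    print(f"after reading {reading}: P(UXO) = {p:.4f}")
```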

Who Takes the Blame When AI Gets It Wrong, and What Gets Lost When We Rush?

AI systems in high-stakes humanitarian contexts face a structural challenge: reproducing human decision-making where risk is non-linear and outcomes are irreversible. Human operators in demining, air traffic control, or evacuation coordination rely on tacit knowledge, pattern recognition, heuristics, and situational ethics. Two experienced deminers may assess the same terrain differently, and both outcomes can be operationally “successful,” yet these cannot be directly encoded as ground truth for AI.

Humanitarian demining and civil aviation, though working in very different environments, both rely on exceptionally detailed global standards designed to manage extreme, life-critical risks. These protocols – from step-by-step clearance procedures to tightly regulated maintenance and verification routines – exist precisely because even the smallest oversight can have catastrophic consequences. Yet history shows that even in the most regulated, checklist-driven industries, the probability of error is never zero. Conditions shift, edge cases emerge, and humans adapt in ways no guideline can fully capture.

In less regulated domains such as disaster relief, the uncertainty becomes even more acute. There are no globally standardized guidelines, decisions must be made within minutes, and information arrives incomplete or distorted. Drones play a crucial role in rapid damage assessment and victim search. Still, the cost of error is high: a misread thermal signature or an incorrectly flagged “safe” corridor can redirect rescuers away from those who need help most. In the chaos of an unfolding emergency, even a subtle algorithmic bias can escalate into a life-critical failure.

This creates a critical risk of bias in training data. Systems trained on historical minefields – for example, ordnance from past European conflicts – may miss improvised or novel devices in new theaters. AI that performs well on familiar terrain can become dangerously overconfident elsewhere. Auditing such hidden biases requires both technical validation against diverse datasets and contextual review by experienced operators.

The problem is amplified when decisions involve value trade-offs, such as exposing one operator to protect many civilians or delaying action to gather more data. Models that reduce these judgments to single metrics can obscure moral and operational complexity. The key question remains: how do we audit for hidden bias and ensure AI outputs are interpreted with human judgment, especially when the system’s confidence may give a false sense of certainty?

Human-in-the-Loop vs. Human-on-the-Loop: Ethical Oversight in Life-Critical AI

AI and automated systems face a unique trust dynamic: when a robot or algorithm errs, the perceived penalty is higher than for an equivalent human error. This phenomenon, known as algorithm aversion, has been documented in aviation and automation research, where a single automation failure reduces operator trust more sharply than a comparable human mistake. Even statistically sound AI recommendations may be questioned or rejected because the error seems opaque or the rationale is difficult to interpret.

From a moral and engineering standpoint, this raises the question of how humans should be integrated into decision loops. Should operators remain fully “in the loop,” approving every AI-generated action, or is it acceptable for them to be merely “on the loop,” monitoring decisions without direct intervention? In life-or-death safety contexts — whether clearing minefields, controlling air traffic, or directing emergency responses — maintaining a human “in the loop” is arguably ethically mandatory to ensure accountability and prevent catastrophic outcomes.
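The distinction can be stated almost mechanically. In this sketch, “in the loop” blocks until a person approves, while “on the loop” acts first and leaves an audit trail; the function names, confidence value and sign-off stub are hypothetical placeholders.

```python
def ai_recommend(zone: str) -> tuple[str, float]:
    """Placeholder model output: (recommended action, confidence)."""
    return "mark_safe", 0.991

def operator_approves(zone: str, action: str, conf: float) -> bool:
    """Stub for a real sign-off interface; hardwired here for the sketch."""
    print(f"review request: {action} on {zone} (conf {conf:.1%})")
    return False  # the human withholds approval

def human_in_the_loop(zone: str) -> str:
    action, conf = ai_recommend(zone)
    # Blocking gate: no action is taken until a person signs off.
    if operator_approves(zone, action, conf):
        return action
    return "hold_for_review"

def human_on_the_loop(zone: str) -> str:
    action, conf = ai_recommend(zone)
    # Non-blocking: the system acts immediately; the operator monitors
    # the audit trail and can override after the fact.
    print(f"audit log: {zone} -> {action} (conf {conf:.1%})")
    return action

print(human_in_the_loop("sector-7"))  # -> hold_for_review
print(human_on_the_loop("sector-7"))  # -> mark_safe
```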

The broader concern is a slippery slope: today it is demining, tomorrow it could be AI assessing structural integrity after earthquakes, or planning evacuation routes during wildfires. Designing these systems requires embedding core ethical principles from the outset – including transparency, explainability, and explicit human oversight to prevent misuse or unintended harm. Without such safeguards, even highly accurate systems risk eroding trust and producing errors with consequences far beyond what their statistical performance might suggest.

From Mortal Consequences to Asset Loss: AI Risk Across Domains

There is a fundamental difference between systems used in life-critical domains and those used in security or interdiction tasks. In demining or air traffic control, a single error can directly translate into human death or serious injury. A false “safe” decision is irreversible. In these contexts, acceptable error rates approach zero, and every design trade-off is implicitly a moral decision about how much human risk can be tolerated.

The “hostile drone” problem operates under a different cost structure. If an innocent drone is misclassified and destroyed, the outcome is usually a financial loss – a $500–$2,000 asset written off, not a human casualty. That asymmetry changes how risk is framed: systems are allowed to be more aggressive because the downside of error is economically acceptable when weighed against the potential threat to civilian aircraft. Treating both domains as equivalent hides this reality and produces dangerously misleading safety assumptions.

Decision-Making AI in Agriculture: Benefits and Minimal Consequences

Smart or precision agriculture illustrates a very different ethical and operational landscape. Unlike life-critical systems, AI here is often tasked not only with monitoring but also with decision-making, such as targeted fertilization, irrigation, or pest control. The advantage is clear: robots can optimize input use, reduce waste, and adjust treatments with high spatial precision, improving efficiency and crop yield.

However, the stakes of error are low. A misapplied fertilizer or missed weed patch rarely causes irreversible harm: the cost is usually economic or environmental, not human. Moreover, these systems operate in relatively well-understood domains with few input dimensions (soil moisture, nutrient levels, weather), and training paths are straightforward and highly supervised. In other words, precision agriculture AI is almost a “toy problem” compared with humanitarian or aviation applications: the margin for error is large, consequences are reversible, and the path from training data to safe deployment is relatively obvious.

About the Author

Vladimir Spinko is the founder of Aery Bizkaia, a deep-tech startup developing AI-powered CSAR radar systems for autonomous landmine detection. A graduate of MIPT and former COO at Aeroxo, he combines advanced physics, aerospace innovation, and humanitarian impact to redefine post-conflict safety.

Authentic Leadership Shifts in the AI Age https://www.europeanbusinessreview.com/authentic-leadership-shifts-in-the-ai-age/ https://www.europeanbusinessreview.com/authentic-leadership-shifts-in-the-ai-age/#respond Tue, 13 Jan 2026 03:11:46 +0000 https://www.europeanbusinessreview.com/?p=241285 By Mostafa Sayyadi and Michael J. Provitera When the AI train pulls in to the platform, the accompanying steam clouds of uncertainty may disorientate those waiting in the station. Here, […]


By Mostafa Sayyadi and Michael J. Provitera

When the AI train pulls into the platform, the accompanying steam clouds of uncertainty may disorientate those waiting in the station. Here, Mostafa Sayyadi and Michael J. Provitera suggest that authenticity and psychological capital are key concepts in the success, or otherwise, of leadership in restoring clarity and confidence among the workforce.

Artificial intelligence is bringing new changes to the future of business.1,2,3,4 Authentic leaders are those executives who can effectively guide their organizations through these tough transitions. AI has the potential to improve human skills. However, it also worries employees, who may fear that AI will take their jobs. Authentic leaders have an important duty now: to introduce AI ethically and thoughtfully and to balance the path forward. These leaders are very optimistic about AI’s potential to aid humans. They are also very realistic about employees’ concerns. Through open discussions, authentic leaders will ease uncertainty about AI. These leaders explain how AI can work with employees, not against them. They also focus on ethics and people, not just efficiency. Adapting to change is hard. Nevertheless, an organization can thrive in the digital future with authentic leadership focused on empowering people.


Moreover, authentic leaders inspire trust to guide transitions. Their motivation and belief in people’s abilities provide reassurance despite uncertainty. Authentic leaders persevere in finding ethical AI approaches, choosing human values over convenience. These leaders also show flexibility in readying their workforce to use AI responsibly. Psychological strengths like hope and resilience help authentic leaders manage AI’s risks.5,6 With an ethical digital culture focused on human collaboration with AI, authentic leaders can steer their companies through complex changes. Employees will then feel excited by AI’s potential, not threatened.

Digitalization, AI Data Reliance, and the Critical Role of Authentic Leadership

Digitalization brings many changes, from remote work and skills gaps to complex, matrixed organizations that present leaders with daunting challenges when modeling desired behaviors such as transparency and morality. To meet these challenges, executives need to adopt authentic leadership mindsets to move their companies in the direction of digital transformation. Authentic leaders can make people feel more confident in their ability to guide the organization through big, complex changes driven by new technologies like artificial intelligence. Their self-confidence and related motivation provide reassurance and make people feel less uncertain, even when things feel unclear or worrying because of the changes, helping the organization manage transitions capably. They can also reassure people amid uncertainty by openly displaying their drive, motivation, and leadership capability through their words and actions.

In addition, authentic leaders are aware of how AI’s reliance on data affects values. This reliance may compromise traditional values like transparency and morality, leading leaders to rely more heavily on algorithmic insights than on the human reasoning capability and judgment learned over years of experience.7,8 As pressures to rely solely on data increase, authentic leaders who excel in psychological capital elements such as hope can identify alternative ethical AI approaches that respect transparency and human impacts. Authentic leaders aspire to ethical methods over easy shortcuts that could compromise values. These leaders choose hope over convenience when faced with tough situations.

Machine Learning, AI Opacity, and the Critical Role of Authentic Leadership

Executives who adopt an authentic leadership style know how to deal with the impacts of machine learning and AI opacity. AI systems tend to be inherently opaque, concealing their inner workings from users and powered by proprietary black boxes. This lack of transparency contradicts authentic leaders’ emphasis on openness and transparency. Authentic leaders prioritize open dialogue when making and explaining decisions. Opaque AI systems can make it difficult or impossible to explain AI-generated recommendations or insights. This can lead to distrust and anxiety among employees affected by mysterious AI systems. Employees may worry that AI could be biased and unethical, leading to unethical uses that go undetected and unchecked by leaders who cannot monitor or audit algorithms to guarantee their moral behavior.

Authentic leaders act to mitigate any gaps between opaque AI systems and their values of openness.9,10 These leaders also take proactive steps (e.g., ethics reviews, algorithm audits, or explainable AI techniques) to restore transparency and build trust despite the artificial opacity created by systems that are mismatched with these values. Otherwise, the result may be distrust among users.

However, a key question remains: what can help authentic leaders manage opaque AI systems? Resilience is an interesting component of psychological capital, a newer theory in the area of motivation. Positive organizational behavior focuses on developing the follower or employee to become all they can become by setting stretch goals. These goals could be quantum leaps for the organization and its employees. When employees reach their full potential, their performance increases, along with their well-being. Psychological capital means creating an organization with high levels of each of these resources, coupled with organizational capital, human capital, and social capital: intangible features of organizations that may improve not only competitive advantage but also profitability.

Recent research shows that resilience helps authentic leaders adapt to changes from opaque technologies like AI.11 Resilience improves these leaders’ ability to recover from problems and introduce algorithm auditing to restore transparency. Authentic leaders who adapt well to change show flexibility in the face of AI disruption by proactively planning training to develop responsible AI skills in their workforce. From healthcare research, it emerges that adaptability empowers authentic leaders high in resilience to put in place ethical checks on black-box algorithms and find new ways to maintain transparency when faced with opaque AI and machine learning systems.


Organizational Trust, Ethical Digital Culture, and the Critical Role of Authentic Leadership 

Authentic leaders foster organizational trust in AI. Since employees fear job losses due to automation and AI, authentic leaders proactively build employee trust by communicating openly and transparently about how AI will augment roles rather than replace jobs.12 Authentic leaders also strike the right tone when discussing how AI and technology will impact jobs. They are very realistic: AI will transform roles and potentially replace some jobs. Authentic leaders transparently acknowledge this fact while providing reassurance and optimism, focusing on how AI can augment human skills and talents. Technology has historically created more opportunities than it destroys, and AI can significantly boost human productivity. This balanced, hopeful realism can ease fears of change. Authentic leaders build trust through various means: their self-assurance allows them to explain openly how AI enhances human skills, while their motivation helps workers believe they can succeed with AI’s help. They can effectively design systems that enable teamwork between humans and AI, and their vision for employees provides reassurance against fears of replacement.

Another key point for executives is that the era of AI also promotes new models of ethical use of technology and shapes a new model of corporate culture in which employees should be aware that AI can assist humans and not replace them. Here, fostering an ethical digital culture is critical. As AI becomes more integrated, authentic leaders vocally and visibly promote responsible and ethical use of the technology. These leaders make it clear that AI is meant to enhance, not diminish, uniquely human abilities like creativity, empathy, and judgment. Authentic leaders demonstrate ethics to build employee trust in AI. They create an ethical organizational culture where AI assists humans rather than replacing them. Their optimistic approach includes monitoring the ethical aspects of AI while keeping its focus on helping humans. These actions contribute to creating an ethical climate while shaping a culture where opportunities for human enhancement arise.

AI Assessment, Data-Driven, People-Centric Metrics, and the Critical Role of Authentic Leadership 

Authentic leaders can also answer a key question: does the assessment of AI impact people’s oversight? Every executive should be aware that, as any company or organization considers investing in new technology like robots or advanced computer programs, its leaders often start with an assessment covering financial, work-process, and organizational considerations. But authentic leaders fully understand that it is about more than numbers. They spend time considering how these significant changes might affect the people who work there, through factors like job security and well-being. In particular, leaders need to ask key questions about any proposed changes: “Will these changes cause someone else’s job or career to change?” and “Who might find adapting difficult?” In addition, authentic leaders have an ethical responsibility when decisions could impact others. They meet it by providing extra training or other forms of support to employees who feel threatened by forthcoming adjustments. These leaders dig further to determine why employees feel threatened before devising strategies.

In addition, authentic leaders have an ethical responsibility to prioritize workplace humanity. They emphasize transparency, ethics, and morality to create trust when implementing AI or emerging technologies. Executives implementing AI must not accept data-driven metrics or efficiency gains alone as justification. This represents an ethical breach. They have a moral duty to consider how AI might subjugate human dignity or dehumanize aspects of work. If an AI system’s metrics treat people like disposable cogs, then its implementation must be rejected on moral grounds. Leaders must mitigate these risks by showing openness, carefully assessing impacts, and upholding an unwavering dedication to safeguarding people’s humanity. When integrating AI systems, authentic leaders communicate the purpose and limitations of these technologies and consider any risks that might occur, rather than dismissing them outright. They also guard against profit-driven attempts to turn work into mechanical, data-driven processes devoid of meaning. Authentic leaders prioritize respecting human dignity over productivity gains or cost savings. By doing this, these leaders in the digital era champion timeless human-centric leadership virtues such as morality, empathy, and ethical empowerment as an antidote to technology’s potentially dehumanizing effects. Accordingly, authentic leaders set an example of ethical AI deployment that augments rather than diminishes workplace humanity.


Practical Guidance for Leading Through the AI Era 

The changes resulting from AI are challenging for leaders across the globe. They can adopt an authentic leadership style to positively impact their subordinates. Employees can feel threatened that AI will take their jobs. Authentic leaders can communicate openly and honestly. They also need to explain how AI can help people do their jobs better and focus on more human activities like creativity and empathy. Authentic leaders can make sure that AI is used ethically. They can also highlight that AI makes processes more efficient. Here, authentic leaders also consider the human impacts. They frankly reject uses of AI that diminish human dignity or treat people like machines. They can assess the risks and communicate them. Therefore, these leaders emphasize AI supporting humans rather than replacing them.

The next key point is that authentic leaders reinforce the culture of the organization by providing extra support and training to help workers adapt. In doing this, they may consider the following recommendations:

  • Stay optimistic and focus on how AI can enhance human skills. Be open and have ongoing dialogues with workers about impacts. Provide extra training and support to help them adapt.
  • Implement AI carefully and ethically. Before deploying it, do complete assessments of how it will affect people and jobs. Reject uses that could diminish human dignity. Make ethical oversight of AI a visible priority.
  • Keep communicating with authentic transparency. Explain the purpose and limitations of AI. Be realistic yet hopeful about how roles will transform. Reassure people that new tech will augment, not replace, jobs.
  • Keep people at the center.
  • Shape an ethical climate of learning and trust. Adopt AI as a collaborative tool to expand what humans can achieve.

Practical Guidance for Developing Psychological Capital 

In this section, we present recommendations for executives to develop psychological capital coupled with their authentic leadership. With AI so prevalent, hope is important, and bouncing back from setbacks is equally important. Self-efficacy, people’s belief in their capacity to influence events that may affect their lives, is needed to deal with new situations and to transfer talent when necessary. In our conversations with Australian executives in Melbourne, Sydney, and Perth, we found that the following tenets of psychological capital can strengthen authentic leaders as they manage change and encourage a culture of humans and artificial intelligence working together. Based on our findings, we present the following practical ideas for authentic leaders:

  • Hope – Use hope to provide training for the visualization of quantum goals and contingency planning. Have leaders and followers practice flexibility in finding new solutions by providing hope coupled with resources and tolerance for possible mistakes when they occur.
  • Resilience – Use resilience to build rapid recovery from artificial intelligence drawbacks such as cyberattacks. Resilience can be developed through workshops on managing stress, bouncing back from setbacks, and change-management skills that reframe negative stimuli.
  • Self-efficacy – Use self-efficacy to boost confidence by setting achievable targets that stretch skill levels. Provide mentoring from senior and tenured employees to those recently onboarded, and give positive feedback on accomplishments whenever possible. We found that cross-training in similar areas of the business units helps people grow in depth and breadth.
  • Optimism – Use optimism to foster techniques such as identifying negative thought patterns and changing them to positive ones, reframing the issues positively, and focusing on opportunities when they arise.
  • Transparency – Use transparency to enhance education and communication. Transparency must be coupled with clear, ongoing communication of plans, always inviting input and encouraging sharing across the workforce.

In Conclusion

By keeping people at the center, authentic leaders are the executives who can integrate AI effectively and ethically, elevating rather than diminishing both productivity and humanity. These leaders can guide their organizations through today’s digital transitions. In addition, by empowering the elements of psychological capital, authentic leaders can enhance the work environment and open up opportunities to expand and grow organizations in the AI age. Once workers are aware of the power of AI coupled with authentic leadership and positive organizational behavior (better known as psychological capital), they become more innovative and creative. Effective authentic leadership will also provide a supportive digital culture that helps employees feel empowered rather than threatened by AI, and embrace the dynamic decision-making that an AI mindset requires.

About the Authors

Mostafa Sayyadi works with senior business leaders to develop innovation in companies and helps businesses – from start-ups to the Fortune 100 – succeed by improving the effectiveness of their leaders. He is a business book author and a long-time contributor to top management journals, and his work has been featured in top-flight publications.

Michael J. Provitera is an associate professor of organizational behavior at Barry University, Miami, FL. He received a B.S. with a major in Marketing and a minor in Economics at the City University of New York in 1985. In 1989, while concurrently working on Wall Street as a junior executive, Dr. Provitera earned his MBA in Finance from St. John’s University in Jamaica, Queens, New York. He obtained his DBA from Nova Southeastern University. He is quoted frequently in the national media.


The post Authentic Leadership Shifts in the AI Age appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/authentic-leadership-shifts-in-the-ai-age/feed/ 0
From Compliance to Conviction: Leadership Readiness as the Binding Constraint of the AI Era https://www.europeanbusinessreview.com/from-compliance-to-conviction-leadership-readiness-as-the-binding-constraint-of-the-ai-era/ https://www.europeanbusinessreview.com/from-compliance-to-conviction-leadership-readiness-as-the-binding-constraint-of-the-ai-era/#respond Thu, 08 Jan 2026 06:50:31 +0000 https://www.europeanbusinessreview.com/?p=241138 By Deepika Chopra As artificial intelligence becomes embedded in high-stakes decisions, the primary constraint on scale is no longer technology or regulation, but leadership readiness. This article introduces leadership readiness […]

The post From Compliance to Conviction: Leadership Readiness as the Binding Constraint of the AI Era appeared first on The European Business Review.

]]>
By Deepika Chopra

As artificial intelligence becomes embedded in high-stakes decisions, the primary constraint on scale is no longer technology or regulation, but leadership readiness. This article introduces leadership readiness as a governable operating condition—one that determines whether AI insight translates into decisive action or stalls in hesitation. Moving from compliance to conviction requires treating readiness as infrastructure, not culture.

In AI-enabled organizations, decision failure rarely looks like error. More often, it appears as delay.

Recommendations are reviewed repeatedly. Analysis is rerun without new information. Decisions are deferred until certainty can be restored—even when certainty is no longer available. These behaviors are often attributed to risk aversion or resistance. In reality, they signal something more structural: leadership systems designed for deterministic judgment are being asked to govern probabilistic intelligence.

This mismatch has quietly become the dominant constraint on AI at scale.

Leadership Readiness As An Operating Condition

Leadership readiness is often described as mindset, culture, or change management. None of these definitions are sufficient for AI-mediated environments.

When machine-generated insight enters the decision loop, readiness functions as an operating condition. It determines how judgment is exercised under uncertainty, how authority is distributed when intelligence is shared, and how accountability is maintained when outcomes are probabilistic.

When readiness is weak, organizations compensate with process. When readiness is strong, they rely on governance.

The difference is visible not in experimentation, but in execution.

Why Compliance Plateaus

Over the past several years, organizations have made meaningful progress on Responsible AI. Ethics reviews, model oversight, and regulatory alignment have matured rapidly—and appropriately.

But compliance governs permission, not performance.

It answers whether AI can be used safely and responsibly. It does not answer how leadership judgment must adapt once AI is used routinely in decision-making. As a result, many organizations reach a plateau: systems are compliant, yet impact remains inconsistent. Intelligence exists, but conviction does not.

This is not a failure of ethics or regulation. It is a governance gap.

Decision Ownership Under Uncertainty

AI introduces a leadership challenge that many organizations have not named: decision ownership becomes ambiguous precisely when insight becomes abundant.

When recommendations are machine-generated, leaders must navigate questions that traditional governance never fully addressed:

  • When should the system be trusted?
  • When is override appropriate?
  • How should divergence from algorithmic output be explained?
  • Where does accountability sit when outcomes are probabilistic?

Absent explicit governance, these questions are resolved informally—through hierarchy, politics, or delay. Over time, informal resolution becomes normalized, and execution slows without appearing broken.

This is why AI adoption often stalls not at experimentation, but at commitment.

Readiness Must Be Measurable To Be Governable

What remains invisible cannot be governed.

Leadership readiness becomes actionable only when it is made visible across a small number of dimensions that directly influence decision behavior: trust in AI-generated insight, clarity of decision rights, confidence in escalation and override, and shared understanding of how AI should be used in context.

Structured diagnostics—such as a Human–AI Alignment Score™ (HAAS™)—can surface where these conditions are strong, where they are fragile, and where leadership attention is required. Used properly, such diagnostics do not evaluate individuals; they reveal system stress.

This allows leaders to intervene early, before hesitation hardens into execution drag.

The Compounding Cost Of Hesitation

In high-stakes environments—capital allocation, strategic investment, enterprise transformation—hesitation compounds quietly.

Decisions that should accelerate slow instead. Teams revalidate insights rather than act on them. Accountability diffuses across committees. Value erosion occurs incrementally, often unnoticed until recovery becomes expensive.

These patterns are not caused by insufficient data or flawed models. They are the predictable outcome of leadership systems operating without readiness governance.

Leadership Systems, Not Leadership Traits

It is tempting to frame readiness as a function of individual capability. That framing is incomplete.

Readiness is systemic. It emerges from how organizations define decision rights, reinforce accountability, and normalize uncertainty communication. Individual leaders operate within these systems, but do not create them alone.

As AI reduces the effectiveness of authority without alignment, leadership systems that reward clarity, coherence, and shared ownership outperform those that rely on positional control. This represents a structural shift in how leadership effectiveness is determined.

From Transformation Initiative To Operating Standard

Many organizations still treat AI as a transformation initiative—something to be rolled out, managed, and completed.

Leadership readiness cannot be implemented that way.

It functions as an operating standard, shaping how decisions are made continuously rather than episodically. Once established, it reduces friction rather than adding oversight. It replaces escalation with clarity, and process with conviction.

Leadership’s Obligation

The AI era will not be defined by the sophistication of systems, but by the maturity of leadership structures capable of carrying them. Organizations that treat readiness as infrastructure will convert intelligence into conviction. Those that do not will discover that no amount of analytical power can compensate for governance systems never designed to operate under uncertainty. This is not a technology challenge—it is a leadership responsibility.

About the Author

Deepika Chopra is Founder and CEO of AlphaU AI and the author of Move First, Align Fast. She works globally with leaders, boards, and investors on leadership readiness and decision-making in complex, high-stakes environments, focusing on how Human–AI collaboration can be governed to strengthen judgment, accountability, and execution at scale.

 Move First, Align Fast (Wiley 2025)

Get the book: Wiley or Amazon

The post From Compliance to Conviction: Leadership Readiness as the Binding Constraint of the AI Era appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/from-compliance-to-conviction-leadership-readiness-as-the-binding-constraint-of-the-ai-era/feed/ 0
Europe’s Software Crossroads: Why Product Thinking is Now Key to The Future https://www.europeanbusinessreview.com/europes-software-crossroads-why-product-thinking-is-now-key-to-the-future/ https://www.europeanbusinessreview.com/europes-software-crossroads-why-product-thinking-is-now-key-to-the-future/#respond Sun, 04 Jan 2026 13:06:55 +0000 https://www.europeanbusinessreview.com/?p=241068 By Roman Eloshvili For decades, Europe’s software industry has been built on services: outsourcing, custom development, and consulting. That model created jobs and export revenue, yes, but it is now […]

The post Europe’s Software Crossroads: Why Product Thinking is Now Key to The Future appeared first on The European Business Review.

]]>
By Roman Eloshvili

For decades, Europe’s software industry has been built on services: outsourcing, custom development, and consulting. That model created jobs and export revenue, yes, but it is now reaching its limits. As margins compress and AI reshapes standards, European software firms must rethink their future. Productization is now becoming a strategic necessity. And here is why.

Europe’s software sector has always had a clear and reliable engine powering it. Talented engineers, strong technical education, competitive costs — there were many factors that, when put together, made the region a natural hub for outsourcing, custom development, and IT consulting.

For a long time, this model worked well: it created thousands of companies and millions of jobs, ultimately helping position Europe as a critical supplier to global tech ecosystems. But today, that engine is losing momentum.

The ironic thing is that when we look at the overall IT services market, the numbers show that it keeps growing. By 2029, it is expected to reach $5.17 trillion. On paper, that sounds impressive, doesn’t it? And it is. But in practice, it also means a very crowded market where providers constantly compete on just about every front: speed, prices, efficiency of services, and so on.

As a result, for individual projects, it becomes harder and harder to find success. Pressure is intense, clients expect the same quality of outputs on lower budgets, and retaining top talent is a constant battle in its own right.

Yet at the same time, product-led companies show a major contrast. Despite often being smaller in headcount, they are scaling faster, raising capital more easily, and building far more resilient businesses. Private equity investment in SaaS saw a 66% jump in 2025, indicating that investor preferences lean increasingly toward scalable product companies rather than traditional service businesses.

The way I see it, this shift brings up a question that we must consider in earnest: can Europe’s service-first software companies remain competitive without evolving into product-first organizations?

Why the Service Model is Hitting a Ceiling

The traditional service model has three structural limitations that are becoming impossible to ignore.

First, services scale linearly, which, in simple terms, means that revenue growth depends heavily on headcount growth. That made sense when demand was booming and talent was abundant, but today, hiring is increasingly expensive and scaling teams across borders introduces a lot of operational complexity. Even well-run service firms eventually hit a growth ceiling under such conditions.

Second, as I already mentioned, margins are under constant pressure. With AI tools increasingly automating many development processes, tasks that used to justify large budgets and teams are now faster and cheaper to deliver with fewer people.

From the client’s point of view, this leads to a question: why should I pay the same price for something that now takes less time and effort? So they start pushing for lower rates and shorter delivery times. Project revenue drops while costs remain largely intact, as companies still need to invest in AI tools and the professionals to maintain them.

And finally, ambitious talent is harder to retain when you run a service-model business. Strong engineers are often driven by the desire to build something of their own, instead of executing someone else’s roadmap. And when your workers leave to join product startups or launch their own ventures, sustaining success in the long run becomes a much harder task.

None of this means services are “dead,” but it does mean the model, on its own, is no longer enough to sustain growth and competitiveness.

Why Productization is the Way to Go

Basically, it’s because product-led companies operate under a different set of rules. They decouple revenue from headcount, they create reusable value, and they can serve thousands of customers with the same core technology. Most importantly, they build their value by building assets, not just cultivating relationships.

This is why investors consistently reward product companies with higher valuations. It is also why ecosystems that produce strong product businesses tend to generate more innovation, capital reinvestment, and global influence over time.

Here’s a very recent example of this dynamic: Databricks recently raised $4 billion in yet another funding round. It’s a product-driven company that continues to bring in capital at significant valuations (the latest being $134B) and reinvests in its platform. This round is the third major one the company has had this year, and it’s clear proof that product-led firms can build resilient revenue models.

Because of this, for Europe, productization is a strategic necessity if the region wants to move up the value chain. IT businesses here need to put greater focus on creating proprietary software, platforms, and data-driven products.

Making the Transition is the Hardest Part — What to Account For?

That said, moving from services to products is not as simple as flipping a switch. A software company can’t do it overnight.

The very first major wall that you need to break through here is the difference in mindsets. Service businesses are built around predictability and optimized for client satisfaction. You sign a contract, and you deliver on a short time-scale. But product companies must tolerate a lot of uncertainty that comes with experimentation and delayed returns.

You build products without really knowing when — or even if — they will pay off. Progress is often uneven, and in order to push through on this change, company leadership must think beyond utilization rates and monthly revenue reports. They need to think in terms of long-term value creation.

The second challenge is how capital allocation works. In a service business, most spending is tied directly to client work: if you have a project, you have a budget. But product development requires you to invest time and money long before any kind of revenue has a chance to exist.

You must be willing to put in the work over time, funding your development teams and accepting that without this upfront dedication, your products will never reach a stage where they can start paying themselves off.

Finally, the third key challenge is keeping focus. Many companies fail to make the transition because they spread themselves too thin and try to do everything at once: continuing to run service operations, building multiple products simultaneously, customizing features for early customers. More often than not, this haste results in exhaustion and half-baked ideas that never become full-fledged products.

If you want to succeed, start small and with a much narrower approach. Pick one real, proven, and recurring problem that your company already understands deeply, and build a solution around that. This gives you a realistic chance to find your footing first — scaling can come later.

Image of the Future: Who Will Win

Despite all the hardships, I fully believe that what Europe’s software sector is facing now is not a decline or stagnation. What stands before it is a signal to change, and that signal needs to be heeded.

Companies that treat services not as the end goal, but as a foundation, will find themselves going a step further. Think of it like this: your previous client work can be used to understand the problems your clients are forced to deal with. That knowledge can then be turned into products with good odds of adoption, since they answer the practical needs of your audience.

“Productization” is the next stage of maturity for Europe’s software industry, and the key to building companies that last in the days to come.

About the Author

Roman Eloshvili is the founder and chief executive officer of XData Group, a B2B software development company. There, he directs the development of AI in banking while navigating investor relations and fostering business scalability. Mr. Eloshvili is a C-level executive with an extensive background in developing fintech solutions for banks and a serial entrepreneur with over 10 years in business administration across Europe.

The post Europe’s Software Crossroads: Why Product Thinking is Now Key to The Future appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/europes-software-crossroads-why-product-thinking-is-now-key-to-the-future/feed/ 0
5 Technologies that Will Reshape Travel in 2026 (And it’s Not What You Think) https://www.europeanbusinessreview.com/5-technologies-that-will-reshape-travel-in-2026-and-its-not-what-you-think/ https://www.europeanbusinessreview.com/5-technologies-that-will-reshape-travel-in-2026-and-its-not-what-you-think/#respond Sun, 04 Jan 2026 13:04:46 +0000 https://www.europeanbusinessreview.com/?p=241070 By Nick Filatov While investment pours into customer-facing AI, the real $1.7 trillion cost is hidden in manual backend “shadow work.” This analysis identifies five foundational technologies – led by […]

The post 5 Technologies that Will Reshape Travel in 2026 (And it’s Not What You Think) appeared first on The European Business Review.

]]>
By Nick Filatov

While investment pours into customer-facing AI, the real $1.7 trillion cost is hidden in manual backend “shadow work.” This analysis identifies five foundational technologies – led by the shift to Agentic AI – that will redefine operational efficiency, scale, and profitability for the travel industry by 2026.

Have you ever spent hours on hold with an airline after a flight cancellation? Or waited days for a ticket exchange? The problem isn’t the employee – it’s the immense complexity behind the scenes. While consumers see sleek apps and AI chatbots, the travel industry’s engine room still relies on duct tape and sheer human effort. But change is coming.

The next wave of innovation won’t be about another chatbot. It will be about rebuilding the very foundations of how travel operations work. Based on the convergence of market need and technological maturity, here are the five technologies poised to move from promise to reality in 2026.

1. The bot that does: why agentic AI is a game-changer, not a gimmick

We’ve all interacted with AI: chatbots that efficiently answer frequent questions and manage simple requests. They’ve become a standard part of the customer service toolkit. But the industry hits a wall when a request moves from providing information to executing a complex action. What happens when the answer is “yes, you can change your flight,” but fulfilling that promise requires a human agent to spend half an hour navigating multiple specialized systems?

Enter Agentic AI. Forget simple conversation; think execution. This is AI that doesn’t just suggest actions but performs them end-to-end. It interprets complex fare rules, calculates reissue values across multiple airlines, and executes the change directly within the GDS. It turns a 30-minute specialist task into a 30-second automated process.

The bottom line? This isn’t about making existing chatbots smarter. It’s about a fundamental leap from systems that assist to systems that act. This is how we finally tackle the multi-billion dollar problem of manual post-booking operations.
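
To make that leap from assisting to acting concrete, here is a minimal, purely illustrative Python sketch of such an execution flow. Every function is a hypothetical stand-in for a real GDS integration, not a vendor API, and all the data is invented.

```python
# Purely illustrative: a system that acts, not just answers.
# Each function is a hypothetical stand-in for a real GDS integration.

def get_fare_rules(booking_id):
    return {"changeable": True, "change_fee": 75.0}           # stubbed response

def calculate_reissue(booking_id, new_flight):
    return {"fare_difference": 120.0, "total_due": 195.0}     # stubbed response

def execute_reissue(booking_id, new_flight):
    return {"status": "confirmed", "new_flight": new_flight}  # stubbed response

def change_flight(booking_id, new_flight):
    """End-to-end execution: check the rules, price the change, reissue."""
    rules = get_fare_rules(booking_id)
    if not rules["changeable"]:
        return {"status": "rejected", "reason": "fare not changeable"}
    quote = calculate_reissue(booking_id, new_flight)
    result = execute_reissue(booking_id, new_flight)  # the step chatbots never take
    result["amount_charged"] = quote["total_due"]
    return result

print(change_flight("ABC123", "BA117 on 2026-03-02"))
```

The shape of the workflow is the point: rules are checked, the change is priced, and the result is written back, with no human relay in between.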

2. MCP: the strategic bridge between legacy power and AI innovation

The industry’s core distribution systems (GDS) are marvels of reliability and scale, processing billions of transactions. The challenge lies in creating a seamless dialogue between these established systems and modern AI. The Model Context Protocol (MCP) is emerging as that crucial bridge.

MCP acts as a universal translator, allowing AI agents to securely and reliably interact with any data source or tool – from legacy systems to modern NDC APIs. It provides the standardized protocol that makes complex, trustworthy automation possible. In 2026, we’ll see MCP not as a technical novelty, but as the essential middleware that unlocks the full potential of existing infrastructure for the AI era.
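
As a rough illustration of the bridging idea, the sketch below wraps two stubbed legacy lookups as tools that any MCP-capable agent can discover and call. It assumes the `FastMCP` helper from the official `mcp` Python SDK (verify import paths against the current release); the tool bodies are invented placeholders, not real GDS calls.

```python
# Sketch: exposing stubbed legacy lookups as agent-callable MCP tools.
# Assumes the FastMCP helper from the official "mcp" Python SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("legacy-gds-bridge")

@mcp.tool()
def lookup_booking(pnr: str) -> dict:
    """Fetch a booking record from the legacy system (stubbed)."""
    return {"pnr": pnr, "status": "ticketed", "segments": 2}

@mcp.tool()
def quote_exchange(pnr: str, new_flight: str) -> dict:
    """Price a ticket exchange (stubbed)."""
    return {"pnr": pnr, "new_flight": new_flight, "total_due": 195.0}

if __name__ == "__main__":
    mcp.run()  # agents can now discover and call these tools over MCP
```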

3. From tools to foundation: the rise of AI-native infrastructure

The initial wave of AI in travel focused on adding “AI-powered” features to existing processes. The real breakthrough comes from building AI-Native Infrastructure – a new operational layer designed from the ground up for autonomy, yet built to complement and enhance current systems.

This is the difference between buying a tool and upgrading your company’s operational DNA. This infrastructure handles the complexity of multi-system integration, real-time policy checks, and transactional execution autonomously.

The result? Businesses can scale their ticket volume without the painful linear growth of their support teams. They can guarantee compliance not through manual checks, but by design. This is how we turn operational overhead into a competitive advantage.

4. From fighting fires to preventing them: the era of predictive resilience

Today, a flight cancellation triggers a costly, stressful scramble. In 2026, it will be a managed event. Predictive Disruption Management uses data not just to react, but to pre-empt.

By analyzing patterns across weather, air traffic, and historical data, systems will forecast high-risk disruptions hours in advance. They won’t just alert you; they will preemptively generate optimal rebooking scenarios for affected passengers. This shifts the paradigm from reactive firefighting to proactive readiness, allowing teams to execute pre-approved plans in minutes rather than hours, saving millions in operational costs and transforming the customer experience during the most stressful travel moments.
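
As a toy sketch of the forecasting idea, the snippet below trains a classifier on invented disruption features and scores an upcoming departure; production systems would draw on far richer weather, air-traffic, and historical feeds, and the 0.7 threshold here is arbitrary.

```python
# Toy illustration of disruption risk scoring; features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per past departure: [storm_index, airport_congestion, inbound_delay_min]
X = np.array([[0.10, 0.3, 5], [0.80, 0.7, 40], [0.20, 0.9, 25],
              [0.90, 0.8, 55], [0.05, 0.2, 0], [0.70, 0.6, 30]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = the flight ended up disrupted

model = LogisticRegression().fit(X, y)

# Hours before departure, score today's flights and act on the riskiest.
upcoming = np.array([[0.85, 0.75, 45]])
risk = model.predict_proba(upcoming)[0, 1]
if risk > 0.7:
    print(f"High disruption risk ({risk:.0%}): pre-generate rebooking options")
```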

5. The end of siloed experiences: true personalization through data orchestration

Today’s “personalization” is a paradox. Marketing offers a tailored deal, but the support team has no context when things go wrong. Real hyper-personalization in 2026 will come from breaking down these data silos.

Imagine a system where every customer interaction is informed by a unified view of their journey. Support knows their itinerary before they even ask. Dynamic packages are built in real-time based on deep behavioral understanding. Loyalty is earned not through points, but through flawless, frictionless experiences. This is personalization that works for the business, not just the marketing dashboard.

The new operational playbook

The companies that will lead in 2026 aren’t those chasing the shiniest new AI feature. They are those making a strategic bet on a new autonomous operational backbone. The technologies that matter are the ones that work together to create self-healing, self-optimizing systems that enhance, rather than replace, the industry’s proven infrastructure.

This is a fundamental shift from using technology to assist people, to building systems that reliably handle core operations. The prize is immense: not just incremental gains, but a complete redefinition of efficiency, scalability, and customer satisfaction in travel. The race to rebuild travel’s backend is finally on.

About the Author

Nick Filatov is the founder of GDS42.ai. With a 25-year tech career and 14 years in TravelTech – including building and exiting a major OTA – he now focuses on developing AI infrastructure to automate travel operations.

The post 5 Technologies that Will Reshape Travel in 2026 (And it’s Not What You Think) appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/5-technologies-that-will-reshape-travel-in-2026-and-its-not-what-you-think/feed/ 0
Ten Things Every Manager Must Understand About China’s AI Strategy https://www.europeanbusinessreview.com/ten-things-every-manager-must-understand-about-chinas-ai-strategy/ https://www.europeanbusinessreview.com/ten-things-every-manager-must-understand-about-chinas-ai-strategy/#respond Thu, 18 Dec 2025 06:57:29 +0000 https://www.europeanbusinessreview.com/?p=240456 By Jacques Bughin China’s AI strategy reshapes competition far beyond foundation models and chatbots. As Jacques Bughin explains, managers must look at scale, integration and deployment rather than raw model […]

The post Ten Things Every Manager Must Understand About China’s AI Strategy appeared first on The European Business Review.

]]>
By Jacques Bughin

China’s AI strategy reshapes competition far beyond foundation models and chatbots. As Jacques Bughin explains, managers must look at scale, integration and deployment rather than raw model power. Understanding these dynamics helps decision makers assess risk, opportunity and partnership in a market treating AI as national infrastructure, not isolated innovation.

Understanding China’s position in the AI race requires stepping away from Western assumptions that the contest is mainly about foundation models. In China, the model is only the starting layer. The real competitive engine lies upstream in cloud architecture and downstream in national-scale deployment. A manager evaluating partnership, competition, or opportunities in China must grasp ten essential realities that define the Chinese AI trajectory. Each is rooted in evidence, company cases, and the way the Chinese market actually operates.

When a manager evaluates the efficiency of a Chinese platform, it is the integration, not the model, that explains the leap in adoption.

The first reality is that China is not building AI as a loose collection of apps and platforms but as an integrated technology stack. This stack connects foundation models such as Alibaba’s Qwen or ByteDance’s Doubao with cloud providers including Alibaba Cloud, Tencent Cloud, and Baidu’s AI Cloud, and links further downstream to workflows running inside DingTalk, WeChat Work, Alipay, Meituan, and entire municipal service systems. This means that AI deployment in China is fast, uniform, and often invisible to the user. When a manager evaluates the efficiency of a Chinese platform, it is the integration, not the model, that explains the leap in adoption.

A second reality is that China builds for scale from day one. DingTalk, with hundreds of millions of users, deploys more workplace agents in a month than most Western enterprise SaaS firms deploy in a year. These agents are not demos but operational capabilities handling HR approvals, procurement flows, financial checks, travel validations, and compliance steps. This scale acts as an engine for rapid iteration, meaning China’s agentic systems evolve through millions of real-world feedback loops per day. Managers must understand that this scale advantage compresses innovation cycles dramatically.

A third truth is that China’s digital ecosystems are structurally unified. A Western manager is accustomed to siloed systems: ERP, HRIS, CRM, ticketing, payments, messaging. In China, the same firm may run daily operations, messaging, approvals, payments, forms, file storage, customer interactions, and analytics all inside a single super-app environment. DingTalk for enterprises, WeChat Work for SMEs, and increasingly ByteDance’s Feishu enable agentic automation with almost no integration overhead. This is why multi-agent workflows already appear in logistics, commerce, and city services: the ecosystem makes agent-to-agent coordination genuinely feasible.

A fourth factor is that Chinese firms prioritize multimodal and real-time agents rather than text-only assistants. ByteDance’s Doubao is optimized for video, image, and real-time signals because Douyin, TikTok’s sister platform, runs on real-time multimodal behavior. Baidu’s models focus on real-time reasoning because Apollo, its autonomous driving system, requires agents to coordinate across perception, planning, and fleet routing within milliseconds. Chinese AI strategy is shaped by sectors where real-time autonomy matters: retail operations, logistics, mobility, and urban services.

A fifth insight is that multi-agent systems are far more advanced in China than in Europe or the United States. In Baidu Apollo taxis, dozens of agents operate simultaneously: a perception agent, a prediction agent, short-horizon and long-horizon planning agents, and a fleet coordination agent. In ByteDance’s e-commerce engine, pricing agents, advertising agents, inventory agents, and logistics agents work in asynchronous negotiation loops to optimize conversion and cost. Tencent’s financial platforms use evaluator agents to monitor fraud-detection agents. These systems are operational, not prototypes. A manager analyzing competition should understand that China is not theorizing about multi-agent AI; it is deploying it.

A sixth reality is that China’s industrial and manufacturing base is uniquely suited to agentic automation. Firms like Haier, Midea, Geely, BYD, and CATL already run digitalized factories with IoT, MES systems, centralized scheduling, and real-time data visibility. This foundation enables agentic systems to take over scheduling, quality control, machine setup, procurement coordination, and energy optimization. Siemens and ABB operate globally and are strong in Europe, but China is deploying at a faster internal velocity because the country has more greenfield plants and fewer legacy integration obstacles.

A seventh point is that regulatory structures in China support rapid iteration of enterprise AI. China’s AI regulations emphasize platform accountability rather than restrictive usage controls. For enterprise AI, this means firms can deploy agentic systems across workflows without facing the friction of overlapping data, privacy, or compliance requirements found in Europe. Chinese privacy law is real, but enforcement patterns focus on misuse and societal harm, not innovation. Managers should understand that regulatory speed is part of China’s competitive advantage.

An eighth truth is that Chinese consumer behavior accelerates agent adoption. Chinese users are accustomed to automation, from mobile payments to autonomous delivery robots. This cultural readiness dramatically reduces the adoption friction for AI-driven services. It is no accident that Meituan deploys hundreds of autonomous delivery units or that JD.com uses intelligent warehouses with agents coordinating robots. The population accepts and expects automation. This allows Chinese companies to deploy agentic systems at a depth Western companies cannot match without cultural change.

A ninth insight is that China’s mobile-first economy forces AI companies to optimize for inference efficiency, not model size. Chinese AI firms build leaner, faster reasoning models such as Qwen-1.5B, Doubao Lite, and Tencent’s small Hunyuan variants because these models run directly on smartphones, point-of-sale terminals, and industrial devices. Managers who believe the Chinese AI race is about parameter counts misunderstand the real technology direction: China’s competitive edge lies in low-latency, cost-efficient, widely deployed agent models.

China is building not a collection of AI tools but a national operating system for agentic intelligence.

A final and crucial point is that China’s AI strategy is not simply technical; it is geopolitical. Every deployment of an agentic workflow inside DingTalk, every multi-agent system inside a factory, every city adopting Baidu’s fleet-level autonomy, and every ByteDance agent operating cross-border commerce strengthens China’s position in global value chains. Managers must realize that AI in China is tied to industrial policy, national competitiveness, and economic sovereignty. When a Chinese firm deploys an AI agent, it is not merely automating a task; it is reinforcing the country’s position in global supply chains.

Together, these ten elements form a picture of a deeply coordinated AI economy. China does not win because it trains bigger models. It wins because it deploys agents deeper inside digital ecosystems, industrial infrastructure, and consumer environments.

Western managers must stop evaluating China through a generative AI lens and start evaluating it through the lens of agentic automation. China is building not a collection of AI tools but a national operating system for agentic intelligence.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

The post Ten Things Every Manager Must Understand About China’s AI Strategy appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/ten-things-every-manager-must-understand-about-chinas-ai-strategy/feed/ 0
Business Beyond the Algorithm: Understanding Data for Effective AI Deployment https://www.europeanbusinessreview.com/business-beyond-the-algorithm-understanding-data-for-effective-ai-deployment/ https://www.europeanbusinessreview.com/business-beyond-the-algorithm-understanding-data-for-effective-ai-deployment/#respond Sun, 16 Nov 2025 15:43:47 +0000 https://www.europeanbusinessreview.com/?p=238689 By Dr. Nadia Morozova, Tamara Miner, and Karen Taylor Crowe Successful deployment of AI solutions continues to be a challenge for many organizations. In this paper, the authors provide recommendations […]

The post Business Beyond the Algorithm: Understanding Data for Effective AI Deployment appeared first on The European Business Review.

]]>
By Dr. Nadia Morozova, Tamara Miner, and Karen Taylor Crowe

Successful deployment of AI solutions continues to be a challenge for many organizations. In this paper, the authors provide recommendations to senior leaders on how to approach developing and deploying AI solutions, so that they foster an ethical AI culture in their organizations and avoid harmful business mistakes.

Introduction

How can we ensure that AI brings real value to our organizations? How can we take advantage of AI’s opportunities while mitigating the risks? Behind success stories of profitable business transformations and productivity gains, unintended consequences lurk – bringing operational, reputational and legal risk.

As a business leader, how can you ensure your organization uses AI effectively and responsibly? This starts with data. To provide accountability and oversight for their organizations, leaders need to know what questions to ask.

Here, we provide a practical framework for deploying your data effectively in AI systems:

  1. Strategy first: align your AI use cases to your business goals
  2. The right people: find the experts, internally and externally, to lead data review and governance
  3. Know your assets: identify the data you hold, and their quality, accuracy, and relevance
  4. Mind the gaps: find and fix gaps in your data (internal and external)
  5. Future-proof: clarify how data will be updated and evolve over time
  6. Human-in-the-loop: establish procedures for human oversight of results and protocols to adjust systems quickly

Figure 1: Framework for building ethical AI culture in an organization


Strategy First

As with any business transformation project, start with your strategy. Your business goals and key performance indicators (KPIs) drive your data and AI requirements. An example use case: predicting consumer demand for a product. Building an AI model that represents your market landscape as accurately as possible means gathering and synthesizing multiple datasets.

Understanding the purpose of the model will help you:

  • identify relevant datasets
  • plug any data gaps
  • minimize ‘noise’ by removing unnecessary information

For very targeted use cases, smaller, specialized models trained on focused datasets can enable faster deployment and more accurate results than general-purpose Large Language Models (LLMs)[1]. Datasets should be large enough to provide a workable model of ‘reality’, but not so large that more room for error is introduced[2].

The Right People

This is why you need the experts! Internal experts and external consultants can help with the nuances of model development and delivery. Thoughtful talent strategies and organizational development are critical to overall success. Data scientists are your friends!

Know Your Data

Anything we think of as ‘information’ is data: pricing charts, sales records, product specifications. Many companies are sitting on underexploited datasets they could leverage to drive revenue growth or operational efficiency.

Use these questions to evaluate your use cases for AI deployment:

  • Size and scope: Is your dataset right-sized and of high enough quality to provide accurate predictions? Is it well-defined for smooth deployment and updating? Reliability and repeatability are the goals.
  • Time: Does your data cover a long enough period to encompass seasonal fluctuations in demand or changing economic conditions? Balance historical data for accuracy with new information to avoid your model becoming outdated and inaccurate. What’s the right cadence and pruning strategy?
  • Variables: What variables are relevant for your particular use case (e.g. product lines, regions, customer segments, channels)?
  • Completeness: If you don’t have all the data you need in-house, what datasets could fill those gaps? Always evaluate the integrity of external datasets: How often are they updated? How will you receive them? How are they gathered and processed?

Mind the Gaps – Data Normalization 

Data normalization (standardization) is critical for successful AI deployment. Whilst some AI tools can work across data which has not been enhanced or edited, AI tools achieve their best results using ‘structured’ data. Structured data is information that has been sorted, labeled (‘tagged’), and formatted for consistency, allowing relevant information to be surfaced easily within the AI model. Simple examples include names, dates, or prices.

Data normalization also helps flush out missing information, e.g., prices omitted from some of your sales records. Gaps can be filled through manual human effort, such as ‘best guesses’. Where a price is missing, for example, you could take the average of known price points for the same product and use that. While not a perfect historical record, this creates a more complete dataset, thereby giving more accurate results once the model is operational.
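
A minimal sketch of that gap-filling step, assuming the sales records live in a pandas DataFrame with hypothetical `product` and `price` columns:

```python
# Fill each missing price with the average of known prices for that product.
import pandas as pd

sales = pd.DataFrame({
    "product": ["A", "A", "A", "B", "B"],
    "price":   [10.0, None, 11.0, 25.0, None],
})

sales["price"] = sales.groupby("product")["price"].transform(
    lambda p: p.fillna(p.mean())   # the 'best guess' imputation described above
)
print(sales)  # A's gap becomes 10.5, B's becomes 25.0
```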

Consult with your data scientists and experts to understand the level of accuracy you really need for particular use cases. Some gaps might not be meaningful, while others could throw your entire model off[3][4]! Don’t spend time and money on perfection unless it drives tangible business outcomes. Sometimes good enough is good enough.

Future-Proof Your Data

LLMs and internal business information systems are reliant on data from the past, but the variables that feed the models are subject to change. That means they can stop representing ‘reality’ very quickly.

In our consumer demand example, the model needs to be continuously updated to reflect new information, such as changing sales patterns or prices. Developing a scalable and sustainable AI solution means knowing when sources are updated, and regularly reviewing changing model outputs as new data enters the system[5].
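
One lightweight way to operationalize that review is a scheduled freshness check that compares incoming data with the distribution the model was trained on. A minimal sketch, with invented numbers and a threshold your data scientists would need to calibrate:

```python
# Flag input drift when recent values stray far from the training baseline.
import numpy as np

baseline = np.array([9.8, 10.2, 10.0, 10.4, 9.9])  # prices seen at training time
recent = np.array([12.1, 11.8, 12.4, 12.0])        # prices seen this month

shift = abs(recent.mean() - baseline.mean()) / baseline.std()
if shift > 2.0:  # illustrative threshold only
    print(f"Input drift detected (z = {shift:.1f}): schedule a model review")
```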

Good data stewardship also involves ‘pruning’ data when it is no longer relevant. If you eliminate a product line or stop selling to a particular segment, that information might need to be culled from the model, so it better reflects your current business reality.

Data Science Handbooks help to align and track changes in how you do data science. Document your standards for quality, integrity, and accuracy for both internal and external data sources. Clarify roles, responsibilities, and decision-making protocols for data changes. Establish escalation pathways for unintended consequences. In other words, you need humans in the loop!

Human-in-the-Loop & Post-Deployment Monitoring Is Mandatory

Cutting edge technological advances in AI do not negate the need for human oversight of AI use. Quite the opposite! Deploying AI tools means understanding your business strategy and goals, making decisions about which use cases to prioritize, which datasets to leverage, and monitoring results and post-deployment impacts[6]. These are all human responsibilities (i.e. leadership).

The critical difference between ‘standard’ data-enabled systems or predictive models (which you might already be using) and AI-powered solutions is that AI is a self-learning system. There is a compounding effect at play, which risks producing more of the same information over time. Increasingly self-referential, fragile systems can produce misleading results with serious business and societal impacts[7].

Unintended Consequences

A powerful example of an AI solution compounding inherent issues rather than solving them was Zillow’s real estate auto-purchasing system. Initial trials went well, generating huge profits for the platform. However, as the market shifted, homeowners found ways to drive more profit to themselves than the platform, and Zillow did not adjust their predictive model to reflect this new ‘reality’. Without human oversight and regular review, by the time Zillow could see what was happening and decommissioned the system, they had lost millions of dollars[8].

As a business leader you cannot simply launch and leave AI solutions. Any major change of process, people, or systems requires benchmarking and ongoing monitoring to secure the intended business benefits. Vigilance and quick action to address ‘unintended consequences’ are critical. As self-learning systems, AI tools can – and do – go off-piste, and human behavior when interacting with these systems is unpredictable. We are nowhere near a post-human world yet! Business leaders must exercise judgment and put guardrails in place to mitigate negative impacts on business performance and stakeholders. Also, unintended consequences can be unexpected opportunities! Don’t miss out because you’re not monitoring the ripple effects of new tools on your teams or markets.
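
In the spirit of the Zillow lesson, a post-deployment guardrail can be as simple as routinely comparing predictions with realized outcomes and pausing automation once error exceeds an agreed tolerance. A minimal sketch with invented figures:

```python
# Halt auto-purchasing when the model systematically overpays.
predictions = [300_000, 310_000, 295_000, 320_000]  # model's purchase prices
outcomes = [295_000, 290_000, 260_000, 270_000]     # realized resale values

errors = [(p - o) / o for p, o in zip(predictions, outcomes)]
mean_overpayment = sum(errors) / len(errors)

if mean_overpayment > 0.05:  # tolerance is a leadership decision, not a default
    print(f"Average overpayment {mean_overpayment:.1%}: pause auto-purchasing")
```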

The Zillow case is just one example (out of many!) that demonstrates why business leaders need to ask the right questions when evaluating AI development proposals and data use. Managing AI opportunities and risks sits squarely with leaders. It is not a ‘technology’ or ‘legal’ issue. Profitable and responsible deployment of AI tools requires commitment to good governance with effective, accurate use of data. The bottom line is that you need to protect your bottom line, to drive, not destroy, business value.

Takeaways

To take advantage of the growth and profitability potential of AI:

  • Start from what you want to achieve as a business.
  • Bring the right expertise on board, internal or external.
  • Get familiar with key terminology and tools: understand what your data can do for you, and how to use it effectively and responsibly.
  • Put people and processes in place to observe, measure, and adjust inputs and outputs as your business needs, markets and models evolve.
  • Don’t wait for a crisis or miss opportunities! Profitable and responsible data governance and AI deployment relies on human leadership and accountability.

Who guards the guardrails? You do!

About the Authors

Dr. Nadia Morozova is Chief Analytics & Insights Officer at Enriched Insights and Strategic Industry Advisory Board Member at Warwick Manufacturing Group – University of Warwick (United Kingdom). Her research is focused on data-driven organizational culture change and consumer neuroscience.

Tamara Miner has worked on infrastructure, data, and developer tools for 20 years in the US and Europe. She is a Chief Technology Officer and strategy advisor, launching SaaS products for several London-based startups, Riot Games, and Microsoft Azure. Her current focus is democratizing data via ethical AI at Climate Policy Radar.

Karen Taylor Crowe is the Founder & Strategic Advisor at Aviina Growth Consulting. She is a global SaaS and legal-tech executive specializing in data-driven product innovation and responsible AI. She helps organizations transform decision-making, governance, and growth through intelligent systems and ethical design, drawing on deep leadership experience across IP management, analytics, and enterprise software.

References
[1] Whiting, K. (2025). What is a small language model and how can businesses leverage this AI tool? Available at: https://www.weforum.org/stories/2025/01/ai-small-language-models/
[2] Hoerl, R.W., Redman, T.C. (2023). What Managers Should Ask About AI Models and Data Sets. Available at: https://sloanreview.mit.edu/article/what-managers-should-ask-about-ai-models-and-data-sets/?event=work24
[3] Redman, T.C., Hoerl, R.W. (2024). AI and Statistics: Perfect Together. Available at: https://sloanreview.mit.edu/article/ai-and-statistics-perfect-together/
[4] Titah, R. (2024). How AI Skews Our Sense of Responsibility. Available at: https://sloanreview.mit.edu/article/how-ai-skews-our-sense-of-responsibility/
[5] Popovic, D., Lakhtakia, S., Landecker, W., Valentine, M. (2024). Avoid ML Failures by Asking the Right Questions. Available at: https://sloanreview.mit.edu/article/avoid-ml-failures-by-asking-the-right-questions/
[6] Panikkar, R., Saleh, T., Szybowski, M., Whiteman, R. (2021). Operationalizing machine learning in processes. Available at: https://www.mckinsey.com/capabilities/operations/our-insights/operationalizing-machine-learning-in-processes
[7] Dilmenagi, C. (2025). Synthetic Data vs Real Data: Benefits, Challenges. Available at: https://research.aimultiple.com/synthetic-data-vs-real-data/
[8] Glaser, V.L., Omidvar, O., Safavi, M. (2023). Predictive Models Can Lose the Plot. Here’s How to Keep them on Track. Available at: https://sloanreview.mit.edu/article/predictive-models-can-lose-the-plot-heres-how-to-keep-them-on-track/

The post Business Beyond the Algorithm: Understanding Data for Effective AI Deployment appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/business-beyond-the-algorithm-understanding-data-for-effective-ai-deployment/feed/ 0
AI Literacy: A Leadership Imperative https://www.europeanbusinessreview.com/ai-literacy-a-leadership-imperative/ https://www.europeanbusinessreview.com/ai-literacy-a-leadership-imperative/#respond Sun, 16 Nov 2025 15:05:10 +0000 https://www.europeanbusinessreview.com/?p=238667 By Hans-Petter (”HP”) Dalen AI literacy has become a critical leadership competency, enabling executives to assess risks, opportunities, and ethical implications of AI adoption. Closing the skills gap ensures responsible […]

The post AI Literacy: A Leadership Imperative appeared first on The European Business Review.

]]>
By Hans-Petter (”HP”) Dalen

AI literacy has become a critical leadership competency, enabling executives to assess risks, opportunities, and ethical implications of AI adoption. Closing the skills gap ensures responsible integration, drives productivity, and aligns AI with business strategy. Leaders who cultivate literacy can future-proof operations, foster innovation, and guide organisations through transformative change.

Today, nearly every organisation is experimenting with artificial intelligence (AI) — yet how many have mastered its effective use?

Recent research shows that 60% of leaders report an AI literacy skills gap within their organisations, underscoring the urgent need for education. This gap is not just technical — it is strategic. AI literacy has become a core leadership competency, enabling executives to make informed, ethical, and effective decisions in an AI-driven world.

What AI literacy really means for business leaders

AI literacy is about more than understanding what AI can do. It’s the ability to question outputs, recognise limitations, and assess implications for strategy, operations, and culture. Importantly, it is everyone’s responsibility, not just that of executives. Those who work with processes day-to-day often have the clearest insight into where AI can create value. Empowering them to identify opportunities and shape AI solutions ensures adoption is practical, effective, and fully aligned with organisational goals.

But it exists on a continuum. At one end lies awareness – recognising AI’s capacity to automate tasks or generate insights. At the other is deep expertise – overseeing responsible implementation, assessing ethical risks, and anticipating organisational impact. Most leaders sit somewhere in between: intrigued by AI’s potential but unsure how to deploy it effectively.

This gap matters. Without a foundational understanding of how AI can and should be used, blind spots emerge, leading to inefficiencies, missed opportunities, or reputational risks. Embedding AI successfully requires literacy at the top: leaders who can identify where it adds value, align it to business goals, and ensure it is applied responsibly.

AI’s impact on workforce strategy

AI is already reshaping how organisations operate, from automating routine tasks to transforming recruitment and employee engagement. Two-thirds of the 3,500 senior leaders across Europe and the Middle East who responded to a recent IBM and Censuswide survey reported already seeing significant productivity gains from AI at their organisations.

Leaders who lack AI literacy risk mismanaging this opportunity — creating resistance, talent gaps, or misaligned strategies. For example, generative AI and workplace assistants can enhance productivity and employee experience, yet misinterpretation or oversight can introduce bias, privacy, or governance risks. Understanding these dynamics is essential for leaders to integrate AI responsibly through clear guardrails and human oversight.

Bridging the literacy gap

To harness AI’s full potential, leaders must align learning and development initiatives with business objectives. Building a culture of AI literacy — where curiosity, critical thinking, and ethical awareness are encouraged — ensures that AI is viewed as both a competitive advantage and a responsibility.

AI-literate leaders know when to automate, when to apply human judgment, and how to evaluate the return on investment. They understand how AI can enhance productivity, predict outcomes, or improve the employee and customer experience — without losing sight of the human factor.

Without this understanding, AI adoption often remains superficial, and leaders will struggle to capture tangible metrics which show its true value and impact on operations. To deliver lasting results, organisations need systemic integration — embedding AI intelligently into strategy, operations, and talent management.

IBM as Client Zero: Embedding AI literacy from within

At IBM, we treat AI literacy as a leadership and cultural priority through our Client Zero philosophy — becoming the first user of our AI products before they reach the market. This hands-on experience helps leaders understand how AI works in practice, its limitations, and the ethical considerations involved.

Initiatives such as the watsonx Challenge, which trained nearly 170,000 employees to design AI agents, and the AskHR agent, which supercharged our HR chatbot to handle 11.5 million interactions in 2024, have strengthened AI literacy across the workforce. Leaders and employees alike report greater confidence in applying AI responsibly and making informed decisions about adoption and oversight.

By testing internally, we can also reduce privacy, accountability, and governance risks – while ensuring leaders gain practical knowledge to guide the organisation effectively.

The leadership imperative

Closing the AI literacy gap is no longer optional – it’s a leadership imperative. Executives who can critically assess AI’s risks, rewards, and real-world implications will shape the next era of business transformation.

As AI continues to redefine work, commerce, and society, the depth of a company’s leadership literacy will directly determine its ability to innovate, build trust, and future-proof its operations.

Leaders who commit to continuous learning and ethical integration will not only unlock AI’s full potential but also guide their organisations responsibly through one of the most profound technological shifts of our time.

About the Author

Hans-Petter (“HP”) Dalen is IBM’s Business Executive for AI in EMEA. He has 25 years of experience at IBM and has spent many of those years discussing AI use cases and how to operationalize them across many industries.

The post AI Literacy: A Leadership Imperative appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/ai-literacy-a-leadership-imperative/feed/ 0
What Dentistry’s AI Revolution Teaches Other Industries https://www.europeanbusinessreview.com/what-dentistrys-ai-revolution-teaches-other-industries/ https://www.europeanbusinessreview.com/what-dentistrys-ai-revolution-teaches-other-industries/#respond Fri, 14 Nov 2025 07:34:10 +0000 https://www.europeanbusinessreview.com/?p=237909 By Frank Cespedes and Ben Plomion If you’re looking for an instructive case study of the absorption of AI technology into a somewhat conservative industry, the dentistry sector should be […]

The post What Dentistry’s AI Revolution Teaches Other Industries appeared first on The European Business Review.

]]>
startup trailblazer

By Frank Cespedes and Ben Plomion

If you’re looking for an instructive case study of the absorption of AI technology into a somewhat conservative industry, the dentistry sector should be high on your candidate list. Sit back comfortably as Frank Cespedes and Ben Plomion conduct a thorough dental examination.

Much current “analysis” of AI’s impact is mainly armchair thinking, trying to divine the future by extrapolating speculative assertions about evolving technological capabilities – what the economic historian Robert Gordon has called the musings of “techno-optimists.”1

But we don’t need to think about AI in these abstract terms. Truly impactful technologies like electricity, Wi-Fi, or GPS are ones we no longer notice, because they are embedded in workflows and tasks. The same is now happening with AI in places you might least expect – e.g., dentistry, where, in just a few years, AI has progressed from research prototypes and clinical pilots to FDA-cleared tools embedded in daily patient care. This article examines the dental industry as an instructive case study of AI-driven change and the strategic lessons it offers other industries.

From X-rays to revolution

Dental services, including diagnostic, endodontic, and periodontic procedures, as well as cosmetic dentistry, formed a $519 billion industry globally in 2024. The U.S. market alone is worth $155 billion and, driven by demographic and social trends, is growing at 4.5 percent annually.2

The dental industry includes a mix of many small independent practices, group-practice Dental Service Organizations (DSOs), public-sector and state-affiliated clinics, a wide range of equipment and service suppliers, and highly regulated procedures where payment is often tied to reimbursement procedures from insurance or governmental entities – a traditional recipe for an industry resistant to change. Yet AI has already driven major changes in dental diagnostics, clinical workflows, and administrative tasks, while increasing productivity and patient care and trust.


Diagnosis:

Onsite Dental, a DSO, runs clinics at company locations. Some of its units are spread across miles, as in Georgia where Onsite Dental teams visit 70 different carpet factories on a rotation basis. Others are consolidated, as in an office campus or at a big shipbuilding facility in Newport News, Virginia.

In 2024, Onsite began using AI-powered software for patient diagnostics. It provides the dentist with a second opinion, saving time and increasing confidence in the diagnosis. As one dentist notes, AI often is more granular in displaying the care situation, “showing me, for example, that a cavity has progressed into the dentin whereas my eyes are saying it was maybe just short, convincing me that we need treatment now rather than being reevaluated 6 to 12 months from now.”

Equally important, patients could see AI-generated, color-coded dental imagery, often in 3D, on a screen, which helped them better understand the diagnosis and increased trust in the treatment plan. As another doctor puts it, “Patients see that something objective says I have a cavity and the doctor agrees.” The results are better diagnoses and preventative care, and also better business. In one Onsite office, average revenue per visit rose from $27 to $137 – a roughly 400 percent increase – without the addition of new services.

In turn, AI’s improvement of a core task like reviewing X-rays had positive ripple effects throughout Onsite’s locations from faster diagnoses to better patient engagement. Like most DSOs, Onsite traditionally audited care providers via random selection; a dentist was chosen for review and their charts were audited. But data from AI diagnostics allows Onsite to zoom in by location and provider, pinpointing which clinics or clinicians may need additional support or training. Former VP of technology Behrod Ganjifard notes that “you’re able to say, OK, this doctor has a high or low misdiagnosis rate, and hone in on the exceptions or the opportunity,” a more comprehensive approach driven by network-wide insights that improve aggregate performance and care.

Other practices use AI applications for creating odontograms, graphical charts that visually represent a patient’s mouth, including tooth condition and treatments performed or planned. These charts provide a record of a patient’s dental health, and dentists use the odontogram’s standardized numbering system to communicate with patients and colleagues.

Traditionally, dental assistants recorded the details manually, e.g., the location of a crown, cavity, or filling. Now, AI analyzes dental X-rays to populate the odontogram, which the dentist then reviews and edits if necessary. Studies show that about 70 percent of AI-generated odontograms are accurate and need no changes. This saves time, speeds exams, makes the process easier for doctors and patients, and allows dental teams to see more patients daily.

In many clinics, the patient-dentist interaction now starts with an AI simulation, not just to save time but to build trust.

Another widely adopted tool is an AI simulator which shows how teeth can move with aligners. After a quick 3D scan, the software creates a before-and-after smile preview in minutes. Dentists say this makes it easier to explain the orthodontic treatment, and patients are more willing to accept it when they can see the expected result. Dentists still review the output, but the tool saves about 30-50 percent of the time required, depending upon specific patient conditions. In many clinics, the patient-dentist interaction now starts with an AI simulation, not just to save time but to build trust.

Workflows:

AI is also transforming the operational side of dental practices. One persistent issue is missed calls, often due to understaffed front desks, peak call volumes during business hours, or the time-consuming nature of listening to voice mails and manually dealing with requests.

Now, AI-powered communication platforms transcribe and analyze incoming calls in real time, highlighting the caller’s intent, identifying unanswered questions, and even flagging high-priority messages. This allows staff to follow up faster and more effectively. One group practice reported over 50 percent fewer missed calls, as well as better visibility into patient interactions across locations.

Practices also found that AI tools improved core workflow planning, helping them anticipate patient needs, adjust staffing proactively, and prepare the office before the patient arrives. For example, if a patient needs a treatment like deep cleaning but has not booked an appointment, the AI automatically sends a reminder, including the last X-ray image. Patients can then schedule directly from their phones. As it gathers data over time, moreover, the algorithm learns the best time of day to reach each patient, increasing the likelihood that more people actually book appointments.
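
To make the “best time of day” idea concrete, here is a minimal sketch of how such a scheduler might pick a send hour from past reminder outcomes. The data shape, names, and figures are illustrative assumptions, not any vendor’s actual system.

```python
# Illustrative sketch: pick the reminder hour that historically converted
# best for a given patient. All data below is invented for illustration.
from collections import defaultdict

# (patient_id, hour the reminder was sent, did the patient book?)
history = [
    ("p1", 9, True), ("p1", 9, False), ("p1", 18, True), ("p1", 18, True),
    ("p2", 12, False), ("p2", 12, True), ("p2", 20, False),
]

def best_hour(patient_id: str) -> int:
    """Return the send hour with the highest observed booking rate."""
    stats = defaultdict(lambda: [0, 0])  # hour -> [bookings, reminders sent]
    for pid, hour, booked in history:
        if pid == patient_id:
            stats[hour][0] += booked
            stats[hour][1] += 1
    # Note: a real system would also handle patients with no history.
    return max(stats, key=lambda h: stats[h][0] / stats[h][1])

print(best_hour("p1"))  # 18 - evening reminders converted best for p1
```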

Many clinics, especially group practices, now use AI receptionists after hours. These systems can answer routine questions, schedule appointments, and triage urgent needs – for example, directing a patient with severe tooth pain to an emergency provider – without human staff present. This improves patient satisfaction, allows best practices to be more easily diffused and adopted across offices, and promotes a continuous improvement ethic, even if the staff doesn’t notice the changes. It’s just how things work.

AI systems for insurance claims, typically a time-consuming and resented activity in healthcare, now check records and reimbursement codes, improving accuracy and payment rates. If something is missing, the tool alerts staff in the dental practice. The AI then generates detailed annotations and explanatory notes on the X-rays, highlighting findings such as cavities, restorations, or areas requiring treatment. These visual cues and summaries make the claim easier for the insurance administrator to understand, reducing back-and-forth requests for clarification. Some results show that AI cuts the time spent on this work by 40 percent and speeds payment as well.

Conversely, for insurance companies, the AI-driven data improves productivity in a transactions-intensive aspect of their business, and helps them better detect claims mistakes and fraud, which costs the industry an estimated $12.5 billion annually.3 It also helps these companies go beyond a curt “coverage denied” response and make it clearer to patients and dentists why reimbursement is not applicable, and/or what specific information might be required to get reimbursed for the procedure.

In dentistry, AI is not just a tool to do a search or create a chatbot, but part of daily routines, increasing productivity for all parties, while decreasing risks.

Lessons for other industries

Dentistry’s AI adoption shows how even old industries can change when the right things come together, illustrating important lessons for other industries.

Workflow integration:

Research indicates that people evaluate new products and tools relative to their current usage system and see any required behavioral changes as “losses,” not gains – the phenomena known as “loss aversion” and the “endowment effect.”4 Hence, as in dentistry, smooth workflow integration is usually essential for productive adoption of a new technology.

Other industries are starting to recognize that the real power of AI lies in integrating it into workflows, not as a separate tool. In construction, AI has been embedded in project management platforms, automatically generating cost estimates based on historical data, flagging safety issues via real-time site monitoring, and drafting responses to contractor queries – all as part of daily processes that these teams use on the job. As one executive observes, “AI is a very powerful general-purpose capability, [but] you’ve got to meet users where they are.”5 

This has implications for questions that leaders should ask before deploying AI:

  • Is the AI inside daily work, or a tech layer on top of core workflows?
  • What change does productive use of the AI tool require, and how can you minimize the required change(s) in behavior?

The focus of most people in a firm most of the time is on near-term operating issues, not a technological revolution. There’s nothing wrong with that, but adoption of new tools means embedding them in those operating activities. Word processing was a niche technology as a stand-alone product, but became ubiquitous once integrated into daily software like email and other applications.

Here, advances in AI software development are important. Because AI allows code creation, bug fixing, and feature iteration at higher speed and lower cost, the bottleneck is shifting from building AI tools to understanding user needs in the flow of work. And that is a managerial, not a technological, issue.


Trust and transparency:

In dentistry, patients don’t trust AI because they believe it’s flawless; many know that AI can make mistakes and sometimes surface more issues than a dentist would typically treat. But trust grows when patients can see and understand the results, making it easier to have informed conversations with their provider. There is a generalizable principle here.

In his work, Daniel Kahneman noted that radiologists who evaluate X-rays as “normal” or “abnormal” contradict themselves 20 percent of the time when they see the same picture on separate occasions, and he cited over 40 studies that show similar and often higher levels of inconsistency by auditors, pathologists, managers in various areas, and other professionals. He emphasized that “this level of inconsistency is typical, even when a case is reevaluated within a few minutes.”6
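
A back-of-the-envelope calculation shows why two imperfect reviewers can still outperform either alone. The 20 percent figure below reuses Kahneman’s number; treating the dentist’s and the algorithm’s errors as independent is a simplifying assumption, since real errors often correlate.

```python
# Back-of-the-envelope: a finding is missed only if BOTH the dentist and
# the AI miss it. Error rates and independence are simplifying assumptions.
p_human_miss = 0.20   # Kahneman's inconsistency figure, reused as a miss rate
p_ai_miss = 0.20      # assumed comparable error rate for the algorithm

p_both_miss = p_human_miss * p_ai_miss
print(f"Chance both miss the same finding: {p_both_miss:.0%}")  # 4%
```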

People as well as AI algorithms make mistakes. Working together, however, they make fewer mistakes and, as dental groups have discovered, diagnostic accuracy improves. In FDA-reviewed clinical trials, dentists using AI detected pathologies such as caries and bone loss with up to 37 percent greater accuracy compared to unaided practitioners. Clinics also report more consistency in diagnoses and adherence to clinical standards when AI is integrated into workflows.7 Equally important, the combination of technology and a practitioner with domain expertise provides the patient with more visibility into the diagnostic logic and results. Dental groups report that when they use AI images, treatment acceptance increases by 30-40 percent,8 because patients feel that the diagnosis is more comprehensive and they trust dentists more. So, also ask these questions about AI tools:

  • Can people see and understand what the AI tool is doing?
  • How best can a credible and shared language be established for the resulting outputs?

Companies often do this in their sales activities to demonstrate the operational value of products and services to customers, and to justify price. “Value calculators” in many sales contexts take customer input data and, through a transparent process, help quantify and demonstrate the total cost of current procedures versus the life-cycle cost of the seller’s product. Making good use of AI is aided by this kind of activity, both internally and with external customers and suppliers.
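
As a minimal sketch of what such a calculator does – with every name and figure an illustrative assumption – it reduces to comparing two cost streams over the same horizon:

```python
# Minimal "value calculator" sketch: recurring cost of the current manual
# process vs. life-cycle cost of the proposed tool. Figures are invented.

def life_cycle_cost(purchase_price: float, annual_running_cost: float,
                    years: int) -> float:
    """Total cost of owning the proposed product over its useful life."""
    return purchase_price + annual_running_cost * years

def current_process_cost(hours_per_week: float, hourly_rate: float,
                         years: int, weeks_per_year: int = 48) -> float:
    """Cost of keeping the existing manual process over the same horizon."""
    return hours_per_week * hourly_rate * weeks_per_year * years

years = 3
status_quo = current_process_cost(hours_per_week=10, hourly_rate=60, years=years)
proposed = life_cycle_cost(purchase_price=20_000, annual_running_cost=5_000, years=years)

print(f"Current process over {years} years: ${status_quo:,.0f}")   # $86,400
print(f"Proposed tool over {years} years:  ${proposed:,.0f}")      # $35,000
print(f"Net saving from switching:         ${status_quo - proposed:,.0f}")
```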

Institutional endorsement and data:

One reason why AI has been adopted extensively in dentistry is a set of ecosystem benefits. Approval from regulators – clearance from the FDA in the U.S. and certification under the European Union’s Medical Device Regulation – was necessary and, once obtained, aided trust and transparency. Just as important, the technology reduced transaction costs on both sides of the dental practice–insurance company exchange. In turn, this spurred adoption of AI in a self-reinforcing manner.

In addition, AI became relevant as the industry was undergoing structural change. Over the past decade in the U.S. and Canada, many solo practices have consolidated into group DSOs, which now account for about 23 percent of the dentistry market in those countries.9 This model separates dentistry from business management, allowing dentists to focus on their clinical skills, while leaving administrative and operational tasks to the DSOs – a better work-life balance for many dentists. DSOs also provide patients and dentists with advantages ranging from access to multiple locations and specialties to negotiating leases with landlords and reimbursements with insurance companies to more purchasing power for supplies including technology.

DSOs were early adopters of AI because their group structure gives dentists more opportunities to network and collaborate with colleagues, explore new applications in the flow of work, and disseminate the benefits across practitioners. Their access to proprietary data across multiple practices allowed them to improve data inputs, which remain crucial in developing, maintaining, and improving AI algorithms. Then, DSOs used the outputs from AI to improve performance monitoring. This combination helped to accelerate adoption compared to independent practices.

In many other industries, omni-channel buying means multiple groups in a channel impact the customer journey from search to purchase to service. Successful AI adoption depends on building and coordinating its foundations and use among multiple stakeholders. The biggest time and expense in these sectors is often not the purchase of AI tools, but cleaning up and keeping relevant the data inputs for those tools. So, also ask these questions about AI in your industry:

  • Are you building robust data sources for planned AI initiatives?
  • Who else in the ecosystem matters for both the data and the effective use of AI tools?

How dentistry embedded AI in its processes is more than an interesting use case; it is also a lesson in survival – and in how to win – in markets increasingly influenced by AI.

About the Authors

Frank Cespedes teaches at Harvard Business School, has written for numerous publications, and is also the author of six books including Aligning Strategy and Sales and Sales Management That Works: How to Sell in a World That Never Stops Changing (Harvard Business Review Press).

Ben Plomion is Chief Operating Officer of Pearl, a startup in the healthcare sector. He also writes on innovation and emerging technologies for various publications.

 

References:
1. Robert J. Gordon, The Rise and Fall of American Growth (Princeton University Press, 2016), xi.
2. https://www.precedenceresearch.com/us-dental-service-market
3. https://phmic.com/dental-fraud-12-5-billion-dollar-problem/
4. For the core academic research and concepts, see Daniel Kahneman and Amos Tversky, eds., Choices, Values, and Frames (Cambridge, England: Cambridge University Press, 2000). For research indicating the impact in various industry contexts, see P. Chatterjee, C. Irmak, and R. Rose, “The Endowment Effect as Self-Enhancement in Response to Threat,” Journal of Consumer Research 80 (October 2013).
5. https://www.wsj.com/tech/ai/what-is-ai-best-at-now-improving-products-you-already-own-f6087617
6. Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus, and Giroux, 2011), 225.
7. https://hellopearl.com/blog/topic/the-growth-of-ai-in-dental-radiology
8. https://theleadmagazine.com/ai-insights-from-pearl/
9. https://www.precedenceresearch.com/us-dental-service-market

The post What Dentistry’s AI Revolution Teaches Other Industries appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/what-dentistrys-ai-revolution-teaches-other-industries/feed/ 0
The Generative AI Gap: How Universities are Struggling to Keep Up https://www.europeanbusinessreview.com/the-generative-ai-gap-how-universities-are-struggling-to-keep-up/ https://www.europeanbusinessreview.com/the-generative-ai-gap-how-universities-are-struggling-to-keep-up/#respond Wed, 05 Nov 2025 08:37:49 +0000 https://www.europeanbusinessreview.com/?p=238215 By Alexey Pokatilo As generative AI reshapes higher education, universities worldwide struggle to define consistent rules for its use. Drawing on an analysis of 50 universities worldwide, this article explores […]

The post The Generative AI Gap: How Universities are Struggling to Keep Up appeared first on The European Business Review.

]]>
By Alexey Pokatilo

As generative AI reshapes higher education, universities worldwide struggle to define consistent rules for its use. Drawing on an analysis of 50 universities worldwide, this article explores how inconsistent AI policies, flawed detection tools, and faculty double standards are eroding trust and fairness in academia.

In February 2024, Ella Stapleton, a senior student at Northeastern University, discovered that her professor had used AI to prepare lecture materials. Hidden in the slides, she found a prompt-like phrase: “make content more detailed.” The irony was clear – the same professor had banned students from using AI in any form.

This small incident captured a global paradox. Across higher education, generative AI has blurred the line between innovation and academic integrity. Some universities encourage students to explore AI tools, while others impose strict bans. In many cases, the decision is left to individual professors, leaving students confused and vulnerable to inconsistent rules – even within the same institution.

An analysis of AI policies across 50 global universities – based on data we collected – shows how unevenly institutions are adapting to the rise of generative tools.

A Fragmented Policy Landscape

The absence of clear institutional leadership on AI has created what researchers describe as the most fragmented policy landscape in modern academia. Students can face entirely different rules in each class – where the same behavior might be praised in one course and punished in another.

According to the analysis, universities fall into four broad categories:

  • 55% follow an “Instructor Discretion” model, leaving rules to individual professors.
  • 20% take a permissive stance, allowing AI use with proper attribution.
  • 20% prohibit AI-generated work entirely.
  • 5% have issued no formal policy at all.


At Harvard, for example, faculty are told to include an AI policy in their syllabus – but what that policy says is up to them. Meanwhile, Oxford and Yale explicitly permit AI as a learning tool, as long as usage is disclosed. Columbia Business School takes the opposite approach, banning AI unless authorized in advance. The result is what students call “a policy minefield” – a patchwork of rules that undermines fairness and consistency.

Flawed Detection and False Accusations

Compounding the problem is the heavy reliance on AI-detection software such as GPTZero, which remains widely used despite major reliability issues. Tens of thousands of essays have been falsely flagged as AI-generated, leading to disciplinary actions, suspensions, and even expulsions.

One high-profile case involved Haishan Yang, a student at the University of Minnesota who was expelled after being accused of AI-assisted writing – a decision that led to lawsuits and the loss of his student visa.

Similar incidents have surfaced globally, especially among non-native English speakers whose writing styles are statistically more likely to be misclassified by these tools.
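
Simple base-rate arithmetic shows why even a seemingly accurate detector produces a steady stream of false accusations. The rates below are illustrative assumptions, not measurements of any particular tool:

```python
# Base-rate sketch: why accurate-sounding detectors still accuse many
# innocent students. All rates below are assumed for illustration.
essays = 10_000
ai_share = 0.10   # assume 10% of submissions are AI-written
fpr = 0.02        # assume the detector flags 2% of genuinely human essays
tpr = 0.90        # assume it catches 90% of AI-written essays

ai_essays = essays * ai_share            # 1,000
human_essays = essays - ai_essays        # 9,000

true_flags = ai_essays * tpr             # 900 correctly flagged
false_flags = human_essays * fpr         # 180 falsely accused
share_false = false_flags / (true_flags + false_flags)

print(f"{false_flags:.0f} innocent students flagged, "
      f"{share_false:.0%} of all accusations")  # 180 flagged, ~17%
```

Even under these assumptions, roughly one accusation in six would be false – and the share climbs further for groups, such as non-native speakers, whose true false-positive rate is higher.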

The result is a growing climate of fear: students report avoiding digital spellcheckers or rewriting their work in simpler language to avoid being flagged as “too AI-like.”

Faculty Double Standards

Adding to the tension is what students increasingly describe as a “double standard.” Professors now routinely use AI for grading, lecture preparation, and administrative tasks, yet often prohibit students from doing the same.

The contradiction undermines trust. Instead of mentors, teachers risk becoming “AI police,” spending hours running texts through detectors rather than offering feedback that builds writing skills.

A 2025 Fortune investigation found that more than 80% of faculty already use platforms such as Canvas or Google Suite with embedded AI features – often without realizing it. Yet fewer than 15% of universities officially require or acknowledge such use.

The Cost of Policing AI

The lack of clear policy also carries a heavy financial cost. U.S. universities now spend an estimated $196 million annually on AI enforcement – including time spent investigating suspected misuse, handling appeals, and managing administrative reviews. Each case consumes an average of 162 minutes of faculty and staff time.

In the UK, institutions have reported a 400% rise in academic integrity cases since the release of ChatGPT. Many universities have had to reassign teaching assistants and administrative staff simply to manage the surge in misconduct reports. Meanwhile, continental Europe has largely avoided this crisis by emphasizing AI literacy and ethics over prohibition.

The Human Toll

The consequences extend beyond policy. In a Student Voice survey of 5,000 undergraduates, one-third said they were unsure when it was acceptable to use AI in coursework. Only 16% reported that their institution had clearly communicated an official AI policy.

The uncertainty fuels anxiety, mistrust, and even burnout – with students dumbing down their language or avoiding collaboration altogether for fear of being accused.

A Path Forward

Despite the chaos, several universities are offering constructive models. Stanford University’s “AI Playground” allows students to experiment with generative tools in a secure, guided environment. MIT’s RAISE initiative integrates AI literacy into the curriculum. Oxford requires students to disclose AI use, transforming transparency into a learning opportunity rather than a trap.

Such initiatives suggest that the future of AI in education lies not in restriction, but in education itself – teaching students and faculty alike how to use these tools responsibly and effectively.

Our recent education research echoes this conclusion, calling for a shift from “AI prohibition” to “AI fluency.” Rather than banning technology, universities must establish clear, consistent frameworks that promote transparency, safeguard creativity, and prepare graduates for a future where AI literacy is as essential as writing or critical thinking.

Blanket bans aren’t working. What students are asking for – and what universities must deliver – is simple: clarity, fairness, and trust in an age of intelligent machines.

About the Author

Alexey Pokatilo is the founder and CEO of Litero.ai. He is an education technology researcher whose work focuses on developing ethical and transparent applications of generative AI in higher education.

The post The Generative AI Gap: How Universities are Struggling to Keep Up appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/the-generative-ai-gap-how-universities-are-struggling-to-keep-up/feed/ 0
Nine Logics of AI Deployments and the Artificial Integrity Imperative https://www.europeanbusinessreview.com/nine-logics-of-ai-deployments-and-the-artificial-integrity-imperative/ https://www.europeanbusinessreview.com/nine-logics-of-ai-deployments-and-the-artificial-integrity-imperative/#respond Tue, 28 Oct 2025 09:05:09 +0000 https://www.europeanbusinessreview.com/?p=237721 By Hamilton Mann In attempting to visualize the issues implicit in the adoption of AI in business, we commonly picture a two-dimensional relationship, such as AI vs productivity or AI […]

The post Nine Logics of AI Deployments and the Artificial Integrity Imperative appeared first on The European Business Review.

]]>

By Hamilton Mann

In attempting to visualize the issues implicit in the adoption of AI in business, we commonly picture a two-dimensional relationship, such as AI vs productivity or AI vs employment. However, as Hamilton Mann makes clear, getting anywhere near true understanding requires us to consider a whole new axis.

Much of the early debate on artificial intelligence, and GenAI in particular, has borrowed from familiar strategy playbooks, contrasting efficiency against differentiation, automation against augmentation, disruption against continuous improvement. These frameworks, useful in their time, tend to flatten organizational reality into binary trade-offs. They capture broad patterns but leave out the subtleties of how AI actually reshapes firms, workforces, and societies.

In reality, AI is not a flat choice between cutting costs and market expansion. It unfolds across nine distinct strategic pathways, defined not just by growth potential (Low, Medium, or High) and employment impact (jobs Killed, Preserved, or Created), but also by a foundational axis that has long remained implicit: integrity alignment.

Integrity itself spans three states: it can be Damaged, when dignity, autonomy, and resilience are eroded; Compromised, when outcomes remain ambiguous or fragile; or Upheld, when results sustain and elevate human capacity.

By surfacing the integrity axis as a structural dimension, this 3×3 framework is elevated into a cube that reveals the paradoxical effects of AI.

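To make the cube concrete, here is a minimal encoding sketch. The archetype names are the nine logics described below; their placement on the growth and employment axes follows this article’s descriptions, while the exact cell assignments, and the treatment of integrity as an independent judgment, are one possible reading rather than a definitive specification.

```python
# A sketch of the 3x3 grid with integrity as a separate, third axis.
# Cell assignments follow the archetype descriptions in this article.
from enum import Enum

class Jobs(Enum):
    KILLED = "Killed"
    PRESERVED = "Preserved"
    CREATED = "Created"

class Growth(Enum):
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"

class Integrity(Enum):
    DAMAGED = "Damaged"
    COMPROMISED = "Compromised"
    UPHELD = "Upheld"

# The nine logics: (employment impact, growth potential) -> archetype.
LOGICS = {
    (Jobs.KILLED, Growth.LOW): "Rationalize AI",
    (Jobs.KILLED, Growth.MEDIUM): "Optimize AI",
    (Jobs.KILLED, Growth.HIGH): "Displace AI",
    (Jobs.PRESERVED, Growth.LOW): "Assist AI",
    (Jobs.PRESERVED, Growth.MEDIUM): "Enhance AI",
    (Jobs.PRESERVED, Growth.HIGH): "Accelerate AI",
    (Jobs.CREATED, Growth.LOW): "Augment AI",
    (Jobs.CREATED, Growth.MEDIUM): "Restructure AI",
    (Jobs.CREATED, Growth.HIGH): "Empower AI",
}

def classify(jobs: Jobs, growth: Growth, integrity: Integrity) -> str:
    """Name the deployment logic, qualified by its integrity state.

    Integrity is deliberately an input, not derived from the grid:
    the same cell can be damaged, compromised, or upheld.
    """
    return f"{LOGICS[(jobs, growth)]} (integrity {integrity.value.lower()})"

print(classify(Jobs.KILLED, Growth.LOW, Integrity.DAMAGED))
# -> Rationalize AI (integrity damaged)
```

Keeping integrity as an input rather than a derived value mirrors the article’s argument: the grid alone cannot tell you whether a given deployment damages, compromises, or upholds human capacity.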

In some cases, AI rationalizes organizations into leaner forms, optimizes operations for measurable gains, or displaces workers at scale in pursuit of high growth. These pathways deliver efficiency or expansion on paper, but integrity shows their hidden cost: what appears as progress may in fact be systemic fragility, eroding the human capacity and resilience on which long-term prosperity depends.

In others, AI assists professionals by easing their burdens, enhances workflows through targeted support, or accelerates growth by multiplying human capacity without large-scale displacement. These pathways preserve employment and appear more human-centered, yet integrity reveals their ambiguity: jobs remain, but autonomy and judgment risk narrowing, as machine logic increasingly sets the terms of work.

In another set of logics, AI augments organizations with new expertise, restructures talent flows to unlock latent potential, or empowers entirely new markets and industries. These pathways promise genuine expansion, yet integrity poses the decisive question: are new roles and opportunities elevating human capabilities, or merely subordinating workers to algorithmic dependencies that erode judgment, stifle creativity, and atrophy essential human faculties?

Here, empowerment can mark either the highest alignment of prosperity and dignity, or its most sophisticated illusion.

This raises the stakes for leaders navigating not just efficiency and growth but also resilience, sustainability, and the dual test of social acceptability and humane legitimacy.

Rationalize AI

AI strips out labor to cut costs but fails to unlock new sources of demand. Organizations become leaner, but not stronger. Productivity gains look promising, yet financial performance remains fragile. This is efficiency without prosperity: systemic fragility grows as human capacity shrinks.

This is where the two-dimensional grid fails. Measuring jobs and growth alone suggests a leaner organization, yet the human system underneath becomes weaker. Without an integrity axis, leaders cannot see that rationalization is a false victory: it optimizes away people while corroding the very resilience required for long-term survival.

The case of Ocado:

In early 2025, Ocado, the British online grocery and technology company, announced that it would eliminate around 500 roles in its technology and finance divisions. The decision was not driven by collapsing demand or shrinking operations, but by the increased productivity of engineering teams equipped with AI systems. By automating tasks and streamlining processes, AI had made large portions of human labor redundant. For executives, the move was positioned as a rational step: lower costs, leaner operations, and greater efficiency.

Yet the broader picture reveals why this development exemplifies Rationalize AI. Although sales grew by 14 per cent, Ocado still posted a pre-tax loss of £374.5 million, while its technology sales growth slowed to 10 per cent, down from 18 per cent the year before. In other words, the efficiency gains generated by AI did not translate into sustainable growth or profitability. The company succeeded in cutting jobs and reducing costs, but it did not create new markets, unlock demand, or alter its trajectory of persistent financial losses.

This is the defining paradox of Rationalize AI: by using AI to strip out human labor, companies may succeed in lowering their cost base, but they do not necessarily strengthen their growth engine. Local productivity gains are achieved, but systemic performance remains stagnant or even deteriorates. Rationalize AI delivers leaner organizations, yet it does not deliver prosperity.

This case underscores the risks of mistaking efficiency for progress. In the short term, AI-enabled rationalization may reassure shareholders by showing cost discipline. In the long term, however, it leaves organizations more fragile, with fewer human capabilities to draw on and no new sources of demand to sustain growth. The strategic challenge for leaders is to determine whether they are deploying AI to transform their business or merely to shrink it. The deeper question is whether efficiency that weakens resilience can ever be called progress, or whether it signals an erosion of the very integrity needed for sustainable prosperity.

Optimize AI

AI substitutes for human labor in targeted functions while boosting organizational output. Companies enjoy measurable productivity gains and medium-tier growth. Yet profitability and resilience often remain uncertain, as efficiency masks latent vulnerabilities.

Optimization strips human resilience to the bone, producing brittle organizations addicted to quarterly gains.

As jobs are sacrificed and output improves, integrity shows the void beneath the numbers. Optimization strips human resilience to the bone, producing brittle organizations addicted to quarterly gains. Only the Integrity axis shows why optimization, while rational on paper, undermines the human foundations of sustainable growth.

The case of CrowdStrike:

In 2025, cybersecurity firm CrowdStrike announced a restructuring that revealed the double-edged nature of AI-driven efficiency. The company cut roughly 500 jobs, about 5 percent of its workforce, explicitly citing productivity gains from new AI systems as a key driver. The short-term results were striking: quarterly revenue reached $1 billion, a 25 percent year-over-year increase. Yet the bottom line told a more sobering story, with the company still posting a $92 million loss.

This trajectory illustrates the logic of Optimize AI. Jobs are eliminated, and the organization captures a measurable lift in productivity and revenue. The gains, however, remain fragile. AI delivered cost savings and output growth, but it did not guarantee profitability or long-term stability. Instead, the company now faces the challenge of sustaining performance without eroding the human and organizational capacities that underpin resilience.

This case underscores a pivotal strategic dilemma. AI can indeed optimize operations, but optimization is not the same as transformation. Leaders must decide if they are truly building stronger foundations for growth, or simply hollowing out their organizations in pursuit of short-term gains. Optimize AI highlights the risk of confusing efficiency with prosperity, a path where revenue may rise, but the structural capacity to generate enduring value remains uncertain. The real question is whether optimization strengthens the long-term fabric of the organization, or whether it locks firms into a cycle of fragile gains that sacrifice integrity for speed.

Displace AI

AI replaces labor at scale while fueling new industries, markets, and waves of consumption. Growth is rapid and expansive, but it comes at the cost of dismantling traditional employment structures. The result is high economic expansion coupled with deep social disruption.

On the grid, this looks like high-growth success despite job losses. But the integrity axis uncovers its true cost: systemic fragility and social disruption. It ultimately reveals that displacement is not just a trade-off between jobs and growth, but a governance failure that sacrifices resilience for expansion.

The case of Accenture:

By late September 2025, Accenture had laid off more than 11,000 people worldwide as part of an accelerated restructuring. The company described this not as a standard cost cutting program but as a deliberate exit of employees who could not be retrained fast enough for AI centered work. According to the Financial Times, Accenture’s CEO Julie Sweet made the message explicit in a call with analysts: “We are exiting on a compressed timeline people where reskilling, based on our experience, is not a viable path for the skills we need”. She added: “Those we cannot reskill will be exited”.

Accenture referred to this approach as “rapid talent rotation,” a phrase the company uses to describe exiting people quickly when reskilling is not seen as viable.

The severance costs were recorded as part of an $865 million business optimization program. The financial picture tells a story of expansion rather than distress. Quarterly revenue reached $17.6 billion, ahead of the company’s expectations, and full year revenue rose about 7% to roughly $69.7 billion. New bookings climbed above $21.3 billion in the quarter, and Accenture reported $5.9 billion in AI-related bookings over the full fiscal year. The company has nearly doubled its pool of AI and data specialists to 77,000 since 2023, and reported that more than 550,000 employees have already been trained in generative AI.

Here is the paradox: jobs are being destroyed at scale in the name of AI adoption, yet the stated ambition is not downsizing. Accenture has said it expects overall headcount to increase again in the next fiscal year, not shrink, and that savings from layoffs and divestitures will be reinvested into AI capability, new client delivery models, and talent that aligns with the markets the firm aims to lead.

This exposes the structural tension at the core of Displace AI. Accenture is capturing high-value AI demand at global scale. It is winning multibillion-dollar AI contracts, booking growth, and reinforcing its position as a preferred partner for clients who want to reinvent themselves with intelligent systems. At the very same time, it is rewriting the employment contract inside the firm. Roles are declared obsolete not because the company is failing, but because the company is succeeding in pivoting to AI faster than those workers can be reskilled. The public framing is one of reinvention and opportunity. The lived experience for thousands is forced exit.

The integrity axis reveals why this matters: Can growth that aggressively dismantles established employment structures at such a rapid scale claim to be progress if the social cost is externalized to workers who are no longer considered adaptable enough to remain inside the system? That is not a neutral efficiency decision. It is a societal decision about who is allowed to belong in the future.

This case exemplifies the Displace AI archetype with AI systems replacing human labor at scale, driving rapid business expansion. AI becomes the engine of new value creation and market expansion, revenues rise, bookings surge, investor narratives strengthen, the firm accelerates and society absorbs the shock. The question that integrity forces leaders to confront is whether this definition of success is aligned with the social contract between the firm and its employees, in particular when simultaneously claiming to be an inclusive, merit-based workplace that is free from bias and that seeks to foster a workplace culture based on respect and a sense of belonging. A “Great Place to Work,” so to speak.


Assist AI

AI supports workers rather than replacing them, reducing friction in workflows while keeping employment stable. Professionals still perform their core functions, but with fewer administrative burdens. Growth, however, remains incremental. The system is safeguarded, but not reinvented.

This situation suggests stability: jobs are preserved and growth remains steady. Yet integrity reveals a subtler erosion. Workers may keep their titles, but their scope for judgment shrinks as AI dictates workflows. Only the Integrity axis makes this visible: jobs preserved, growth neutral, but dignity and autonomy partially lost.

The case of the NHS:

By 2025, NHS England began piloting AI-enabled ambient scribing tools designed to relieve general practitioners of the administrative burden of note-taking during consultations. Instead of manually recording symptoms, histories, and treatment plans, doctors could rely on an ambient AI system that listened, transcribed, and structured the conversation into a draft medical note. The promise was straightforward: give doctors more time with patients by letting AI handle the paperwork.

The early results confirmed that administrative time could indeed be reduced. GPs reported spending less time typing and more time maintaining eye contact, explaining diagnoses, or answering patient questions. But while the pilots improved quality of care and preserved the core role of the clinician, they did not produce new demand, new markets, or systemic economic growth. The number of jobs was not cut, but neither was the workforce significantly expanded. Doctors remained indispensable, and AI became an assistive tool rather than a transformative engine.

This illustrates the logic of Assist AI. Jobs are preserved, workflows are improved, and productivity gains appear locally meaningful. Yet the economic impact remains incremental, not expansive. The value lies in quality and efficiency at the margin, not in the creation of entirely new growth trajectories. Assist AI avoids the social disruption of mass job losses, but it equally avoids the disruptive potential of new industry formation.

This case highlights the double-edged nature of assistance as a strategy. When AI is deployed to support rather than supplant, it strengthens human roles and protects professional expertise. But by stopping short of reinvention, it also caps its growth potential. The strategic tension lies in whether AI is being used to protect the current system or to reshape it. Assist AI succeeds in safeguarding jobs and improving service delivery, but it risks entrenching existing limitations rather than overcoming them. The underlying but critical question is to what extent preserving stability without expanding human scope and perspectives amounts to genuine support, or instead quietly narrows autonomy under the appearance of protection.

Enhance AI

Without an integrity perspective, leaders miss a crucial question: are humans freed, or are they being deskilled by over-reliance on AI?

AI improves human productivity without cutting jobs, enabling smoother workflows and better services. Professionals are freed from repetitive tasks, but growth remains bounded. This path prioritizes human-centered efficiency, producing incremental but meaningful operational benefits.

At first glance, Enhance AI seems safe: jobs protected, workflows improved. Yet without an integrity perspective, leaders miss a crucial question: are humans freed, or are they being deskilled by over-reliance on AI? Integrity allows us to see whether technology extends human capacity or gradually hollows it out.

The case of Delta Air Lines:

From 2023, Delta said it was using AI to surface procedures quickly for reservations agents and to support pricing, presenting both as part of improving the speed and consistency of customer responses. As reported at the time, the airline was testing AI that queries its internal policy and fare-rule databases in real time to surface the procedure an agent needs for a specific call, while a separate model proposed fare adjustments to human revenue managers rather than publishing prices automatically.

Delta has not presented these AI initiatives as a way to replace gate or call-center staff. Instead, it has framed AI as a way to enhance employees’ ability to serve passengers better and consistently, by giving them AI-surfaced information and recommendations.

“I think the initial foray into AI is on the customer service side”, said CEO Ed Bastian, responding to Morgan Stanley analyst Ravi Shankar’s question about the carrier’s use of the technology. “We’re working with our reservations team to try to help our reservations agents parse the historical policies and questions and things that you may you may call into a real agent”. This directly supports the employee-in-the-loop reading.

According to the 2024 Delta Difference report, as the airline rolls out advanced technologies it “takes a balanced approach to AI, using it to improve operations and enhance the customer experience while prioritizing our customers’ and employees’ safety, security and trust”. Delta’s headcount fell sharply in 2020 because of the pandemic, but by 2023–25 it had rebuilt to about 100,000 employees again, according to the company’s own disclosures.

Financially, over 2024–25, Delta reported revenue growth in the mid-single digits and highlighted improved operational performance. In 2024, at its Investor Day on November 20, the company told investors it was targeting mid-single-digit revenue growth as part of its differentiated-and-durable plan. In January 2025, when it released its December-quarter and full-year 2024 results, Delta reported record revenue and described “industry-leading operational performance”, with year-on-year revenue growth in that same mid-single-digit range.

This trajectory captures the essence of Enhance AI.

The gains came from smoother workflows, customer benefits from more responsive service, and incremental productivity increases – not from AI-driven workforce reductions. Yet the fundamental issue beneath the numbers is the tension between employees being truly empowered and their judgment being narrowed by dependence on machine-generated recommendations. The decisive challenge lies in determining whether technology in such cases expands human capability, or instead quietly deskills the workforce by reducing expertise to the execution of AI-suggested choices.

Accelerate AI

AI accelerates organizational growth while retaining and even amplifying human capacity. Firms preserve jobs, invest in their people, and use AI as a multiplier of productivity and innovation. Growth is significant, but it does not require large-scale workforce displacement.

The Jobs and Growth axes already show this as a favorable case, but the integrity axis makes explicit why: growth is paired with the preservation of autonomy and dignity. With an integrity axis, Accelerate AI is revealed not just as a good strategy but as ethical leadership, proof that inclusion and prosperity can scale together.

The case of Cisco:

In a climate where many of its peers were trimming headcounts amid rising interest in AI, Cisco’s CEO Chuck Robbins offered a striking divergence. In a recent CNBC interview, Robbins emphatically stated, “I don’t want to get rid of a bunch of people right now,” underscoring Cisco’s strategic choice to harness AI as a productivity multiplier, not a vehicle for downsizing.

This decision resonates deeply: Cisco’s fiscal Q4 results demonstrated significant gains, with revenue rising 8 percent to $14.7 billion, buoyed by soaring demand for AI infrastructure. The company reported over $2 billion in AI-related orders, more than double its initial target.

These numbers encapsulate the essence of Accelerate AI: Cisco preserves its engineering workforce, even expanding AI development roles, while leveraging the technology to amplify innovation and performance. Rather than displacing talent, AI consolidates it, fueling growth without sacrificing employment.

This story is instructive in its counter-narrative to the efficiency-first mindset. By embedding AI as an enabling tool rather than a replacement, Cisco shows that scaling growth does not require shrinking human capacity. Accelerate AI asks leaders: can we harness AI to elevate our people and generate growth without sacrificing our workforce? Cisco suggests the answer is yes.

The enduring tension is whether such examples mark the beginning of a broader shift toward inclusive growth, or remain exceptional cases in a landscape still dominated by an exclusive efficiency-driven economy.


Augment AI

AI sparks the creation of adjacent roles – engineers, analysts, content creators – that sustain AI-driven lines of business. Growth is steady but limited. The organization reconfigures around new forms of expertise, but markets are not fundamentally transformed.

The grid celebrates job creation here, but without integrity it cannot distinguish between jobs that empower and jobs that serve as a crutch for a temporary peak of demand. An integrity axis forces us to ask: do these new roles build enduring capabilities and pathways for human expertise, or do they merely anchor workers to keeping the system running for short-term financial performance? Growth alone cannot answer that.

The case of Anthropic:

In 2025, the maker of the Claude AI model announced that it would create more than 100 new jobs across Europe, expanding in cities such as Dublin and London. These roles spanned engineering, research, sales, and business operations, and were positioned as additive hires to sustain the company’s rapid growth in AI services. Unlike rivals that leaned on layoffs or hiring freezes, Anthropic emphasized net job creation as it sought to build out the infrastructure and talent required to compete in a global AI market.

Yet this expansion came with a striking paradox. Even as Anthropic added new categories of expertise orbiting its core AI business, its CEO, Dario Amodei, warned publicly that AI could displace vast numbers of jobs, particularly in routine knowledge work. His remarks, widely reported and countered by NVIDIA’s Jensen Huang, underscored the tension between firm-level augmentation and system-wide disruption.

Financially, Anthropic’s growth was steady rather than transformative. Revenue gains reflected increasing adoption of Claude and its enterprise offerings, but they remained bounded within the competitive dynamics of the AI sector. The hiring drive demonstrated the emergence of new roles linked to the deployment dynamic of AI, but it did not reinvent markets or trigger exponential new demand.

This trajectory reflects the essence of Augment AI. Jobs were created in meaningful numbers, sustaining the company’s evolving ecosystem, but the growth remained incremental. Anthropic layered AI-focused roles onto its business model, turning technology into a catalyst for adjacent expertise rather than systemic reinvention.

This case also illustrates the inherent ambiguity of augmentation. The uncertainty lies less in the existence of new roles than in the conjunctural and opportunistic conditions under which they are created. When augmentation is driven primarily by short-term market demand, it remains fragile, vulnerable to fluctuation, and easily discarded if competitive pressures shift. Without anchoring in structural change, these roles risk becoming temporary adaptations rather than durable transformations.

Augment AI forces leaders to ask not only whether they are creating roles to meet momentum, but also whether those roles are a structural part of a new core system that sustains long-term growth or a fix for immediate competitiveness. The former is true augmentation. The latter is conjunctural augmentation, and risks feeding into a broader tide of displacement.

Restructure AI

AI restructures organizations by enabling new internal pathways for talent and skills. Employees transition into new roles as firms align human capital with emerging technological needs. Growth emerges from within, not through market disruption, as companies unlock latent potential in their workforce.

While this cell of the grid highlights job creation and moderate growth, its real value only emerges with an integrity perspective: Restructure AI sustains dignity by enabling reskilling and mobility, showing that technology can evolve with workers rather than against them. This distinguishes cosmetic job churn from genuine empowerment.

The case of Walmart:

In 2025, Walmart launched a large-scale reskilling program that leveraged AI to transform how frontline employees navigated careers within the company. Rather than relying on automation to reduce headcount, Walmart invested in AI-driven career pathways that helped workers transition into emerging roles such as drone technicians, robotics supervisors, and technical support specialists. AI systems were deployed to analyze skills, identify adjacencies, and recommend reskilling journeys tailored to employees’ backgrounds.

The results have been striking. More than 50,000 cashier roles have been restructured into new, future-oriented positions, while thousands of other associates have accessed new learning opportunities and internal career moves. For individuals, the change has been transformative: one cashier, for example, retrained as a robotics supervisor after receiving a personalized AI recommendation and structured learning guidance. For Walmart, the benefits have been equally significant: internal talent redeployment has reduced turnover costs, enhanced operational resilience, and ensured that the company’s workforce evolves in step with its increasingly automated supply chains and stores.

This is a textbook case of Restructure AI. Jobs are not only preserved but actively created through the restructuring of organizational processes, while business growth is tangible, though moderate. Walmart has unlocked the value of human capital already inside the company, redirecting it toward the capabilities most needed in an AI-driven economy and pairing this approach with a set of AI-enabled initiatives. Building on this restructuring, Walmart has delivered measurable results: digital sales rose 25 percent year over year. Yet these gains, while tangible, have not led to radical expansion into new markets.

The core lesson here is that AI can act as a mechanism of organizational redesign rather than mere automation. By turning internal mobility into a dynamic, AI-enabled process, Walmart demonstrates how companies can avoid the false trade-off between efficiency and employment. Growth in such scenarios does not come from cutting costs or conquering new industries, but from reimagining the structure of work itself. The overarching challenge is whether such restructuring consistently elevates human flourishing, fulfillment and autonomy, or whether it risks being reduced to a managed rotation of labor that serves organizational needs more than individual growth.

Empower AI

AI unlocks entirely new markets and business models, creating jobs and scaling growth dramatically. This is AI at its most expansive: empowering individuals and industries, redefining access and opportunity. Yet it also destabilizes incumbents who rely on sustaining strategies, forcing leaders to adapt or be left behind.

While this scenario is the most celebrated, the integrity axis reminds us that not all empowerment is equal. Are new jobs designed to elevate autonomy, or do they risk locking humans into algorithmic dependence? Integrity is what distinguishes empowerment from manipulation.

The case of Duolingo:

By late 2024, Duolingo had become the most downloaded education app in the world, boasting more than 100 million monthly active users and 8 million paid subscribers, underpinned by AI-driven personalization, gamification, and adaptive learning mechanisms. This meteoric rise exemplifies Empower AI: new markets are unlocked, jobs are created, and growth is expansive.

Behind the numbers lies a broader employment story. As Duolingo scaled, its business growth materialized in tangible investments in human capability, hiring educational designers, community outreach specialists, AI content creators, and engineers to extend the platform beyond language instruction into new areas like music and mathematics. These roles were born not from administrative routines, but from the need to develop and expand Duolingo’s AI-driven learning ecosystem.

The strategic impact is profound. Duolingo did not merely displace labor with automation. Instead, it used AI to empower a new generation of workers, elevating the nature of jobs in education technology while creating value that extended beyond traditional markets. Duolingo has turned AI into a lever for inclusion, learning accessibility, and continuous expansion, starkly illustrating the promise of Empower AI.

This case challenges the belief that AI inherently streamlines or replaces work. For Duolingo, AI did neither. Instead, it sparked a wave of job creation and market expansion. Empower AI forces leaders to consider whether technology should be a tool of displacement or a gateway to human-centered growth. The defining question is whether such empowerment truly expands human agency and creativity, or whether it risks entrenching new forms of algorithmic dependence that erode the very integrity it claims to uphold.

Navigating the Nine Logics of AI

Leaders are not simply choosing between short-term productivity gains and long-term growth outcomes, or attempting to balance both, when they deploy AI. They are navigating nine distinct logics of AI deployment, each with its own promise and peril, hinging on whether integrity is damaged, compromised, or upheld.

As with any framework, it would be futile to expect organizations to sit entirely within a single logic. In practice, companies often run several AI deployments in parallel, embodying multiple logics at once. Nor do the logics they embrace unfold in a vacuum: regulatory regimes, investor pressures, and labor market dynamics strongly influence which logics become viable.

Yet even within such systemic constraints, executives retain decisive agency in steering how AI is deployed.

The nine archetypes of AI deployment remind us that technology is never neutral. Each pathway, whether it rationalizes costs, optimizes operations, displaces labor, assists activities and workflows, enhances capacity, accelerates economic growth, augments expertise, restructures organizations, or empowers human capital, offers guidance for strategic choice rather than a strict categorization, and brings the paradoxes to light. AI is celebrated as a driver of efficiency and growth, yet its impact is fractured across competing logics. Some strategies strip out human labor while leaving organizational fragility intact. Others preserve or create jobs but cap their growth potential. And a select few unleash expansive new markets while forcing difficult questions about resilience, equity, and sustainability. Integrity alignment turns this paradox into a sharper diagnosis: fragility arises where integrity is low, stagnation where integrity is partial, and sustainable empowerment only where integrity is high.

In essence, AI deployment is not just a technological decision. It is a societal decision. Whether organizations end up leaner or stronger, stagnant or expansive, exclusionary or empowering depends less on the capabilities of the technology than on the intentions of those who wield it. Integrity alignment makes this explicit, transforming abstract intentions into measurable questions of whether AI sustains or undermines human autonomy.

For executives, the challenge is to resist the seduction of efficiency alone. Rationalization and optimization may satisfy shareholders in the short term, but without a parallel commitment to empowerment and human development, they risk hollowing out the very foundations of long-term growth. Conversely, paths that invest in people and preserve resilience (the assistive, augmentative, restructuring, and empowering logics) require patience and strategic courage, but they promise outcomes that align profitability with legitimacy and social acceptability. Navigating these challenges with integrity alignment helps identify which paths truly strengthen both prosperity and legitimacy, and which merely defer systemic fragility under the illusion of efficiency.

This is why Artificial Integrity, rather than the mere mimicry of human cognition, must guide AI development and implementation, so that systems are not driven solely by raw performance, blind to ethical, social, and moral considerations. Exhibiting integrity, not just intelligence, is the next frontier for AI: making integrity an intrinsic part of its functioning, aligning it with human values, and fostering approaches that reconcile human growth (jobs preserved or created), business growth, and integrity alignment.

Yet even with the prospect of such development, navigating the nine logics of AI deployment ultimately places the responsibility on executives, who must decide whether they are building organizations that grow only by shedding their human core, or cultivating ones resilient enough to expand by empowering. The answer will define not just the next generation of business leadership, but the social contract between organizations and the societies whose prosperity and work they reshape. Integrity will determine whether that contract is written on fragile ground or on a foundation capable of enduring, with AI.

About the Author

Hamilton Mann is an AI researcher and bestselling author of Artificial Integrity (Wiley). He lectures at INSEAD and HEC Paris, and has been inducted into the Thinkers50 Radar.

Your Workforce is Changing, And AI is Leading the Shift

By Carson Hostetter

Burnout is rising fast, especially among younger workers, and businesses are paying the price in retention and productivity. To address it, employee experience must be treated with the same weight as customer experience. AI is the enabler here: by stripping away admin-heavy tasks and unifying communication, it frees employees to focus on growth, impact, and purposeful work.

Stress and burnout have the potential to become a workplace epidemic, especially for younger employees and the next generation of workers. In fact, 83% of Gen Z frontline employees report burnout at work, with over one-third saying they'd quit because of it. For businesses, this isn't just a wellbeing crisis; it's a risk to retention, with research showing that replacing a salaried employee costs, on average, six to nine months' salary.

This is why the employee experience (EX) must be more than an individual or HR concern. Employees, particularly Gen Z and younger millennials, expect employers to support balance and growth, not just through benefits but by rethinking how work itself gets done.

As a result, EX is increasingly discussed as a business priority, but in practice, it is too often subordinated to productivity metrics. While many companies invest heavily in crafting exceptional customer experiences, they often overlook the internal experience of the very people delivering them. That imbalance leaves EX undervalued in leadership metrics and is coming at a cost, in productivity, engagement, and retention.

It’s time for things to shift. Companies that treat employee experience as a core metric, just like revenue or customer satisfaction, are better equipped to retain talent and build resilient teams.

AI presents an opportunity – a new way for employees to reshape their careers, particularly for the next generation of workers.

AI as a career path launcher

In many businesses, roles have become overly consolidated, with junior team members, or those starting their careers (often Gen Z), taking on extra admin tasks outside their remit, such as answering phones, logging information, or manually updating systems. These added duties dilute their real contributions, slow career growth, and contribute to burnout.

With tools like AI-powered receptionists, transcription and smart routing, those admin-heavy “extra hats” can finally come off. Instead of being stretched across multiple mismatched responsibilities, the next generation of workers can focus on higher-value contributions: solving complex customer issues, developing their skills, and growing strategically in the roles they were hired to do.

For SMBs especially, this challenge is magnified. With leaner teams, employees are often stretched across responsibilities that fall outside their core role. AI eases this pressure, resulting in employees with more focused roles, where they can thrive.

In this way, AI is helping to unlock people’s full potential. It rebalances the scale to ensure EX isn’t sacrificed for output but enhanced through meaningful work and accelerated growth. And when employees grow faster, they deliver stronger customer experiences, creating a reinforcing loop between EX and CX that futureproofs the business.

AI as the equaliser

At the heart of any positive employee experience is clear, consistent communication. Yet historically, creating a great employee experience required bespoke support, including different systems for frontline teams, hybrid workers, and multilingual teams.

Our data shows that employees spend 62 days per working year toggling between apps, with 56% of workers using 6 or more business apps to communicate. This not only makes internal alignment and moving at pace with hybrid or remote teams particularly tricky, but also makes scaling that infrastructure harder.

However, AI is changing that. Modern AI-powered communication platforms unify channels, from phone calls to chat and video meetings, into one ecosystem, removing friction and eliminating the “tool-hopping” that leaves employees feeling scattered and stressed. Meetings are summarised, action items flagged, and updates shared instantly, without anyone manually piecing it all together.

Crucially, AI makes this scalable. Frontline staff stay connected on the go, hybrid teams get smart notifications and instant summaries, and multilingual colleagues collaborate seamlessly with real-time translation. The same level of access, clarity, and support is available to everyone, regardless of role or location, levelling the experience across the workforce.

Borrowing from CX to design EX  

For years, customer experience has had the boardroom’s full attention. Companies map every touchpoint, anticipate needs, and use data to remove friction before it appears. Ironically, that same rigour is rarely applied to employee experience, even though the people delivering CX are employees.

The tools that personalise customer journeys, surface relevant information, automate repetitive steps, and prioritise what matters most can be turned inward. Intelligent systems can guide employees through processes, pre-empt bottlenecks, and equip them with the right context in the moment they need it.

Consider, for instance, an associate supported by AI Agent Assist. Just as CX tools surface the next best step for a customer, EX tools surface the next best action for the employee, delivering real-time product details, customer history, or inventory without the employee breaking focus. In sales, AI tools can apply the same principles, analysing conversations to provide personalised coaching so employees continuously improve with clear, data-driven feedback.

When EX is implemented with the same care as CX, it creates a reinforcing loop: employees feel confident, supported and able to focus on meaningful work rather than struggling with complicated systems. AI becomes the enabler that helps them scale their impact without adding complexity and guides them in the moment. The result is agents who are less stressed, more effective and more motivated in their roles, which in turn strengthens the organisation as a whole.

Future-proofing the workforce isn’t just about flexible working hours. It’s about redesigning work so EX and productivity reinforce each other. When EX is treated with the same discipline as CX, employees grow faster, moving from admin-heavy tasks to higher-value work that builds skills and confidence. With AI as an enabler of this transformation, organisations can redesign work for the future. Every role is more purposeful, every customer call is answered, and every employee has room to grow.

About the Author

Carson Hostetter is EVP & General Manager, AI and CX Solutions at RingCentral, where he is responsible for developing and implementing the company's AI and CX strategy. He brings over 25 years of industry experience to this newly created role, focusing on delivering tangible ROI for customers.

Trust, Transparency, and the Future of Performance Marketing

By Nick Waters

In the rush to embrace the automation of advertising campaigns with embedded AI tools, marketers have given up a lot of control, particularly when it comes to creative decision-making. Whilst offering speed and efficiency, these tools have made it harder to understand what’s really driving advertising performance. This article explores the challenge of relying on black box algorithms and discusses how leveraging complementary applied AI tools can regain visibility, reclaim influence, and unlock new avenues for strategic advantage.

Automation has transformed business operations, delivering unprecedented speed and scale. Yet with that progress comes a paradox: outcomes are rising, but understanding is receding. Executives now find themselves presenting results without the ability to explain them; a credibility gap that widens each time success cannot be attributed with certainty.

In marketing, advertising decisions that were once made by human judgment, such as where budgets are placed or which messages are prioritised, have shifted to ad tech platforms that reveal little about their inner workings. Outcomes can be observed, but not explained. Strategies can be deployed, but not interrogated. What was once grounded in measurable cause-and-effect is increasingly governed by opaque AI systems.

This shift has not diminished performance. It has diminished its transparency. And for leaders, the implications are significant: when effectiveness can no longer be explained, trust in the system begins to erode.

Speed at the cost of comprehension

The core automation systems now embedded in major marketing campaign platforms do not merely assist with execution; they subsume it. Their premise is efficiency, but their logic is inaccessible.

Businesses no longer see how their audiences are defined, which creative assets are prioritised, or which variables are driving performance. The tech systems that serve ads make these determinations in real time, but offer no audit trail, no context, and no means of understanding the reasoning.

Performance fluctuations occur without attribution. Strategic decisions are made without visibility. The result is a fundamental decoupling: intent is separated from execution, and performance is dislocated from understanding. Accountability remains, but it now operates without full visibility.

In this new paradigm, the marketer’s role is narrowed to objective-setting, budget definition, and retrospective analysis. Important functions, yes, but ones that offer limited influence in real time. Within teams, this lack of clarity introduces a growing credibility gap: results can be presented, but not explained. Stakeholders are briefed, but not convinced. The system may be working, but executives are no longer certain why.

The Changing Landscape of Creative Control in Advertising

For years, the design and creative elements of advertising campaigns were considered the final domain of human oversight. In a world of commoditised tools, distinctive messaging was thought to remain under human control. Or so it seemed.

In reality, the advertising creative has been pulled into the same automation stream as media investment. Algorithms now determine which messages appear, to whom, and in what sequence. Analysts receive performance data only in aggregate, with little insight into which specific elements are resonating or why.

The process becomes speculative. Iteration is inhibited. Differentiation, once grounded in deliberate creative strategy, becomes an article of faith. This occurs precisely at the moment when creativity matters most: with regulation and privacy constraints limiting targeting signals, the ad creative is the principal driver of relevance. Yet it has become the least interpretable component of the system.

Restoring Cause and Effect in Ad Creativity

The answer is not to dismantle automation. It is to contextualise it. Third-party AI systems designed to operate alongside platform automation offer a way to do exactly that. They provide an interpretive, strategic layer that embedded systems do not.

These tools are not intended to replace platform automation, nor to second-guess it. Their function is both creative and diagnostic: generating advertising assets at scale while also surfacing the signals that drive performance. They reintroduce cause and effect by connecting specific creative elements to audience responses, and campaign structure to observed outcomes.

With that clarity, marketers regain the ability to guide. Optimisation becomes intentional. Creative development becomes iterative. Decisions can be justified with evidence.

Crucially, this restores executives’ ability to engage strategically with automation. Not with full control — that era has passed — but with informed oversight. Intelligent participation replaces submission to system logic. 

Beyond Automation

Automation is no longer a competitive advantage. It is a common denominator. What separates high-performance organisations now is not whether they use AI systems, but how intelligently they interact with them.

Those who rely solely on ad platform reporting will struggle to evolve, or, at best, cede control of how they evolve. Those who supplement automation with strategic visibility will outpace them. Interrogating outcomes, adjusting inputs, and iterating creative; these are the new core competencies.

It is not a question of choosing between automation and control. The modern enterprise requires both. And the only way to achieve that is through augmentation, by layering insight onto execution.

In a field where the tools are largely commoditised, advantage lies not in access but in application. Not in the automation itself, but in the insight built around it.

Redefining Strategic Authority

Automation will continue to advance. AI systems will grow more powerful, more efficient, more essential, and perhaps even more removed from human control — Zuckerberg has said he wants advertisers to simply hand over their budget and let Meta do the rest. But their evolution need not come at the expense of strategic intelligence.

By reintroducing interpretation into the process, businesses can create the conditions for informed decision-making. Visibility can be restored. Performance can be attributed. Outcomes can be explained. And control can be re-established.

Trust in automation will not be rebuilt through optimism or patience. It will be rebuilt through interpretation, by constructing the systems around automation that make it comprehensible.

In this way, the leader’s role is not diminished. It is redefined. Understanding performance is no longer a secondary task. It is the prerequisite for controlling it.

To lead in this new environment, executives must reassert strategic authority and control, not by resisting automation, but by making sense of it.

About the Author

With a career spanning over two decades in the technology, media, and advertising sectors, Nick Waters has held senior leadership roles such as Group CEO at Ebiquity Plc, Executive Chairman for the UK & Ireland at Dentsu Aegis Network, and Regional EMEA CEO at Mindshare. His extensive background in international business is pivotal to Making Science's strategic development in the UK, Northern and Central European regions.

The Generative AI Gold Rush Revisited

By Jacques Bughin

The generative AI landscape has rapidly evolved, turning emerging concepts into proven drivers of business value. Jacques Bughin revisits ten key AI investment opportunities and examines how advances such as agentic orchestration, synthetic data, and verticalized LLMs are reshaping competitive strategies. He provides practical insights to help executives capture AI’s full potential.

In July 2024, we provided a framework identifying ten key generative AI investment and impact opportunities for enterprises. One year on, the landscape has evolved profoundly: some areas have matured into proven profit centers, new AI paradigms like agentic orchestration and verticalized LLMs have emerged, and AI has become anchored as a fundamental driver of competitive advantage.

This article revisits and expands on those opportunities, weaving in recent advances in agentic AI orchestration, multi-agent coordination, and robotic augmentation. It provides an integrated perspective on how C-suite leaders should act decisively to capture AI’s full value and competitive edge.

The List

1. AI embedded deeply in business processes

AI’s value translates when integrated end-to-end into workflows. Deutsche Telekom’s “Ask Magenta” chatbot, an AI-powered assistant, offloads 70% of fiber-optic customer support queries, boosting customer satisfaction scores by 15 percentage points and reducing operational costs significantly. Similarly, Walmart’s European logistics AI enhances inventory forecasting and route planning, achieving a 30% cut in stock-outs and millions in annual savings.

Management insight: Recent experience of rolling out AI shows that only cross-functional AI operating models are able to deliver on AI's promises.

2. The Rise of Agentic AI Orchestration: autonomous yet coordinated AI workforces

Agentic AI—AI systems capable of independent decision-making, planning, and goal-directed execution—is rapidly scaling in enterprises. Ampcome, a European logistics AI platform, has demonstrated how multi-agent systems can autonomously coordinate routing, dispatching, and inventory management, achieving operational cost cuts of over 40%. Their agents combine Retrieval-Augmented Generation (RAG), pulling real-time data from complex sources, with autonomous decision-making, showcasing how agentic AI elevates from a reactive tool to a proactive orchestration framework.
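
The pattern is easier to see in code. Below is a minimal sketch of a single RAG-grounded dispatch agent; `retrieve` and `llm` are assumed placeholder helpers for a vector-store lookup and a chat-model call, and nothing here reflects Ampcome's actual implementation.

```python
# Minimal agentic-RAG sketch. `retrieve` and `llm` are assumed placeholders,
# not real vendor APIs.
def retrieve(query: str, k: int = 3) -> list[str]:
    ...  # assumed: return the k documents most relevant to the query

def llm(prompt: str) -> str:
    ...  # assumed: return the model's completion for the prompt

def dispatch_agent(order: dict) -> str:
    """Make a routing decision grounded in retrieved operational data."""
    context = retrieve(f"inventory and routes for {order['region']}")
    prompt = (
        "You are a logistics dispatch agent.\n"
        f"Order: {order}\n"
        "Context:\n" + "\n".join(context or []) +
        "\nDecide which warehouse ships this order and by which route."
    )
    decision = llm(prompt)
    # Keep a human in the loop for high-stakes calls.
    if order.get("value", 0) > 10_000:
        decision = f"{decision}\n[flagged for human review]"
    return decision
```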

Wells Fargo’s corporate banking divisions implemented custom AI agents using Google’s Agentspace to unlock new efficiencies—bankers now spend significantly less time hunting for contract clauses or foreign exchange policies. Agents query hundreds of thousands of documents in seconds, enabling client-facing staff to focus on relationships and advisory. Their success underscores the necessity of deep integration with up-to-date internal data and human oversight for high-risk decisions

In manufacturing, Siemens embodies agentic orchestration's physical extension. Their "Industrial Copilots" coordinate AI agents managing product design, production planning, real-time plant analytics, and robotic task execution, forming an intelligent operational swarm. Pilot factories report up to 50% productivity gains and improved machine uptime, thanks to modular agent orchestration layers that coordinate human and robot collaboration. This architecture allows seamless integration of third-party agents, laying a foundation for scalable AI ecosystems.

A 2024 global survey involving 1,650 senior execs revealed 94% acknowledge process orchestration as crucial for AI success, highlighting that without this nervous system, agentic AI deployments often fail or stall. Governance frameworks mandating explainability and audit trails per the EU AI Act further emphasize the human oversight required in agentic ecosystems.

Management insight: Agents are here to stay and expand, but executives must prioritize investing in agent orchestration platforms, employee reskilling to manage AI interaction, red-teaming AI systems for risk, and establishing compliance protocols to unlock agentic AI's full potential.

3. Synthetic Data

European leaders, at least, face stark regulatory constraints on data use. Synthetic data has emerged as a powerful solution to accelerate AI innovation without compromising privacy. Pfizer harnesses synthetic patient datasets to accelerate drug discovery timelines by 15%, sidestepping patient-identifying information. European fintech startups achieve 30% better fraud-detection model accuracy using synthetic customer profiles while maintaining GDPR compliance.

Top e-commerce companies are now using synthetic customer data to offer personalized shopping experiences. This method is changing retail, as retailers grapple with the challenge of offering personalization while still protecting customer privacy. Synthetic data solves this by creating detailed customer profiles without invading privacy. Big retailers such as Target have reported significant sales boosts from synthetic customer data, underpinning a radical change in their marketing.
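
For a flavor of what synthetic data generation involves, here is a naive Python sketch that resamples each column's empirical distribution. The column names and values are invented; real vendors such as MOSTLY AI and Hazy model joint distributions and formal privacy guarantees rather than this per-column shortcut.

```python
# Naive synthetic-data sketch: preserves each column's marginal distribution but,
# unlike production tools, drops cross-column correlations and privacy guarantees.
import numpy as np

rng = np.random.default_rng(42)

def synthesize(real: dict[str, np.ndarray], n: int) -> dict[str, np.ndarray]:
    """Draw n synthetic rows by resampling every column independently."""
    return {col: rng.choice(vals, size=n, replace=True) for col, vals in real.items()}

# Invented example: six real customers become a thousand synthetic ones.
real_customers = {
    "age": np.array([23, 35, 41, 52, 29, 60]),
    "monthly_spend": np.array([120.0, 340.5, 89.9, 410.0, 150.25, 75.0]),
}
synthetic = synthesize(real_customers, n=1_000)
print(synthetic["age"][:5], synthetic["monthly_spend"][:5])
```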

Management insight: Companies must embed synthetic data into their data strategies, engaging domain-focused vendors like MOSTLY AI and Hazy, while collaborating across legal and data science teams to ensure scalable and compliant synthetic data pipelines.

4. Responsible AI as a governance and trust lever

The EU AI Act’s regulatory regime makes automated AI fairness, transparency, and auditability a competitive boundary in sectors such as banking and energy. European banks employing AI auditing tools reduced regulatory compliance costs by 75%, signaling that responsible AI directly impacts enterprise efficiency. Iberdrola strands regulatory workflows with AI-enhanced monitoring, both accelerating internal processing and promoting customer trust.

Management insight: Leadership mandates are shifting toward establishing dedicated AI ethics and compliance functions, integrating AI transparency by design, and proactively communicating responsible practices externally.

5. Sustainable AI

Reducing AI’s energy footprint has become urgent amid EU Green Deal commitments. Nordic cloud providers lead by cutting AI compute energy consumption by half using custom silicon and renewable power. Mercedes-Benz integrates AI for eco-driving assistance, tightly aligning vehicle AI with sustainability goals

Management insight: Top management teams must demand energy transparency, embed green compute into procurement criteria, and align AI infrastructure strategies with corporate ESG objectives.

6. Multi-Modal & Industry-Specific LLMs

Sanofi’s drug discovery harnesses unique vertical LLMs trained on clinical, chemical, and genomic data, trimming development phases by roughly 20%. Similarly, AI start-ups such as LegalFly, are  fine tuning LLMs for lawyers, boosting document analysis speed and accuracy by 35%.

Management insight: Forward-looking firms invest in domain-specific data assets and collaborate openly with academic and industry partners to continuously evolve their vertical AI capabilities.

7. MLOps—The Backbone for Reliable AI Deployment at Scale

Many organizations suffered a high rate of AI pilot failures until MLOps tools matured. Maersk's MLOps infrastructure now drives nearly 90% success on production AI deployments, a leap from under 20%. Renault slashed model retraining costs by over 60% through rigorous ML governance.[9]

Management insight: Governance that unifies IT, data science, and business teams around model monitoring, drift detection, and remediation is now a board-level imperative.
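
As an illustration of what drift detection means in practice, the sketch below compares a live feature's distribution against its training baseline with a two-sample Kolmogorov-Smirnov test. The data and the 0.01 alert threshold are synthetic, not anyone's production settings.

```python
# Illustrative drift check: compare a production feature against its training
# baseline. Data and threshold are invented for the example.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature at training time
live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # shifted production data

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.3f}): trigger review or retraining.")
else:
    print("No significant drift detected.")
```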

8. AI Cybersecurity: Defending and advancing with AI

Vodafone leverages AI to shrink cyber incident response times fourfold, cutting false alerts by 30%. Dutch financial institutions use generative AI to accelerate phishing detection and regulatory compliance, tripling incident handling speed.

Management insight: Senior leaders must fund AI-augmented cyber defense programs and conduct regular threat simulation exercises.

9. Robotic Augmentation

The boundary between digital and physical is dissolving. Siemens’ copilot factories, GE Healthcare’s autonomously calibrated devices, and Bavaria’s robotic logistics fleets show how agentic orchestration is extending into robotics—fusing multi-agent ecosystems with physical action.

Management insight: Best practices prioritize pilot sites where robotic augmentation can deliver compounded gains—productivity, uptime, and regulatory assurance.

10. Data, Talent, and Ecosystem as Strategic Assets

An AI moat will depend on orchestrating three scarce resources: domain data, partner ecosystems, and reskilled workforces. Without serious investment in these complements, no AI strategy can sustain competitive advantage.

Management insight: Build European data consortia, scale workforce reskilling, and establish venture-style partnerships to access external AI innovation at speed.

Beyond the List – The Recipe?

The best AI adoption journey for companies in 2025—beyond just focusing mechanically on the “10 opportunities”—is about strategic selectivity, speed, and bold reinvention. 

Strategic focus: own some, partner for others

Top-performing firms do not try to own all AI capabilities. Instead, they:

  • Prioritize building proprietary AI where it creates a unique competitive advantage, especially domain-specific AI and core orchestration platforms.
  • Leverage third-party technologies and platforms for commoditized AI functions (e.g., infrastructure, foundation LLMs, synthetic data vendors).
  • Adopt a hybrid build-buy-partner model to accelerate value capture and manage risk.

Speed and front-loading matter

Enterprises that acted swiftly are outpacing cautious wait-and-see approaches. Successful adopters move rapidly from pilots to scaled deployment, investing early in data infrastructure and MLOps to avoid costly retrofits.

Conclusions

The best journey is selective ownership combined with strategic partnerships, rapid—but disciplined—scaling, and organizational transformation. Companies that own only critical AI capabilities, integrate AI deeply into business processes, and reskill their workforce while front-loading governance and infrastructure investments will lead. Those who delay or attempt to do everything internally risk lagging.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

How to Win at the Game of AI Leapfrog

By Gary Waldon

AI is reinventing the kids’ game of Leapfrog into a serious, high-stakes business, with players vying for the Master of the Universe title. We have become players in this multi-dimensional, hyper-paced game, as we try to stay on top of each new release. Here are 5 old-school strategies to take back control and help you win at AI leapfrog.

I have never been a gamer, but I think the pace of AI is making me into one. We are all caught up in a high-paced game of AI Leapfrog, and here is how we can get the upper hand.

The rules of playground leapfrog are simple: you all line up, crouch down, and then the person at the back leapfrogs over the players in front until they reach the head of the line. Then the next person at the back has their turn. It can go on forever; there is no real winner, it's just good fun. However, AI Leapfrog turns this kids' game into serious, high-stakes business, where the winner could be crowned Master of the Universe. Here is how it's playing out: OpenAI releases GPT-XX. Days later, Google counters with Gemini updates. Not to be outdone, Grok jumps in with a native integration. Anthropic quietly upgrades Claude. Meta makes a move. Apple hints. Microsoft flexes. Then OpenAI leaps again.

While it may seem that, as mere mortals, we are just observers, in reality we are also players in this multi-dimensional, hyper-paced game. While the trillion-dollar stakes are the prizes for the key AI players, we also have a lot at stake. The costs became obvious the other day when a friend asked how she could create a business plan using AI. As a transformation specialist, I wanted to ensure I gave the best advice, so I asked which AI she was subscribed to. She responded, "none, I don't know which one is best". As I rattled off the various benefits of the main players, I realised my overview was probably only current for that exact point in time. Things change in days, sometimes hours, as the next AI release leapfrogs its way to the front, leaving us wishing we hadn't committed our hard-earned dollars to subscribing to something already outdated.

I pay subscriptions to three of the main players, but I am guilty of yearning for an endless budget that would allow me to subscribe to all of them. "Imagine what we could achieve," the creative and business voices inside my head argue. However, there is a larger cost to pay for trying to stay ahead in AI leapfrog. There is the obvious financial hit, but there is an even greater personal price as we become increasingly addicted to AI. We can spend endless hours trying to master the latest releases, researching YouTube clips titled "Here are 10 insane AI things you need right now", or similar. Or create business plans for ideas we don't have the time, or the money, to bring to reality because we are too busy trying to stay in the leapfrogging race.

Here are 5 old-school playground strategies you should play right now to help you reinvent and win at AI leapfrog.

1. Only leap when it’s your turn

Take the pressure off trying to stay on top of everything. You don't need to jump every time an AI player makes a move. AI Leapfrog will continue to play out at breakneck pace, with or without you. What would happen if you didn't play this round? If you miss your turn, you will get another chance to rejoin and play again when you are ready. Take control, and avoid comparisonitis by sticking to your game, not playing someone else's.

2. Stick the landing before your next jump

The pace at which AI leapfrog is played often leaves us feeling like there is no time to get our footing, causing anxiety and a fear of missing out. The voices of self-doubt and not being enough will only get louder if you don't allow yourself a win and celebrate it before you keep playing. Remember, trying every new tool isn't mastery; it's struggling to keep up. Get to know a few tools well before moving on to the next one. Mastering three is better than having tried twenty.

3. Build your personalised AI toolkit

Start building your personal AI toolkit that will help you get your job done. Maybe use GPT-XX for writing, Perplexity for research, Claude for summarising, and Firefly for design. Or keep it simple: one AI tool may do the job well enough, allowing you to compromise on costly but less critical capabilities. Choose tools that help you achieve your goals by playing your game, not the leaderboard.

4. Ask these questions to avoid AI overload

Before leaping into anything new, run it through this filter:

  • Why am I interested in the new functionality?
  • Will it help me achieve my goals?
  • Can I succeed without it?
  • Does it inspire or excite me?

If it doesn’t meet at least three, skip it until the next round.

5. Make it a game, not a grind

These are exciting times, and the AI game should excite you, not create unnecessary anxiety. If you find yourself in survival mode, then you are no longer playing a game, you are working to keep your fear under control. AI tools should expand your thinking, creativity and skills, not drain you. If managing your AI toolbox is more work than the tasks it was meant to simplify, it’s time to reassess.

The game of AI leapfrog will continue to play out with or without us. And because leapfrog is a game without an end, we can choose to step in and out as it suits us. Any changes on the AI leaderboard will become inconsequential history when we choose to rejoin the game, because we will be playing in the most up-to-date AI ecosystem. With a reinvention mindset, we will be able to quickly adapt and bring ourselves up to speed, allowing us to succeed in the latest game. So take back control and remember: change is inevitable, but reinvention should be intentional.

About the Author

Gary Waldon is the bestselling author of Mastering the Art of Reinvention ($32.95). He is a transformation specialist who works with people at all levels, from CEOs, CIOs, business leaders and professional athletes through to teachers and anyone who needs to reinvent themselves when life changes. Find out more at www.garywaldon.com

How Business Leaders Can Use AI Content to Drive Real Growth

Growth doesn’t always come from doing more, it often comes from doing things differently. Business leaders are beginning to realize that traditional content marketing is no longer enough to gain traction. The rise of AI-powered content isn’t just a shift in execution, it’s a complete change in how companies build visibility, authority, and trust in digital spaces.

For early-stage teams, this shift presents a rare opportunity. An AI SEO agent for startups, such as KIVA, reshapes how content strategies are developed, deployed, and discovered by both search engines and generative AI models.

Wellows’ KIVA is built for a new era of search. It turns hours of manual SEO into a streamlined workflow that delivers strategies, briefs, and optimized drafts in minutes. Startups gain visibility not only on Google but also on AI-driven engines like ChatGPT and Gemini. The result is content that’s faster, smarter, and built for the future.

How Well Does Your Content Perform in a Search and AI World?

Content remains one of the most powerful growth levers available to any business. But how people find and interact with that content has changed dramatically.

Ten years ago, ranking on Google was the primary goal. Today, customers are also discovering brands through AI engines like ChatGPT, Claude, and Gemini. If your content isn’t optimized to appear in both search and AI-generated responses, you’re missing a growing segment of visibility.

This means business leaders need to ask new questions:

  • Is our content structured to be referenced by AI models?
  • Are we publishing what people are actively asking online right now?
  • Can we scale content without hiring a full SEO team?

Answering these questions requires a new kind of content engine—one that’s built to serve today’s search habits.

The Role of AI SEO Agents in Driving Business Growth

Unlike traditional SEO tools that offer data dashboards and keyword lists, an AI SEO Agent acts as a content operations partner. It analyzes real-time trends, automates research, and generates publication-ready briefs—all while keeping your brand voice and growth goals in focus.

For business leaders, here’s what that looks like in practice:

  • Speed-to-market without sacrificing content quality
  • Smarter briefs and outlines, grounded in live search demand and LLM patterns
  • Unified content strategy that serves both organic search and AI-assisted discovery
  • Reduced operational load, especially for small or overstretched teams

This isn’t about replacing marketers—it’s about giving them a high-performance teammate that works 24/7, at scale, without micromanagement.

Why Startups Are Leading This Shift

Startups don’t have the luxury of bloated teams or slow marketing cycles. That’s why AI SEO agents like KIVA are built with startup workflows in mind.

KIVA adapts to how fast-moving teams work:

  • It analyzes keyword clusters based on user intent, LLM behavior, and topical authority.
  • It generates structured briefs with outlines, tone, PAA suggestions, and competitive context.
  • It drafts long-form content that’s not just SEO-optimized, but also citation-worthy by AI engines.
  • It even audits for readability, originality, and brand alignment automatically.

Instead of managing multiple SEO tools, spreadsheets, and workflows, founders and marketers can focus on what matters: publishing content that ranks, gets referenced, and drives qualified traffic.

And with visibility baked in from both search engines and LLMs, KIVA turns SEO from a long game into a short-term win generator.

From Unknown to Unmissable: The Power of LLM Visibility

One of the biggest shifts in digital growth is the influence of large language models (LLMs). More users now ask questions directly to AI assistants than ever before. If your brand isn’t part of the answers, it’s being left out of the conversation.

AI SEO Agents like KIVA solve this by understanding what LLMs prefer:

  • Structured content
  • Semantic relevance
  • Authoritative sources
  • Clear topical depth

By creating content with these patterns in mind, startups are seeing their brands surface in ChatGPT answers, be cited in Perplexity searches, and even referenced in AI-generated summaries across the web.
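
One concrete and widely used way to add that structure is schema.org JSON-LD markup, which both search engines and many LLM retrieval pipelines can parse. A minimal sketch follows; every field value is a placeholder, and this is a generic technique rather than anything specific to KIVA.

```python
# Minimal schema.org Article markup built in Python. All values are placeholders;
# embed the output in a page inside <script type="application/ld+json"> tags.
import json

article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How AI SEO Changes Startup Content",   # placeholder title
    "author": {"@type": "Person", "name": "Jane Doe"},  # placeholder author
    "datePublished": "2025-01-15",                      # placeholder date
    "about": ["AI SEO", "LLM visibility"],
}

print(json.dumps(article_markup, indent=2))
```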

This creates a second layer of visibility beyond search rankings—and gives lean teams a shot at outsized reach.

What Business Leaders Need to Know

The future of SEO isn’t just about beating competitors on Google. It’s about becoming a trusted source for both human and AI readers. That requires content that performs across multiple discovery channels—without adding more weight to your team.

Here’s how to think about it as a leader:

  • SEO is no longer a department—it’s a growth function. AI content has blurred the lines between marketing, product, and sales enablement.
  • Your brand voice must scale. An AI SEO Agent like KIVA ensures your unique tone and messaging stay consistent across every touchpoint.
  • Speed matters. The brands getting cited, ranked, and surfaced fastest are those who can go from insight to content in hours—not weeks.

Most importantly, this is no longer experimental. It’s working right now for startups across industries who have chosen to adapt early.

Final Takeaway

AI content doesn’t mean giving up control. It means designing systems that amplify your vision, automate your workflow, and increase your brand’s discoverability in the places that matter most.

For business leaders ready to future-proof their content strategy, the answer isn’t more tools—it’s smarter ones.

And in the case of startups, the smartest choice might just be bringing on an AI SEO Agent that acts like part of your team.

Employee Chatbot Use Will Shape Your Company—Here's How to Guide it

By Nick Kabrel

Employees increasingly adopt conversational chatbots at the workplace. Yet without strategic oversight, such usage might default to shallow, efficiency-driven usage, undermining the opportunities for personal and organizational growth. Therefore, leaders should intentionally shape an AI use culture that facilitates human flourishing and organizational innovation. Here’s how to achieve it.

Introduction

The rise of advanced AI, and conversational chatbots in particular, has created an urgent leadership challenge that most executives are overlooking. How your employees use chatbots isn’t just their individual choice – it has broader organizational implications. Collective chatbot usage patterns are shaping what I call an organizational “AI use culture.”

Because chatbots are cheap, fast, and accessible, many employees adopt them independently to accomplish work tasks. This means AI use culture will emerge whether you intentionally shape it or not. The key difference is that by intentionally guiding it, you can define how it unfolds. If you ignore it, the culture will likely default to shallow, efficiency-driven uses, like obtaining ready-made ideas, copy-pasting drafts, and outsourcing critical thinking. These approaches feel productive in the moment but gradually undermine the learning and creativity of employees that drive long-term organizational growth.

To avoid this prospect, organizational leaders should actively cultivate a more human-centered AI use culture that balances efficiency with learning, creativity, and collaboration.

Shaping effective AI use culture: A practical guide

Human-centered AI use at the workplace means that chatbots are used in ways that enhance key human flourishing factors and facilitate the fulfillment of professional needs, not undermine them. This doesn’t mean chatbots should be avoided or prohibited. Instead, it requires a shared understanding of how AI should be used and why those choices matter.

As a leader, you can’t afford silence when it comes to chatbots. Your employees are probably already using them, the only question is how exactly. Acknowledge this reality explicitly, and if possible, conduct interviews or cross-department surveys to reveal the general chatbot use patterns. Based on the obtained insights, you will clearly see whether the tendencies for chatbot usage are aligned with human development or are simply outsourcing strategies. For example, AI chatbots for healthcare help practices streamline patient communication, manage appointments efficiently, and reduce administrative workload. This demonstrates how thoughtful AI adoption can enhance both productivity and human connection.

Looking at the data, you can ask yourself: Are these usage patterns aligned with a need for professional growth? Does this contribute to skill development? Does this enhance the mastery, autonomy, and creativity of employees? If everyone in the company uses chatbots like this, will we still have strong human potential and innovation over the long term? If you find any red flags, it can be a sign to intervene with the following strategies.

1. Establish the “sandwich approach”

One of the most effective methods for preserving human agency while leveraging AI capabilities is what can be called the “chatbot sandwich rule.” Within this method, an employee generates “raw material” first, that is, writing a draft, developing initial ideas, creating a presentation structure, designing a pipeline, whatever their work requires (the bottom layer). Then they use a chatbot for feedback, critical evaluation, and reflection on their ideas (the middle layer). Finally, they rewrite or redesign based on that feedback (the top layer).

For example, before presenting an idea to a project leader, an employee might run through several rounds of critical revision with a chatbot, using it to identify weaknesses, explore alternatives, and strengthen their argument. This approach preserves learning and authenticity while potentially saving time and improving quality.
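
The three layers are mechanical enough to script. Here is a minimal sketch of the workflow, assuming a placeholder `ask_chatbot` helper that wraps whatever chat API your organization uses; it is not a real library call.

```python
# "Chatbot sandwich" workflow sketch. `ask_chatbot` is an assumed placeholder
# wrapper around a chat model, not a real vendor API.
def ask_chatbot(prompt: str) -> str:
    ...  # assumed: send the prompt to a chat model and return its reply

def sandwich(task: str) -> str:
    # Bottom layer: the human produces the raw material first.
    draft = input(f"Write your own draft for: {task}\n> ")

    # Middle layer: the chatbot critiques rather than creates.
    feedback = ask_chatbot(
        "Act as a critical reviewer. Identify weaknesses, gaps, and "
        f"counterarguments in this draft. Do not rewrite it.\n\n{draft}"
    )
    print("Chatbot feedback:\n", feedback)

    # Top layer: the human revises, keeping authorship and learning intact.
    return input("Rewrite your draft using the feedback above.\n> ")
```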

2. Position AI as an intellectual sparring partner

Instead of asking “Write this for me,” employees should learn to prompt chatbots with “Challenge this idea,” “What am I missing here?” or “How could this approach fail?” Chatbots excel as question-askers and can help employees get to the right answers on their own, thereby learning the pathway to a solution and solving it independently next time. Encourage employees to use chatbots as sparring partners or performance coaches that help define goals, challenge assumptions, and evaluate ideas from multiple angles. This transforms AI from a content-generation tool into a thinking enhancement tool.

3. Develop AI literacy as a core competency

Your employees need skills to evaluate AI outputs critically, understanding potential biases, limitations, and gaps. Train them to ask probing questions: What assumptions are built into this analysis? Where might this information be incomplete? How does this align with our specific organizational context?

4. Balance AI and human collaboration

Some of the best organizational thinking emerges from human discussions where different perspectives result in unexpected connections. Regular human check-ins serve multiple purposes: they reality-test AI-assisted work, ensuring it remains grounded in practical constraints. They bring contextual knowledge, emotional intelligence, and diverse experience that AI cannot replicate. And they challenge assumptions based on real-world implementation experience and insights rooted in organizational culture and politics.

5. Avoid the AI creativity trap

To complete this guide, here is something every executive should reflect on: if your employees use chatbots the same way as your competitors' employees, which basically means generic question-answer sessions, quick rewrites, and standard brainstorming, how likely is it that your company will be creative enough to innovate its way past competitors?

Research suggests that when organizations rely on similar AI strategies, their outputs might begin to converge toward similar ideas and structures. This convergence isn’t immediately obvious because each company’s outputs appear unique in isolation. But zoom out, and you’ll see troubling patterns of similarity.

Therefore, you should promote non-conventional, creative chatbot use cases. For example, train your project managers to use chatbots as a harsh critic, systematically exploring how initiatives could fail before they launch. Encourage your training departments to use AI as a “learning coach,” helping employees create personalized development pathways. Or ask your HR managers to analyse qualitative survey data with chatbots to reveal implicit information they might be overlooking.

Final thoughts

Your organization’s AI use culture is forming right now, shaped by hundreds of daily interactions between your employees and chatbots. You can either let this happen by default, risking a workforce that becomes dependent rather than empowered, or you can actively cultivate an approach that enhances human capabilities while leveraging AI’s strengths.

The choice you make will determine how adaptive, creative, and innovative your organization remains as AI continues evolving. In a world where everyone has access to the same powerful AI tools, your competitive advantage won't come from the technology itself. Rather, it will come from how thoughtfully your people use it.

About the Author

Nick Kabrel is a research associate at the University of Zurich and a Digital Society Initiative Excellence Fellow. His research focuses on organizational behavior and human-centered AI at the workplace.

Getting Ahead of EU AI Literacy Requirements – How Businesses Can Stay Compliant and Competitive

By Jonathan Armstrong

AI is transforming business, but the understanding of AI is not universal. Jonathan Armstrong, Partner at Punter Southall Law, outlines the new legal requirements under the EU AI Act and explores how companies will need to adapt, explaining that organisations that fail to train staff properly risk compliance headaches, liability, and reputational damage.

In most companies, AI is being used in business functions from HR and marketing to customer service. Figures reveal that 78% of global companies use AI, with 71% deploying GenAI in at least one function[i]. However, employees often don’t fully understand how these tools work, and this gap can no longer be ignored.

The EU AI Act, particularly Article 4, addresses this by making AI literacy a legal requirement. Since February 2025, any organisation operating in the EU, or offering AI-enabled services to EU markets, must ensure their employees, contractors, and suppliers have a sufficient understanding of the AI tools they use. It is not enough to deploy technology responsibly; organisations must demonstrate that their workforce knows what they are doing.

What’s more, AI literacy isn’t just for developers or data scientists. HR teams using AI in recruitment, marketing teams using Generative AI for campaigns, and customer service staff managing chatbots are all included. Third-party contractors and vendors fall under the same obligations.

The European Commission defines AI literacy as the skills, knowledge, and understanding required to interact with AI responsibly. This includes:

  • Knowing how AI systems function and the data they use
  • Recognising risks such as bias, hallucinations, or discrimination
  • Understanding when and how human oversight is needed
  • Being aware of legal obligations under the EU AI Act and other relevant frameworks

Why businesses can’t afford to ignore it

Some organisations may assume AI literacy does not apply to them because they are not in tech. But if you deploy AI systems, you are in scope. Even seemingly low-risk applications, like a customer service chatbot, can create legal and reputational exposure if misused.

The risks extend to Shadow AI, too. AI bans rarely work; employees often turn to personal devices, creating hidden risks. This means that universal staff training and clear policies are not just sensible, they are essential.

There is also a generational aspect. Digital natives often find the tools they need via social media or search. Without proper guidance, this can increase organisational risk. A well-planned AI literacy programme mitigates misuse and strengthens compliance.

Who do the rules apply to?

Article 4 covers any organisation using AI in the EU, even if based elsewhere, including UK businesses deploying AI tools in EU operations or offering AI-enabled services to EU customers.

Non-compliance is not limited to the IT team. Misleading chatbots or biased hiring algorithms can create liability for the whole business. Regulators are paying attention, and complaints could be lodged with national authorities or even GDPR regulators if personal data is misused. Examples already exist, from social media firms to UK dating apps that used AI-generated icebreakers.

Consequences of non-compliance

While AI literacy obligations came into effect on 2 February 2025, enforcement by national authorities begins on 3 August 2026. Each EU Member State will determine its own enforcement approach and penalties, considering factors like severity, intent, and negligence.

The European AI Office provides guidance, expertise, and coordination but does not enforce Article 4 directly. For now, the primary risks for organisations are civil action, pressure groups, and reputational damage.

As a result, businesses can’t wait until 2026: regulators are already planning audits, and enforcement and litigation risks already exist. Preparation means addressing both governance and culture.

Here are five steps for legal and compliance teams:

  1. Map your AI estate
    Audit all AI systems, whether in-house or third-party, covering decision-making, customer interactions, and content generation.
  2. Develop targeted AI literacy training
    Training must be role-specific. HR teams using AI in hiring, for instance, need to understand bias, data protection, and explainability.
  3. Review contracts and third-party relationships
    Ensure vendors meet AI literacy standards and reflect these obligations in contracts.
  4. Create internal AI policies
    Set clear rules for AI use, approval processes, and human review. Treat this with the same rigor as data protection or anti-bribery frameworks.
  5. Engage the board and embed a responsible AI culture
    AI is now a board-level issue. Leadership must set expectations around responsible innovation, transparency, and compliance.

Article 4 signals a regulatory shift: businesses must now prove that their people understand AI, not just deploy it responsibly. Just as the GDPR reshaped data handling, the EU AI Act is transforming how AI is implemented, monitored, and explained across the workforce. What was once best practice is now a legal requirement, and getting ahead of it is the smartest move any organisation can make.

About the Author

Jonathan Armstrong is a lawyer at Punter Southall Law working on compliance & technology. He is also a Professor at Fordham Law School. Jonathan is an acknowledged expert on AI and he serves on the NYSBA’s AI Task Force looking at the impact of AI on law & regulation.

Reference
[i] https://explodingtopics.com/blog/companies-using-ai

AI Industry Adoption and Its Management Implications
https://www.europeanbusinessreview.com/ai-industry-adoption-and-its-management-implications/
Thu, 28 Aug 2025 05:12:30 +0000


By Roberto García-Castro and Dr. J. Mark Munoz

AI technologies are spreading fast across industries, but adoption varies widely. This article helps managers understand where and how AI is gaining ground across the main U.S. industries in the 2020-2024 period, based on large-scale textual analysis of annual corporate 10-K reports.

Introduction

The increasing relevance of artificial intelligence (AI) and data-driven technologies across sectors has prompted growing interest in measuring their adoption across firms and industries (Cockburn, Henderson, and Stern, 2018). While anecdotal evidence points to the widespread integration of analytics, machine learning, and automation into business models, systematic evidence remains limited, particularly outside of the technology sector (Seamans and Raj, 2018).[i]

This article explores AI-related adoption patterns across US industries between 2020 and 2024, using a large-scale textual analysis of 10-K filings. By tracking the frequency of AI-related keywords in regulatory filings submitted to the Securities and Exchange Commission (SEC), the authors constructed a novel, replicable dataset that offers insight into how different industries disclose and potentially adopt AI technologies over time. The dataset consisted of annual 10-K filings from 7,883 unique US firms, downloaded using the Edgar package in R, which interfaces with the SEC’s EDGAR system. The filings span the years 2020 through 2024 and are parsed as plain text for computational efficiency. Each filing is tagged with its firm’s industry using standard SIC codes, which we mapped to broad industry categories for visualization purposes. In total, we analyzed 131,603,450 words in all US filings available in the EDGAR system. The average 10-K filing contained 23,037 words of text.

The methodology builds on prior work that uses natural language processing (NLP) techniques to study digital transformation (Chen and Srinivasan, 2020; Li, 2010). Specifically, the authors follow a keyword-based approach to proxy for firms’ engagement with AI, drawing on terms such as analytics, automation, big data, cloud, and machine learning. These keywords, drawn from both academic and consulting literature (Bughin et al., 2017), represent complementary dimensions of what is often broadly referred to as AI. The attached heat map visualizes the result of this exercise and reveals patterns in the diffusion of AI-related discourse across industries and years, highlighting both the heterogeneity in adoption and the concentration of AI-related language in certain sectors.

Increase in AI Engagement and Diversity of Industry Usage

Following existing literature on digital adoption (Chen and Srinivasan, 2020), we define a dictionary of AI-related terms covering six core categories: AI, analytics, automation, big data, cloud, and digitization. The dictionary with the keywords used is shown in Table 1 below.

Table 1: Dictionary of AI-related keywords used in the analysis

This set of keywords aimed to capture different dimensions of how firms discuss the integration of AI-related technologies into their operations. We also included a composite count (“total”) that aggregates the count from all categories.

Keyword mentions are counted in each firm’s 10-K text using regular expressions and aggregated by industry-year-category, resulting in a three-dimensional matrix. Figure 1 shows the intensity of keyword usage across time, industry, and AI dimension. Industries are arranged vertically and AI categories horizontally, with separate panels for each year from 2020 to 2024. Color intensity indicates the log-transformed frequency of keyword mentions, normalized by the number of firms in each industry-year cell. The higher the color intensity, the higher the adoption of the technology in that industry-year cell.

Figure 1: Heat map of AI-related keyword intensity by industry, year, and category
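
To make the pipeline concrete, the sketch below shows the counting and aggregation steps in Python. It is a minimal illustration under stated assumptions: the authors’ original analysis used R and the full Table 1 dictionary, whereas the keyword lists, sample filings, and column names here are invented for demonstration.

    import re
    import numpy as np
    import pandas as pd

    # Illustrative keyword dictionary; the authors' full dictionary (Table 1) is richer.
    KEYWORDS = {
        "ai": ["artificial intelligence", "machine learning"],
        "analytics": ["analytics"],
        "automation": ["automation"],
        "big_data": ["big data"],
        "cloud": ["cloud computing", "cloud-based"],
        "digitization": ["digitization", "digitalization"],
    }

    def count_mentions(text):
        """Count keyword hits per category in one 10-K filing (case-insensitive)."""
        text = text.lower()
        counts = {cat: sum(len(re.findall(r"\b" + re.escape(term) + r"\b", text))
                           for term in terms)
                  for cat, terms in KEYWORDS.items()}
        counts["total"] = sum(counts.values())  # composite count across categories
        return counts

    # One row per filing: industry, year, and raw 10-K text (toy data).
    filings = pd.DataFrame({
        "industry": ["Business Services", "Business Services", "Mining"],
        "year": [2024, 2024, 2024],
        "text": ["We invest in analytics and cloud computing.",
                 "Machine learning and automation support our analytics.",
                 "We extract ore."],
    })

    counts = filings["text"].apply(count_mentions).apply(pd.Series)
    panel = pd.concat([filings[["industry", "year"]], counts], axis=1)

    # Aggregate to industry-year cells, normalise by firm count, and log-transform,
    # mirroring the colour scale of the heat map.
    cols = list(KEYWORDS) + ["total"]
    agg = panel.groupby(["industry", "year"])[cols].sum()
    agg["firms"] = panel.groupby(["industry", "year"]).size()
    for c in cols:
        agg[c] = np.log1p(agg[c] / agg["firms"])
    print(agg)

Each row of the resulting table corresponds to one industry-year cell of the heat map, with one log-normalised intensity value per keyword category.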

The results shown in Figure 1 reveal clear and increasing engagement with AI-related terminology over time, though unevenly distributed across sectors. Between 2020 and 2024, there was a general increase in mentions across all keyword categories, indicating growing awareness or adoption of digital technologies. The total keyword count shows a marked increase in intensity in the Business Services and Engineering and Management Services sectors, consistent with these sectors’ role in IT, consulting, and optimization. The machinery, computer equipment, and electronics sectors also show an increasing use of AI language, particularly in the context of automation, AI, and cloud technologies. Financial institutions show occasional spikes, particularly in analytics and big data, reflecting their investment in data-driven risk modeling and algorithmic trading. In contrast, traditional industries such as agriculture, mining, and lumber show minimal activity, possibly reflecting slower digitization trajectories or less public discussion of AI in filings.

The limited presence of AI-related activity in the air and railroad transportation sectors is a noteworthy finding. These industries operate under stringent regulatory oversight, where safety, reliability, and compliance are of paramount importance. The integration of artificial intelligence technologies in such contexts often encounters regulatory resistance and protracted certification processes, which can hinder adoption timelines and diminish the likelihood of prominent disclosure in official filings such as 10-K reports. Historically, firms within these sectors have been laggards in the adoption of digital technologies, including AI, which has contributed to persistent challenges in customer experience, operational efficiency, and profitability.

Our results also underscore the ongoing digital transformation of the printing and publishing industry, shifting from traditional print to digital platforms. As firms adapt to e-books, online journalism, and on-demand printing, they increasingly adopt technologies such as cloud computing, automated workflows, and digital content management systems—all of which are commonly linked with AI-related terminology in 10-K filings.

Another notable pattern is the consistent adoption of AI-related language within the telecommunications sector, particularly in analytics and cloud computing. This reflects the sector’s ongoing investment in digital infrastructure, customer behavior modeling, and network optimization—areas where AI tools have become essential to managing high data volumes and enabling services such as 5G rollout, predictive maintenance, and real-time service personalization.

Different keyword categories exhibit distinct temporal and sectoral patterns. Analytics is the most broadly adopted term, with usage growing steadily across most industries. Automation and AI terms appear less frequently but show sharp uptake in select years and sectors. Big data and cloud are often mentioned together, reflecting their complementary use in scalable data architectures. Digitization appears more often in sectors undergoing internal transformation, such as retail and communications.

Management Implications

It is noteworthy that while keyword counts proxy for awareness or strategic focus, they may not directly measure actual investment or implementation. Mentions in filings could reflect forward-looking plans, reactions to competitive pressure, or boilerplate language. However, prior research has shown that textual disclosures can be predictive of future firm behavior and market reactions (Li, 2008; Chen and Srinivasan, 2020), suggesting that these proxies offer meaningful insights into industry-level AI engagement.

This article presents a method for tracking AI adoption across industries using textual analysis of regulatory filings. By systematically counting AI-related terms in 10-K documents for nearly 8,000 US firms, we visualize trends in adoption over time and across industries. Our findings indicate that AI adoption is expanding rapidly across industries, led by a few dynamic service sectors that are at the forefront of this transformation. Analytics and cloud computing remain the most widely adopted categories, while more technical terms like “machine learning” and “deep learning” are still relatively rare in non-tech sectors. The approach offers a framework for further empirical work on digital transformation and sectoral readiness.

Based on the findings, contemporary managers will benefit from the following strategic approaches:

  • Prepare for greater AI engagement. During the course of the study, AI engagement has increased. This trend will likely continue. Managers will be well served by planning their AI agenda in advance and pursuing strategic workforce training.
  • Plan for diverse utilization of technological tools. The findings suggest that companies have used a diverse range of AI and other technological tools. Managers need to understand that technological progress does not necessarily follow just one path but rather multiple paths at different speeds. Both goal setting and resource allocation need to align with this reality.
  • Design the right digital transformation framework. It is evident from the research that digital technologies are reshaping the playing field of industries. The breadth and scope of transformation varies from industry to industry. Managers need to carefully assess their organization alongside the industry they are in and plan for the optimal and most impactful digital transformation.

The adoption of AI across US industries is presently underway. As such, organizational transformations are unfolding as new technologies gain popularity, prominence and usage. Contemporary managers need to strengthen their strategic planning skills to manage emerging threats as well as exciting opportunities ahead.

About the Authors

Roberto García-Castro is a Professor of Managerial Decision Sciences at IESE Business School. His research covers various areas in decision-making in organizations and has been published in journals like Strategic Management Journal and Managerial and Decision Economics. Prior to academia, he worked for Arthur Andersen as an auditor and consultant.

Dr. J. Mark Munoz is a tenured Full Professor of Management at Millikin University and a former Visiting Fellow at the Kennedy School of Government at Harvard University. Aside from top-tier journal publications, he has authored/edited/co-edited more than 20 books, such as Global Business Intelligence and The AI Leader.

References
1. Bughin, J., Hazan, E., Ramaswamy, S. et al. (2017). Artificial Intelligence: The Next Digital Frontier? McKinsey Global Institute.
2. Chen, W. and Srinivasan, S. (2020). Going Digital: Implications for Firm Value and Performance. Harvard Business School Working Paper No. 19-117.
3. Cockburn, I. M., Henderson, R. and Stern, S. (2018). The Impact of Artificial Intelligence on Innovation. NBER Working Paper No. 24449.
4. Li, F. (2008). Annual Report Readability, Current Earnings, and Earnings Persistence. Journal of Accounting and Economics, 45(2–3), pp. 221–247.
5. Li, F. (2010). The Information Content of Forward-Looking Statements in Corporate Filings—A Naive Bayesian Machine Learning Approach. Journal of Accounting Research, 48(5), pp. 1049–1102.
[i] Throughout this article, we use the term “AI” in a general sense to refer to all the various technologies covered in our study: AI, analytics, automation, big data, cloud computing, digitization, and machine learning. Then, in our analysis, we separate “AI” from all other technologies. In this latter case, AI refers specifically to artificial intelligence technologies as defined by our dictionary of keywords.

Gartner: The Leadership Blind Spot: Why Executive AI Literacy Will Shape Business Outcomes in 2025
https://www.europeanbusinessreview.com/gartner-the-leadership-blind-spot-why-executive-ai-literacy-will-shape-business-outcomes-in-2025/
Sat, 23 Aug 2025 14:02:12 +0000


By Carlie Idoine

AI is rapidly shifting from a technical enabler to a core driver of business decisions. This piece explores why executive AI literacy, not just infrastructure or compliance, will determine competitive advantage. Leaders fluent in AI’s risks and potential can govern responsibly, align strategy effectively, and sustain long-term performance.

As artificial intelligence (AI) continues to redefine business strategy, operations, and performance, one increasingly critical capability remains underestimated at the highest levels of leadership: the ability to understand and govern AI itself.

Much of today’s executive focus rests on funding infrastructure, choosing tools, and meeting regulatory demands. But as AI systems begin to influence, if not outright automate, strategic business decisions, success will depend less on the technologies deployed, and more on the fluency of the people directing them.

AI is no longer a back-office enabler. It is becoming a co-pilot for decision-making across pricing, product strategy, supply chain optimisation, customer engagement, and risk management. By 2027, it’s expected that 50 percent of business decisions will be augmented or automated by AI agents. These agents operate at speed and scale, but not always with contextual awareness or human judgment. That’s where executive oversight becomes essential.

From delegation to accountability

For years, AI lived squarely in the realm of data scientists, technologists, and innovation teams. Business leaders focused on outcomes, while technical teams handled model development, deployment, and management. But this model is no longer sufficient.

AI is now shaping core decisions that directly impact revenue, compliance, and reputation. Leaders must shift from passive sponsorship to active engagement. That means asking not just what an AI system does, but how it reaches its conclusions, what data it draws from, and where risks may arise.

Leaders who lack this fluency may inadvertently approve initiatives they don’t fully understand, overestimate what AI can deliver, or overlook critical gaps in governance. In a world where AI’s reach is expanding rapidly, that blind spot is more than inefficient, it’s a business risk.

AI literacy is strategic literacy

Executive-level AI literacy does not require coding skills or technical expertise. But it does require strategic intelligence: the ability to interrogate assumptions, evaluate risk, and align AI deployments with long-term business priorities.

This literacy gives leaders the tools to spot flawed logic, overhyped claims, or narrow use cases that don’t scale. It empowers better decisions about where to invest, how to govern, and when to pull back. It also enhances the quality of dialogue between business and technical teams, ensuring AI isn’t implemented for technology’s sake, but as a clear driver of value.

In organisations where this understanding is in place, the benefits are tangible. Gartner predicts that by 2027, organisations that emphasise AI literacy for executives will achieve 20% higher financial performance compared with those that do not.

From briefings to immersive learning

To build true fluency, executives need more than status reports or vendor demos. They need to engage with AI systems directly, piloting use cases, testing prototypes, and seeing the real-world implications of AI in action.

For example, a supply chain leader might test an AI agent that dynamically reallocates stock based on predictive demand. A marketing executive might use synthetic data to model campaign outcomes without relying on sensitive customer data. These experiences sharpen judgment, surface limitations, and build confidence in decision-making.

Synthetic data is one area that illustrates this well. It offers privacy-preserving innovation and diverse training data but also introduces new risks if not properly managed. Without understanding how synthetic datasets are generated and validated, leaders may find themselves relying on AI models that look accurate but fail in critical ways. Literacy enables the right questions to be asked, before risks materialise.

The governance mandate

As AI becomes deeply embedded in operational workflows, its influence on strategic decision-making is only growing. Boards and C-suites will soon be expected to govern AI with the same diligence applied to financial reporting or cybersecurity.

This requires not just high-level awareness but informed oversight. Within a few years, AI-generated insights will increasingly be used to challenge executive decisions. Leaders who understand how these systems operate, how data is structured, how outputs are generated, and how bias or failure might creep in, will be better positioned to lead with credibility and accountability.

Governance must evolve to include AI as a strategic priority, not just a technical or compliance issue. Executive literacy is the enabler.

A global imperative

While regulatory frameworks differ across regions, from the EU’s AI Act to emerging standards in Asia and North America, the underlying requirement is consistent: businesses must demonstrate not only responsible AI deployment but also competent leadership.

This is not a regional conversation. It is a global shift in expectations for how leaders engage with technology. The companies that lead will be those whose executives are equipped to balance innovation with governance, speed with safety, and experimentation with accountability.

Leading the next phase

The pace of AI adoption will only accelerate. But no matter how advanced the models become, they are only as effective, and safe, as the leadership that guides them.

Executive AI literacy is now essential for building resilient, forward-thinking and high-performing organisations. Those who invest in these capabilities today will be best positioned to harness AI for sustained competitive advantage.

Gartner analysts will further explore these insights at the IT Symposium/Xpo in Barcelona, from 10-13 November 2025.

About the Author

Carlie Idoine is a VP Analyst at Gartner, specialising in Data, Analytics, and AI. She advises clients on analytics and AI strategy, programme development, organisational design, and software portfolio management. Her work focuses on helping organisations apply advanced analytics and AI to complex business problems and navigate the convergence of data, analytics, and software engineering roles.

The Rise of the Chief AI Officer: Turning Ambition into Value
https://www.europeanbusinessreview.com/the-rise-of-the-chief-ai-officer-turning-ambition-into-value/
Thu, 21 Aug 2025 00:44:41 +0000


By Dave McCann

Long gone are the days when we referred to Artificial Intelligence (AI) as an emerging opportunity — simply put, it’s now an enterprise imperative. In boardrooms across Europe and beyond, the question for CEOs isn’t whether AI will deliver value, but how – and how soon. Eight in ten leaders are targeting AI-driven cost savings and growth within the next 18 months.

Yet beneath this ambition lies a complex reality. Despite heavy investment, nearly 60% of companies remain stuck in pilot mode, with only 25% of AI projects delivering expected returns. As the technology matures, so too does the pressure to convert investment into tangible impact.

That’s where the Chief AI Officer (CAIO) enters the picture. Once a niche – or perhaps even unheard of – title, the CAIO is now finding a place around the boardroom table, as organisations step up their efforts to capitalise on AI. And with good reason. New research from the IBM Institute for Business Value and the Dubai Future Foundation finds that organisations with a CAIO report up to a 10% increase in return on investment (ROI) from AI.

Globally, just over a quarter (26%) of companies have appointed a CAIO, with adoption slightly lower across most European markets (22–26%). But there are signs of acceleration. In the UAE, where AI leadership has government-level backing, 33% of organisations have already made the appointment — with policies in place to embed CAIOs in every ministry.

So, what does it take to succeed in this emerging role — and what lessons can European organisations take as they move from AI ambition to enterprise-scale execution?

Technical Depth Paired with Strategic Authority

The most effective CAIOs are not just technologists. They are translators and integrators — individuals who can bridge the divide between deep technical expertise and strategic business leadership.

While many have backgrounds in data science (73%) or technology (54%), nearly as many bring experience in business strategy (57%). And that blend matters. Deploying AI across the enterprise is not a linear technology project; it’s a complex, cross-functional transformation — one that demands influence, orchestration, and foresight.

This combination of skills is reflected in the role’s rising prominence within executive leadership. Over half of CAIOs report directly to the CEO or board. Perhaps most tellingly, 76% say they are consulted regularly by other C-suite leaders on AI decisions. In other words, the CAIO is no longer a specialist — they are becoming a critical pillar of enterprise strategy.

Collaboration and Control – A Blueprint for Scale

AI doesn’t sit in a silo — and neither should the CAIO. Collaboration across the C-suite is a defining feature of successful AI leadership. Three in four CAIOs say they work closely with peers across finance, operations, marketing and risk to align AI programmes with broader business goals.

This matters: too often, AI initiatives are undermined by fragmented ownership and organisational silos. Legacy infrastructure, disconnected data systems and unclear accountability can all stall progress.

The structure of AI governance is also key. CAIOs leading centralised or hub-and-spoke models are twice as likely to scale pilots into full production — and report 36% higher returns on AI investments. In contrast, decentralised models dilute impact: nearly 40% of projects get stuck in pilot mode, and just one in 10 reach enterprise scale.

Distributed experimentation can still succeed — but only when guided by a central AI strategy, shared KPIs, and strong cross-functional alignment. Without this, even the most promising use cases risk becoming siloed and slow-moving.

The Measurement Gap

Even with the right leadership and structure, success hinges on how progress is measured. Despite 72% of CAIOs acknowledging that a lack of measurement could cause their organisation to fall behind, 68% still launch AI initiatives without knowing how success will be evaluated.

This gap between ambition and accountability stalls momentum. Without shared metrics and central dashboards visible to decision-makers, it’s difficult to assess what’s working — or where to scale. Embedding measurable outcomes from the start is essential.

The highest-performing AI teams pair technical experts — data scientists, machine learning (ML) engineers — with business strategists to ensure every project is grounded in tangible impact. Measurement isn’t a reporting task. It’s a strategic capability.

AI Success Starts with Leadership

The future of enterprise AI will not be defined by algorithms alone. It will be shaped by leaders who can connect the dots between data, people, operations, and outcomes. CAIOs can — and should — be at the centre of that transformation. But success demands more than technical fluency. It requires strategic influence, organisational depth, and a clear-eyed focus on outcomes.

To realise AI’s full potential, organisations must embed AI at the heart of their business strategy — not at the periphery. That means aligning programmes with C-suite priorities, ensuring shared accountability across functions, and building multidisciplinary teams that enhance, rather than compete with, existing capabilities. Above all, it means moving beyond experimentation and into execution, with clearly defined metrics and a commitment to scaling what works. Key steps on that journey include:

  • Get clarity on your role and responsibilities.
  • Create—and measure—clearly defined KPIs.
  • Understand how each of your C-suite colleagues can support you, and engage them.
  • Scale the impact of your team. Blend business, industry, and technical skills to strike the right mix for your organisation.
  • Lead the charge to centralise the AI operating model.
  • Develop a roadmap for AI-enabled digital transformation. Identify areas where AI can drive business value, assess the organization’s AI readiness, and develop a plan for AI adoption and deployment.

The bottom line? AI’s promise is real — but value at scale will only be unlocked through leadership that is as ambitious as the technology itself.

About the Author

Dave McCann is the Managing Partner for IBM Consulting in EMEA. He and his team help EMEA organisations use innovative technology to co-create the future of their businesses in the era of AI.

Avoiding the Janus Trap: Why Europe Must Commit to AI-powered Reinvention
https://www.europeanbusinessreview.com/avoiding-the-janus-trap-why-europe-must-commit-to-ai-powered-reinvention/
Sun, 27 Jul 2025 01:19:46 +0000


By Mauro Macchi, Matt Prebble, Dominic King and Laura Ann Wright

Closing the competitiveness gap is a central pillar of the European economic reform agenda. AI has the potential to boost productivity—but too few companies are making bold, transformative investments. Regional leaders need to move faster: business capabilities—in areas such as data, cloud and talent—and the regional AI ecosystem must be strengthened to capture the opportunity.

If the European economic reform agenda had a patron, it would be Janus. Like the two-faced Roman god of transitions, Europe appears perennially stuck between the future and the past: forcefully acknowledging the need for change—but often struggling to act on it. The advent of artificial intelligence (AI) presents a crucial opportunity to revive flagging regional productivity. Businesses and policymakers must move more quickly if action is to match rhetoric.

The latest attempt to kickstart European competitiveness came last year from the former head of the European Central Bank, Mario Draghi. His report highlighted AI as a potential solution to Europe’s productivity malaise, citing the technology’s transformative power over industries from automotive and energy to life sciences. The European Commission picked up the baton, releasing the EU AI Continent Action Plan—a roadmap for Europe to become a global leader in the responsible and productive deployment of the technology.

AI is not a silver bullet, but when combined with human ingenuity, it could help mitigate key challenges such as high energy prices. The promise is a new era of productivity growth and greater resilience—both essential if Europe is to achieve ambitious economic, social and environmental outcomes. World-leading companies across the continent are already investing in AI to boost competitiveness. For example, Anders Romare, CIO & Senior Vice President, Digital, Data & IT at Novo Nordisk, told us: “The productivity gains we are beginning to see—for example in drug discovery—are simply irresistible”.

Slow on the uptake

However, our recent paper—Europe’s AI Reckoning: Reinventing Industries for a New Era—reveals many business leaders continue to take an experimental approach. Just 8% of the ‘strategic bets’ we studied—major, transformational and sector-specific generative AI investments embedded into the core of the company’s value chain—are being scaled in Europe. And more than half (56%) of the 800 large European organisations we surveyed have yet to scale even one.

The internal barriers to scaling that respondents identified range from data siloes and the challenge of assembling multi-disciplinary teams to security risks. These are compounded by perennial European challenges, including regulatory complexity, a lack of risk capital and the fragmented single market.

That said, we found pockets of strength in certain industries. In automotive, for example, 70% of companies have scaled at least one strategic bet (with most focused on enhancing product design and customer engagement). Aerospace and defence follows (63%), with companies focusing mostly on improving simulations—such as crash tests and aerodynamics—and providing in-use data analysis. As Graham Smith, head of AI, data science and innovation at NatWest explains, the full potential of AI will only be realised “when you’re completely rethinking the way your business operates.”

Size matters

Our analysis also revealed that size matters when it comes to AI. Nearly half (48%) of European companies with US$10 billion+ in annual revenues have scaled at least one strategic bet, on par with their US counterparts but well ahead of smaller peers (US$1 billion–US$9.9 billion) in the region (31%). This is a concern given that the US is home to a third more large companies than Europe.

So, what are the largest companies doing differently? We built an index to gauge the development and deployment of AI capabilities—from talent and data governance to the use of foundation models—that make scaling strategic bets possible.  These capabilities help organisations achieve value from AI investments by unlocking new ways of working that go beyond simply layering technology on top of current processes—rather, equipping organisations with the capabilities to reinvent for efficiency, democratise knowledge and enhance collaboration between humans and autonomous agents. The largest European companies score an average of 54 (out of 100), again equal with US peers; those in the revenue bracket right below score just 39.

These capability gaps weigh on reinvention potential; for example, larger European businesses are 3x as likely to have integrated autonomous AI agents into various functions. There are clear frontrunners in automotive (scoring 57 out of 100) and aerospace and defence (52), while significant opportunities to build the capabilities necessary to scale AI exist in sectors such as industrial—which contributes more than a quarter of European output—and those providing critical infrastructure, such as energy, telecoms and utilities.

Sovereignty rules

The onus clearly falls on companies to invest in AI—to open the door to new ways of working in which processes are reinvented for efficiency, knowledge is democratised and collaboration between autonomous agents and people is seamless. The end-goal is reaching a ‘cognitive digital brain’—a central nervous system for enterprise decision-making and continuous learning that organises, processes and acts on data about businesses and the wider world in real-time.

It’s a vision that will not only require business leaders to upskill their people at scale—but also to recognise that the flipside of continuous technological transformation is greater exposure to external threats such as unauthorized access and cyberattacks. Building a secure digital core to reduce vulnerabilities, redundancy and technical debt is therefore critical.

Another requirement is how to reimagine Europe’s AI ecosystem as geopolitical risks grow. We’ve seen a clear mindset shift since the recent imposition of US tariffs, as companies look to balance critical technology dependencies in terms of control, cost and innovation. To build resilience, European companies should adopt a three-layered decoupling approach that factors in data workload sensitivities:

  • Architectural: Use sovereign/private cloud for critical workloads to regain control over data.
  • Legal: Operate with European and global trusted entities to reduce exposure to extraterritorial laws.
  • Supply chain: Maximise open-source solutions to reduce dependence on proprietary software.

That said, individual company actions will only take Europe so far. Leaders across the public and private sectors need to jumpstart the development of a robust, competitive AI ecosystem that avoids duplications and creates more synergies across major countries. This should focus on the following priorities:

  • Help smaller companies level up on AI: Smaller organisations need access to more compute capacity and high-quality data, as well as the funding advice, networking and training to boost adoption of sector-specific AI solutions.
  • Nurture a sovereign European AI ecosystem: Foster work with European cloud providers and AI producers, while enabling access to innovation from trusted global players as they develop sovereign solutions and local legal entities.
  • Develop a coordinated industrial strategy: A federated AI ecosystem—underpinning a competitive and values-driven AI economy—should be grounded in interoperability, cross-industry and cross-border collaboration and regulatory alignment.

How Europe rises to the twin challenges of shifting geopolitics and maximising AI potential will shape its growth trajectory in the coming years. Larger companies must embrace AI faster—and smaller peers must follow their lead. It’s time to turn principles into action and create a resilient, inclusive, innovative AI ecosystem.

The current market turmoil presents a fresh, immediate opportunity to accelerate Europe’s economic reform agenda. Janus—also the god of beginnings—would doubtless approve.

About the Authors

Mauro Macchi is the chief executive officer for Europe, Middle East and Africa (EMEA) at Accenture, the chair of Accenture in Italy and a member of Accenture’s Global Management Committee. He has more than 30 years of experience at Accenture and has held various executive positions, including the Financial Services Europe Lead and the Strategy & Consulting Lead for Europe.

Matt Prebble is the senior managing director for data and AI across EMEA. He works with C-suite executives and boards of the world’s leading organisations, helping them accelerate their data and AI reinvention to enhance competitiveness, grow profitability and deliver sustainable value.

Dominic King is the research lead for EMEA. He is currently focused on how AI and other technologies can drive competitiveness across Europe. Previous work includes building the commercial case for diversity and sustainability with organisations such as the World Economic Forum and International Finance Corporation.

Laura Ann Wright is the public service research lead for EMEA. With a focus on data and AI, technology and digital transformation, she brings deep expertise in emerging technologies and strategic policy to deliver actionable insights that drive innovation and resilience in government and industry.

AI is Making Us Stupider, And That’s Exactly Why We Need to Talk
https://www.europeanbusinessreview.com/ai-is-making-us-stupider-and-thats-exactly-why-we-need-to-talk/
Sun, 27 Jul 2025 00:40:36 +0000


By Craig Wilson

AI is changing how developers work. It helps teams move faster, but something important is being lost. Developers are skipping the hard parts where real learning happens. This article looks at how AI affects growth, why it matters, and what companies should do to keep their teams strong and skilled. 

Everyone keeps asking: Will AI replace developers? Wrong question.

The real issue is that AI isn’t taking jobs — it’s quietly removing the steps developers used to go through to get good at them. And the effects might not show up in sprints or velocity charts, but they’ll hit hard when no one knows how the code really works anymore.

In hands-on experiments across engineering teams, the pattern is becoming clear: productivity is up, but understanding is down. Developers skip past the uncomfortable middle — the part where skills are forged. And that middle used to be essential for turning good coders into great engineers.

The Disappearing Middle: What Developers Are No Longer Learning

Today’s junior developers can ship code in their first month. With copilots, chat-based debugging tools, and UI generators, they’re immediately productive. But in five years, when they’re “senior” on paper, will they be able to make changes to that code — or even understand how it works?

In one of our experiments, a frontend prototype went from Figma to working code in under 48 hours using AI tools. The speed was impressive. But when the team tried to extend or debug that code, it became clear: the developer who built it didn’t fully grasp the structure. Key decisions had been made by the model, not the human. The developer couldn’t retrace the logic or justify the architecture — because they hadn’t built it themselves.

Another team used GPT to generate comprehensive unit tests. When asked about certain test cases and why they existed, the team couldn’t say. The code passed, but no one knew exactly what was being verified or why those conditions mattered.

The middle of the learning curve — where devs used to build intuition through trial and error — is being automated out. We call this growth compression. And the risk is clear: a generation of engineers who can prompt, edit, and deploy, but not truly understand.

This also creates an internal tension. Senior engineers are increasingly pulled into review roles — not because juniors can’t code, but because no one fully understands what the AI wrote. Meanwhile, juniors are less likely to ask for help, because AI answers faster and without critique. Mentorship is quietly eroding, replaced by chatbot suggestions that don’t teach principles — just patterns.

And when seniors are reduced to validators of AI-generated content, rather than mentors or architects, they risk burnout and disengagement. The joy of building is replaced with the burden of quality control.

The Other Feedback Loop: When AI Starts Learning from Itself

It’s not just the developers who are stagnating. The AI is too.

AI models are trained on public data — repos, blogs, forums. Increasingly, that data is being generated by AI itself. Developers post AI-written code. Blogs get filled with LLM-generated tips. Models scrape that content. And the cycle repeats.

What happens when AI trains on AI? You get regression. Outputs become more uniform. Creativity declines. Errors compound. And the system starts reinforcing mediocrity instead of progress.

We’ve seen early signs of this already — hallucinated citations, recycled phrases, code suggestions that “look right” but lack context. And as the proportion of human-authored content declines, models will have fewer reference points grounded in expert understanding. They’ll be trained on approximations of approximations.

This doesn’t just affect quality — it affects trust. When no one understands how something works, but everyone uses it anyway, you get fragility at scale. And once AI becomes part of your delivery pipeline, the consequences compound quickly.

AI Is Changing How We Think About Work

One of the unintended consequences of AI integration is cultural. Engineering has always been a craft: a mix of logic, trade-offs, and experience. But when AI intermediates that process, some of the essential feedback loops vanish.

We’ve seen how developers begin to approach work differently once AI becomes embedded in the workflow. Instead of breaking down a problem, many jump straight to the prompt. Instead of discussing trade-offs, they rely on outputs. The result is faster delivery, but often shallower thinking.

This shift affects team dynamics, hiring expectations, onboarding, and long-term product maintainability. It changes how developers perceive their own value and how organizations measure it.

We believe the future of engineering won’t be defined by how well you prompt an AI assistant, but by how well you guide, correct, and build on top of what it produces.

Why Most AI Projects Will Stall and What to Do Instead

Gartner predicts that 60% of AI projects will be canceled within the next year. Not because AI isn’t powerful — but because too many companies chase results without building the foundation to support them.

The real challenge isn’t the model. It’s the environment around it.

AI can’t fix a broken architecture. It can’t create clarity where there’s no structure. And it can’t generate insight from poor data. That’s why the next wave of real AI value won’t come from more impressive outputs — it’ll come from more robust inputs.

We’re focused on strengthening the data foundation that makes AI genuinely useful — systems that are reliable, well-structured, and high in quality. No matter how advanced the model is, without clean, contextual data and an environment it can operate within, it simply won’t deliver value.

The companies that win with AI won’t be those with the flashiest demos — they’ll be the ones with the cleanest data, the clearest processes, and the most thoughtful engineering cultures.

Where We Go From Here

We don’t need to slow down AI adoption. We need to make it smarter and more human-centered.

We believe in:

  • Giving developers the tools they need, without replacing the growth they deserve.
  • Preserving mentorship, reasoning, and decision-making alongside automation.
  • Designing engineering environments where AI augments — but never erodes — long-term capability.

We can build faster. But we also need to build better. That means protecting the steps, the questions, and the challenges that make developers great at what they do.

Otherwise, we’re just automating ourselves into irrelevance.

Let’s make sure the future of engineering doesn’t forget how to think.

About the Author

Craig Wilson is the Co-Founder and Co-CEO of Opinov8, leading commercial strategy and business expansion across the US and EMEA. With over 25 years in the tech industry, he specializes in GTM strategy, M&A, and building global sales teams. His expertise includes agile development, nearshore/offshore outsourcing, and scaling technology businesses.

Algorithmic Attachment: How AI-Based Employee Monitoring Shapes Workplace Trust, Autonomy, and Well-being
https://www.europeanbusinessreview.com/algorithmic-attachment-how-ai-based-employee-monitoring-shapes-workplace-trust-autonomy-and-well-being/
Wed, 23 Jul 2025 07:30:27 +0000

By Madeleine Roantree and Adrian Furnham

Although AI-driven monitoring and performance tools are designed to optimise productivity, they may have an adverse effect on how employees perceive fairness, support, and control in the workplace. Attachment theory shines an illuminating beam on how individuals may react to algorithmic oversight.

Attachment Theory in the Workplace

Attachment is defined as the human propensity to seek out and develop close affectional bonds with others. Attachment theory, originally developed to explain interpersonal relationships, offers a valuable lens for understanding these dynamics. This framework posits that individuals’ attachment styles—secure, anxious, or avoidant—shape how they respond to perceived support or threat in relationships, including those mediated by technology.

This article examines how AI-based monitoring systems interact with employees’ attachment styles, influencing their sense of trust, autonomy, and well-being. It proposes a psychologically informed approach to designing AI systems and provides actionable recommendations for fostering workplace environments that prioritise human needs alongside organisational goals.

Attachment theory suggests that individuals develop internal models of relationships based on early experiences with caregivers, which influence their interactions in adulthood. That is, early childhood experiences profoundly shape (all) adult relationships.  These attachment styles manifest in the workplace as follows:

  • Secure attachment: Individuals are comfortable with interdependence, view support systems positively, and adapt well to feedback.
  • Anxious attachment: Individuals seek reassurance, fear rejection, and may perceive monitoring as a sign of distrust or criticism.
  • Avoidant attachment: Individuals prioritise independence, may withdraw under scrutiny, and perceive monitoring as intrusive.

Research indicates that attachment styles significantly predict workplace outcomes, including engagement, collaboration, and stress resilience. When applied to AI-based monitoring, attachment theory suggests that employees’ reactions to algorithmic oversight depend on how these systems align with their relational expectations.

Research has demonstrated significant relationships between attachment styles and job performance, job satisfaction, burnout, feedback-seeking and acceptance, as well as organisational commitment. Attachment styles can be utilised to inform a host of organisational attitudes, behaviours, and other outcomes.

Over a decade ago, Harms (2011) showed that attachment theory could also explain how the characteristics of leaders foster positive and negative outcomes in their subordinates. He suggested that organisations could utilise attachment dimensions in their selection systems for supervisors. They can also be used to inform job design, ensuring closer contact with supervisors for anxiously attached individuals who may experience a sense of loss when physically separated from their leaders. Moreover, performance reviews could be conducted and delivered in a way that is mindful of the fact that some followers may be particularly sensitive to feedback indicating that their leader has a negative perception of them. That is, reviews should suggest ways to strengthen the relationship rather than simply point out past behaviours seen as disruptive or off-putting.

AI Monitoring and Psychological Responses

Securely attached individuals may view AI tools as reliable support systems, whilst those with anxious or avoidant tendencies might interpret such monitoring as intrusive or distrustful. These responses can profoundly affect organisational outcomes—from engagement and innovation to burnout and turnover. We suggest a novel psychological framework for understanding algorithmic management, proposing that workplace technologies must be designed with human attachment needs in mind. By integrating insights from occupational psychology and behavioural science, this article sets out principles for developing transparent, autonomy-supportive AI systems that foster trust rather than fear. We conclude with practical recommendations for leaders, HR professionals, and policymakers across Europe: build algorithmic systems not just for efficiency, but for psychological safety—thereby supporting both individual well-being and long-term organisational resilience.

The integration of artificial intelligence into workplace management has transformed how organisations monitor and evaluate employee performance. From tracking keystrokes to analysing communication patterns, AI-based tools promise enhanced efficiency and data-driven decision-making. However, their psychological implications remain underexplored. As organisations across Europe adopt these technologies, they must consider how AI-mediated oversight influences trust, autonomy, and emotional well-being—core components of a healthy workplace.

AI-driven tools, such as performance analytics and real-time productivity trackers, are often designed to optimise efficiency. However, their implementation can trigger varied psychological responses. Securely attached employees may perceive these tools as neutral or supportive, enhancing their sense of structure and fairness (Neustadt et al., 2011). Conversely, anxiously attached employees may interpret constant monitoring as evidence of mistrust, heightening stress and reducing engagement. Avoidant employees may disengage entirely, viewing AI oversight as an invasion of autonomy.

Studies show that perceived surveillance can reduce intrinsic motivation and increase burnout.

These reactions have tangible consequences. Studies show that perceived surveillance can reduce intrinsic motivation and increase burnout, particularly among employees with insecure attachment styles (Warnock et al., 2024). Furthermore, excessive monitoring may undermine psychological safety, the shared belief that a workplace is safe for interpersonal risk-taking, leading to lower innovation and higher turnover.

A Psychological Framework for Algorithmic Management

To mitigate these risks, organisations must design AI systems that align with human attachment needs. This requires a framework rooted in three principles:

  1. Transparency: Employees should understand how AI tools collect and use data. Clear communication reduces perceptions of threat, particularly for anxiously attached individuals.
  2. Autonomy-Support: AI systems should empower rather than control. For example, offering employees access to their own performance data fosters a sense of agency, appealing to both secure and avoidant individuals.
  3. Psychological Safety: AI tools should be integrated into a broader culture of trust, where employees feel valued beyond their metrics. This is critical for fostering resilience across all attachment styles.

The European Commission’s AI Act (2024) provides a regulatory foundation for such principles, emphasising transparency and accountability in workplace AI. However, organisations must go beyond compliance to address the emotional and relational dimensions of technology use.

Practical Recommendations

To create AI systems that support trust, autonomy, and well-being, leaders, HR professionals, and policymakers should consider the following:

  1. Co-design with Employees: Involve employees in the development and implementation of AI tools to ensure they meet diverse psychological needs. This collaborative approach can enhance trust and reduce resistance (EU Agency for Fundamental Rights, 2023).
  2. Tailored Feedback Systems: Use AI to deliver personalised, constructive feedback rather than punitive metrics. For example, dashboards that highlight strengths alongside areas for growth can resonate with securely attached employees whilst reassuring those with anxious tendencies.
  3. Training for Managers: Equip leaders to mediate between AI systems and employees, fostering open dialogue about monitoring practices. This can mitigate avoidant employees’ withdrawal and support a culture of psychological safety.
  4. Ethical AI Guidelines: Policymakers should expand the AI Act’s principles to include psychological impact assessments, ensuring that workplace technologies are evaluated for their effects on trust and well-being.

Conclusion

As AI reshapes the workplace, its psychological implications cannot be ignored. By applying attachment theory, organisations can better understand how employees respond to algorithmic oversight and design systems that foster trust, autonomy, and well-being. Transparent, autonomy-supportive AI tools, embedded in a culture of psychological safety, can enhance both individual and organisational outcomes. For Europe’s business leaders and policymakers, the challenge is clear: build algorithmic systems that prioritise human connection alongside efficiency. In doing so, they will cultivate workplaces that are not only productive but also resilient and humane.

About the Authors

Dr Madeleine Roantree is a UK-based psychologist and relationships expert. She divides her time between the NHS and private practice, working with individuals and couples.

Professor Adrian Furnham is a professor at the Norwegian Business School. He has long had an interest in the concept of attachment and how it applies to the workplace.

References
Berson, Y., Dan, O., & Yammarino, F. J. (2006). "Attachment Style and Individual Differences in Leadership Perceptions and Emergence". Journal of Social Psychology, 146(2), 165–82.
Calboli, S., & Engelen, B. (2025). "AI-enhanced nudging in public policy: why to worry and how to respond". Mind & Society.
EU Agency for Fundamental Rights. (2023). Trust and technology in the workplace.
European Commission. (2024). AI Act: Provisional agreement and guiding principles.
Fein, E. C., Benea, D., Idzadikhah, Z., & Tziner, A. (2019). "The security to lead: a systematic review of leader and follower attachment styles and leader–member exchange". European Journal of Work and Organizational Psychology, 29(1), 106–25.
Harms, P. D. (2011). "Adult attachment styles in the workplace". Human Resource Management Review, 21(4), 285–96.
Kim, B-J., & Kim, M-J. (2024). "How artificial intelligence-induced job insecurity shapes knowledge dynamics: the mitigating role of artificial intelligence self-efficacy". Journal of Innovation & Knowledge, 9(4), 100590.
Mikulincer, M., & Shaver, P. R. (2007). Attachment in adulthood: Structure, dynamics, and change. The Guilford Press.
Neustadt, E., Chamorro-Premuzic, T., & Furnham, A. (2011). "Attachment at work and performance". Attachment & Human Development, 13(5), 471–88.
Warnock, K. N., Ju, C. S., & Katz, I. M. (2024). "A Meta-analysis of Attachment at Work". Journal of Business and Psychology, 39, 1239–57.

The post Algorithmic Attachment: How AI-Based Employee Monitoring Shapes Workplace Trust, Autonomy, and Well-being appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/algorithmic-attachment-how-ai-based-employee-monitoring-shapes-workplace-trust-autonomy-and-well-being/feed/ 0
The Real AI Battle: OS is the New Prize https://www.europeanbusinessreview.com/the-real-ai-battle-os-is-the-new-prize/ https://www.europeanbusinessreview.com/the-real-ai-battle-os-is-the-new-prize/#respond Fri, 18 Jul 2025 07:39:20 +0000 https://www.europeanbusinessreview.com/?p=232631 By Jacques Bughin The third wave of AI is shifting focus from generative outputs to agentic orchestration. This article explains how the real competitive edge lies in building AI operating […]

The post The Real AI Battle: OS is the New Prize appeared first on The European Business Review.

]]>
By Jacques Bughin

The third wave of AI is shifting focus from generative outputs to agentic orchestration. This article explains how the real competitive edge lies in building AI operating systems that coordinate autonomous agents, workflows, and tools. Control of this orchestration layer will determine which platforms lead the future of enterprise AI.

We are entering the third wave of artificial intelligence. The first wave was predictive, driven by pattern recognition and analytics: while confined to data scientists and those who had mastered ML techniques, it proved that AI can deliver strong value. The second, generative AI, dazzled many of us with its ability to produce human-like text, code, and imagery. In hindsight, its killer app may be the way it made coding and software development a near-commodity.

But the third wave, now gathering momentum, is agentic AI: systems that don't just generate, but act autonomously, plan, and reflect. What makes them transformative is not just intelligence, but orchestration: the ability to coordinate goals, tools, workflows, and learning loops across complex digital environments.

While agentic systems are still early in development, agentic augmentation has leapfrogged raw model upgrades in less than a year.

While agentic systems are still early in development, agentic augmentation has leapfrogged raw model upgrades in less than a year. Agentic systems are being deployed in production, showing strong executional advantages. Moveworks, acquired by ServiceNow in 2025 for $2.9B, uses agents to resolve IT tickets, HR queries, and access requests. In one municipal deployment, over 3,000 hours of human work were offloaded. Their agentic enterprise search tool reduced lookup time by over one hour per employee per day. Success rates exceed 95% for core workflows. Other agents, like OpenAI’s “Operator” and “Deep Research,” show real-world task execution: browsing websites, booking meetings, summarizing reports, and citing live sources.

In this context, and as with every new platform of the past, from the PC to mobile, the new war is about who will control the AI-native OS (operating system): not merely the models, but the substrate that governs agency. Orchestration, both technical and strategic, is the defining axis of this battle.

Agentic AI needs an OS

Unlike predictive or generative AI, agentic systems pursue goals with minimal supervision. They break objectives into steps, select and invoke tools, execute actions, and reflect on outcomes. These systems are inherently asynchronous, interleaved, multi-agent, and multi-modal. As such, they require a dedicated orchestration layer to manage:

  1. Workflow memory and context
  2. Tool access and chaining
  3. Secure action execution
  4. Role separation between agents
  5. Compliance and guardrails

This orchestration requirement distinguishes the AI OS from traditional operating systems. It is more akin to a cloud-native runtime or a distributed coordination protocol. Without this layer, agents are brittle, untrustworthy, or confined to isolated domains. With it, they become scalable, enterprise-grade workhorses.
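
To make the orchestration concept concrete, here is a minimal sketch of such a layer (illustrative Python; the class, tool names, and guardrail logic are invented for the example and are not any vendor's API):

```python
# Minimal, illustrative sketch of an agent orchestration layer.
# All names are hypothetical: real AI OS stacks add persistence,
# asynchronous execution, multi-agent routing, and audit logging.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)  # tool access and chaining
    memory: list[str] = field(default_factory=list)                       # workflow memory and context
    allowed: set[str] = field(default_factory=set)                        # compliance guardrails

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        """Make a tool available to agents and whitelist it."""
        self.tools[name] = fn
        self.allowed.add(name)

    def invoke(self, tool: str, arg: str) -> str:
        """Secure action execution: only pre-approved tools may run."""
        if tool not in self.allowed:
            raise PermissionError(f"tool '{tool}' is not permitted")
        result = self.tools[tool](arg)
        self.memory.append(f"{tool}({arg!r}) -> {result!r}")  # persist context across steps
        return result

# A planner agent would decompose a goal into a chain of such calls.
orch = Orchestrator()
orch.register("search_tickets", lambda q: f"3 open tickets matching '{q}'")
print(orch.invoke("search_tickets", "VPN access"))
print(orch.memory)
```

Even this toy version makes the economics visible: the layer that owns the tool registry, the memory, and the permission checks sits between every agent and every action, which is precisely where standards are set and value is extracted.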

Orchestration also underpins economic control. Drawing on platform economics, the AI OS acts as a multi-sided platform: coordinating agents (supply), enterprise users (demand), data and workflow providers (complementors), and infrastructure (foundation). Whoever controls the OS controls pricing, access, monetization, and feedback loops.

The race is on

In this new universe, LLMs may become commoditized. The differentiation would lie in the orchestration stack: how agents are chained, how tools are invoked, and how memory is structured. The OS owner will set the standards, extract value, and control distribution across the ecosystem. The OS might also become the trusted intermediary for sensitive data, workflows, and compliance.

The race for the (agentic) AI platform is on. Microsoft, with Copilot+ now integrated into Windows and M365, controls the agentic layer for over 400 million enterprise users; its Graph API and Semantic Kernel are orchestration initiatives. Through ChatGPT Team/Enterprise and the Operator browser, OpenAI is building a Chromium-based OS for agents, complete with memory, an app store, and execution capability, while Perplexity's Comet is building a vertical agentic stack focused on search and information tasks.

ServiceNow is the most mature enterprise agentic platform, embedding AI OS logic across ITSM workflows. And while not an OS vendor, Nvidia's control of the orchestration runtime (GPUs plus microservices) makes it foundational: it launched NeMo Retriever, NIM microservices, and the AI Workbench SDK to let developers orchestrate agents across devices and clouds. Google Vertex AI Extensions now support tool use, agent scheduling, and dynamic memory.

Call Out from the Roads: Driverless Cars as a Case of OS Deployment

To illustrate the stakes and logic of orchestration, consider the battle for control in autonomous vehicles. The race is not just about whose AI sees pedestrians better. It’s about who orchestrates decision-making, safety layers, navigation, and compliance in real time. Tesla, Waymo, and NVIDIA aren’t just shipping hardware—they are building autonomous operating systems like Tesla’s Dojo or Waymo’s Chauffeur. These AI OS layers integrate real-time sensor fusion, traffic-aware planning, edge computing, and failover strategies. They turn intelligence into coordinated, accountable action.

That orchestration logic is what enterprise AI now faces. Like AVs, agents in business must integrate signals, invoke APIs, adapt in real time, and remain compliant. Whoever owns this AI OS stack—whether Microsoft with Copilot, OpenAI’s Operator, or Perplexity’s vertical browser stack—controls not just intelligence but execution. The AV sector teaches that the biggest prize is not perception, but control of the logic layer where risk, data, and performance converge. The same is unfolding in the AI software stack.

Sorting out winners from losers

The battle for agentic AI OS dominance must take into account more than firm assets. It should also include the effects of:

  1. Open Source: From Hugging Face to Warmwind OS, a wave of open-source and cloud-native platforms is challenging closed ecosystems, promising transparency and customization. LangChain's 90k+ GitHub stars (as of July 2025), 500+ available plugins, and developer traction indicate that open composability is outpacing closed agent stacks in early adoption.
  2. Geopolitics: China's HarmonyOS Next, developed by Huawei, is a strategic move to build a homegrown, sovereign digital ecosystem, while Europe pushes for open frameworks and trusted execution environments. Gaia-X, the EU AI Act, and TEEs (Trusted Execution Environments) show a strong preference for auditable, privacy-preserving OS layers. Sovereign LLM efforts (Mistral, Aleph Alpha, Luminous) are all building toward agent readiness.
  3. Legal and Ethical Battles: As AI OSs become central, legal disputes (such as OpenAI's recent trademark and IP controversies) and regulatory scrutiny will likely intensify.

Lessons from the past

If history is any guide—especially the PC, mobile, and cloud eras—it teaches a few lessons.

Control of the orchestration layer consistently decides platform dominance: During the PC era, Microsoft didn't just build an OS—it orchestrated an entire ecosystem. Windows provided standardized APIs, development tools, backward compatibility, and distribution agreements with hardware partners. This made it the default platform for developers, creating network effects that strengthened its position. The mobile war of the 2000s saw iOS and Android reach dominance through platform orchestration. Apple iOS used vertical integration—hardware, OS, App Store, and SDKs—to guarantee performance, security, and quality. Android, by contrast, leveraged openness and broad adoption across device manufacturers (Samsung, Huawei, Xiaomi). Platform scholarship emphasizes this dual model: "open enough" to scale, "controlled enough" to monetize—exactly as platforms like Uber or Airbnb balance openness with control. During the cloud era, cloud platforms moved orchestration into the data center. Amazon Web Services, Microsoft Azure, and Google Cloud converged on offering not only virtual machines but also dev toolchains, APIs, and serverless runtimes. This "programmable infrastructure" made the cloud providers, rather than the software running on them, the layer where value and control accumulated.

Ecosystems—not features—drive platform lock-in. The most successful platforms created massive developer flywheels. Apple did this through its iOS SDK and App Store, offering developers monetization, distribution, and quality control in a single stack. Android scaled globally by opening its OS to device manufacturers while anchoring control through Google Play Services. These ecosystems created positive feedback loops: more developers meant more apps, which attracted more users, which drew in even more developers. In the age of agentic AI, SDKs for building agents, marketplaces for composable tools, and developer-facing orchestration libraries will be the new engine rooms of platform lock-in.

The agentic AI OS must offer composability and extensibility while securing monetization layers such as memory state management, compliance APIs, and runtime governance.

Openness and control must be carefully balanced. Platforms that were too open often failed to capture value, while those too closed risked stagnation. Android succeeded because it was open enough to drive adoption by OEMs, yet retained control through proprietary services and APIs. Kubernetes, an open orchestration framework, became dominant only after managed services by cloud vendors (like GKE or EKS) wrapped it in enterprise-grade compliance and support. The agentic AI OS must offer composability and extensibility while securing monetization layers such as memory state management, compliance APIs, and runtime governance.

Open source shapes the stack but rarely captures the profit. The rise of Linux, PyTorch, and TensorFlow illustrates how open frameworks often define developer standards. However, value capture shifted to those who offered hosted infrastructure, tooling, and compliance. Red Hat, AWS, and Microsoft Azure monetized these ecosystems more effectively than the communities that created them. In the agentic AI context, LangChain, Hugging Face, and LangGraph are winning early adoption, but unless they wrap their offerings in enterprise-grade orchestration and compliance, they risk becoming commoditized.

Regulation is both a constraint and an accelerator of platform consolidation. Past platform giants faced significant regulatory hurdles: Microsoft endured antitrust litigation, Facebook faced data privacy crackdowns, and the GDPR redefined platform responsibility in Europe. In the agentic AI era, regulation will go even further. The EU AI Act classifies agentic systems as “high-risk,” requiring explainability, override mechanisms, and auditable memory. Compliance will not be optional. The platforms that embed safety, audit, and governance into their orchestration layers will gain both trust and a competitive moat.

The futures

These five strategic learnings also lead to three important tensions for the future of the agentic AI OS. The first tension is between centralization and decentralization. Orchestration layers tend to centralize over time due to network effects, but open source and geopolitical forces may resist this. The second tension lies in regulatory burden: platforms may need to slow down or redesign systems to satisfy compliance requirements, or they may embed governance so effectively that regulation becomes a moat. The third tension is the modularity of agentic systems: if agents are portable and composable, they may run across platforms; if not, vertical stacks may emerge.

Crossing those tension lines, three scenarios emerge.

  1. "Power of the few": orchestration bundled into enterprise stacks by a few dominant players. Here, Microsoft and Nvidia extend their lead. Microsoft integrates agent orchestration into every Office workflow, into Azure, and into developer tools. Nvidia supplies the runtime SDKs, model deployment frameworks, and infrastructure to host the entire lifecycle. This scenario is marked by tight vertical integration and high lock-in. Innovation continues, but within controlled environments. It is the natural continuation of what worked in the cloud and productivity eras.
  2. “Open federations.” Open-source tools and frameworks like LangGraph, Hugging Face, and LangChain converge to form a standard for portable agents and composable toolchains. Agentic orchestration becomes like Kubernetes: modular, standardized, and wrapped in enterprise offerings by vendors. This scenario reflects the success of Linux. Here, no one controls everything, but value accrues to those who provide the best wrappers, managed services, or domain-specific platforms.
  3. "Localized sovereignty": a future defined by political fragmentation and regional regulatory divergence. In this world, China advances its closed HarmonyOS Next stack; Europe mandates sovereign AI stacks that comply with Gaia-X, local data residency, and explainability laws. The US becomes a dual-track ecosystem, with Big Tech controlling commercial agent systems and a parallel open-source movement serving developers.

Making sense of those futures

While the outcomes across these scenarios vary dramatically, they offer a few important constants for CEOs.

The first lesson for CEOs is to move fast. The battle for the AI OS means that big players are doubling down on agentic AI innovation. As a consequence, agentic AI is evolving at a rapid pace, with new tools and features rolling out every few months. Companies that start early will build up valuable experience and know-how, making it much harder for slower competitors to catch up. Early adoption means your team learns how to automate, adapt, and improve processes, while waiting means you'll need to spend more time and resources just to close the gap.

The second lesson is that control lies in how you organize and manage work, not in the tasks themselves. The real power is in setting up the flow of work—deciding how agents interact, what data they use, and who checks their work. If you let outside vendors control these rules, you risk losing oversight and flexibility. By designing your own rules and keeping a grip on how agents work together, you can switch tools more easily, protect your data, and stay in charge of your business processes.

The real advantage comes from making agents that can be reused and improved, encouraging teams to share what works, and tracking how much of your work is being handled by agents.

The third lesson is that the new way to compete is by using agents well, not just by having the best technology. Companies with libraries of reusable agent workflows can solve problems faster and adapt to change more easily. Each time you use an agent, you learn and improve, building up a base of knowledge that keeps you ahead. The real advantage comes from making agents that can be reused and improved, encouraging teams to share what works, and tracking how much of your work is being handled by agents.

In this new environment, you should review your current tools to see if they help you control workflows or if they take control away from you. Assign someone to lead your efforts in building and improving agent workflows. Start with small tests, learn quickly, and expand what works. Set clear rules for managing and checking agents, and regularly measure your progress.

Agentic AI is not just another tool—it’s a new way to run your business. Move fast, keep control, and focus on building flexible, reusable systems to stay ahead.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

The post The Real AI Battle: OS is the New Prize appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/the-real-ai-battle-os-is-the-new-prize/feed/ 0
How To Translate AI Potential Into Corporate Profitability https://www.europeanbusinessreview.com/how-to-translate-ai-potential-into-corporate-profitability/ https://www.europeanbusinessreview.com/how-to-translate-ai-potential-into-corporate-profitability/#respond Thu, 10 Jul 2025 09:44:21 +0000 https://www.europeanbusinessreview.com/?p=231973 By Louis-David Benyayer and Hao Zhong Every day, more companies adopt AI, yet profits remain flat. Stuck in the “light bulb phase”, most tweak the old instead of transforming the […]

The post How To Translate AI Potential Into Corporate Profitability appeared first on The European Business Review.

]]>

By Louis-David Benyayer and Hao Zhong

Every day, more companies adopt AI, yet profits remain flat. Stuck in the “light bulb phase”, most tweak the old instead of transforming the core. This article shows how to unlock true value.

There is a clear gap between the adoption and excitement about AI and the translation of these investments into corporate profits. Why is this the case, and what can companies do about it?

The Lag in Value Creation

Signs of corporate AI adoption are increasing every day: new software versions are released regularly, start-ups see their lists of corporate clients grow, and cloud providers report double-digit growth rates, so much so that the technology can no longer be described as emerging. For example, the 2025 AI Index Report by the Stanford Institute for Human-Centered AI estimates that 78% of organisations now use AI – significant progress compared with 55% in 2023.

We should expect this rapid adoption to translate into higher corporate profits.

This is true for companies providing the picks and shovels to the AI gold rush. Chip manufacturers and cloud providers are generating substantial profits and seeing their valuations skyrocket.

In contrast, the reality is less rosy for companies using AI.

The same AI Index report states, "Most companies that report financial impacts from using AI within a business function estimate the benefits as being at low levels."

What’s Behind The Lag?

The impact of technology on productivity has been a stimulating field of study in Economics for decades.

This is especially the case for general-purpose technologies whose impact spreads across all industries, geographies, company sizes and so on.

Some researchers argue that AI has all the attributes of a general-purpose technology.

So, what can we learn from previous examples?

In the case of electricity, research revealed a lag of several decades between the adoption of the technology and its impact on productivity.

This lag represented the time needed for organisations to change their processes to leverage the technology.

When new technology is used to support existing processes, the productivity gain is marginal. But when it’s used for creating totally new ways of doing things, the impact can be radical.

When electricity came to factories in the late 19th century, it was first used for lighting, so that workers could work an additional couple of hours a day. However, the major increase in productivity only came when Henry Ford redesigned the manufacturing process around the assembly line.

This example shows that when new technology is used to support existing processes, the productivity gain is marginal. But when it’s used for creating totally new ways of doing things, the impact can be radical.

When it comes to AI, companies are still in the "light bulb phase": tooling their existing processes with new technology without radically questioning what they offer and how they produce it.

Whilst there are signs of greater adoption every day, it hasn’t yet reached the point of changing significant industries or work practices.

What Should Companies Do To Harness AI’s True Potential For Value?

1. Embrace the technology to discover radical new possibilities.

Many companies perceive AI as an addition to what they do already: a way to produce faster, cheaper or more, and to engage better or more often with their clients.

This way of thinking exemplifies the light-bulb approach and will only lead to marginal gains.

AI can open new directions, making previously impossible things now accessible.

Discovering these new possibilities requires a fresh approach. Long-standing companies can find it difficult to engage in such openness. They have established processes, operations, clients, and distributors – each of which serves as a reason against making radical changes to the company’s operations.

By comparison, new entrants don’t have such a legacy and can design new ways to do business, leveraging AI capabilities.

One way for established companies to transform radically is to combine technological expertise with small-scale operations developed at arm's length from their existing management structure.

Accessing technological expertise can be done through internal development, partnerships with AI companies or acquisitions.

Developing radically new operations or offers usually involves a different talent pool and is sometimes made easier through partnerships.

2. Make strategic decisions about task allocation.

A major question for companies is where to automate and where to augment with AI.

Automation refers to letting AI handle tasks entirely on its own. Augmentation, by contrast, focuses on enhancing human abilities rather than replacing people.

This opens tricky and nuanced conversations that often depend on the context; for example, what kind of job does AI favour?

A recent research project conducted a systematic review and meta-analysis of 106 experimental studies.

Each study was required to include an original human-participant experiment that evaluated the performance of humans alone, AI alone and human-AI combinations.

The results partly support augmentation, as humans performed better on average with the help of AI than alone. However, they also show that augmentation is not a silver bullet: the best AI or the best humans still perform better alone than a human-AI combination.

The study revealed an important difference in task type: combining AI and human performance is better for creating content than making decisions. This can be explained in two ways:

  • Over-reliance – when people rely too much on AI systems without seeking and processing more information.
  • Under-reliance – when people ignore AI’s suggestions because of adverse attitudes towards automation.

Other research has shown a significant shift in job roles due to the surge of Generative AI, with a notable decrease in time spent on initial drafting and an increase in time devoted to editing.

Identifying the right balance between AI and human input becomes crucial for maximising efficiency and effectiveness. It’s paramount to identify the tasks that can be automated, the ones that should blend AI and human contribution, and those that should remain managed by humans.

Deciding on task allocation involves taking the technological developments into account and their cost, but also the strategic positioning of the company. Deciding on what AI performs and what is managed by humans is a strategic choice, not a purely technical one.

Last, the direct environmental footprint of AI is massive and growing. Electricity and water consumption from data centres now compete with consumers’ domestic energy needs. The choice of task allocation should therefore also take into account the environmental impact of the chosen AI system.

3. Invest equally in technology and people.

Companies should stop regarding AI as a technology problem and consider the human element.

However, getting clear on the type of human expertise they need versus the tasks they would trust AI to perform is difficult.

Comprehensive training programs can help employees understand the nuances of AI, including its limitations and best practices for its effective use.

Securing data and infrastructure is not enough when managers are not trained on AI systems or how to use the results they produce. Investing in training leads to higher productivity of technological investments. Training employees on AI systems and their limitations mitigates over-reliance and under-reliance, maximising the systems’ impact. Moreover, cultivating in-house talent and processes enhances the uniqueness of internal resources, strengthening competitive advantage.

A recent Financial Times article quoted research from Accenture showing that even though generative AI is expected to account for 15% of technology spending this year, fewer than half of the organisations surveyed had increased training on AI fundamentals or technical skills.

Training is becoming increasingly vital, especially to enhance the skills of lower performers. As AI continues to commoditise certain aspects of knowledge work, it’s not just the low performers who are affected; skilled professionals are also at risk.

Comprehensive training programs can help employees understand the nuances of AI, including its limitations and best practices for its effective use. The synergy between technical and human assets is key to realising the full potential of AI. Success in this domain depends not only on redefining processes but also on fostering an organisational culture that understands and trusts AI.

4. Engage with stakeholders in a conversation about the use and conditions of automation and augmentation.

Significant efforts should be put into ensuring that users understand the system design.

Transparency around the data used, the models implemented, the limits of the results, and the scope of relevance fosters adoption.

What also supports adoption is sharing a common view about task allocation and developing a common sense of the split between automation and augmentation.

More precisely, these discussions could aim to decide on the details of augmentation, for example, setting the right level of transparency according to the context (e.g., facial recognition for smartphones vs radiology) or the objective (deciding versus performing a task).

These conversations should also involve external stakeholders such as clients and partners. Unions and collectives should also be involved in determining the scope and style of automation and augmentation.

This was demonstrated in September 2023 in the US, when an agreement was reached between screenwriters and studios about the use of AI for writing or editing scripts.

Transformation At Scale: The Next Challenge

Exploring how to leverage AI at companies may lead to two very different types of opportunities: “light bulb” ones and “assembly line” ones.

The challenge, then, is twofold for companies.

First, how do they allocate enough resources to scale the “assembly line” opportunities?

Second, how do they deal with both types of opportunities? Should the two remain, and if so, what is the right balance? Or should the “light bulb” ones disappear and the company concentrate on the “assembly line” ones – meaning a massive reorganisation?

Deciding how to solve the two challenges will be a critical management decision in the next few years.

About the Authors

Louis-David Benyayer is an Associate Professor of Information and Operations Management at ESCP Business School, Paris, and AI Initiatives Coordinator.

Hao Zhong is an Associate Professor of Information and Operations Management at ESCP Business School, Paris.

Reference
1. “When combinations of humans and AI are useful: A systematic review and meta-analysis”, Nature Human Behaviour.

The post How To Translate AI Potential Into Corporate Profitability appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/how-to-translate-ai-potential-into-corporate-profitability/feed/ 0
Generative AI In Innovation Development: A Catalyst For Creative Disruption https://www.europeanbusinessreview.com/generative-ai-in-innovation-development-a-catalyst-for-creative-disruption/ https://www.europeanbusinessreview.com/generative-ai-in-innovation-development-a-catalyst-for-creative-disruption/#respond Tue, 08 Jul 2025 12:06:20 +0000 https://www.europeanbusinessreview.com/?p=232249 By Filippo Frangi From identifying strategic opportunities to enabling faster experimentation, from co-designing with users to reshaping go-to-market strategies, GenAI is pushing the boundaries of how we define and deliver […]

The post Generative AI In Innovation Development: A Catalyst For Creative Disruption appeared first on The European Business Review.

]]>

By Filippo Frangi

From identifying strategic opportunities to enabling faster experimentation, from co-designing with users to reshaping go-to-market strategies, GenAI is pushing the boundaries of how we define and deliver innovation. GenAI is indeed gaining traction as a key enabler across all stages of innovation processes. This shift demands structure, strategy, and measurable outcomes.

GenAI: a General-Purpose Technology with democratic access

This article draws on the latest research by the Startup Thinking Observatory at the Politecnico of Milan to outline how companies can move beyond novelty and start integrating Generative AI as a structural component of their innovation systems.

GenAI can assist leadership teams in identifying emerging trends, reshaping innovation strategies, and monitoring entire portfolios of innovation initiatives.

GenAI shares the core traits of General-Purpose Technologies (GPTs, not to be confused with the GPT in ChatGPT), such as electricity and the internet: it is versatile, scalable, and capable of enabling complementary innovations across industries. However, it introduces an additional, unique dimension: accessibility. Thanks to intuitive interfaces and relatively low barriers to entry, non-experts can now leverage GenAI for sophisticated tasks, a phenomenon rarely observed with other transformative technologies.

According to Microsoft’s 2024 Work Trend Index,1 75 per cent of employees already use AI at work, often informally and before any structured adoption plan exists. This bottom-up movement highlights the urgency for leadership to harness and regulate this creativity before it scales haphazardly; often, however, it is already too late.

How GenAI Supports the Innovation Lifecycle in Each Phase

Starting from the four traditional phases that every innovation project must go through, it is easy to understand how GenAI is already delivering tangible value across the innovation lifecycle:

  1. Exploration: in this phase, GenAI tools analyze vast datasets to identify emerging trends, unmet needs, and strategic foresight scenarios. They can synthesize market signals and competitive landscapes far faster than traditional methods.
  2. Idea Generation: GenAI systems support divergent thinking by proposing a wide array of creative and unconventional ideas. Controlled experiments have shown that LLMs, for instance, can outperform human brainstorming groups in feasibility and impact, though not necessarily in originality.2
  3. Experimentation and Prototyping: from UI mockups to working code or product sketches, GenAI enables the rapid development of MVPs. This accelerates the “fail fast, learn faster” approach and reduces the time-to-feedback cycle.
  4. Execution and Go-to-Market: in this phase, GenAI can assist in personalizing campaigns, automating market segmentation, and generating content tailored to micro-audiences, making market launches more dynamic and responsive.

Beyond the operative stages, the role of GenAI is also becoming relevant at strategy level. GenAI can assist leadership teams in identifying emerging trends, reshaping innovation strategies, and monitoring entire portfolios of innovation initiatives. By turning unstructured data into strategic insights, these tools enable more informed, agile, and forward-looking decision-making processes.

Real-world Applications

Several companies are already demonstrating the practical benefits of GenAI in innovation:

  • IKEA used GenAI to design furniture inspired by retro-futuristic aesthetics, challenging its design teams to reimagine product categories.3
  • Oreo (Mondelez International) uses AI to accelerate the development of new snack recipes. Their AI tool uses machine learning to generate recipes based on desired characteristics such as flavor, aroma, and appearance.4
  • Albert Invent, a chemistry company, is leveraging an AI trained on over 15 million molecular structures to identify effective and safe ingredient combinations quickly, predicting physical, toxicological, and aesthetic properties.5
  • Beck’s created a product called “Beck’s Autonomous,” entirely conceptualized by AI, from the recipe to branding and packaging.6

These cases show that GenAI is not only improving internal innovation efficiency but also enabling new forms of user engagement and co-creation. In some instances, companies pursued these projects purely as experimental trials; in others, they addressed more concrete use cases. Either way, these experiences are meaningful examples — often executed with still-maturing tools — that can inspire further applications and strategic refinement.

The Other Side of The Coin: Limits and Risks

To fully leverage GenAI, companies must move from ad-hoc experimentation to structured integration.

Despite the excitement, organizations must remain critical. Several challenges can limit the strategic effectiveness of GenAI. One key concern is the risk of homogenization. GenAI tools trained on existing datasets tend to reinforce dominant patterns, which can inhibit breakthrough originality. This is not only a creative limitation but also a potential driver of bias. By amplifying dominant narratives and patterns found in training data, GenAI can unintentionally reinforce existing prejudices and stereotypes. Another issue relates to the quality of data; outputs are only as good as the inputs, and poor-quality or unrepresentative datasets can lead to misleading or inaccurate results.

Furthermore, there are still numerous ethical and legal ambiguities surrounding GenAI. Questions about the ownership of AI-generated content, the risk of unintentional plagiarism, and the lack of clear regulatory frameworks are pressing challenges that remain unresolved. Finally, even though GenAI can significantly augment creativity, it cannot replace human oversight. The effectiveness of these tools depends heavily on the expertise and critical thinking of users who can steer and validate AI-generated outputs.

Building an Augmented Innovation Model

To fully leverage GenAI, companies must move from ad-hoc experimentation to structured integration. This requires more than tools; it calls for a holistic approach rooted in culture, organization, and collaboration.

First, AI literacy and culture are foundational. Teams need more than access to advanced technologies; they require the right mindset, critical thinking skills, and ongoing learning opportunities. Fostering a culture that encourages experimentation and responsible use is crucial for scaling AI capabilities effectively and ethically.

Second, organizations should establish safe spaces for experimentation. These environments provide a sandbox for testing GenAI applications in controlled, low-risk settings. Here, teams can explore new ideas, experiment with workflows, and identify best practices that can later be scaled across departments or business units.

Finally, success requires embracing hybrid collaboration. The goal is not to replace human creativity and judgment, but to amplify it. GenAI should be seen as a co-pilot and an intelligent partner that augments human strengths while still requiring strategic direction, ethical oversight, and contextual interpretation from people. Designing systems that integrate human and machine capabilities seamlessly will be a key competitive differentiator in the years to come.

Beyond the Buzzword

Generative AI is not a passing trend. It is a foundational technology that can redefine how organizations innovate, but only if adopted with strategic intent. Companies that move past the hype, invest in capabilities, and embed GenAI into their innovation architecture will move not only faster, but smarter. In a world where change is exponential, the future belongs to those who can innovate with technologies, not just around them.

About the Author

Filippo Frangi, Senior Researcher, Digital Innovation Observatory, Politecnico of Milan

A Master’s graduate in Management Engineering from the Politecnico of Milan, Filippo Frangi is a Senior Researcher within the Digital Innovation Observatories. Since 2017, he has been studying how innovation is managed and developed in large enterprises and SMEs. In particular, his empirical and theoretical research focuses on organizational and operational models for innovation, the adoption of Corporate Entrepreneurship activities, Open Innovation theory, and the role of startups. filippo.frangi@polimi.it | https://www.linkedin.com/in/filippo-frangi-005394109

References
1. https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part
2. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4526071
3. https://www.fastcompany.com/90871133/ikea-generative-ai-furniture-design?utm_source=chatgpt.com
4. https://com/2024/12/27/lifestyle/oreos-owner-is-using-ai-to-create-new-snacks-and-get-them-on-shelves-5-times-faster/?utm_source=chatgpt.com
5. https://www.businessinsider.com/how-beauty-product-chemists-are-using-ai-to-test-ideas-2025-5?utm_source=chatgpt.com
6. https://lbbonline.com/news/becks-new-beer-and-its-ad-campaign-were-created-by-artificial-intelligence?utm_source=chatgpt.com

The post Generative AI In Innovation Development: A Catalyst For Creative Disruption appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/generative-ai-in-innovation-development-a-catalyst-for-creative-disruption/feed/ 0
Is AI the ‘Great Equaliser’ for SMBs in the Professional Services Sector? https://www.europeanbusinessreview.com/is-ai-the-great-equaliser-for-smbs-in-the-professional-services-sector/ https://www.europeanbusinessreview.com/is-ai-the-great-equaliser-for-smbs-in-the-professional-services-sector/#respond Fri, 04 Jul 2025 01:00:49 +0000 https://www.europeanbusinessreview.com/?p=232058 By Bret Tushaus Artificial intelligence is helping small and medium-sized businesses close the gap with larger competitors. Bret Tushaus explores how AI empowers SMBs in the professional services sector by […]

The post Is AI the ‘Great Equaliser’ for SMBs in the Professional Services Sector? appeared first on The European Business Review.

]]>

By Bret Tushaus

Artificial intelligence is helping small and medium-sized businesses close the gap with larger competitors. Bret Tushaus explores how AI empowers SMBs in the professional services sector by enhancing agility, streamlining operations, and improving project delivery. With the right strategy and tools, AI can become a powerful equaliser for ambitious firms.

Small and medium-sized businesses (SMBs) have always possessed unique strengths, including agility, creativity, and the ability to adapt quickly to change. Yet when it comes to accessing advanced analytical capabilities and technological resources, larger organisations have historically held certain advantages through their enterprise-level infrastructure, vast data repositories, and specialised teams.

Artificial Intelligence (AI) is now amplifying the natural advantages that SMBs already possess. The technology is democratising sophisticated capabilities, allowing agile, creative businesses to leverage their inherent flexibility alongside enterprise-level analytical power. This combination of human agility and AI creates a powerful competitive advantage. For those operating in the professional services sector in particular, where project complexity and resource management directly impact margins, AI represents a transformative opportunity to level the competitive landscape.

Rethinking project excellence with AI

Project management has long relied on historical data and fixed methodologies, alongside human intuition. While effective to a degree, this model has favoured larger organisations with substantial infrastructure and specialised staff. For smaller businesses with limited resources, it often means accepting some level of uncertainty is simply part of the process.

AI changes this equation fundamentally. By analysing vast datasets of past performance across multiple touchpoints, AI systems identify potential risks and opportunities long before they would become apparent through the conventional means outlined above. This predictive capability gives SMBs a new level of foresight, allowing them to match or even exceed the sophistication of larger competitors.

It’s little wonder that confidence in tracking key project metrics – including profitability, budget adherence, and actual cost – has jumped from 59% to 75% in just one year among UK project-based SMBs. This dramatic improvement coincides with the fact that 57% of firms are currently using, or planning to adopt, AI to improve project delivery.

This transformation is particularly relevant for SMBs, where just over six in ten (61%) identify technology integration as a critical challenge, yet 47% recognise technology and automation as their primary profitability driver. The potential for competitive advantage through AI adoption has never been greater.

Balancing ambition with pragmatism 

Simply acknowledging AI’s potential is insufficient. Successful implementation requires a structured approach that balances ambition with pragmatism. Several strategies are non-negotiable for firms looking to level the playing field with AI.

We’re already seeing the effects of this transformation firsthand in the professional services sector. Small consultancies are now leveraging AI to perform complex analyses that once required supercomputing capabilities. Boutique architectural firms are using generative design tools to explore thousands of design possibilities in the time it once took to create a handful of options. Project-focused SMBs are employing AI-powered sequencing to optimise resources with a precision previously impossible without dedicated planning departments. But how can all firms reap the same rewards?

The key is to partner with progressive solution partners who have AI capabilities built into their roadmap. Rather than developing complex data strategies from scratch, SMBs can leverage off-the-shelf solutions that are designed with AI integration in mind. By using these tools to their fullest extent and generating quality data through daily operations, businesses create the foundation for AI capabilities both today and in the future. This makes AI immediately accessible without extensive internal data expertise or intimidating technical overhauls.

Equally important is investing in a team’s capabilities. While AI solutions become increasingly user-friendly, organisations still need professionals who understand both the technology’s potential and its limitations. This means ensuring project teams understand how to effectively leverage AI tools within their existing workflows, building on the collaborative and adaptive culture that already defines successful SMBs.

With just over half (53%) of firms citing lack of upskilling investment as detrimental to their organisation, and 51% focusing on encouraging continuous learning, successful SMBs recognise that human capability development must parallel technological advancement. This dual focus ensures businesses not only adopt current AI solutions but develop the organisational adaptability to integrate future innovations as they emerge.

Building future-ready teams

Integration must be iterative. SMBs must start with clearly defined use cases where AI can deliver immediate value, such as automating routine administrative tasks, enhancing project risk assessment, or improving resource allocation. With 41% of smaller organisations identifying significantly increasing their number of projects as essential for future success, AI-powered process optimisation must offer a pathway to expansion without proportional increases in personnel or operational costs.

The encouraging news for SMB leaders is that returns on AI investment can be immediate when the right approach is taken. By starting small with mainstream tools and focusing on specific use cases, businesses often see efficiency gains within weeks rather than months. This rapid return is particularly valuable for SMBs where every operational improvement directly impacts the bottom line.

While AI offers transformative potential, it must be utilised to enhance – rather than replace – existing business strengths. AI’s analytical power is nothing without hard-won expertise and a deep understanding of client needs. The future of projects for SMBs will be about the creation of collaborative intelligence that amplifies what a team already does well.

For SMBs, AI represents an opportunity to do what they’ve always done best – innovate, adapt, and deliver exceptional value – but now with unprecedented analytical power and efficiency. The natural agility and creativity that defines successful small and medium-sized businesses, combined with AI’s analytical capabilities, creates a compelling competitive advantage that larger, less flexible organisations struggle to match. SMBs must continue to prioritise digital transformation, with an emphasis on upskilling, collaboration, and innovation. In turn, smaller firms position themselves for long-term growth while underlining one clear message: the technology that once threatened to widen the competitive gap is now the most powerful tool for closing it.

About the Author 

As Deltek’s Vice President of Product Management, Bret Tushaus focuses on the changing needs of the architecture industry, with a mission to identify new ways technology can solve organisations’ operational pain points. Bret also has a background in architecture, having spent 15 years at Eppstein Uhen Architects prior to joining Deltek.

The post Is AI the ‘Great Equaliser’ for SMBs in the Professional Services Sector? appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/is-ai-the-great-equaliser-for-smbs-in-the-professional-services-sector/feed/ 0
The Future of Government Procurement: Innovation, AI, and Automation https://www.europeanbusinessreview.com/the-future-of-government-procurement-innovation-ai-and-automation/ https://www.europeanbusinessreview.com/the-future-of-government-procurement-innovation-ai-and-automation/#respond Tue, 01 Jul 2025 05:54:03 +0000 https://www.europeanbusinessreview.com/?p=231647 By Lara Blake Government procurement, representing a substantial portion of global GDP, stands at the cusp of a technological revolution. While traditional procurement processes have long been characterised by bureaucratic […]

The post The Future of Government Procurement: Innovation, AI, and Automation appeared first on The European Business Review.

]]>
By Lara Blake

Government procurement, representing a substantial portion of global GDP, stands at the cusp of a technological revolution. While traditional procurement processes have long been characterised by bureaucratic inefficiencies and complex paperwork, the integration of artificial intelligence (AI), automation and powerful tender management software is fundamentally transforming this landscape. This transformation presents both opportunities and challenges for governments and businesses alike.

Current Challenges in Government Procurement

The existing procurement framework faces several systemic challenges that impede efficiency and innovation:

Bureaucratic Inefficiencies

Traditional procurement systems remain encumbered by lengthy approval processes and outdated methodologies, resulting in significant operational delays and increased costs.

Transparency Issues

The opacity of procurement decisions often raises concerns about fairness and potential corruption, undermining public trust in governmental spending processes.

Limited SME Access

Small and medium enterprises (SMEs) face substantial barriers when attempting to navigate complex procurement requirements, creating an uneven playing field that favours larger corporations.

Data Management Complexities

The fragmentation of procurement data across various governmental departments impedes effective analysis and optimisation of spending patterns.

The Transformative Impact of AI and Automation

The integration of artificial intelligence and automation into procurement systems is fundamentally reshaping how organisations approach public spending. Modern AI-driven platforms are spearheading this transformation through sophisticated process optimisation that goes far beyond simple automation.

Consider bid evaluation, traditionally a time-consuming process prone to human bias. Today’s AI algorithms can assess submissions with remarkable precision, analysing thousands of data points in mere minutes, often speeding up review for both procurement teams and professional bid writing consultants supporting organisations through the tender process. Government procurement offices implementing these systems report processing times reduced by up to 60%, with automated document verification systems handling the burden of compliance checking that once occupied procurement officers for weeks.

Smart contract management has emerged as another breakthrough, transforming what was once a labyrinth of paperwork into a streamlined digital workflow. These systems not only automate routine tasks but also flag potential issues before they develop into problems, allowing procurement teams to focus on strategic decision-making rather than administrative tasks.

The revolution in procurement transparency perhaps best illustrates the transformative power of these technologies. Blockchain integration with AI-powered analytics has created an unprecedented level of accountability in public spending. Every transaction leaves an immutable record, while smart contracts automatically enforce compliance requirements without human intervention. Stakeholders can now track procurement processes in real-time, a level of visibility that was unimaginable just a few years ago.

Democratizing Access: A New Era of Inclusion

Perhaps the most significant impact of AI in procurement has been its role in leveling the playing field for smaller enterprises. Traditional procurement processes often favored larger organisations with the resources to navigate complex bureaucratic requirements. Advanced AI systems are dismantling these barriers through intuitive interfaces and intelligent assistance in bid preparation. Small and medium-sized enterprises can now compete more effectively, thanks to simplified registration processes and standardised documentation requirements that no longer require teams of specialists to navigate.

Innovation in Action: Global Case Studies

The real-world impact of AI in procurement is best illustrated through several pioneering implementations across the globe. The United Kingdom’s procurement analytics initiative stands out, having achieved a remarkable 23% improvement in supplier performance assessment accuracy. This system now enables procurement officers to conduct predictive trend analysis, fundamentally changing how the government approaches strategic planning in public spending.

Singapore’s GeBiz platform represents another quantum leap in procurement innovation. By fully integrating AI systems throughout the procurement cycle, they’ve achieved a 40% reduction in processing times while simultaneously increasing cost savings by 15% through optimised decision-making processes. This dual improvement in both efficiency and effectiveness demonstrates the transformative potential of comprehensive AI integration.

A particularly illustrative example of AI’s democratizing effect comes from an unexpected quarter: a boutique Port Douglas florist in Australia that transformed into a major supplier for government and corporate contracts across the Asia-Pacific region. Starting as a local wedding florist, the business identified an opportunity in government and corporate procurement for event services and office installations. By leveraging AI-powered tender management software, the company successfully navigated complex procurement processes that had previously been the domain of large multinational corporations.

The florist’s journey from local business to regional player demonstrates how AI-driven procurement systems can level the playing field. Using automated bid preparation tools and smart contract management systems, they secured their first government contract for parliamentary office floral arrangements. This initial success led to expanded opportunities across multiple departments and, eventually, corporate contracts throughout Southeast Asia. Within three years, their revenue from procurement contracts grew from zero to accounting for 60% of their business, while maintaining their boutique wedding services.

This transformation was made possible by AI systems that simplified the complexity of procurement processes. The same technology that enables multinational corporations to manage thousands of contracts simultaneously allowed this small business to efficiently handle government compliance requirements, monitor contract deliverables, and scale their operations without a proportional increase in administrative overhead.

The European Union’s eProcurement systems perhaps best exemplify the sophisticated capabilities of modern procurement technology. These systems successfully manage the complexity of cross-border transactions while ensuring compliance with varied regulatory requirements across member states – a feat that would be nearly impossible without AI assistance.

Measuring Success: The Strategic Benefits

The adoption of AI-driven procurement systems has delivered quantifiable advantages across multiple dimensions. Organisations implementing these systems consistently report operational efficiency gains, with procurement cycle times typically reduced by half and administrative costs decreased by 30%. However, the benefits extend far beyond mere cost savings.

Financial optimisation has reached new levels of sophistication through AI-powered analysis. Procurement officers now have access to real-time market rate comparisons and enhanced cost analysis capabilities, enabling more informed budget allocation decisions. Risk management has similarly evolved, with advanced fraud detection systems and automated compliance monitoring providing a level of oversight previously unattainable.

Navigating the Implementation Journey

While the benefits are clear, organisations must approach AI implementation with careful consideration of several critical factors. The integration of AI systems with existing infrastructure requires substantial planning and investment. Success depends on developing comprehensive migration strategies that maintain operational continuity while upgrading technological capabilities.

Cybersecurity presents another crucial consideration. As procurement systems become more digitised, the importance of robust data protection frameworks increases exponentially. Organisations must balance the drive for efficiency with the need to safeguard sensitive procurement data through sophisticated security protocols.

The human element remains crucial. While AI systems can automate many tasks, they require skilled professionals to manage and optimise their operation. Organisations must invest in comprehensive training programs to ensure their workforce can effectively leverage these new technologies.

About the Author

Lara Blake is the Partnership Development Manager at Tenderfy, a leading tender management software. With a strong background in business development and a passion for technology, Lara is dedicated to empowering young professionals and driving innovation in the industry through her writing and advocacy.

The post The Future of Government Procurement: Innovation, AI, and Automation appeared first on The European Business Review.

Why AI Tools are Not the Magic Digitalisation Bullet for Legacy Businesses https://www.europeanbusinessreview.com/why-ai-tools-are-not-the-magic-digitalisation-bullet-for-legacy-businesses/ https://www.europeanbusinessreview.com/why-ai-tools-are-not-the-magic-digitalisation-bullet-for-legacy-businesses/#respond Sun, 29 Jun 2025 17:29:47 +0000 https://www.europeanbusinessreview.com/?p=231586 By Ritavan Many legacy businesses rush to adopt AI tools without aligning them to clear strategic goals. Ritavan argues that this approach leads to costly inefficiencies and missed opportunities. He […]


By Ritavan

Many legacy businesses rush to adopt AI tools without aligning them to clear strategic goals. Ritavan argues that this approach leads to costly inefficiencies and missed opportunities. He explains why AI should amplify business fundamentals, not replace them, and outlines a practical framework for generating measurable, customer-focused value through data.

In an era dominated by buzzwords like “digital transformation” and “AI transformation”, many legacy businesses are scrambling to adopt the latest technologies, hoping to ride the wave of tech buzz. However, this approach is more reactive than strategic and ultimately self-defeating.

Despite massive investments in cloud, SaaS solutions, dashboards, machine learning models, and automation tools, a significant number of traditional enterprises still struggle to show meaningful returns. Why? Because AI tools, on their own, are not magic bullets. They are amplifiers, not creators, of value. And without a sound strategy rooted in business fundamentals, AI simply increases costs and complexity.

The Hype Trap: Chasing Trends Without Impact

Legacy businesses are facing pressure from all sides: fast-evolving customer expectations, tighter margins, and rising digital-native competition. This cocktail of urgency often drives them to embrace AI in a haphazard way, jumping on the trend without aligning it to their core business objectives. As Ritavan warns, this “spray and pray” approach results in wasted resources and confusion. Businesses collect troves of data and deploy AI models without a clear “why,” creating noise and activity instead of clarity and impact.

Worse still, many legacy businesses crowdsource their digital strategy across departments, ending up with disjointed, incoherent initiatives. When your roadmap is a product of consensus rather than conviction, it lacks focus and intent. True data impact, as Ritavan puts it, must begin with deliberate, first-principle thinking, not with copying groupthink “best practices” or buying into vendor-driven tool hype.

Why Data Alone Isn’t Enough

Being data-driven is no longer a competitive edge. It’s a baseline requirement. But the reality is that most organisations have no clear way to benchmark their data-driven impact. They may have dashboards, KPIs, and ML models, but they cannot articulate how these actually move the needle. They mistake activity and effort for outcomes and impact.

This is where Ritavan’s SLASOG framework — Save, Leverage, Align, Simplify, Optimize, Grow — becomes critical. It is a practical and rigorously tested approach to ensuring data isn’t just being hyped but is purposefully used to create customer value and drive business outcomes.

  • Save money by avoiding costly groupthink mistakes
  • Leverage your strengths and highest impact opportunities
  • Align everyone and everything to your business goals
  • Simplify to minimize the cost of complexity and maximize leveraged returns
  • Optimize learning and impact based on empirical truth
  • Grow by shedding the scarcity mindset and focusing on demand

Legacy Assets: The Unfair Advantage

Contrary to popular belief, legacy businesses are not at a disadvantage in the data-driven era. In fact, their long-standing customer relationships, physical assets, and supplier networks are formidable assets if used wisely. Digital-native competitors may be agile, but they lack the depth and trust that legacy businesses have spent decades building.

The challenge is not to replace legacy systems with shiny AI tools, but to leverage physical, non-digital advantages for data-driven value creation.

The most durable success comes from businesses counter-positioning themselves based on their unique unfair advantages: doing things competitors can’t or won’t do without breaking their own existing business models.

Avoiding the Illusion of Progress

Digitalisation is often mistaken for progress. But layering AI on top of broken systems just creates complexity, not efficiency. Many legacy businesses fall into the trap of thinking they’re transforming just because they’ve bought tools. True transformation is subtractive. It involves rethinking organisational design, reengineering workflows, and removing bureaucratic decision-making.

Adaptability and antifragility depend on decentralised, data-informed decision-making, not blueprint-based plans or consensus committees. As Ritavan stresses, decisions must be guided by data experimentation, not politics or legacy habits. That is the difference between operational excellence and digital theatre.

Measuring What Matters: Data-Driven Customer Value

Ultimately, the question is not how much data you collect, but how much customer value that data helps you create. Legacy businesses tend to obsess over cost savings and internal efficiencies but often fail to measure how data improves the customer experience in meaningful, monetizable ways.

The real benchmark is this: Can you quantify the data-driven value your customer receives? Is your data making your product smarter, your service faster, your experience more relevant? Have you used it to personalise offerings, anticipate needs, and deepen loyalty?

Without these outcomes, AI is merely an expensive exercise in tech adoption. But remember that no one consumed their way to greatness! A well-articulated data strategy starts from the customer and works backwards, not the other way around.

Embracing the Data Flywheel

The end goal isn’t just automation or optimization. It is building a compounding data-driven value creation flywheel. In the most successful organisations, every customer interaction generates data that improves products, which in turn drives better outcomes, which brings in more customers and more useful data. This is the flywheel effect.

Legacy businesses must build toward this future where data doesn’t just inform decisions but powers an engine of growth that accelerates over time. That requires rethinking every touchpoint, building feedback loops into your services, and productising your data insights.

Conclusion: No Silver Bullets, Only Smart, Strategic Moves

Charlie Munger once shared a powerful lesson from his and Warren Buffett’s time in the textile industry that underscores the illusion of tech-driven value. When told about a new loom that could double production, Buffett responded that they should exit the textile business. His logic? The real value would go to the loom maker and the textile buyer—not the company investing in the equipment. It was a clear-eyed recognition that simply adopting new technology doesn’t guarantee returns unless it aligns with a defensible business model and creates differentiated value.

There is no magic AI tool that can save a legacy business from irrelevance. Digitalisation is not a one-time fix or a tech-stack makeover. It is a strategic, iterative, customer-centered transformation that demands clarity, courage, and craftsmanship.

Legacy businesses already have what it takes: depth, history, trust, and data. But to thrive in the new era, they must go beyond tools and trends. They must commit to long-term value creation, anchored in first principles, not fads.

As Ritavan makes clear, success lies in asking the right questions, not buying the right software. Only then can AI amplify what’s already strong instead of revealing what’s fundamentally weak.

About the Author

Ritavan is an operator and investor and the author of Data Impact, with peer-reviewed publications and an international patent. Over the past decade, he has built or scaled data-driven solutions impacting billions. His mission: replace vague digital transformation narratives with clear, outcome-focused frameworks that help legacy businesses create real, measurable value.

The post Why AI Tools are Not the Magic Digitalisation Bullet for Legacy Businesses appeared first on The European Business Review.

Between Brussels and Westminster: Navigating AI Regulations  https://www.europeanbusinessreview.com/between-brussels-and-westminster-navigating-ai-regulations/ https://www.europeanbusinessreview.com/between-brussels-and-westminster-navigating-ai-regulations/#respond Sun, 29 Jun 2025 15:24:52 +0000 https://www.europeanbusinessreview.com/?p=231572 By Andrew How  Artificial intelligence is reshaping insurance through advanced analytics and automation. Andrew How explores how regulators in the UK and EU are taking different approaches to AI governance. […]


By Andrew How 

Artificial intelligence is reshaping insurance through advanced analytics and automation. Andrew How explores how regulators in the UK and EU are taking different approaches to AI governance. He highlights why insurers must align innovation with evolving compliance standards to manage risk, ensure fairness, and harness AI’s full potential across jurisdictions. 

Artificial intelligence (AI) is fast becoming the engine room of modern insurance, from dynamic pricing and risk-based underwriting to customer segmentation and behavioural modelling. But as the technology races ahead, regulators on both sides of the Channel are taking slightly different paths.  

The EU’s AI Act 

The EU AI Act entered into force in August 2024, with staggered deadlines from 2025 to 2027. It takes a risk-based approach, classifying AI applications into prohibited, high-risk, limited-risk, and minimal-risk categories. Most AI systems used in insurance pricing, underwriting, claims handling, and fraud detection will likely fall into the “high-risk” category – especially those affecting access to financial services or influencing decisions with legal or significant personal consequences. 

The law also imposes heavy penalties for non-compliance: up to €35 million or 7% of global annual turnover, whichever is higher. 
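As a rough illustration of how that ceiling works, here is a minimal sketch in Python; the thresholds are those quoted above, while the EUR 2bn turnover figure is an invented example:

    # EU AI Act penalty ceiling: the higher of EUR 35m or 7% of global
    # annual turnover (figures as quoted in this article).
    def max_penalty_eur(global_annual_turnover_eur: float) -> float:
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    # Example: a hypothetical insurer with EUR 2bn in global turnover faces
    # a ceiling of EUR 140,000,000, since 7% of turnover exceeds EUR 35m.
    print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")

For any large insurer, the turnover-based arm will dominate, which is why the 7% formulation matters more than the headline EUR 35 million figure. 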

Specific requirements for insurers began applying from February 2025, including bans on certain AI uses (e.g., social scoring) and obligations around AI literacy training. From August 2025, general-purpose AI models and governance structures come into scope, followed by the full set of high-risk obligations in 2026–2027. 

The UK’s Sector-Led Model: Flexibility with Friction 

In contrast, the UK government has intentionally avoided creating a centralised AI law, favouring a pro-innovation framework based on existing regulatory principles. The approach emphasises five cross-sectoral principles: safety, transparency, fairness, accountability, and contestability. 

While flexible, this regime has left sectoral regulators – like the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) – to interpret and enforce these principles based on context.  

Key compliance considerations include: 

  • Ensuring fair outcomes in pricing and underwriting, in line with Consumer Duty
  • Avoiding algorithmic bias in automated decision-making
  • Managing risks from third-party AI tools (e.g., cloud or data vendors)
  • Ensuring explainability and audit trails for models impacting customer access to financial services

Meanwhile, the proposed AI (Regulation) Bill, a private member’s bill introduced in the House of Lords, signals rising political appetite for stronger statutory guardrails. 

Looking Ahead: The Agentic Horizon 

For re/insurers operating across the UK and EU, the divergence in AI governance creates a challenging compliance landscape. An AI model built to UK standards, emphasising agility, proportionality, and sectoral discretion, may not meet the EU’s strict documentation, audit, and transparency requirements. 

Ultimately, for insurers, technology enhancements around pricing, decisioning, and underwriting are evolving in tandem with regulation, increasing the demand for trusted, expert technology partners that can also demonstrate actionable insight on AI compliance. 

Indeed, the next regulatory test will involve agentic AI – systems capable of making autonomous decisions and dynamically adapting to objectives without constant human intervention, but – critically – still with expert human oversight. 

This shift from automation to autonomy will also require an evolution in governance structures. Agentic AI systems – capable of proactively making decisions, initiating actions, and adapting based on feedback to achieve specific goals – have immense potential to improve efficiency, customer centricity, and personalisation. The potential extends even further when multiple AI agents operate in parallel, dynamically coordinating and adjusting in real time to solve complex problems or pursue shared objectives. 

Navigating, Not Avoiding, Complexity 

The AI regulatory landscape in the UK and EU is evolving, and these two markets are taking distinctly different approaches to balancing innovation and oversight. While the EU imposes strict, top-down compliance obligations, the UK’s sector-led model offers flexibility but introduces ambiguity. For insurers operating across both jurisdictions, this divergence creates added complexity, not only in legal terms but in day-to-day operational decisions.  

The difference between AI ambition and impact often lies in working with partners who understand the full picture, from regulatory nuance to behavioural economics and legacy integration. The best suppliers bring this clarity to every deployment. As the technology moves toward greater autonomy and impact, partnering with trusted technology providers who can ensure compliance, transparency, and performance at scale is essential. 

We firmly believe that regulatory divergence isn’t a roadblock to innovation. It’s a catalyst for more mature, enterprise-wide AI strategies that are as robust as they are agile – and it’s this shift that will define the next phase of insurance transformation.

About the Author

Andrew How is Director of Insurance – UKI at Earnix, leveraging nearly two decades of expertise in P&C insurance across EMEA. He drives growth and innovation through enterprise software solutions, blending deep industry insight with cutting-edge Insurtech and Fintech advancements to transform insurance pricing, underwriting, and customer engagement. 

The post Between Brussels and Westminster: Navigating AI Regulations  appeared first on The European Business Review.

Three Areas in Which to Upskill Staff to Work Alongside AI https://www.europeanbusinessreview.com/three-areas-in-which-to-upskill-staff-to-work-alongside-ai/ https://www.europeanbusinessreview.com/three-areas-in-which-to-upskill-staff-to-work-alongside-ai/#respond Sun, 08 Jun 2025 15:31:16 +0000 https://www.europeanbusinessreview.com/?p=230572 By Kamales Lardi To thrive alongside AI, organisations must invest in continuous learning that empowers employees to use AI tools, think critically, collaborate effectively, and adapt to rapid change. Upskilling […]


By Kamales Lardi

To thrive alongside AI, organisations must invest in continuous learning that empowers employees to use AI tools, think critically, collaborate effectively, and adapt to rapid change. Upskilling in these key areas ensures teams can not only keep pace with AI advancements but also drive innovation and value in a tech-driven world.

The application of artificial intelligence (AI) at work is here, and it has become part of the global business reality. Use of AI-based solutions at work has nearly doubled in the past year, with over 75% of knowledge workers using them as part of their daily work. As AI-powered transformations are expected to deliver tangible business outcomes and value, organizations worldwide are increasing their spending on AI-enabled applications, infrastructure, and related business services. By 2028, AI spending is estimated to more than double, reaching $632 billion globally.

A recent report by Boston Consulting Group indicated that leading organizations currently allot up to 1.5% of their total budget to AI upskilling. However, it is evident that the pace of AI development could outstrip the ability of human workers to learn, adapt, and upskill quickly. A deeper investment in resources and capabilities is required to support the massive shift in how employees are expected to achieve their goals and effectively manage their responsibilities in the age of AI.

To understand how AI will reshape work, Microsoft partnered with LinkedIn to conduct a survey of 31,000 people across 31 countries. The survey highlights that employees want to use AI-based solutions at work to increase efficiency and free up time for more engaging and creative work. Conversely, 45% of professionals surveyed are worried that AI will replace their jobs and are considering leaving in the year ahead. From the same survey, only 39% of people who use AI have received formal training from their companies, while only 25% of companies are planning to offer AI upskilling and training.

In my new book, Artificial Intelligence For Business, I highlight the critical need to prioritise upskilling, reskilling and new-skilling to ensure the workforce is able to leverage AI-based solutions effectively. Conventional approaches to close skills gaps are falling short, compared to the pace of AI development and demands of the business and technological landscape.

Organisations will need to build mindful, continuous learning and development programmes to ensure employees are able to apply AI knowledge effectively and stay up to date with rapid developments in the sector. They must invest in upskilling that ensures employees are able not only to use AI tools and solutions, but also to think critically, collaborate effectively, and adapt continuously.

Strategic Fluency in AI and Decision-Making

Although some degree of technical understanding can be valuable, employees and leadership teams do not need to become machine learning experts or data scientists. But a deep understanding of how AI works in a business context, as well as of its impact and risks, is necessary. Strategic fluency – the ability to distinguish between hype and real business value – is critical in determining where AI-based solutions can be most effective and in developing use cases that will deliver business outcomes. Management teams and employees will need to build a foundational understanding of AI concepts, including generative models, automation, and natural language processing, within the context of real business applications.

Additionally, ethical awareness is highly critical. This will enable teams working with AI-based solutions to flag issues and abnormalities, such as algorithmic bias, data misuse and regulatory non-compliance. Decision-makers in organisations will need to develop critical oversight capabilities to ensure AI outputs align with human judgement and broader organisational objectives. Effective methods to upskill in strategic fluency and decision-making may include executive briefings, cross-functional learning labs, and AI strategy and ethics courses that are embedded in the context of the business.

Human-AI Collaboration and Workflow Design

AI solutions are transforming job roles, not replacing them. For example, there is currently a lot of hype in the market about agentic AI replacing jobs and team functions. These are AI systems that can make decisions and act independently to achieve specific goals, often with limited human supervision. As intelligent as these systems may appear, the true opportunity lies in designing workflows where humans and AI systems support and complement each other. To alleviate fears of being replaced, employees must understand how these AI systems can augment their capabilities.

Upskilling efforts need to focus on task-level integration by conducting skills analysis to understand the tasks each role performs and capabilities required to complete tasks effectively. This will offer insights into what tasks could be automated, and where it is necessary to retain human oversight, intervention and capabilities. Teams will need to learn how to delegate repetitive or data-heavy tasks to AI-based solutions, including deeper training on prompt engineering for large language models, learning to apply AI productivity tools, and how to orchestrate human-AI handoffs in daily workflows.

This shift in working models can be supported by role-specific training on AI tools, as well as job redesign workshops conducted with internal teams to identify high-impact opportunities for automation and augmentation.

Adaptive Mindset and Soft Skills

The success of applying AI for business effectively comes down to the human element – whether people are able to adapt to the new working environments. It is crucial to create an environment where exploration and experimentation are encouraged, learning from failure is normalised, and staying up-to-date with emerging tools and capabilities becomes second nature.

As teams take on more nuanced roles and integrate more AI-based solutions into daily workflows, the need for human-centric skills such as empathy, creativity, communication, and cross-functional collaboration becomes more important. Additionally, psychological safety in the organisation – where employees feel secure enough to test, challenge, and question AI systems and their output – will ensure that appropriate human oversight is put in place and that people embrace AI solutions effectively. Organisations can develop adaptive mindsets through leadership coaching, peer-led groups, and innovation incentives that reward curiosity and participation.

People at the centre of AI for business

Organisations that want to achieve sustainable success with AI application for business must understand that it is not only a technology investment, but rather a people strategy. Focusing only on the implementation of technology and tools will fall short. The real differentiator lies in how well people are prepared to lead, collaborate and grow in an AI-augmented business world. For business leaders, the call to action is clear – equip your teams to not only work with AI-based solutions, but to embrace and thrive with them.

About the Author

Kamales Lardi is the author of Artificial Intelligence for Business and a recognised global keynote speaker on AI transformation.

The post Three Areas in Which to Upskill Staff to Work Alongside AI appeared first on The European Business Review.

The AI Wave is Here: 5 Simple Ways to Get Your Business AI-Ready  https://www.europeanbusinessreview.com/the-ai-wave-is-here-5-simple-ways-to-get-your-business-ai-ready/ https://www.europeanbusinessreview.com/the-ai-wave-is-here-5-simple-ways-to-get-your-business-ai-ready/#respond Sun, 01 Jun 2025 15:14:46 +0000 https://www.europeanbusinessreview.com/?p=230258 By Tracy Sheen Small businesses can thrive in the AI era with the right mindset and simple steps. In this accessible guide, Tracy Sheen outlines how owners can use everyday […]


By Tracy Sheen

Small businesses can thrive in the AI era with the right mindset and simple steps. In this accessible guide, Tracy Sheen outlines how owners can use everyday tools to work smarter, not harder. From automating tasks to upskilling teams, her insights make AI less intimidating and more empowering for everyone.

Artificial intelligence is no longer a future trend—it’s a present-day force reimagining how businesses operate, compete, and connect with customers. For small business owners, this can feel both exciting and overwhelming. With limited time, resources, and technical expertise, how do you even begin to prepare? The good news is that getting AI-ready doesn’t require a tech overhaul or an IT degree. What it does require is a mindset shift and a few practical steps. Whether you’re a retailer, a professional services firm, or a tradie, here are five simple ways to position your business in the AI era. 

1. Start with curiosity, not code

AI readiness begins not with technology, but with awareness. Ask: What are the repetitive, time-consuming tasks in my business? Where do we lose time or miss opportunities? AI tools—many of them affordable and user-friendly—are already solving these problems. Think automated inbox management, chatbots for customer service, or scheduling tools. You don’t need to build AI, you just need to know what’s possible—and where it could help. Set aside 30 minutes a week to explore what’s out there. Curiosity is your competitive advantage.

2. Audit your data (even if it’s messy)

AI runs on data. But most businesses don’t have perfect records — and that’s okay. What matters is knowing what data you do have, where it lives, and how you collect it. Start with a simple audit: What customer data do you gather? How do you track sales or inventory? Is your information in spreadsheets, cloud apps, or someone’s head? Once you have visibility, you can begin to streamline. Clean, accessible data is the first step toward using AI meaningfully.

3. Upskill your team—gently

You don’t need an in-house AI expert, but you do need a team that’s open to change. AI can and will amplify what your people do, not replace them. Treat AI like another team member, not a competitor, and help your people see the value in adopting new tools. Host a lunch-and-learn session about AI in your industry. Share a simple automation tool and invite staff to trial it. The goal is to reduce fear and create an environment of collaboration, not competition. Over time, these small investments in learning can unlock big gains in productivity and morale.

4. Experiment with low-risk, high-reward tools

You don’t need to overhaul your systems to benefit from AI. Start with tools that integrate with what you already use—whether that’s Xero, Shopify, ChatGPT, or Microsoft 365. For example, AI-powered writing assistants can help with customer emails or marketing copy. AI schedulers can take the back-and-forth out of bookings. These aren’t just “nice to haves”—they save real time, which small teams often lack. Pick one area of friction and pilot a tool. Learn from it. Then scale what works.

5. Think in workflows, not just tools

The most powerful way to leverage AI isn’t by adding more apps—it’s by reimagining how work gets done. Look at your business processes end-to-end. Where are the delays, bottlenecks, or duplicated effort? This is where small businesses have an edge: they can adapt quickly. By redesigning workflows with AI in mind—automating repetitive steps, using predictive insights, or creating smarter customer journeys—you can free up time to focus on growth, relationships, and innovation.

AI isn’t a threat to small business—it’s a toolkit. But like any toolkit, its value depends on how you use it. You don’t have to do everything at once. Just start. Explore one tool. Solve one problem. Build one habit. The businesses that embrace AI now—not perfectly, but practically—will be the ones best positioned to thrive in what comes next.

About the Author

Tracy Sheen is an AI business strategist, keynote speaker, and author of AI & U: Reimagine Business, helping leaders navigate the intersection of technology and human potential. She works with organisations across sectors to make AI accessible, ethical, and impactful for teams of every size. Visit https://www.thedigitalguide.com.au/  

The post The AI Wave is Here: 5 Simple Ways to Get Your Business AI-Ready  appeared first on The European Business Review.

Google or Gemini? A Framework for Navigating Agentic AI Confusion https://www.europeanbusinessreview.com/google-or-gemini-a-framework-for-navigating-agentic-ai-confusion/ https://www.europeanbusinessreview.com/google-or-gemini-a-framework-for-navigating-agentic-ai-confusion/#respond Tue, 27 May 2025 09:58:56 +0000 https://www.europeanbusinessreview.com/?p=230026 By Jacques Bughin  Agentic AI is transforming the digital economy, replacing traditional search with intelligent execution. In this article, Dr. J Bughin presents a five-step framework that challenges binary narratives and […]

By Jacques Bughin 

Agentic AI is transforming the digital economy, replacing traditional search with intelligent execution. In this article, Dr. J Bughin presents a five-step framework that challenges binary narratives and reveals how businesses can adapt strategically. The future of monetization depends on navigating this shift with clarity, precision, and economic insight.

Summary

In the age of agentic AI, where artificial intelligences no longer simply respond but execute actions, traditional business models – such as Google’s – are being profoundly challenged. Managers need a clear, unambiguous answer to how deep that challenge runs and what to do about it. This paper proposes a five-step analytical framework for understanding this rupture and deriving well-founded strategic decisions from it. Applied to the case of Google, this process reveals that:

  1. The current model is based on monetizing traffic via the SERP; however, it is structurally fragile. If agents bypass the SERP, disintermediate search, and, above all, reduce the value of the click, they could undermine the system.
  2. On the demand side, agents promise a growing search market by improving conversion rates and making previously ignored queries monetizable. This attracts new entrants and allows Google to cannibalize itself.
  3. Competition is evolving: according to game theory, a new equilibrium should quickly emerge between Gemini (Google) and advertising integrated into LLMs, at a pace faster than the adoption of agentic AI itself.
  4. The value will shift to execution. Google must therefore become an orchestrator of agents, not just a search engine.
  5. An interesting game equilibrium is not an all-out battle, but a differentiation model in which agents focus on industries (verticalization) while Google becomes more integrated, from Google Cloud and Chrome to Google Workspace and Gmail.

This framework makes it possible to move beyond binary reactions and approach transformation in a structured, rigorous, and economically sound way.

Introduction

The rise of large language models (LLMs) and agentic AI has catalyzed a wave of speculation about the end of search as we know it.

While popular discourse is dominated by two opposing conjectures (“Google will be wiped out” versus “LLMs are not profitable”), the future is more complex. It requires a structured analysis of how search has been monetized, as well as a theoretical assessment of how search and monetization will evolve as AI does.

Using models based on the microeconomics of search, as well as on the strategic interaction (static and repeated games) between Google and attackers such as OpenAI, Perplexity, and others, we try to offer a more powerful framework that not only explains the transformation underway but also debunks simplistic narratives (Table 1). Managers may find this framework valuable when looking for more solid answers about what to do in an AI transformation.

The Five-Step Framework

Table 1: Navigating AI confusion

Step 1 – Understand the business model. Action: analyze the current revenue model of the dominant incumbent operators. Objective: establish the economic base and structural dependencies.
Step 2 – Evaluate actual disturbances. Action: identify how attackers modify monetization channels. Objective: determine the depth and extent of the disturbance.
Step 3 – Understand the economics of demand. Action: understand how the new game changes demand. Objective: assess whether the market’s future is up or down.
Step 4 – Add supply-side economics. Action: understand the logic of the new equilibrium, dynamically. Objective: assess the intensity, stability, and type of the new competition.
Step 5 – Rebuild with aggregates. Action: analyze supply and demand together. Objective: derive new results and deduce actions, key assets, and playing fields.

1. Understanding the business model

Google’s sponsored links, which manifest themselves primarily through search ads, are the cornerstone of its revenue model. In 2024, Google’s advertising revenues reached around $240 billion, with search ads contributing around $175 billion, or 57% of the company’s total revenues.

While these figures underline the significant value of sponsored links within the Google ecosystem, Google has other revenue streams, such as Google Cloud, which will benefit from the deployment of AI. In addition, advertising revenue is driven by three fundamental levers: the immense volume of global search queries, the subset of high-intent queries that trigger paid ad auctions, and the Google platform’s control over the search engine results page (SERP). By dictating page structure and bidding rules, Google effectively monetizes attention and intent on a massive scale.
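To make that dependence concrete, here is a minimal back-of-envelope sketch in Python of the monetization logic (query volume × monetizable share × CTR × CPC, as summarized later in Table 4); every input below is an illustrative assumption chosen to land near the article’s $175 billion figure, not Google’s actual operating data:

    # Back-of-envelope search-ads revenue decomposition
    # (all inputs are illustrative assumptions, not Google's real figures).
    annual_queries = 5_000_000_000_000    # assumed ~5 trillion queries/year
    monetizable_share = 0.20              # assumed share with commercial intent
    paid_click_through_rate = 0.05        # assumed CTR on sponsored results
    avg_cost_per_click_usd = 3.50         # assumed average CPC

    revenue = (annual_queries * monetizable_share
               * paid_click_through_rate * avg_cost_per_click_usd)
    print(f"Implied search-ad revenue: ${revenue / 1e9:.0f}bn")  # ~$175bn

The point of the decomposition is that agentic AI attacks all the factors at once: fewer queries reach the SERP, fewer of those produce paid clicks, and weaker ranking power erodes the price per click.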

However, this dominance comes with inherent vulnerabilities. Firstly, the vast majority of queries – around 80% – are not commercially monetizable; they respond to needs for information, navigation, or exploration. Secondly, SERPs themselves are saturated and increasingly commoditized, with SEO manipulation diluting their value. Thirdly, the user must still act outside the Google interface to accomplish tasks, creating friction in the user experience. These limitations constitute the structural exposure of Google’s traditional model.

2. Assessing real disturbances

  • The impact of AI: GenAI, but above all, agentic AI

LLMs clearly change the structure of search by reducing the need for links (direct answers) and reducing navigation (multi-click paths become a single prompt). With LLMs, over 60% of queries are now informative or intent-driven, which is ideal for AI-generated answers. Users interact with summaries and don’t click on links, reducing volume for Google.

The other danger is the collapse of traditional ranking logic, as the concept of a “#1 ranking” is replaced by being quoted, summarized, or cited by LLMs. The implication is that the ranking premium that drove up cost-per-click disappears, and pricing power is reduced.

Although initially limited to synthesis and dialogue, the integration of agentic AI considerably broadens the scope of disruption. With the emergence of single-agent systems, a single AI entity can autonomously perform discrete tasks – for example, booking a restaurant, sending an e-mail, or initiating the drafting of a document – without human intervention. Multi-agent systems go further: they break down complex workflows into sub-tasks, coordinate APIs, and execute a sequence of decisions on the user’s behalf. In both cases, the agent not only interprets the user’s intention but acts on it, transforming traditional requests into executable commands.

On a large scale, this transition is transforming the very nature of digital search. It replaces the advertising-funded discovery layer with agent-based orchestration, increasing the potential economic value of each query, but also reshaping who controls that value and how it is monetized.

  • Advertising value chain

This evolution is turning the structural microeconomics of search on its head, by orienting it towards the delivery of results. This shift replaces the monetization of navigation (selling advertising space along the way) with the monetization of execution (capturing value at the result level).

But the rise of agentic AI isn’t limited to disrupting search. It’s putting systematic pressure on Google’s broader monetization engine – including display advertising, YouTube content monetization, and even, eventually, a large number of B2B SaaS intermediaries. In display advertising, AI agents bypass banner placement logic by performing tasks directly from the user’s prompt or workflow. In enterprise contexts, agentic AI increasingly disintermediates SaaS categories for which Google (via Workspace, Ads Manager, or Analytics) has monetized coordination or knowledge. When agents plan campaigns, manage CRM entries, or optimize user journeys, they bypass several layers of existing SaaS infrastructure. This creates downward pressure on margins and squeezes the space for traditional marketing and advertising technology.

Ultimately, Google and its AI competitors are converging on a new high-value node: the orchestration layer. This is where decisions are made, actions are initiated, and margins can be captured. Whether powered by Gemini, OpenAI, or specific vertical agents, this layer holds the key to monetization in the age of agentic AI. What search was for information, orchestration is becoming for execution: the critical control point in digital value chains.

3. Understanding the “demand” side of change

An important unknown is how agentic AI will affect the profit pool. Microeconomics, however, tells us that the pool will be larger, due to three factors. Firstly, agentic execution improves the quality and relevance of interactions. Unlike the current model, where most ads are shown to users who are not yet ready to convert, agentic ads can be integrated directly into high-intent workflows. Secondly, agents reduce transaction friction. By shortening the funnel, they accelerate the passage to action. This reduces waste in sales channels and increases the results attributable to advertising. Supply-side efficiency encourages brands to bid higher for access to agent-driven engagement.

Thirdly, the long tail of non-monetized queries – previously low-intent, informative searches – can now be captured and transformed into valuable transactions.

These effects are, in principle, multiplicative in their impact on the return on (search) advertising spend – so no single effect needs to be large; a smaller but combined impact is the real crux of whether the market will grow. As these three effects are likely to combine as the technology evolves towards agentic AI, it is reasonable to expect the market to be bigger, not smaller.
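A minimal sketch of that compounding logic, with uplift figures that are arbitrary assumptions chosen purely for illustration:

    # Three modest demand-side uplifts compound multiplicatively
    # (all percentages are invented for illustration, not forecasts).
    better_targeting = 1.15      # +15% from in-workflow, high-intent placement
    lower_friction = 1.10        # +10% from shorter funnels
    long_tail_capture = 1.20     # +20% from newly monetizable queries

    combined = better_targeting * lower_friction * long_tail_capture
    print(f"Combined uplift on ad returns: {combined - 1:.0%}")  # ~52%

No single lever exceeds 20%, yet together they lift returns by roughly half, which is why the combined effect, not any single headline number, determines whether the market grows.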

4. Add the supply side of change

  • Why LLMs will establish advertising as an additional source of revenue 

If agentic AI increases the value per query, it threatens to cannibalize the very mechanisms that fund today’s search giants. For Google, the main concern is that agentic systems will bypass the SERP entirely, cutting off its advertising supply chain. Gemini, Google’s counter-offensive, seeks to preserve monetization while adapting the interface to a query-driven future.

On the other hand, players like OpenAI and Perplexity face an entirely different challenge: most of their users are free. OpenAI, for example, is said to have over 100 million weekly active users, but fewer than 5% pay for ChatGPT Plus. To sustain the high costs of LLM inference and GPU-intensive infrastructure, these platforms need to monetize the remaining 95% of users.

The strategic logic behind LLM advertising monetization is therefore simple but unavoidable. First, inference costs at scale require offsetting cash flows. Secondly, user payment models are reaching a ceiling – most users won’t pay for general-purpose chat. Thirdly, verticals such as procurement, local services, and SaaS recommendations are rich in intent and ripe for monetized orchestration.

  • Game-theoretic perspectives: Modeling competition between LLM and Google advertising

    • Pure strategy equilibrium (Nash solution)

When several suppliers compete, it is important to know whether it is possible to categorize the type of competition that is likely to occur. Here, the tools of game theory, which examine the payoffs to each player based on the moves of the others, are uniquely valuable in assessing possible behavior, now and in the future, based on repeated interactions.

Suppose we model the interaction between Google and LLM challengers first as a static (one-shot) game, with the values of the game (including LLM subscriptions) as follows (in billions of dollars by 2030):

Table 2: Game theory payoff matrix (illustrative)[i]

                                  LLM: No ad monetization    LLM: Monetizing advertising
Google: Do nothing                (144, 30)                  (108, 75)
Google: Reinvention by Gemini     (150, 60)                  (161, 44.5)

The payoff matrix (Table 2) shows that LLMs have an incentive to engage in advertising for at least some of Google’s choices. The main message from game theory is the emergence of a stable equilibrium in which the players’ strategies converge on LLM-mediated advertising (the Nash strategy) – and the total market has grown.
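For readers who want to replicate the equilibrium analysis, here is a minimal pure-strategy Nash-equilibrium finder over the illustrative Table 2 payoffs; reading each pair as (Google’s payoff, LLM’s payoff) is our assumption, since the table does not state the order:

    from itertools import product

    # Illustrative payoffs from Table 2, read as (Google, LLM) pairs
    # (an assumption; values in $bn by 2030).
    GOOGLE = ["Do nothing", "Reinvent via Gemini"]
    LLM = ["No ad monetization", "Monetize advertising"]
    payoffs = [
        [(144, 30), (108, 75)],
        [(150, 60), (161, 44.5)],
    ]

    def pure_nash():
        """Yield every cell where neither player gains by deviating alone."""
        for g, l in product(range(2), range(2)):
            google_ok = all(payoffs[g][l][0] >= payoffs[alt][l][0]
                            for alt in range(2))
            llm_ok = all(payoffs[g][l][1] >= payoffs[g][alt][1]
                         for alt in range(2))
            if google_ok and llm_ok:
                yield GOOGLE[g], LLM[l], payoffs[g][l]

    for equilibrium in pure_nash():
        print(equilibrium)

Which cell the search identifies is sensitive to the illustrative numbers; the one-shot matrix is only the starting point, and the repeated game below is where the convergence story plays out.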

  • Mixed-strategy equilibrium (the repeated game)

These results apply only to the one-shot game. Let’s assume a more realistic setting, in which there is uncertainty about the profitability and development of agentic AI, and the game of interactions between Google and LLMs is repeated over 2024-2030. Here the dynamic changes: initially, LLMs stay away from advertising monetization, even as they experiment and gain the trust of users. Gemini is also partially deployed, but not head-on. As the capabilities of LLMs improve, advertising enters their ecosystems. Google, faced with strong erosion, accelerates Gemini’s deployment and integrates the new advertising logic into AI agent flows. In the end, both parties compete in the field of agent-based monetization.

This type of game is known as a mixed-strategy game, in which the players randomly combine several strategies to test their best position and, of course, to hide their initial intentions (Table 3). This uncertainty eventually disappears, converging towards the equilibrium shown in Table 2.

Table 3: Game frame evolution

  1. Mixed-strategy phase (2024-2026). Dominant play for Google: 60-80% deploy Gemini (to reinvent itself while avoiding total cannibalization of margins); 20-40% delay Gemini (observe user habits, avoid overreaction). Dominant play for LLMs: 40-70% monetize advertising (capture initial value in verticals like travel); 30-60% grow their footprint (build trust).
  2. Iteration and feedback (2026-2028). Players update beliefs (Bayesian learning on payoff structures) and refine strategies.
  3. Convergence towards a pure strategy (2028-2030). Players commit to pure strategies, with Google fully integrating Gemini into search.

This evolutionary path, derived from game theory, is not incidental:

  1. Firstly, it means that rational logic should lead to an equilibrium where the new business model becomes dominant for each player.
  2. This model is evolutionary, not because Google has difficulty executing it, but because it’s more strategically optimal to embark on a mixed strategy. This mixed phase creates a space for experimentation without open conflict. Each party sends strategic signals (e.g., Gemini integration in Android but not in the search home page; OpenAI testing of sponsored suggestions in Pro mode only).
  3. Even though the game is evolutionary, it is fast: from the start, there is already more than a 50% chance that Google will push into LLM-based search, and a marginally lower – but far from zero – probability that LLMs will push into advertising. Within 3-4 years, the strategies lead to a reversal of the dominant business model, even while agentic AI penetration in advertising and search is not yet dominant – only 30 to 40% of customers use it.

This dynamic is the result of a positive loop effect. Increased usage leads to better feedback on the user interface and improved agent quality. Better agent quality reinforces trust and leads to more commercial requests. And if more resources are available, LLMs invest more in model optimization.

This loop has other implications: it favors whoever is first to own a closed-loop infrastructure – so we can expect Google to integrate Gemini into Android, Chrome, Maps, and Gmail. New LLM attackers such as OpenAI or Perplexity could then choose to secure their position as agents in the key workflows of other players competing with Google (such as Salesforce’s Slack, Microsoft Teams, or Zoom), thus creating multiple distinct ecosystems without aggressive competition, favoring the extraction of ROI from customers.

5. Bringing together all the elements of microeconomics

5.1. The metamorphosis of online search

From this perspective, the future of online search is not one of extinction or a struggle for survival. It’s about a metamorphosis where the revenue model will evolve from advertising around discovery to monetization around execution.

Google’s dominance depends on its ability to maintain trust, share relevance, and user flow. LLMs, meanwhile, are set to evolve from high-cost, low-revenue utilities to sustainable platforms. This will require a diversification of revenue sources from subscription to advertising, but advertising that is integrated, not imposed.

5.2. News of Google’s death is greatly exaggerated – But Google needs a boost

Google’s destiny is not a binary choice between death and survival, but it is clear that the business model is set to shift towards agent-based execution – and that this dynamic will force Google to reinvent itself. The success of this reinvention will depend on several interdependent factors.

The demand effect shows that the transition can be profitable. The loop effect clearly shows that Google must also remain a major player if it is to make a successful transition. The loss of more than 25% of classic search users, who are turning instead to LLMs (outside Gemini) for their searches, means that it may be difficult for Google to maintain its price levels (CPC). Gemini’s reinvention path is also about achieving a leadership position, but primarily in the search-agent (not LLM) arena. Google’s current platforms will therefore be its best assets moving forward, while Gemini becomes the journey Google must execute well to secure a rosy future.

Final Thought

Ultimately, the application of the above approach can be summarized in a table (Table 4).

Table 4: Summary of results

Step 1 – Understand business income: Google earns around $175 billion a year from search ads (57% of total revenue). Monetization = query volume × CTR × CPC. Only 10-20% of queries are monetized. Power lies in the platform’s control over SERPs and bidding rules.
Step 2 – Evaluate actual disturbances: LLMs respond directly, bypassing links and SERPs. Agentic AI performs tasks, eliminating navigation steps. Traditional CPC logic is weakened and ranking power is eroded. Platforms like OpenAI and Perplexity intercept high-intent queries.
Step 3 – Understand the economics of demand: agentic AI improves performance through better targeting and task integration. Long-tail queries become monetizable. Funnel friction is reduced, capturing more intent. Result: the market expands through improved advertising results.
Step 4 – Add supply-side economics: LLMs must monetize to cover inference costs (the subscription ceiling has been reached). Game theory shows that LLMs adopt advertising while Google launches Gemini. Competition shifts to agent orchestration (Gemini, Copilot, etc.). Result: coexistence in multi-agent ecosystems, with no monopoly.
Step 5 – Aggregate reconstruction: execution becomes the new monetization layer. Google needs to integrate deeply (Gemini in Android, Chrome, Gmail). The new value lies in agent control, task execution, and orchestration infrastructure.

The speed of the business-model changeover is rapid – faster than customer adoption – because competition takes place at the margins, to secure growth.

Although this synthesis may seem simple, its “tour de force” lies in the fact that it is the result of a comprehensive and detailed microeconomic analysis. In times of disruptive technological transformation – such as the rise of agentic AI – success doesn’t depend on intuition alone, and even less on fear. In times of disruption, the first task is to make sense of the change and develop knowledge for a clear and persistent path through it. The time has come to establish a discipline aimed at building a solid foundation of strategic data. Business leaders and policy-makers need to rigorously model technological trajectories, changes in user behavior, and competitive dynamics. This five-step framework should enable more decisive and credible action.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

References
Acharya, D., Kuppan, K., & Divya, B. (2025). Agentic AI: Autonomous intelligence for complex goals – A comprehensive survey. IEEE Access, 13, 18912–18936.
Bornet, P., Wirtz, J., Davenport, T. H., De Cremer, D., Evergreen, B., Fersht, P., … & Mullakara, N. (2025). Agentic Artificial Intelligence: Harnessing AI Agents to Reinvent Business, Work and Life. Irreplaceable Publishing.
Bughin, J., & Remy, Ph. (2025). Guiding agentic AI. The European Business Review, April.
Hosseini, S., & Seilani, H. (2025). The role of agentic AI in shaping a smart future: A systematic review. Array, 100399.
Li, M., Nguyen, B., & Yu, X. (2016). Competition vs. collaboration in the generation and adoption of a sequence of new technologies: A game theory approach. Technology Analysis & Strategic Management, 28(3), 348–379.
Yuskevich, I., Smirnova, K., Vingerhoeds, R., & Golkar, A. (2021). Model-based approaches for technology planning and roadmapping: Technology forecasting and game-theoretic modeling. Technological Forecasting and Social Change, 168, 120761. core.ac.uk/download/pdf/478947916.pdf
[i] The model is illustrative and based on the following hypotheses, anchored in case studies: (a) 20–30% monetization of the long tail of keywords, thanks to direct execution by AI; (b) a 50% reduction in execution time and a 50–100% increase in conversion; (c) a 50/50 split of the value created between customers and the executor. The LLM either runs its own orchestration or pays 20% of revenues to other distributors. Agent penetration is in the order of 30–40% for marketing and sales. Game values are based on the highest density obtained from a Monte Carlo simulation over the key variable intervals. Figures are in real terms, excluding inflation.
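For readers who want to reproduce the spirit of that exercise, the sketch below draws the footnote’s hypotheses uniformly from their stated intervals and inspects the resulting distribution. The payoff index itself is a simplifying assumption for illustration, not the article’s actual model.

    import random

    # Back-of-envelope Monte Carlo over the footnote's key variable intervals.
    # The payoff index below is an assumed stand-in for the article's model.

    def one_draw() -> float:
        long_tail = random.uniform(0.20, 0.30)          # long-tail monetization share
        conversion_uplift = random.uniform(0.50, 1.00)  # +50-100% conversion
        agent_penetration = random.uniform(0.30, 0.40)  # marketing & sales agents
        value_split = 0.50                              # 50/50 customer/executor split
        return long_tail * (1 + conversion_uplift) * agent_penetration * value_split

    draws = sorted(one_draw() for _ in range(100_000))
    print(f"median payoff index: {draws[50_000]:.3f}")
    print(f"5th-95th percentile: {draws[5_000]:.3f} to {draws[95_000]:.3f}")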

Reclaiming SaaS Value in the Agentic AI Era https://www.europeanbusinessreview.com/reclaiming-saas-value-in-the-agentic-ai-era/ https://www.europeanbusinessreview.com/reclaiming-saas-value-in-the-agentic-ai-era/#respond Tue, 27 May 2025 09:04:51 +0000 https://www.europeanbusinessreview.com/?p=230020 By Jacques Bughin  As agentic AI takes over task execution, the foundations of traditional SaaS begin to crumble. Interfaces lose value, orchestration gains power, and control over workflows becomes the […]

By Jacques Bughin 

As agentic AI takes over task execution, the foundations of traditional SaaS begin to crumble. Interfaces lose value, orchestration gains power, and control over workflows becomes the new battleground. Dr. Jacques Bughin reveals how companies must pivot fast or risk fading into irrelevance in this rapidly shifting landscape.

1. The End of SaaS as We Know It

The traditional Software as a Service (SaaS) model, characterized by user interfaces, per-seat pricing, and feature sets, is undergoing a significant transformation. The catalyst for this change is the emergence of agentic AI – autonomous digital workers capable of executing tasks on behalf of users. By 2030, it’s projected that 30% of current B2B SaaS revenue is at risk due to orchestration-driven compression. Users will delegate tasks to agents, potentially bypassing multiple software interfaces. This shift changes the pricing paradigm from “software access” to “outcome fulfillment”. SaaS companies, especially those offering horizontal, mid-layer, or UI-heavy solutions, face significant disruption. Products like dashboards, CRMs, schedulers, or project tools without vertical integration risk obsolescence within 2–4 years.

Who Owns the Stack Now – and How It Will Change

In the agentic era, value accrues to those controlling the stack: infrastructure providers like Azure, AWS, and GCP; model developers such as OpenAI, Claude, Gemini, and Mistral; orchestrators including Copilot, Gemini 1.5, Dust, and LangGraph; and finally, the SaaS layer, which is increasingly reduced to an API endpoint. Control over the user interface no longer ensures monetization. Agents are UX-agnostic; what matters is control over intent, memory, context, and orchestration flow. The entity that owns the agent effectively owns the user.

Traditional SaaS captures value through accounts, permissions, UIs, and reports. Agentic AI, however, derives value from workflow dominance, automation logic, and autonomous action. This represents a shift from user-driven interfaces to autonomous decision-making architectures. Instead of users navigating multiple tools, agents pull data, query APIs, make decisions, and inform users, rendering entire layers of the traditional stack obsolete.
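As a rough illustration of that architectural shift, the Python sketch below shows an agent loop that pulls context, queries APIs, decides, and only then informs the user. Every name in it is a hypothetical placeholder, not any vendor’s real API.

    # Hypothetical sketch of the agentic pattern described above: the agent,
    # not the user, pulls data, queries APIs, decides, and informs the user.
    # All names are illustrative placeholders, not a real vendor API.

    def fetch_context(goal: str) -> dict:
        # Pull data from connected systems (CRM, calendar, ...).
        return {"goal": goal, "crm_notes": "...", "calendar": "..."}

    def query_apis(context: dict) -> list[str]:
        # In the agentic model the SaaS layer is hit as an API endpoint, not a UI.
        return [f"option derived from {key}" for key in context]

    def decide(options: list[str]) -> str:
        # Stand-in for model-driven automation logic.
        return options[0]

    def run_agent(goal: str) -> str:
        context = fetch_context(goal)
        decision = decide(query_apis(context))
        return f"Done: {decision!r} executed for goal {goal!r}"

    print(run_agent("schedule the Q3 pipeline review"))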

Meanwhile, the transition is accelerating:

  • Adobe has integrated agentic AI into its suite, enhancing user experiences across its platforms. Salesforce has expanded its family of large action models, designed to predict and perform next actions, powering AI agents across its ecosystem. ServiceNow has introduced autonomous AI agents capable of executing complex tasks, differentiating them from traditional generative AI copilots. Startups like Sana (SE), Otherside AI, and Deepop are building orchestrators as core products. Additionally, tools like AutoGen, CrewAI, and LangGraph are rapidly maturing, facilitating agent deployment.
  • Major tech companies are aligning around task flow control. Microsoft is transforming M365 into an orchestration hub via Copilot. Google utilizes Gemini to defend search and expand SaaS offerings. Amazon connects agents to transactions through Bedrock and Alexa. Meta develops social/consumer agents to protect advertising revenue. NVIDIA and AMD drive demand for compute through orchestration. Each stands to gain from cloud revenue, LLM licensing, chip sales, or agent UX control.

In parallel, the cost per 1,000 tokens for LLMs has plummeted from ~$10 (GPT-4, 2023) to less than $0.01. Task orchestration costs are decreasing by 80–90% annually, while the number of addressable agentic workflows grows exponentially. By 2026, many SaaS workflows will be more cost-effective and efficient when executed by agentic systems rather than traditional applications.
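The arithmetic behind those cost curves is easy to check. In the sketch below, the decline rates are the figures quoted above, while the three-year horizon and the 85% midpoint are assumptions.

    # Quick arithmetic on the cost trends quoted above.
    cost_2023, cost_now = 10.0, 0.01        # $ per 1,000 tokens (GPT-4-era vs today)
    print(f"Token cost fell {cost_2023 / cost_now:,.0f}x")

    orchestration_cost = 1.0                # indexed: today's cost = 1.0
    for year in (1, 2, 3):
        orchestration_cost *= 1 - 0.85      # assumed midpoint of the 80-90% annual decline
        print(f"Year {year}: {orchestration_cost:.4f} of today's cost")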

2. The Future of SaaS Is Different

Cutting Costs Isn’t a Strategy – You Need to Move

Some SaaS firms may respond by reducing R&D, narrowing product scope, or halting innovation. While this might preserve short-term EBITDA, it jeopardizes long-term viability. Agents will outperform pared-down tools, leading to user attrition and increased churn. Eventually, these firms risk becoming mere wrappers around others’ orchestration logic.

Reality bites: Klarna, a fintech leader, has undertaken a significant transformation by reducing its reliance on over 1,200 SaaS tools, opting for internally developed AI-powered solutions. This strategic shift led to annual savings exceeding $10 million and streamlined operations. Notably, Klarna severed ties with major SaaS providers like Salesforce and Workday, replacing them with internal systems built on AI infrastructure, including OpenAI’s technologies. The company’s AI assistant, powered by OpenAI, managed two-thirds of customer service chats in its first month, effectively performing the work of 700 full-time agents. This move not only improved efficiency but also enhanced customer satisfaction, with errands resolved in less than 2 minutes compared to 11 minutes previously.

Embracing Strategic Pivots

In the dynamic landscape of SaaS and AI, the ability to pivot strategically is crucial. Many successful companies have undergone significant pivots in their business models to adapt to market changes and achieve growth. For instance, Twitter originated as a podcast service called Odeo before pivoting to a microblogging platform. Similarly, Shopify transitioned from an online snowboarding equipment store to a comprehensive e-commerce platform. Flickr began as an online multiplayer game before becoming a photo-sharing site. Pinterest started as a mobile shopping app named Tote before evolving into a visual discovery platform. These pivots often involve redefining core business assumptions and engaging new resources, technologies, and leadership. Such examples underscore the importance of flexibility and responsiveness in business strategy, especially in the face of technological advancements like agentic AI.

Pivots for SaaS include:

  1. Build Embedded Agents: Integrate agentic UX within your product. Employ intent-based UI, context memory, and internal Retrieval-Augmented Generation (RAG) – a toy RAG loop follows this list.
  2. Attack via Vertical Orchestration: Control agents across specific vertical domains (e.g., construction, legal, compliance). Examples include Procore, ServiceNow, and Toast.
  3. Own the Model Logic: While not necessarily owning the LLM itself, manage your RAG, fine-tuning, and abstraction layers. Utilize tools like CoreWeave, Mistral, and LangGraph for efficient development.
  4. Pick the Best Ecosystems to Build In: Key ecosystems for development include infrastructure providers like CoreWeave, Lambda, and RunPod; model developers such as Mistral and LLaMA 3; orchestration tools like AutoGen, CrewAI, and LangGraph; SaaS innovators including Notion, Intercom, and Deepop; and VC/PE firms like EQT, Point Nine, and Index.
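To illustrate pivot 1, here is a toy retrieval-augmented generation loop. The word-overlap scorer and canned documents are deliberate simplifications; a production system would use embeddings and a real model call.

    # Toy RAG loop: retrieve the most relevant internal snippet, then hand it
    # to a model as context. Documents and scoring are illustrative only.

    DOCS = {
        "pricing": "Per-seat pricing is replaced by outcome-based plans.",
        "security": "Agents authenticate via scoped, revocable API tokens.",
    }

    def retrieve(query: str, k: int = 1) -> list[str]:
        # Toy relevance score: word overlap (real systems use embeddings).
        def score(text: str) -> int:
            return len(set(query.lower().split()) & set(text.lower().split()))
        return sorted(DOCS.values(), key=score, reverse=True)[:k]

    def answer(query: str) -> str:
        context = " ".join(retrieve(query))
        return f"[model prompt] Context: {context} Question: {query}"

    print(answer("How does pricing change?"))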

Are you ready to embrace the shift?

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

Overlook an AI-Ready Data Strategy at Your Peril   https://www.europeanbusinessreview.com/overlook-an-ai-ready-data-strategy-at-your-peril/ https://www.europeanbusinessreview.com/overlook-an-ai-ready-data-strategy-at-your-peril/#respond Sun, 11 May 2025 04:18:55 +0000 https://www.europeanbusinessreview.com/?p=227689 By Alan Jacobson  According to research conducted by Alteryx, 82% of global business leaders say generative AI is significantly impacting their company goals and nearly half of board members are […]


By Alan Jacobson 

According to research conducted by Alteryx, 82% of global business leaders say generative AI is significantly impacting their company goals, and nearly half of board members are prioritising genAI over anything else. This is a massive tectonic shift in organisational strategy. And while many stories argue that none of these very attractive genAI initiatives will work unless they are built on a solid bedrock of AI-ready data, data alone isn’t the real impediment. What is the secret to making genAI drive value in an organisation? It turns out, it’s much the same as with all analytics: education for the organisation on how it all works – and finding use cases that are safe and easy to implement quickly to build muscle in this space.

So, how can organisations build their muscle and drive results? 

GenAI models can be highly risky and problematic, or incredibly risk-free  

Ask a genAI model to file your taxes for you, and you likely will end up in a dark locked room that you won’t be getting out of any time soon. This is not only due to data quality, but the very nature of what Large Language Models are good at and what they aren’t capable of today. 

IT teams are certainly focused on data quality, and the report reveals confidence in data maturity and trustworthiness: over half (54%) rate their data maturity as good or advanced, and 76% trust their data. On the surface, this sounds promising. But if you try executing the tax-filing use case above, it likely will not matter how good the data is.

Instead, if you pick a use case that uses genAI to highlight which new tax codes might impact the business and automatically e-mail these to the tax strategy team for review, you might immediately have a winning use case with very little risk. In this example, you are not dependent on your internal data quality; you are asking an LLM to summarise the mountains of news articles about new tax codes, which is something LLMs are quite good at; and you are not exposing any sensitive data to the LLM or requesting any from it.
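A sketch of that pattern might look like the following. The summarise_with_llm() stub, the addresses, and the SMTP relay are all assumptions standing in for your model provider and mail setup.

    import smtplib
    from email.message import EmailMessage

    # Hedged sketch of the low-risk use case above: summarise public tax-code
    # news with an LLM and e-mail the digest to the tax strategy team.

    def summarise_with_llm(articles: list[str]) -> str:
        # Placeholder: call your model provider's API here with a
        # summarisation prompt over the article texts.
        return "Digest of tax-code changes: " + " | ".join(a[:40] for a in articles)

    def send_digest(articles: list[str]) -> None:
        msg = EmailMessage()
        msg["Subject"] = "New tax codes that may impact the business"
        msg["From"] = "genai-digest@example.com"          # assumed address
        msg["To"] = "tax-strategy@example.com"            # assumed address
        msg.set_content(summarise_with_llm(articles))
        with smtplib.SMTP("smtp.example.com") as server:  # assumed relay
            server.send_message(msg)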

There are a myriad of these easier and safer use cases that quickly allow organisations to build the muscle needed to succeed in the genAI space. But how do you identify these easier use cases? It really comes down to education. Similar to the education needed to harness the power of automation and more traditional analytics in your business, teams need to focus on upskilling and ensuring teams have the tools necessary to go on the data journey. 

Unfortunately, the report shows that only 10% of businesses claim to have a ‘modern’ data stack, and nearly half (47%) are currently working on updating their infrastructure. And while new employees may be learning these skills at university, many arrive in the workplace with little beyond Excel to perform analysis, let alone to leverage genAI.

What key elements make up the modern data stack?  

Organisations need to have a set of technologies that allow the storage of data in a unified location (data lake), the ability for knowledge workers to manipulate data beyond Excel (data wrangling), to automate processes (automation) and perform analysis (analytics). These tools must be accessible and easy enough for the majority of knowledge workers to leverage so that data work is not the sole domain of the IT or Data Science teams. Unfortunately, where companies have historically invested has been in tools for their technology teams, with Python and other ‘code-first’ types of tools. While these help a small number of technical experts, these tools alone will not allow a business to go on the journey of harnessing the power of analytics and genAI.  And without bringing the business along, the use cases will likely continue to be suboptimal. 

As organisations build out their data stack, it is important to keep an eye on ROI. Building a data lake takes significant time and resources (e.g. data engineers) and will cost significant money. And unfortunately, the act of building a data lake by itself will not deliver significant ROI. ROI will come when applications, automation, and analytics are delivered. These other technologies take two forms: centralised teams building solutions, and democratised teams leveraging analytics and automation. The former means investing in people to centrally build solutions; this again takes significant investment and tends to focus on larger problems that have good ROI but take a while to deliver. You will put your top data scientists on big problems. Democratised use of these technologies takes much smaller investment, as you are not building a large team of dedicated resources to build solutions but upskilling the people you already have. The goal is to make these people more efficient; with time freed up, they can then move on to higher-value delivery.

Some companies become frustrated by over-investing early in data lakes, with costs and returns out of balance. Successful companies tend to drive fast returns with democratised analytics, and then re-invest a portion of these savings into their data lakes and centralised teams. They also benefit from democratising the analytics, as the business can then better articulate the priorities of what they need the data teams to deliver. In the end, the best data stacks are designed to deliver ROI every step of the way.

Aligning budgets with the genAI opportunity  

Another challenge facing organisations is budget management. IT teams, in general, are responsible for data technology budgets, but the reality of how those budgets are allocated and adjusted tells a story that may have made genAI adoption difficult. Over half (54%) of businesses admit that budgets are not reviewed or adjusted throughout the year, even if new needs arise. Added to that, 54% say that if other priorities, projects, or spending needs arise after budgets are allocated, they cannot be adjusted. This proved to be a huge challenge last year when the pressure to adopt genAI grew exponentially.  

Given how quickly genAI has moved over the last couple of years and how quickly it continues to change, encouraging cross-functional collaboration and communication and updating how IT budgets are allocated or reviewed is vital. The current rigidity among organisations will have a big impact on innovation to the data stack and creating the right foundations for successful genAI use cases.   

Clearer horizons for enterprise-wide rollout    

While there are many challenges to achieving a modern data stack, organisations must focus on upskilling their workforce while putting the appropriate infrastructure in place to deliver analytics across the organisation. The key is to democratise the effort and ensure the teams are engaged and able to participate in the journey, not focusing only on technology for technologists. By addressing these challenges, organisations will be able to harness the full potential of genAI, driving innovation and achieving organisational goals.   

While companies are still at the early stages of seeing the full impact of genAI adoption, there is no doubt that the fundamental elements of analytic teams in the enterprise will shift, from simply building solutions to teaching the organisation and helping deliver the change management to upskill the workforce. Organisations must be prepared to drive this data literacy while navigating the age of genAI.

About the Author 

Alan Jacobson is the Chief Data and Analytics Officer (CDAO) at Alteryx, where he leads the company’s data science initiatives and drives digital transformation for its global customer base. In this role, he oversees data management and governance, product and internal data, and the utilization of the Alteryx Platform to foster growth.

The AI Skills Paving the Way for Next-Gen Business Operations  https://www.europeanbusinessreview.com/the-ai-skills-paving-the-way-for-next-gen-business-operations/ https://www.europeanbusinessreview.com/the-ai-skills-paving-the-way-for-next-gen-business-operations/#respond Sat, 03 May 2025 07:00:12 +0000 https://www.europeanbusinessreview.com/?p=227245 By Greg Fuller  AI is reshaping industries, but a skills gap persists. Bridging this requires a blend of technical expertise, like data analysis and machine learning, and power skills such […]


By Greg Fuller 

AI is reshaping industries, but a skills gap persists. Bridging this requires a blend of technical expertise, like data analysis and machine learning, and power skills such as communication and ethical understanding. Focused training and development initiatives are essential for maximising AI’s benefits and sustaining business growth. 

Artificial Intelligence (AI) is redefining workplace dynamics, with a recent McKinsey survey revealing that 78% of respondents report their organisation uses AI in at least one business function, rising from 72% in early 2024, as adoption of AI technologies grows across businesses.  

However, organisations face a pressing challenge: bridging the gap between AI’s potential and workforce readiness. While 51% of IT professionals report that AI has streamlined their workflows, 65% of leaders acknowledge that their teams lack the expertise necessary to maximise its value. Bridging this gap demands a dual focus on developing both technical expertise and essential ‘power skills’ like critical thinking, collaboration and ethical understanding. These skills are crucial for enabling the workforce to work effectively alongside AI and make the most of its potential. So, how can we achieve this?  

The first step in effective training is to assess the current AI capabilities within the workforce to identify any skill gaps. By conducting baseline evaluations, organisations can compare existing skills against skill benchmarks, highlighting areas that need improvement. This targeted approach ensures that resources and time are used efficiently.  

Considering this, what are the AI skills and proficiencies that are shaping the future of work? 

Programming skills  

Programming languages are fundamental technical skills for employees involved in AI development. Python continues to be a leading choice due to its versatility, ease of use, and robust libraries like TensorFlow and PyTorch. These frameworks enable rapid prototyping of applications such as predictive analytics and natural language processing. Mastering programming languages empowers talent to effectively build, test, and deploy AI solutions that drive innovation and efficiency within their organisation.
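To give a flavour of that rapid prototyping, the snippet below fits a linear trend to synthetic data in PyTorch; the data and training settings are purely illustrative.

    import torch

    # Minimal PyTorch prototype: fit a linear trend to noisy synthetic data.

    x = torch.linspace(0, 1, 100).unsqueeze(1)
    y = 3 * x + 0.5 + 0.1 * torch.randn(100, 1)   # true slope 3, intercept 0.5

    model = torch.nn.Linear(1, 1)
    optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.MSELoss()

    for _ in range(500):
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimiser.step()

    print(model.weight.item(), model.bias.item())  # should be close to 3 and 0.5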

Machine learning skills  

Advancing in AI requires developing skills in machine learning methodologies. This requires an understanding of learning types including supervised learning, unsupervised learning and reinforcement learning. A strong understanding of algorithms like gradient-boosted trees and neural networks is also critical for developing intelligent systems that improve over time. 

Frameworks like Scikit-learn streamline the deployment of these algorithms, enabling applications such as customer segmentation and risk assessment. Additionally, reinforcement learning enhances possibilities by utilising reward-based systems for adaptive decision-making.  
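As a minimal example of the customer-segmentation use case, the sketch below clusters synthetic customers with scikit-learn; the two features (annual spend, visits per month) and all figures are made up.

    import numpy as np
    from sklearn.cluster import KMeans

    # Illustrative segmentation: two synthetic customer groups, two features.
    rng = np.random.default_rng(42)
    customers = np.vstack([
        rng.normal([200, 2], [50, 1], size=(50, 2)),   # occasional buyers
        rng.normal([900, 8], [100, 2], size=(50, 2)),  # frequent high spenders
    ])

    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
    print("segment sizes:", np.bincount(segments))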

Data analysis and visualisation skills  

Proficiency in organising, refining and presenting data is critical for preparing AI-ready datasets and effectively communicating model outcomes. To equip talent with these skills, comprehensive training programmes can be developed.  

These programmes might include online courses focusing on developing expertise in platforms like Tableau and Seaborn to translate complex patterns into intuitive formats such as correlation matrices or time-series trend animations. This training empowers teams to convey complex data insights more effectively, enhancing decision-making within the organisation.  
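For instance, a correlation matrix of the kind described takes only a few lines with Seaborn. The bundled “tips” dataset here (fetched by Seaborn on first use) is simply a stand-in for an organisation’s own data.

    import seaborn as sns
    import matplotlib.pyplot as plt

    # Correlation matrix of numeric columns, rendered as a heatmap.
    df = sns.load_dataset("tips")
    corr = df.select_dtypes("number").corr()

    sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
    plt.title("Correlation matrix of numeric features")
    plt.show()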

Problem-solving and critical thinking skills  

Although technical skills are vital, power skills like problem-solving and critical thinking are just as important to ensure AI aligns with human values and organisational goals.  

These skills enable talent to recognise organisational challenges, evaluate scenarios, and devise effective solutions to tackle them. In the realm of AI, tackling complex open-ended problems is a frequent task, requiring strong analytical abilities and creativity to develop algorithms that address these issues and improve over time.  

Ethics awareness and bias mitigation  

Understanding ethics and bias is another key skill that talent need to develop. AI systems can unintentionally perpetuate existing biases present in their training data, leading to unfair or discriminatory outcomes. An example of this is biased datasets which may cause hiring algorithms to favour certain demographics.  

To meet this challenge, balanced datasets or fairness-aware algorithms can be used to mitigate potential biases. By understanding these issues, teams can evaluate the social and ethical impact of AI technologies, ensuring their responsible and ethical use.
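One simple balancing technique is to oversample the under-represented group before training, as in the sketch below. The groups and counts are illustrative, and real fairness work goes well beyond this single step.

    import random
    from collections import Counter

    # Oversample the minority group so the model sees both groups equally often.
    samples = [("A", 1)] * 900 + [("B", 1)] * 100   # group A dominates 9:1
    by_group = {"A": [s for s in samples if s[0] == "A"],
                "B": [s for s in samples if s[0] == "B"]}

    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for group, rows in by_group.items():
        balanced += rows + random.choices(rows, k=target - len(rows))

    print(Counter(g for g, _ in balanced))   # both groups now appear 900 times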

Communication and collaboration skills  

Strong communication skills are also critical for AI professionals who often work alongside colleagues from various departments and must make complex ideas accessible to those without a technical background. For instance, when an AI specialist presents a predictive sales model to a marketing team, they need to translate intricate topics – like feature selection or algorithm mechanics – into clear, actionable business insights.  

By explaining how the model uncovers patterns and its real-world benefits in straightforward terms, they help bridge the divide between technical development and business objectives. This strategy makes sure that AI-driven solutions are not only understood but smoothly integrated into organisational workflows, maximising their impact and adoption.  

Equipping talent for the AI revolution  

With the rapid advancement of AI, employees must be ready to adapt to continual shifts in the workplace. Developing new skills is a continuous journey, prompting organisations to prioritise robust training programmes for their teams. Talent needs to be flexible and eager to adopt emerging tools and methodologies, ensuring organisations stay competitive in a constantly changing landscape. Regularly monitoring progress helps talent maintain their expertise as AI technologies advance, while also providing valuable direction for their learning paths. Customised development plans, paired with consistent feedback, further encourage professional growth and build confidence in AI capabilities.

Organisations that successfully embed AI into their businesses can gain a significant competitive edge, helping to fuel innovation and boosting efficiency and productivity. However, it is important to remember that leveraging AI goes beyond technical expertise; it also involves recognising its broader impact on the organisation. Organisations and talent who focus on both technical and power skills will be better prepared for the evolving world of work. This balanced approach drives innovation and ensures organisations fully capitalise on the long-term advantages of AI.

About the Author 

In Greg Fuller’s 24+ year career with Skillsoft, he has been involved in tens of thousands of hours of content development projects, all focused on tech skills. Along the way, Greg has acquired several technical certifications such as PMP, CISSP, Oracle OCP, Cisco CCNP, and many others. Greg has applied much of the knowledge that he’s acquired working closely with several Fortune 500 companies to help build their upskilling programs.

GenAI: Prompting A Better Marketing Strategy https://www.europeanbusinessreview.com/genai-prompting-a-better-marketing-strategy/ https://www.europeanbusinessreview.com/genai-prompting-a-better-marketing-strategy/#respond Thu, 24 Apr 2025 04:57:28 +0000 https://www.europeanbusinessreview.com/?p=226783 By Joerg Niessing and David Dubois  Amid the rise of GenAI, executives must master prompt crafting to optimize their marketing and sales strategies. In the ever-evolving world of digital marketing, the […]


By Joerg Niessing and David Dubois 

Amid the rise of GenAI, executives must master prompt crafting to optimize their marketing and sales strategies.

In the ever-evolving world of digital marketing, the ability to craft impactful, audience-centric campaigns has become both an art and a science. By the end of 2025, AI is expected to drive 95 percent of customer interactions in some capacity.

Central to this evolution is the growing importance of prompts – carefully crafted inputs that guide AI tools and systems to generate creative, targeted, and data-driven outputs. By mastering the “art of prompting,” brand and sales executives can augment and transform their strategies, foster deeper engagement, and drive tangible results. For chief marketing officers (CMOs) and marketing leaders, mastering prompts is no longer a mere technical skill but a leadership imperative.

Why prompts matter in marketing strategy

Prompts are unstructured, often textual inputs that direct GenAI-powered tools to produce specific responses or outputs. Acting as a bridge between human instructions and AI capabilities, they translate strategic goals into actionable content. These inputs range from straightforward instructions (e.g., “generate an Instagram caption for a new luxury watch”) to more specific requests (e.g., “create a five-part email sequence promoting a new sustainable fashion collection, emphasizing ethical sourcing and exclusivity”).

Prompts have a wide range of applications, including (but not limited to) competitive analysis, online presence and SEO (search engine optimization) audits, marketing and brand positioning, and channel strategy and execution. In short, prompts inform, augment, and/or transpose strategic marketing goals into actionable outputs by AI systems.

Here are four ways effective prompting can ensure big wins for brands.

1. Personalization at scale

One of the most significant challenges in digital marketing is balancing scale with personalization. With the right prompts, brands can create hyper-personalized campaigns tailored to different audience segments.


For instance, L’Oréal’s GenAI Beauty Content Lab, CREAITECH, has started training GenAI to recognize the unique visual codes of the brands in their portfolio by leveraging logos, imagery, styles, packaging, typography, and colors. This enables the company to launch innovative campaigns faster and generate customized product recommendations for millions of users based on their individual preferences and skin types.

Then there’s the hospitality industry, which has the most potential to create significant value from AI, according to a study by McKinsey. Many operators, including Airbnb, increasingly utilize prompt engineering to power tailored travel suggestions for users. A prompt such as “generate a personalized itinerary for a couple visiting Paris, emphasizing hidden gems and romantic spots” ensures that content resonates with travelers’ unique preferences. This approach has helped Airbnb achieve a 30 percent increase in click-through rates on email campaigns.

2. Speed and efficiency

Prompts significantly reduce the time and resources spent on content creation, enabling marketers to move from ideation to execution more quickly. For example, in March 2023, Coca-Cola used AI-driven tools during its “Real Magic” campaign to encourage the creation of hundreds of variations of advertising copy and visuals to suit different markets. Within minutes, more than 120,000 AI-powered pieces of content (prompted by real humans) were posted.

Similarly, Nike uses prompts to fuel dynamic campaigns tailored to local markets, particularly in out-of-home advertisements. During the Paris 2024 Olympics, Nike integrated AI into their marketing strategy through AI-led billboards in cities across the United States, Europe, South America, and Asia that updated promotional messages in real-time based on the latest results. Whenever an athlete featured in the campaign won an event, the billboards immediately displayed the winning athlete, mirroring the thrill and uncertainty of the Games. Thanks to this strategy, Nike saw a peak in website visits on 31 July 2024, reaching two million visitors. Of these visits, 86,900 resulted in a sale. This increase in site traffic contrasted sharply with declines observed by competitors such as Adidas, Hoka, and On during the same period.

3. Consistency across channels

In today’s omnichannel environment, maintaining brand voice and consistency is crucial. Prompts ensure that messaging aligns with brand values, regardless of the platform. A prompt such as “Write a LinkedIn post and a corresponding Instagram caption about our new electric vehicle” ensures greater consistency while optimizing content for platform-specific formats and audiences.

4. Enhanced creativity

AI tools powered by prompts can act as creative collaborators, pushing the boundaries of what marketers can imagine. Marketing and sales leaders can use prompts to explore bold, unexpected ideas that go beyond traditional campaigns. For instance, Sephora launched the Sephora Skin IQ tool, which provides personalized beauty recommendations and tutorials. Prompts such as “suggest a lipstick shade to match an olive-toned complexion” allow for expert advice to be disseminated through digital channels, delivering value instantly. The tool boosted user satisfaction, and Sephora’s skincare sales grew by 35 percent following its introduction.


How CMOs can craft (more) effective prompts

To unlock the full potential of AI, marketing and sales leaders must learn how to craft effective prompts. Here are four key best practices:

  1. Be specific: Clearly articulate the desired outcome, objective, or message. Instead of stating “write an email,” opt for “write an email to first-time customers offering a 10 percent discount on their next purchase, emphasizing fast delivery and eco-friendly packaging.”
  2. Incorporate context: Provide relevant details about the culture, market, audience, channel, or platform, as well as the objective. For example, “generate Instagram captions for a Gen Z audience in India, promoting a new line of sustainable and affordable sneakers.”
  3. Iterate and refine: Test different prompt variations to identify which deliver the best results (a template for generating variants follows this list). For instance, a fashion brand could test prompts like “write a product description for a high-end handbag” vs. “write a product description for a high-end handbag targeting millennials who value sustainability.”
  4. Align with data: Use prompts informed by customer insights and analytics. A luxury brand analyzing purchasing patterns might use a prompt like “suggest upselling strategies for customers who purchased premium leather jackets last winter.”
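These practices can be operationalised in code as well. The sketch below shows a prompt template that bakes in specificity, context, and data-driven fields, plus two variants for iteration; all field values are illustrative.

    # A simple prompt template plus variants for A/B iteration.
    TEMPLATE = (
        "Write a {channel} message for {audience} promoting {product}. "
        "Emphasise {angle}. Tone: {tone}. Goal: {objective}."
    )

    variants = [
        dict(channel="email", audience="first-time customers",
             product="our new sneaker line",
             angle="sustainability and a 10 percent discount",
             tone="warm", objective="drive a second purchase"),
        dict(channel="Instagram caption", audience="a Gen Z audience in India",
             product="our new sneaker line",
             angle="affordability and style",
             tone="playful", objective="maximise shares"),
    ]

    for i, fields in enumerate(variants, 1):
        print(f"--- variant {i} ---")
        print(TEMPLATE.format(**fields))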

The future of prompts

As AI continues to advance, the role of prompts in marketing and sales will only grow, with future opportunities including:

  • Real-time prompting: AI systems capable of generating and refining prompts based on live data, such as trending topics or breaking news.
  • Multimodal prompts: Expanding beyond text to include visual or auditory inputs. For instance, a prompt such as “Create a video ad for TikTok promoting a new sportswear line, using upbeat music and vibrant colors” could guide tools to produce complete video assets.
  • Automated campaign optimization: AI systems that autonomously test and tweak prompts to maximize campaign performance.

Beyond crafting individual prompts, organizations must start adopting comprehensive, automated tools that manage and execute these processes simultaneously. Platforms like VSTRAT.ai enable companies to orchestrate the entire workflow, addressing strategic challenges such as designing customer journeys tailored to specific objectives, markets, and target audiences. Similarly, the Anthropic Console allows users to generate, test, and evaluate the effect of prompts through a built-in prompt generator.

These tools go beyond manual interventions by integrating advanced analytics, prompt generation, and content deployment into a seamless, easy-to-use system. By ensuring consistent and optimized messaging at scale, they empower marketing and sales teams to shift their focus from operational execution to strategic planning and creative innovation, while the platform manages the execution and continuous refinement autonomously.

In a digital era defined by rapid change and fierce competition, prompts have emerged as a critical tool for marketing and sales leaders to unlock creativity, enhance personalization, drive efficiency, and optimize their messaging. Brands that master the art of crafting and deploying prompts will stay ahead of the curve, empowering them to achieve measurable success and build long-term, personalized relationships with their audiences at scale.

About the Authors

Joerg Niessing is a Senior Affiliate Professor of Marketing at INSEAD and is passionate about bridging the academic and the business worlds on topics related to digital transformation, customer centricity, and data analytics. At INSEAD, Joerg teaches executives and MBA students, and he is the co-director of INSEAD’s programmes Leading Digital Marketing Strategy, B2B Marketing Strategies, and Driving Digital Marketing Strategy (OOPS).

David Dubois is a tenured Associate Professor at INSEAD, specializing in data-driven innovation and customer-centric transformation. His research and teaching help professionals leverage digital insights for competitive advantage, particularly in areas like GenAI and social media. An expert in luxury brand management, his work has been featured in publications like The Financial Times and The Economist. He directs INSEAD’s digital marketing strategy programs and develops award-winning case studies.

Can AI Help Banks See High-Risk Industries Differently? https://www.europeanbusinessreview.com/can-ai-help-banks-see-high-risk-industries-differently/ https://www.europeanbusinessreview.com/can-ai-help-banks-see-high-risk-industries-differently/#respond Sat, 19 Apr 2025 13:11:46 +0000 https://www.europeanbusinessreview.com/?p=226467 By Lissele Pratt High-risk industries face constant scrutiny. Banks hesitate. Regulators crack down. Even compliant businesses struggle to access financial services. Could AI help banks assess these industries more fairly? […]


By Lissele Pratt

High-risk industries face constant scrutiny. Banks hesitate. Regulators crack down. Even compliant businesses struggle to access financial services. Could AI help banks assess these industries more fairly? Lissele Pratt, co-founder of Capitalixe, explores the possibilities.

The current landscape

High-risk industries like crypto, gambling, CBD, and trading are always under the microscope. Banks hesitate to work with them. Regulators crack down hard. Fraud risks and compliance challenges make everything even tougher. Last year, 75% of crypto hedge fund firms reported issues accessing or growing banking services for their funds, compared to none of the traditional alternative investment managers surveyed.

Yet, not all high-risk businesses are the same. Many have strong compliance frameworks and responsible financial practices. Still, they face the same restrictions as riskier players. This broad-strokes approach makes it harder for legitimate businesses to operate.

Now, AI is already transforming fraud detection, compliance, and risk management. It spots threats faster, reduces false positives, and gives a clearer picture of financial risk. Could this be the solution to a fairer, more accurate way of assessing high-risk businesses?

How can AI help banks see high-risk industries differently?

AI has the power to move banks beyond outdated, one-size-fits-all risk assessments. But how exactly does it make this possible?

Smarter fraud detection

Fraud is everywhere in high-risk industries. High-risk spaces attract bad actors: scammers move money through fake accounts, and fraud rings exploit loopholes. The risks are real, and banks know it. That’s why they play it safe. If an industry has a high fraud rate, they’d rather shut the door entirely than take the chance.

The problem is that this approach punishes everyone. Legitimate businesses – ones with strong compliance and responsible financial practices – get caught in the same net as criminals.

Instead of following static rules, AI learns. It analyzes thousands of data points in real time, spotting patterns humans would never catch. Anomalies. Unusual behaviors. Subtle shifts in transaction habits. AI picks up on them instantly. That’s why banks using AI for fraud detection are seeing results. Some report detection rates of over 90%, with false positives dropping by nearly half.
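To give a flavour of how such systems work, the sketch below flags outlying transactions with scikit-learn’s IsolationForest. The synthetic data and the 2% contamination setting are assumptions, not a production fraud model.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Flag transactions whose (amount, hour) pattern deviates from the bulk.
    rng = np.random.default_rng(7)
    normal_tx = rng.normal([50, 12], [20, 3], size=(980, 2))   # everyday amounts
    odd_tx = rng.normal([900, 3], [150, 1], size=(20, 2))      # large, late-night
    transactions = np.vstack([normal_tx, odd_tx])

    model = IsolationForest(contamination=0.02, random_state=0).fit(transactions)
    flags = model.predict(transactions)        # -1 = anomalous, 1 = normal
    print("flagged:", int((flags == -1).sum()), "of", len(transactions))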

Compliance and regulatory monitoring

Right now, banks spend over $60 billion a year on compliance operations. It’s slow, expensive, and filled with inefficiencies. Manual reviews, endless documentation, and outdated systems clog the process. What’s worse is that traditional compliance systems flood teams with false positives, flagging transactions that aren’t actually suspicious.

AI-powered compliance tools cut false positives by nearly half. They scan massive amounts of data in real time, filtering out the noise so compliance teams can focus on genuine risks. Instead of reacting after a problem happens, AI helps banks catch red flags early.

Instead of scrambling to keep up with new regulations, AI updates compliance systems automatically. AI even makes identity checks more reliable, speeding up Know Your Customer and Anti-Money Laundering processes.

Balancing AI’s potential with operational and ethical concerns

AI could help banks see high-risk industries differently – but only if it is used wisely. Like any new innovation, it brings challenges that must be considered.

  1. Trust and transparency: AI can analyze thousands of data points in seconds, but it can’t always explain its reasoning. If a crypto or CBD business is flagged as high risk, and the bank can’t say why, that’s a problem. Regulators demand accountability. Businesses need fairness. If AI is learning from historical biases (like years of overly cautious banking policies, for instance), it could end up reinforcing the very barriers it’s supposed to break down.
  2. Over-reliance: AI is great at spotting fraud, but criminals are evolving too. Deepfake identities, synthetic transactions – those who participate in fraud are getting smarter. If banks put too much trust in AI without human oversight, they risk relying on systems that bad actors have already learned to manipulate.
  3. Regulatory barriers: Even if AI proves a business is financially stable and compliant, the law might still say no. High-risk businesses don’t just face financial risk; they deal with shifting regulations, political scrutiny, and reputational risk. AI can help banks assess businesses more fairly, but it can’t rewrite the rules they have to follow.
  4. Implementation: AI needs high-quality data, constant updates, and seamless integration with legacy banking systems, many of which were built decades ago. Large banks have the resources to make it work; smaller institutions may not. And if banks roll out AI without proper oversight, they risk compliance failures, bad decisions, and backlash from regulators.

Final thoughts

AI won’t fix everything. It won’t change regulations overnight. It won’t erase risk. But it can change how banks see it.

For too long, high-risk industries have been judged the same way—broad rules, blanket bans, no room for nuance. AI gives us a chance to do better. To separate bad actors from responsible businesses. To make decisions based on facts, not fear.

But AI is just a tool. It’s up to banks to use it wisely. To question their own biases. To demand transparency. To stop taking the easy way out.

The technology is ready. The excuses are running out. Now, it’s time for banks to decide: Will they keep looking at high-risk industries the same way? Or will they finally start seeing the difference?

About the Author

Lissele Pratt is a fintech entrepreneur, investor, and speaker with over ten years of experience in financial services. She is the co-founder of Capitalixe, a multi-million-pound fintech advisory firm that helps high-risk industries access global banking and payment solutions. She’s been named to Forbes 30 Under 30 Europe and TechRound’s 29 Under 29, and has won titles like Fintech Businesswoman of the Year (2024), UK Fintech Rising Star, and a spot on the Women in Fintech Powerlist.

Productivity Versus Ping Fatigue – is AI the Answer to Simplifying our Workflows? https://www.europeanbusinessreview.com/productivity-versus-ping-fatigue-is-ai-the-answer-to-simplifying-our-workflows/ https://www.europeanbusinessreview.com/productivity-versus-ping-fatigue-is-ai-the-answer-to-simplifying-our-workflows/#respond Thu, 17 Apr 2025 07:48:13 +0000 https://www.europeanbusinessreview.com/?p=226351 By Caz Brett The rapid integration of AI into workplaces has transformed how employees engage with their tasks. By automating repetitive work, AI offers a promising solution to boost efficiency […]

By Caz Brett

The rapid integration of AI into workplaces has transformed how employees engage with their tasks. By automating repetitive work, AI offers a promising solution to boost efficiency and free up time for strategic and creative thinking. Yet, despite AI’s ability to simplify workflows, many businesses are still struggling with an overload of disconnected productivity tools that create constant distractions. While AI can eliminate tedious tasks and create space for meaningful work, the proliferation of such non-integrated platforms risks creating a “ping-fatigue” crisis that leaves teams overwhelmed and exhausted.

I’ve worked extensively with organisations navigating the AI landscape, and I understand the promise of AI to unify workflows and create a more efficient digital experience. The challenge isn’t AI itself, but how businesses integrate it with their wider tools and platforms. The key lies in striking the right balance, leveraging AI as an opportunity to consolidate and simplify workflows, rather than adding to the noise and complexity. When implemented strategically, AI helps create a seamless and stress-free digital experience, while keeping employee well-being front and centre.

The rising tide of digital overload


While productivity tools are designed to enable efficiency, their sheer volume has led to an unintended consequence: overwhelmed employees grappling with an influx of digital distractions. From real-time project updates to instant chat messages, employees often find themselves constantly responding to a tidal wave of notifications rather than engaging in meaningful work. Research by Gloria Mark, a professor of informatics at the University of California, Irvine, found that people typically take about 23 minutes and 15 seconds to fully regain their concentration after an interruption. With constant notifications disrupting their workflow, it’s understandable why many employees feel they can never get through tasks effectively when every notification can impair their cognitive focus and productivity.

One of the biggest obstacles is the fragmentation of collaboration tools across different departments. With various platforms managing different tasks – collaboration software, communication apps, automated workflows – employees must constantly switch between interfaces. This “app overload” not only disrupts focus but also creates inefficiencies that AI can help us eliminate. Add to that the stress of the always ‘on’ culture with employees feeling the need to deliver on managerial demands regardless of what they’re doing or the time of day (especially when it comes to global teams). It’s no surprise that teams report feeling more stressed and less productive despite having more “productivity tools” at their disposal than ever before.

Leveraging AI to cut through the noise

To combat ping fatigue, business leaders must take a strategic approach to AI adoption by first uncovering what their organisation truly needs and then incorporating the right AI tools to support those goals. Rather than deploying multiple standalone tools, organisations should focus on integrating AI within a single, intuitive system that consolidates tasks and minimises unnecessary distractions. AI should be seen as the unifying force that streamlines workflows and eliminates redundant processes, rather than an additional layer of complexity. By carefully selecting and integrating the AI tools that align with their specific business objectives, companies can create a cohesive stack that directly addresses operational challenges and enhances workflow efficiency. The right AI solutions can help reduce unnecessary notifications, surface only the most relevant information, and improve overall productivity – all without overwhelming employees.

Companies can also consider adopting AI solutions that incorporate smart prioritisation, where notifications and tasks are surfaced based on urgency and relevance. For example, with Smartsheet, users can configure their notifications to ensure they only receive a specific type of notification on their desired device and can also opt out of notifications like ‘changes to a document’ to reduce the notification noise.
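Conceptually, smart prioritisation reduces to scoring and filtering. The sketch below is a generic illustration; the weights, threshold, and example notifications are assumptions, not any vendor’s actual logic.

    from dataclasses import dataclass

    # Score each notification by urgency and relevance; surface only the top ones.
    @dataclass
    class Notification:
        source: str
        urgency: float    # 0-1, e.g. derived from deadline proximity
        relevance: float  # 0-1, e.g. derived from the user's current project

    def surface(items: list[Notification], threshold: float = 0.6) -> list[Notification]:
        score = lambda n: 0.6 * n.urgency + 0.4 * n.relevance   # assumed weights
        return sorted((n for n in items if score(n) >= threshold),
                      key=score, reverse=True)

    inbox = [Notification("chat ping", 0.2, 0.3),
             Notification("deploy alert", 0.9, 0.9),
             Notification("doc edited", 0.1, 0.8)]

    for n in surface(inbox):
        print(n.source)   # only the deploy alert clears the bar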


Beyond technological solutions, implementing a culture that prioritises digital well-being is crucial. Encouraging teams to set clear boundaries, limit non-essential notifications, and embrace productivity techniques – such as the famous Pomodoro technique (25 minutes of focused work followed by a 5-minute break) – can help mitigate mental exhaustion. It’s also helpful to encourage employees to have their ‘focused ping-free’ hour where they can concentrate on tasks exclusively.

Another key element of reducing digital overload is recognising that communication preferences vary among employees. While some may prefer instant messaging on platforms like Teams and Slack, others might find email more manageable, and some may opt for more direct phone or video calls for clarity. Leaders should engage in open discussions with their teams to understand these preferences and, where possible, establish guidelines that respect individual working styles. By aligning communication methods with employee needs, organisations foster a more effective and less overwhelming digital work environment, ensuring that technology enhances collaboration rather than becoming a source of frustration.

Prioritising employee well-being in the AI era

The AI revolution in the workplace is still unfolding, and businesses must remain agile in addressing its challenges. Leaders have an opportunity to reshape how AI is implemented, not as just another tool, but as a way to simplify and streamline work processes and move towards a cohesive, employee-first approach. By thoughtfully consolidating AI tools, respecting individual work styles, and prioritising mental well-being, businesses can create digital environments that truly serve their teams.

As organisations seek to optimise their digital strategies, discussions around AI adoption and workplace well-being will take centre stage. I’ll be speaking at Smartsheet’s upcoming London Summit on June 3, where we will explore these themes in depth – offering insights into how businesses can harness AI effectively while keeping employee experience at the forefront. With registration now open, it’s an opportunity for leaders to engage in meaningful conversations about the future of work in an AI-powered world.

The goal isn’t to implement technology for its own sake, but to build systems that empower employees to do their best work without unnecessary friction. This human-centred approach to AI adoption will distinguish tomorrow’s workplace leaders from those who simply chase the latest innovations without consideration for their impact. As we continue to refine our relationship with AI, let’s remember that the most valuable workplace asset remains human creativity and collaboration – qualities that technology should enhance, not hinder.

About the Author

Caz Brett is a Sr. Director of Product Management responsible for Smartsheet’s Enterprise Administration teams. Caz joined Smartsheet in 2022, prior to which she led product, engineering, and design teams at the BBC and a global software development agency.

Managing the “White Space” of AI to Achieve Operational Excellence https://www.europeanbusinessreview.com/managing-the-white-space-of-ai-to-achieve-operational-excellence/ https://www.europeanbusinessreview.com/managing-the-white-space-of-ai-to-achieve-operational-excellence/#respond Thu, 17 Apr 2025 03:18:05 +0000 https://www.europeanbusinessreview.com/?p=226236 By Dr. Annika Steiber and Dr. J. Mark Munoz The rapid integration of AI into business processes across industries may lead to the false assumption that companies are adapting artificial […]


By Dr. Annika Steiber and Dr. J. Mark Munoz

The rapid integration of AI into business processes across industries may lead to the false assumption that companies are adopting artificial intelligence across all aspects of organizational design. This assumption is not true. The Intelligent and Interactive Ecosystem is a new concept that seeks to help organizations maximize the power of AI in as-yet uncharted areas.

Introduction

In the contemporary digital landscape, the integration of artificial intelligence (AI) challenges traditional organizational frameworks. While technological innovation is ubiquitous, the true transformation lies in rethinking organizational design to leverage AI effectively.

It is therefore necessary to employ a new concept, the Intelligent and Interactive Ecosystem (IIE) – an emerging paradigm that redefines how organizations create, share, and sustain value in the AI era. This concept was put forward by Zhang Ruimin, the Founder of Haier and the Emeritus Chairman of the Board of Directors of Haier Group, and put into practice within the Haier Group. On September 20, 2024, building on the IIE concept, Zhou Yunjie, Chairman of the Board of Directors and CEO of Haier Group, elaborated further on the concept and launched the theoretical and practical exploration of the management model “RenDanHeYi 2.0” (RDHY 2.0).

This article delves into the uncharted “white space” of AI by examining the role of this new organizational structure, as illustrated by the groundbreaking RDHY 2.0 model.

The “white space of AI” refers to the untapped potential or uncharted areas where AI has yet to fully integrate into organizational design and strategy. In essence, it refers to the transformative opportunities where AI can redefine how organizations operate, collaborate, and create value—spaces not yet fully realized by existing business practices or structures.

The RDHY 2.0 model, for example, is a user-centric, ecosystem-driven organizational structure. It transforms traditional hierarchies into self-organizing micro-enterprises (MEs) and ecosystem micro-communities (EMCs) that act as IIEs, empowering employees to create value directly for users through autonomy, innovation, and dynamic collaboration.

Emergence of intelligent and interactive ecosystems

The traditional organizational model, built for efficiency and hierarchical control, struggles to meet the demands of today’s dynamic environment. In contrast, IIEs are fluid, collaborative networks of partners, technologies, and users. Haier’s evolution from traditional manufacturing to a boundaryless ecosystem exemplifies this shift, with its RDHY 2.0 model acting as the blueprint for the new AI era. RDHY 2.0 offers a lens through which to understand the design of IIEs, and an improvement on its predecessor, RDHY 1.0.

Managers need to anticipate emerging changes and prioritize implementation to attain the greatest impact on operational performance.

RDHY 1.0 focused on zero distance between the enterprise and its customers, and between employees and customers. It gave Haier a “self-organization” capability, meaning the company could sense, optimize, and iterate autonomously on customers’ needs. Zero distance between an enterprise and its customers creates this self-organization capability through a shift from a “product-centered transactional relationship” to “an interactive relationship centered on upgrading the user experience.” Zero distance between employees and customers, in turn, creates self-motivated employees. To make this empowerment possible, Haier delegated three powers of the CEO to each micro-enterprise (ME): the right to set strategy, the right to hire the right people, and the right to decide how generated value is distributed. The overall goal is to allow everyone to become his or her own “CEO” and unleash human value.

RDHY 2.0 builds on the 1.0 model, shifting the focus from “zero distance” to “zero boundaries” in the user experience, and thereby to zero boundaries for the IIE itself. On the demand side, this means infinite interaction in user-experience iteration through omni-domain intelligent and interactive touchpoints. On the supply side, it means a boundaryless ecosystem that produces a positive feedback loop for value co-creation. Haier views the IIE as a new business model and as the economic engine of the AI era.

The “engine” of this new business model, which sustains the IIE’s positive evolution, is the management principle that all ecosystem partners should benefit from being part of the model. The company validates the IIE’s evolution through the butterfly effect: the concept that small actions or changes in a complex system can lead to significant, and sometimes unpredictable, consequences over time.

This approach impacts operational performance in significant ways:

  • Empower Decentralized Leadership: By delegating decision-making, hiring, and value-distribution rights to micro-enterprises (MEs), managers transition from top-down control to enabling employees as “CEOs” of their domains. This requires cultivating trust, entrepreneurial mindsets, and accountability.
  • User-Centric Value Creation: Managers must prioritize continuous user engagement over static product offerings, emphasizing user-experience upgrades through iterative, interactive relationships. This requires new metrics to track experience-driven outcomes.
  • Boundaryless Collaboration: Managers need to establish frameworks for integrating diverse partners within an ecosystem. This involves creating shared governance mechanisms to ensure fair value distribution and sustained collaboration.
  • Adaptive Leadership: Managers must embrace agility and develop mechanisms to respond dynamically to user feedback and ecosystem evolution. The focus shifts from managing fixed hierarchies to orchestrating ecosystem dynamics.

Consequently, managers need to anticipate emerging changes and prioritize implementation to attain the greatest impact on operational performance. While IIEs promise transformative potential, they face several challenges that could become key opportunities. The first is interoperability: diverse technologies and stakeholders require common standards, which Haier currently addresses through open platforms like COSMOPlat that facilitate seamless collaboration. The second is cultural shift: transitioning to an ecosystem mindset demands a departure from hierarchical thinking, and Haier’s elimination of middle management exemplifies the bold organizational changes required. The third is scaling across contexts: adapting ecosystem models to diverse industries and regions is complex, and Haier’s success across global markets underscores the need for localized yet scalable frameworks.

Managing the “white space” of AI


AI’s potential to revolutionize industries remains constrained by outdated organizational structures. The “white space” lies not in the technology itself but in its integration into value chains, decision-making processes, and stakeholder relationships. Moreover, traditional governance models fall short in managing the complexity of ecosystems; IIEs demand governance that is:

(1) Adaptive – rules and structures must evolve in response to dynamic environments; (2) Transparent – trust is vital in ecosystems, so governance frameworks should ensure clarity around data usage, intellectual property, and profit distribution; and (3) Inclusive – ecosystems thrive when all participants benefit equitably.

The concept of the IIE fills this gap by fostering collaboration across boundaries, implementing dynamic governance models, and pursuing user-centric innovation. Furthermore, AI underpins the functionality of the IIE by harmonizing diverse data streams (seamless data integration), enabling proactive decisions (enhanced decision-making), and customizing interactions to engage participants (personalized user experiences).

Intelligent and interactive ecosystems in practice

Haier’s transformation into an IIE exemplifies how organizations can leverage the dynamics of infinite user interaction and boundaryless ecosystems to drive value creation. Here are three cases demonstrating this approach:

1. Industrial Digitization – COSMOPlat

Take the example of Chery Automobile, an Intelligent and Interactive Ecosystem empowered by COSMOPlat, an industrial internet platform developed by Haier:

On the user side, Chery Automobile applies COSMOPlat’s large interaction model to engage users in depth. Through this interaction it identified 50,000 demands, created 35 types of customization scenarios, and ultimately produced the first customized car built on users’ massive demand for off-road vehicles.

On the supply side, through the flexible manufacturing capability and supply-chain synergy enabled by COSMOPlat, and relying on mixed-line production, Chery has improved coordination and delivery efficiency, reaching a JPH (number of cars rolled off the production line per hour) of 60 – in other words, one car per minute.

Based on the Intelligent and Interactive Ecosystem, users are involved in the whole process of automobile design and manufacturing, moving from “consumers” to “designers.” This realizes “customization for users” and changes the industry from “one car for thousands of people” to “one car for one person.”

By building an Intelligent and Interactive Ecosystem, sales of this Chery model increased tenfold.

2. Smart Home Solutions – San Yi Niao


On the demand side, Haier’s AI-powered appliances, such as refrigerators with intelligent preservation technology, engage users through omni-domain touchpoints that continuously adapt to individual needs. These touchpoints iterate on user experiences by learning from usage patterns and optimizing performance dynamically. On the supply side, Haier integrates hardware, software, and data within a boundaryless ecosystem, enabling seamless collaboration between technology providers, designers, and service partners. This ecosystem fosters co-creation, ensuring that smart home solutions evolve in tandem with user demands.

Take the Smart Kitchen scenario as an example: San Yi Niao and the home-furnishing company Boloni established a deep co-creation partnership to integrate planning, design, and fulfillment activities with smart modules. In 2024, customized products achieved growth five times (5x) the industry average.

3. Urban Management – Hai Na Yun

On the demand side, Haier’s Hai Na Yun platform facilitates infinite interaction with city administrators and residents by leveraging AI and IoT to address challenges such as transportation, public safety, and infrastructure management. These intelligent scenarios provide real-time insights and decision-making capabilities for urban environments.

On the supply side, Hai Na Yun establishes a boundaryless ecosystem by connecting various urban systems—transportation, utilities, and public services—into a unified framework. This ecosystem generates a positive feedback loop that enhances urban efficiency, improves quality of life, and drives innovation in smart city solutions.

Based on these three cases, five strategies emerge that align with the principles of Intelligent and Interactive Ecosystems, ensuring optimized performance across operations while fostering innovation and collaboration. Table 1 highlights these Five Strategies for Intelligent and Interactive Ecosystem Management.


Table 1: Five Strategies for Intelligent and Interactive Ecosystem Management

These strategies reconfigure the organizational framework and the way it operates. Implemented correctly, they set the foundation for operational excellence.

Framework for operational excellence

The white space of AI is not just a technological frontier but an organizational one. Intelligent and Interactive Ecosystems redefine how businesses create and sustain value in the AI era, bridging gaps that traditional models cannot address. The RenDanHeYi 2.0 model illustrates the power of this approach, proving that ecosystem thinking is not just the future of business operations – it is imperative. As organizations embrace this paradigm, they unlock the potential of AI to drive innovation, inclusivity, and sustainable growth. It is time to design beyond the boundaries, embrace the white space, and build intelligent and interactive ecosystems that thrive in an interconnected world.

As AI matures, Intelligent and Interactive Ecosystems (IIEs) will become the dominant organizational paradigm. Their ability to integrate diverse stakeholders, technologies, and markets positions them as the white space where AI’s transformative potential is fully realized. Haier’s own journey, guided by RenDanHeYi, offers a roadmap for organizations seeking to navigate this uncharted territory.

Based on the authors’ research and case studies on Intelligent and Interactive Ecosystems, Table 2 showcases the Three Frameworks for Operational Excellence.

Table 2: Three Frameworks for Operational Excellence

These strategies focus on the synergistic integration of technology, collaboration, and empowerment to redefine operational excellence in the AI era. By embracing these approaches, organizations can bridge the “white space” of AI and build resilient, adaptive ecosystems.

About the Authors

Dr. Annika Steiber, Ph.D. in Management of Technology, is a senior executive, author, and researcher specializing in innovation management. She has authored 17 books and held positions in academia and business, including Professor and Director at Menlo College. She currently leads Management Insights and the RenDanHeYi Silicon Valley Research Center, a member of the Global Research Center Network of the Haier Model Institute.

Dr. J. Mark Munoz is a tenured Full Professor of Management at Millikin University and a former Visiting Fellow at the Kennedy School of Government at Harvard University. Aside from top-tier journal publications, he has authored, edited, or co-edited more than 20 books, including Global Business Intelligence and The AI Leader.

References
  • Arthur, W. B. (2009). The Nature of Technology: What It Is and How It Evolves. Free Press.
  • Drucker, P. F. (1999). Management Challenges for the 21st Century. HarperBusiness.
  • Gawer, A., & Cusumano, M. A. (2014). Industry Platforms and Ecosystem Innovation: Comparative Case Studies in Manufacturing and Technology. Journal of Business Strategy, 35(2), 39–50. https://doi.org/10.1108/JBS-01-2014-0001
  • Hamel, G., & Zanini, M. (2020). Humanocracy: Creating Organizations as Amazing as the People Inside Them. Harvard Business Review Press.
  • Iansiti, M., & Levien, R. (2004). The Keystone Advantage: What the New Dynamics of Business Ecosystems Mean for Strategy, Innovation, and Sustainability. Harvard Business Review Press.
  • International Organization for Standardization. (2024). ISO 56001 – Innovation Management – Innovation Ecosystem – Guidance. ISO. Retrieved from https://www.iso.org/standard/68221.html
  • Lorenz, E. N. (1993). The Essence of Chaos. University of Washington Press.
  • Schwab, K. (2017). The Fourth Industrial Revolution. Crown Business.
  • Steiber, A. (2022). Leadership for a Digital World. Management for Professionals. Springer.
  • Steiber, A., & Alvarez, D. (2024). AI-driven Digital Business Ecosystems: A Study of Haier’s EMCs. European Journal of Innovation Management, ahead-of-print. https://doi.org/10.1108/EJIM-01-2024-0076
  • Wu, X., & Zhang, R. (2020). The RenDanHeYi Model: A Case Study of Haier’s Innovative Management Practices. Journal of Organizational Innovation, 7(3), 15–26. https://doi.org/10.1016/j.jorginn.2020.06.015
  • Zhang, Y., & Zhou, Y. (2020). AI and Urban Ecosystems: Insights from Smart City Frameworks. Sustainability, 12(8), 3322. https://doi.org/10.3390/su12083322
  • Zhang, R., & Zhu, H. (2022). COSMOPlat and Hai Na Yun: Platforms for Industrial and Urban Ecosystem Transformation. International Journal of Industrial Management, 14(1), 45–60. https://doi.org/10.1016/j.indm.2022.03.009
  • Zhu, H., Zhang, J., & Li, X. (2021). COSMOPlat: An Industrial Internet Platform for Digital Transformation. Journal of Manufacturing Systems, 58, 347–355. https://doi.org/10.1016/j.jmsy.2020.06.012

The post Managing the “White Space” of AI to Achieve Operational Excellence appeared first on The European Business Review.
