Future of Business: Jacques Bughin

The Next Wave of AI: from Generation to Agency


By Jacques Bughin

If you’re quietly patting yourself on the back for finally getting a grip on AI and its impact on your organization, here’s a reality check. It turns out there is more to AI than GenAI. Agentic AI is coming our way – and this time, it’s REALLY big!

The last two years have been dominated by generative AI, LLMs that produce text, code, and images at unprecedented speed. But the frontier has shifted. A new wave is forming, one that is less about generating content and far more about taking action. This wave is agentic AI: systems that plan, decide, execute, coordinate with other agents, and interface with real-world tools, software, or machines. And, unlike the previous transitions, this one fundamentally reshapes entire industries, labor markets, and the competitive landscape of AI firms.


To understand why, one must look beyond the hype and examine what the emerging players are actually doing. Across the world, we see the foundational pieces of an agentic economy being assembled. Some companies—Moveworks, ServiceNow, OpenAI, Anthropic, CrewAI, LangGraph—are building the orchestration and multi-agent fabric. Others—Alibaba DingTalk, Tencent, ByteDance, Baidu—are deploying agents at societal scale. In Europe, Siemens and ABB are embedding agents inside factories, robots, and supply chains. Yet, while the surface impression is progress, the deeper truth is that the global market is still mono-agent, doing tool calls rather than cooperation. True multi-agent systems, agents coordinating as teams, are only in their infancy.

However, the direction of travel is now known: we are entering a world where most workflows, coordination, planning, and even knowledge work will be executed by agentic systems, not by generative models. And this shift will be bigger and more transformative than the GenAI wave for three reasons:

  1. agency automates tasks, not content;
  2. agency scales labor;
  3. agency restructures firms, workflows, and entire industries.

GenAI revolutionized output creation. Agentic AI is the wave that will revolutionize the entire concept of work.


1. The emerging agentic market: still mono-agent, but crystallizing fast

Although the industry uses the language of “multi-agent AI,” today’s systems are, bluntly, one-agent wrappers around LLMs. Moveworks, the leader in enterprise AI service management, provides a single enterprise assistant that resolves tickets, completes HR workflows, and updates internal systems. It behaves like a highly competent internal employee: it resets passwords, rewrites policies, updates CRM fields, books travel, and links to Jira or Workday. But all this flows from a single agent orchestrating many tool calls. It is not yet coordinating with other autonomous agents; rather, it is acting as a “meta-employee” for the enterprise.

The same is true for Microsoft’s Autogen-based internal systems, and for Google’s Gemini Code Assist. They are not yet multi-agent societies; they are intelligent single execution loops with planning sequences.

Only a few players push into real multi-agent autonomy. CrewAI is an open-source Python library that allows the orchestration of multiple AI agents as a real project team. Instead of settling for a single generalist assistant, one can create a squad of specialized AIs, each with a role, a mission, and the ability to communicate with its colleagues. The agents are powered by LLMs such as GPT-4o or Claude. Each agent acts within its field, collaborates with others, and contributes to advancing the mission. Everything is coordinated by a manager agent who orchestrates the team. What makes CrewAI so powerful is its role-based agents, its shared contextual memory system, and its ability to handle complex inter-agent conversations.

LangGraph is another case in point: an orchestration framework designed for building multi-agent AI systems with large language models. It allows developers to create complex, dynamic workflows as graph structures where multiple AI agents interact, collaborate, and maintain context and state over long-running tasks. LangGraph excels at managing multi-agent communication with fine-grained control over application flow, enabling reliable, customizable, and scalable agentic AI applications, including conversational agents, task automation, and decision-making systems.
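To make the role-based pattern concrete, here is a minimal sketch of a two-agent crew using CrewAI's documented Python API (Agent, Task, Crew). It assumes the crewai package is installed and an LLM API key is configured; parameter names may shift slightly between library versions.

```python
# A minimal sketch of a role-based crew, assuming the crewai package is
# installed and an LLM API key (e.g., OPENAI_API_KEY) is configured.
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Market researcher",
    goal="Collect evidence on enterprise adoption of agentic AI",
    backstory="A methodical analyst who always cites sources.",
)
writer = Agent(
    role="Report writer",
    goal="Turn research notes into a two-page executive brief",
    backstory="A concise business writer.",
)

research_task = Task(
    description="Summarize the main drivers of agentic AI adoption.",
    expected_output="Bullet-point research notes.",
    agent=researcher,
)
writing_task = Task(
    description="Draft an executive brief from the research notes.",
    expected_output="A short brief in plain English.",
    agent=writer,
)

# A sequential process passes shared context from one agent to the next;
# a manager agent can instead coordinate the team via Process.hierarchical.
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
)
print(crew.kickoff())
```

The same division of labor could be expressed in LangGraph as nodes and edges of a state graph; the choice between the two frameworks is largely about how much explicit control over the flow a developer wants.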

But even here, most use is experimental. Agents cooperate to write reports, run simulations, or perform market analysis, not to autonomously run supply chains or operate financial systems. China is the exception. Its industrial platforms—DingTalk, Tencent’s scenario platforms, Baidu’s Apollo, ByteDance’s commerce systems—use multi-agent structures, because the underlying digital ecosystems are unified. DingTalk agents negotiate task assignments and approval flows. ByteDance’s HiAgent allows pricing, logistics, advertising, and inventory agents to coordinate asynchronously. Baidu Apollo’s self-driving system is multi-agent by necessity, allowing vehicles to learn from collective driving experiences and scenario data. This distributed multi-agent structure enables scalability and fleet-level optimization, supporting real-time scenario simulation, validation, and model updates that enhance safety and performance.

The necessity for this multi-agent approach stems from the complexity and safety-critical demands of autonomous driving. No single monolithic model can efficiently and reliably handle the wide range of subtasks required in diverse driving environments. Instead, modular agents specializing in perception, mapping, prediction, and planning enable parallel processing, robustness, and modular upgrades. Real-time interaction of these agents ensures continuous adaptation to changing conditions, while fleet-wide coordination facilitates system improvements on a large scale.


2. Why agentic AI is the next wave (and bigger than generative AI)

To understand why agentic AI will overshadow generative AI, we must look at what it fundamentally changes. Generative AI produces content. That is powerful; coding assistants like Cursor or ChatGPT can generate boilerplate, transform legacy systems, and help developers. But content generation has natural limits: content is the output of tasks, not the tasks themselves.

Agentic AI flips this relationship. It automates the task, not the output. Instead of generating an email, the agent reads the email, opens Salesforce, retrieves context, drafts a response, updates the opportunity, books a meeting, and files a ticket. Instead of summarizing policy documents, the agent updates compliance workflows, sends approval requests, writes the audit trail, and coordinates with five other internal systems.
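The difference is easiest to see in code. The sketch below is illustrative only: a stripped-down agent loop in which each step acts on a system rather than merely producing text. The tool names (read_email, update_crm, book_meeting) are hypothetical stand-ins for real integrations, and in a production agent an LLM would generate the plan step by step rather than it being hard-coded.

```python
# Illustrative only: a bare-bones agent loop showing how "automating the task"
# differs from generating content. Tool names are hypothetical stand-ins.
from typing import Callable

def read_email(ticket_id: str) -> str:
    return f"Customer in ticket {ticket_id} asks to reschedule the demo."

def update_crm(note: str) -> str:
    return f"CRM updated: {note}"

def book_meeting(slot: str) -> str:
    return f"Meeting booked for {slot}"

TOOLS: dict[str, Callable[[str], str]] = {
    "read_email": read_email,
    "update_crm": update_crm,
    "book_meeting": book_meeting,
}

# In a real system an LLM would produce this plan dynamically;
# here it is hard-coded to keep the sketch self-contained.
plan = [
    ("read_email", "T-1042"),
    ("update_crm", "Customer wants to reschedule"),
    ("book_meeting", "Tuesday 10:00"),
]

for tool_name, argument in plan:
    observation = TOOLS[tool_name](argument)
    print(f"{tool_name} -> {observation}")  # each step acts on a system, not just text
```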

An agent is not a “smart model”; it is a worker. And when workers scale, so does value creation. This shift has three systemic implications:

First, agentic AI attacks the coordination costs of the firm, the deepest cost structure identified by Coase. If an agent can schedule meetings, allocate resources, file claims, run ETL pipelines, reconcile invoices, and coordinate inventory, the firm transforms from a hierarchy of labor into a network of autonomous processes. Productivity is no longer linked to headcount.

Second, agentic AI enables stacked multipliers: one agent helps sales; ten agents run an entire sales pipeline; fifty agents automate a supply chain. Generative models do not scale this way.

Third, agentic AI displaces GenAI-native companies because generation is commoditized. Once agents automate tasks directly, the value shifts from content to execution. Cursor helped developers write code; agentic systems automate the entire issue lifecycle: triage → fix → test → deploy → notify stakeholders. The more agentic systems mature, the weaker the standalone GenAI-only products become.

3. Why agentic AI will reshape employment (more than GenAI ever could)

Predictive AI affected forecasters and analysts. GenAI affected writers, coders, and creatives. Agentic AI affects everyone, because it automates the workflows that make up jobs.


Moveworks and ServiceNow already demonstrate that a single agent can absorb 40–70 percent of IT and HR tickets. In major companies, this is the equivalent of replacing dozens of support staff. ByteDance HiAgent coordinates advertising, logistics, and customer support, reducing labor requirements across multiple domains simultaneously. DingTalk agents in China automate HR, finance, and purchasing workflows for millions of SMEs. Unlike GenAI, which “augments,” agentic AI executes. It can read emails, log into systems, reason over multi-step workflows, call APIs, and make decisions.

Moreover, multi-agent systems will automate coordination, the highest-level human activity in firms: resource allocation, scheduling, negotiation, prioritization. This is why the shift is more profound than the move to GenAI. GenAI replaced creation, but agentic AI replaces coordination, which is what managers, administrators, and entire corporate functions are paid to do.

Early evidence: agentic AI’s impact on work

Moveworks

  • Used by >200 enterprises (DocuSign, Slack, Palo Alto Networks)
  • 40 percent of all IT issues solved end to end by agents
  • Up to 70 percent in the most automated deployments
  • Equivalent to replacing 20–50 support staff in a 10,000-employee corporation

ServiceNow Agent Workspace & Now Assist

  • For Fortune 500 clients, GenAI+agents reduce 30–50 percent of service-desk workload.
  • One major European bank automated 2.4 million annual tickets, reducing staffing needs by the equivalent of 600–900 FTEs.
  • Toyota, Deloitte, and Target report double-digit reductions in manual case handling.

ByteDance HiAgent

  • One agent replaces 8–12 human operators in e-commerce operations.
  • Labor requirements in trial teams fell by 38–52 percent.

Alibaba DingTalk Agents

  • Used by >20 million SMEs in China.
  • SMEs reduce administrative staffing by 30–60 percent after agent deployment.
  • HR teams shrink from 5–7 staff to 1–3 in typical 200–500 employee firms.

4. Agentic AI may oblige GenAI-only startups to reinvent themselves

The GenAI SaaS wave (2020–23) produced an explosion of startups offering “smart content.” But the economics of agentic AI destroy that value proposition. A GenAI-only product generates a document, a query, or a piece of code. An agentic AI system reads the requirement, executes the task, interacts with systems, and completes the process.

Cursor is already facing this reality. Although it is a brilliant coding assistant, agentic systems like Devin or GPT-based Code Agents can automate entire tickets end to end, making a coding editor assistant insufficient. Jasper and Copy.ai have declined sharply in usage because marketing agents can now plan campaigns, test variants, analyze CTR, adjust budgets, and post on social media, not just generate copy. The more agentic AI improves, the more GenAI-only tools lose relevance. Why use a coding assistant when an agent can build, test, deploy, and monitor features? Why use a customer-service chatbot when an agent can resolve the case?

GenAI tools focused on “generation” become components, not products. Agentic AI is not a new product category; it is a platform shift that absorbs generation entirely.

As an example, consider Moveworks. It represents the first generation of enterprise agentic platforms, a single agent with deep enterprise integration and thousands of pretrained workflows. Its competitive strength lies in the density of integrations, not the intelligence of the model. It is a mono-agent that behaves like an entire tier-1 support team. This is why ServiceNow acquired Moveworks: it fits into a broader agentic vision where each enterprise function gets an autonomous system.

CrewAI and LangGraph represent the second generation—multi-agent orchestration frameworks, where different agents assume different roles, negotiate tasks, and pass control. These frameworks are early, messy, and experimental, but they are the seeds of a future where enterprises run dozens or hundreds of cooperating agents across departments. In China, DingTalk and ByteDance are already moving toward multi-agent ecosystems with specialized agents that cooperate across logistics, finance, inventory, marketing, and HR. In many ways, China is executing the true multi-agent vision earlier, because its digital ecosystems are unified.

Conclusion: The age of agency will restructure the economy; be ready for it

The next wave of AI is not about models but about actions. It is not about intelligence but about coordination. It is not about content but about workflows. Agentic AI will reshape firms, collapse coordination costs, create new digital labor forces, disrupt GenAI-only companies, and permanently alter labor markets.

Mono-agent systems will dominate in the short term, but multi-agent cooperation will define the long-term landscape. China is ahead in deployment, the US in frameworks, and Europe in industrial integration, if it can lower its cost of deployment. Agentic AI is not the next step after GenAI. It is likely a replacement for it. The era of autonomous work beyond simple robotics has begun.

Managers must shift from supervising workflows to owning outcomes, because agentic AI automates the coordination tasks that once defined managerial work. Their role becomes that of system architect, not task allocator, designing which workflows agents execute, setting guardrails, and auditing AI decisions. They must manage constraints, not steps: accuracy thresholds, compliance logic, escalation paths, and risk boundaries. Data stewardship becomes central, since agentic AI’s performance depends on clean data flows, standardized processes, and interoperable systems. Metrics move from micro-monitoring humans to macro-monitoring system productivity, error vectors, and escalation patterns. Managers must redeploy humans into high-judgment roles: exception handling, negotiation, creativity, and cross-functional sense-making. They must also master AI risk management through audits, drift monitoring, red-teaming, and scenario testing.
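As a thought experiment, constraint-based management could be encoded roughly like the hypothetical guardrail specification below. None of these field names come from a real product; they simply make "manage constraints, not steps" tangible.

```python
# A hypothetical guardrail specification for an enterprise agent, sketching
# the "constraints, not steps" idea: thresholds, escalation paths, and audit
# rules. All field names are illustrative, not a real product's schema.
AGENT_GUARDRAILS = {
    "accuracy": {
        "min_confidence": 0.85,        # below this, the agent must not act alone
        "sampling_audit_rate": 0.05,   # share of actions routed to human review
    },
    "compliance": {
        "blocked_actions": ["external_payment", "contract_signature"],
        "data_residency": "EU",
    },
    "escalation": {
        "on_low_confidence": "queue:human_review",
        "on_policy_conflict": "queue:compliance_officer",
        "max_retries": 2,
    },
    "risk": {
        "max_transaction_value_eur": 10_000,
        "drift_alert_threshold": 0.2,  # trigger a retraining review beyond this
    },
}

def may_act_autonomously(confidence: float, action: str) -> bool:
    """Return True only if the action clears the confidence and policy gates."""
    g = AGENT_GUARDRAILS
    if action in g["compliance"]["blocked_actions"]:
        return False
    return confidence >= g["accuracy"]["min_confidence"]

print(may_act_autonomously(0.9, "update_crm"))        # True
print(may_act_autonomously(0.9, "external_payment"))  # False
```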

Ultimately, managers evolve into hybrid orchestrators of humans and agents, responsible for strategic alignment, workflow design, constraint definition, and organizational learning. The quantity of managerial labor declines, but the strategic intensity of what remains increases sharply.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

Ten Things Every Manager Must Understand About China’s AI Strategy

By Jacques Bughin

China’s AI strategy reshapes competition far beyond foundation models and chatbots. As Jacques Bughin explains, managers must look at scale, integration and deployment rather than raw model power. Understanding these dynamics helps decision makers assess risk, opportunity and partnership in a market treating AI as national infrastructure, not isolated innovation.

Understanding China’s position in the AI race requires stepping away from Western assumptions that the contest is mainly about foundation models. In China, the model is only the starting layer. The real competitive engine lies upstream in cloud architecture and downstream in national-scale deployment. A manager evaluating partnership, competition, or opportunities in China must grasp ten essential realities that define the Chinese AI trajectory. Each is rooted in evidence, company cases, and the way the Chinese market actually operates.


The first reality is that China is not building AI as a loose collection of apps and platforms but as an integrated technology stack. This stack connects foundation models such as Alibaba’s Qwen or ByteDance’s Doubao with cloud providers including Alibaba Cloud, Tencent Cloud, and Baidu’s AI Cloud, and links further downstream to workflows running inside DingTalk, WeChat Work, Alipay, Meituan, and entire municipal service systems. This means that AI deployment in China is fast, uniform, and often invisible to the user. When a manager evaluates the efficiency of a Chinese platform, it is the integration, not the model, that explains the leap in adoption.

A second reality is that China builds for scale from day one. DingTalk, with hundreds of millions of users, deploys more workplace agents in a month than most Western enterprise SaaS firms deploy in a year. These agents are not demos but operational capabilities handling HR approvals, procurement flows, financial checks, travel validations, and compliance steps. This scale acts as an engine for rapid iteration, meaning China’s agentic systems evolve through millions of real-world feedback loops per day. Managers must understand that this scale advantage compresses innovation cycles dramatically.

A third truth is that China’s digital ecosystems are structurally unified. A Western manager is accustomed to siloed systems: ERP, HRIS, CRM, ticketing, payments, messaging. In China, the same firm may run daily operations, messaging, approvals, payments, forms, file storage, customer interactions, and analytics all inside a single super-app environment. DingTalk for enterprises, WeChat Work for SMEs, and increasingly ByteDance’s Feishu enable agentic automation with almost no integration overhead. This is why multi-agent workflows already appear in logistics, commerce, and city services: the ecosystem makes agent-to-agent coordination genuinely feasible.

A fourth factor is that Chinese firms prioritize multimodal and real-time agents rather than text-only assistants. ByteDance’s Doubao is optimized for video, image, and real-time signals because Douyin, TikTok’s sister platform, runs on real-time multimodal behavior. Baidu’s models focus on real-time reasoning because Apollo, its autonomous driving system, requires agents to coordinate across perception, planning, and fleet routing within milliseconds. Chinese AI strategy is shaped by sectors where real-time autonomy matters: retail operations, logistics, mobility, and urban services.

A fifth insight is that multi-agent systems are far more advanced in China than in Europe or the United States. In Baidu Apollo taxis, dozens of agents operate simultaneously: a perception agent, a prediction agent, short-horizon and long-horizon planning agents, and a fleet coordination agent. In ByteDance’s e-commerce engine, pricing agents, advertising agents, inventory agents, and logistics agents work in asynchronous negotiation loops to optimize conversion and cost. Tencent’s financial platforms use evaluator agents to monitor fraud-detection agents. These systems are operational, not prototypes. A manager analyzing competition should understand that China is not theorizing about multi-agent AI; it is deploying it.
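As a toy illustration of the asynchronous negotiation loops described above, the sketch below has an inventory agent streaming stock levels to a pricing agent that reacts with discounts. It is a deliberately simplified model, not ByteDance's or Baidu's actual architecture.

```python
# Illustrative sketch of asynchronous agent coordination: an inventory agent
# streams stock readings, a pricing agent reacts. A toy model only.
import asyncio

async def pricing_agent(queue: asyncio.Queue, prices: list[float]) -> None:
    # Proposes a discount whenever inventory reports excess stock.
    while True:
        stock = await queue.get()
        if stock is None:          # shutdown signal
            break
        if stock > 100:
            prices.append(prices[-1] * 0.95)  # cut price 5% to clear stock

async def inventory_agent(queue: asyncio.Queue, stock_levels: list[int]) -> None:
    # Streams observed stock levels to the pricing agent.
    for stock in stock_levels:
        await queue.put(stock)
        await asyncio.sleep(0)     # yield control, as real agents would on I/O
    await queue.put(None)

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    prices = [20.0]
    await asyncio.gather(
        inventory_agent(queue, [80, 120, 150, 90]),
        pricing_agent(queue, prices),
    )
    print(prices)  # the price falls twice, after the two excess-stock readings

asyncio.run(main())
```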

A sixth reality is that China’s industrial and manufacturing base is uniquely suited to agentic automation. Firms like Haier, Midea, Geely, BYD, and CATL already run digitalized factories with IoT, MES systems, centralized scheduling, and real-time data visibility. This foundation enables agentic systems to take over scheduling, quality control, machine setup, procurement coordination, and energy optimization. Siemens and ABB operate globally and are strong in Europe, but China is deploying at a faster internal velocity because the country has more greenfield plants and fewer legacy integration obstacles.

A seventh point is that regulatory structures in China support rapid iteration of enterprise AI. China’s AI regulations emphasize platform accountability rather than restrictive usage controls. For enterprise AI, this means firms can deploy agentic systems across workflows without facing the friction of overlapping data, privacy, or compliance requirements found in Europe. Chinese privacy law is real, but enforcement patterns focus on misuse and societal harm, not on constraining innovation. Managers should understand that regulatory speed is part of China’s competitive advantage.

An eighth truth is that Chinese consumer behavior accelerates agent adoption. Chinese users are accustomed to automation, from mobile payments to autonomous delivery robots. This cultural readiness dramatically reduces the adoption friction for AI-driven services. It is no accident that Meituan deploys hundreds of autonomous delivery units or that JD.com uses intelligent warehouses with agents coordinating robots. The population accepts and expects automation. This allows Chinese companies to deploy agentic systems at a depth Western companies cannot match without cultural change.

A ninth insight is that China’s mobile-first economy forces AI companies to optimize for inference efficiency, not model size. Chinese AI firms build leaner, faster reasoning models such as Qwen-1.5B, Doubao Lite, and Tencent’s small Hunyuan variants because these models run directly on smartphones, point-of-sale terminals, and industrial devices. Managers who believe the Chinese AI race is about parameter counts misunderstand the real technology direction: China’s competitive edge lies in low-latency, cost-efficient, highly deployed agent models.


A final and crucial point is that China’s AI strategy is not simply technical; it is geopolitical. Every deployment of an agentic workflow inside DingTalk, every multi-agent system inside a factory, every city adopting Baidu’s fleet-level autonomy, and every ByteDance agent operating cross-border commerce strengthens China’s position in global value chains. Managers must realize that AI in China is tied to industrial policy, national competitiveness, and economic sovereignty. When a Chinese firm deploys an AI agent, it is not merely automating a task; it is reinforcing the country’s position in global supply chains.

Together, these ten elements form a picture of a deeply coordinated AI economy. China does not win because it trains bigger models. It wins because it deploys agents deeper inside digital ecosystems, industrial infrastructure, and consumer environments.

Western managers must stop evaluating China through a generative AI lens and start evaluating it through the lens of agentic automation. China is building not a collection of AI tools but a national operating system for agentic intelligence.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

Managing in the Age of Superstar Firms

By Jacques Bughin

Superstar firms are reshaping global markets through scale, innovation, and strategic agility. Jacques Bughin examines how these companies achieve and sustain dominance, offering lessons for today’s leaders. He explores the defining traits of corporate maturity, the economics of sustained growth, and the evolving playbook for CEOs in the age of market superstars.

1. Introduction

If one had to guess when superstar athletes are at their best, most would say the mid-20s. They are right. For explosive power sports, the peak comes a bit earlier: roughly 21–25 for sprint swimming and track, and the teens to early 20s for women in gymnastics (early to mid-20s for men). Where endurance is needed, as in tennis, it is about 24–30 (with some outliers now winning in their 30s); in football (soccer), attackers peak at roughly 23–27, midfielders at 24–29, and defenders and keepers at 26–32. Finally, for endurance- and strategy-heavy sports such as the marathon and road cycling, the peak is around 28–30, and in golf it is in the 30s (the peak major-winning window is often 31–36).

What this says is that a) it takes a few decades to become a superstar, b) the more endurance/strategy, the longer it takes, and c) performance curves are usually bell-shaped: a build-up, a short peak, then decline.


But what about companies? Superstar firms are now extensively talked about, whether the FAANGs that lead today’s high-tech markets or the industrial manufacturers of a century ago. They, too, reached superstar status in their twenties to thirties. The key differences are that the corporate cycle is less smooth than an athlete’s, and that becoming a superstar evidently involves luck but also entails a critical journey. In a time where market power has grown significantly and accrued mostly to superstar firms, companies should at least try to emulate part of the recipe, not only as sports athletes do, but with the specifics of business in mind.

Here is what an executive should know in order to emulate them, or at least avoid falling into the long tail of insignificance.

2. Superstar Economics

Every superstar has an age. We define a superstar firm not simply as a large or fast-growing company, but as a firm that meets three quantitative thresholds:

  1. Top-100 firms by market capitalization (measured annually, 1980-2025)
  2. Persistent Top-100 presence for at least five consecutive years or cumulative Top-10 appearances in any period.
  3. Above-sector profitability and shareholder return

With this definition, let’s look at the high-tech sector. While the list is no surprise, the top 10 superstars by 2025 are more than the FAANGs, with names from Nvidia to ASML (Table 1).

Table 1: High-tech superstar firms, 2025.

| Rank | Company   | Founded | Age (2025) | Core domain leadership       |
|------|-----------|---------|------------|------------------------------|
| 1    | NVIDIA    | 1993    | 32         | AI compute chokepoint        |
| 2    | Microsoft | 1975    | 50         | Cloud, OS, enterprise data   |
| 3    | Apple     | 1976    | 49         | Device–services integration  |
| 4    | Alphabet  | 1998    | 27         | Search, ads, AI platform     |
| 5    | Amazon    | 1994    | 31         | Commerce + cloud             |
| 6    | Meta      | 2004    | 21         | Social graph, ad data        |
| 7    | TSMC      | 1987    | 38         | Foundry monopoly             |
| 8    | Broadcom  | 1991    | 34         | Connectivity silicon         |
| 9    | Tesla     | 2003    | 22         | Data-driven autonomy         |
| 10   | ASML      | 1984    | 41         | EUV lithography bottleneck   |

Superstar Adulthood: The average age of Top-10 technology firms in 2025 is 31 years. The median age of Top-100 tech companies worldwide is 26 years. Only two of the current global top ten were founded after 2000 (Tesla, 2003; Meta, 2004).

The average company on this list went public after 8.5 years and became a unicorn (a valuation of more than $1 billion) after 6 years. Since IPO, their revenue growth has averaged 25 percent, with market-cap growth just above that at 25–27 percent, and the rule of 40 (revenue growth plus profit margin) largely exceeded, at 55–65.

Looking at their age, Meta and Tesla are the youngest, still in their (early) twenties. When Microsoft turned twenty, it had just launched Windows 95. When Apple turned thirty, the iPhone was born. When NVIDIA turned thirty-two, it became the most valuable company in the world. Across four decades of technology history, the same pattern repeats: the world’s most powerful companies reach true maturity between 20 and 35 years of age. That’s the period when ideas become institutions, founders give way to systems, and growth becomes self-reinforcing rather than accidental.

This “age of the superstar” marks the point where a firm is no longer merely fast-growing — it becomes foundational. It is when the company stops chasing the market and starts defining it. We (used to) Google, not search.

The Journey to adulthood. We often celebrate the birth of innovation: the garage, the founders, the first million users. In one way, this is normal; winners are those that escape a sudden-death experience, since more than 50 percent of new firms close their doors within a horizon of five years.

Superstar firms are of a different species: they are past the valley of death. By the time they’re competing for Top-100 seats, they’ve already cleared the early-life five-year hazard that kills roughly half of new firms. Their risk is different but just as real: not so much bankruptcy as losing their status. In fact, looking deep into the data, churn out of the superstar ranks is high, at about 30–40 percent, and given these firms’ size, the shareholder value that evaporates is much larger than in an early death.

And the journey is not easy. Apple’s adolescence (1990–1997) nearly killed it; a decade of chaos ended only when Steve Jobs returned and reimagined the company around design and ecosystem control. NVIDIA’s adolescence (2001–2008) was marked by a GPU glut and strategic confusion; its recovery, built on parallel computing and later AI acceleration, turned the firm into the infrastructure of the 2020s.

Three traits consistently appear at this stage:

  1. Moats that deepen with scale: network effects, distribution channels, or data flywheels that make the company better the larger it gets.
  2. Governance that stabilizes: leadership transitions, professionalization, institutionalized culture, and ecosystem leadership.
  3. Reinvestment discipline: a focus on cash flows and R&D, not just market share; reinvent quickly with guardrails.

Adulthood is not about slowing down; it is about compounding intelligently.

The first key is innovation: Microsoft in its forties is perhaps more innovative than it was in its twenties. Constant reinvention is necessary; Google, as much as it is the king of search, is quickly reinventing itself as an AI darling.

The second key is ecosystem play. Most of these players are not pursuing typical JVs or partnerships; they invest quickly in all forms of tightening, from M&A to commercial deals, to secure the build-up of mega-industries. For example, using multiple triangulations, we estimate that each $1 of NVIDIA revenue induces roughly $2.5–$4.0 of adjacent infrastructure revenue, and $5–$7 when cloud revenue is included. The system may be fragile, as claimed elsewhere, but this is also the virtue of a trusted ecosystem that co-evolves with everyone so that incentives stay aligned. In consequence, becoming a superstar takes time, because building an ecosystem takes cycles of technology, trust, and talent. Startups can reach billion-dollar status in months, but credibility, supply chains, regulatory acceptance, and institutional memory accumulate over decades. In that sense, corporate age is a proxy for earned leadership. Superstardom is really not hit or miss.


Finally, adulthood does not mean stability; the third key is agility. When a new technology emerges, superstar firms jump into the fray of invention, but they also add a few guardrails: they avoid full disruption by integrating innovators, buying or financing start-ups. They also anticipate disruption and create mobility barriers that give them time to react. Players such as NVIDIA and Apple consolidate their leadership by creating closed technological ecosystems (CUDA, iOS, App Store) that standardize the industrial use of a technology, building de facto technical standards.

In 2010, the FAANGs — Facebook, Apple, Amazon, Netflix, Google — represented the apex of digital power. Their dominance was built on network effects, consumer data, and applications. By 2025, the script is changing: with more and more intelligent agents, the new superstars are not the apps of the internet but those owning computation, more accurate data, and orchestration. This evolution does not mean the FAANGs are done; rather, one should applaud their metabolic rate of change. Google is migrating towards consumer and enterprise multi-agent tools (Gemini Agent Mode, Vertex Agent Builder), Amazon (AWS) has so far built the most production-ready enterprise agent stack (Agents for Bedrock, AgentCore GA), and Meta is moving towards an aggressive consumer agent and wearables play (AI app, Ray-Ban Display). Many companies should learn from how quickly those superstars set direction and mobilize their organizations.

The Adolescence of AI Firms. Today’s AI darlings, Snowflake (13 years old), OpenAI (9), Anthropic (5), or Cursor (3), are still in childhood. By the logic above, their durability is untested. Their adolescence will arrive soon, and they will need to survive a crisis at around age 15–20, unlike the unicorns that peak at 5. As discussed above, superstardom is a risky play: 30% to 40% lose their status, as will likely happen to some of the new darlings above. But remember that at 30, a company has the assets and legitimacy to shape standards and influence policy; at 10, it merely challenges them. As said elsewhere: “startups capture imagination; adults capture institutions.”

3. A Glimpse at the New CEO Playbook

In the age of superstar firms, the first and most crucial step for any CEO or leader is to locate the firm honestly within the broader economic landscape. Not every company will become a superstar, just as not every athlete becomes a global champion. This acceptance isn’t a concession of defeat but a foundation for smart, sustainable strategy and leadership.

Accepting Your Potential—and Leveraging the Ecosystem. Successful CEOs begin by honestly assessing where their firm stands: whether on track to become a superstar, positioned as a thriving mid-sized company, or part of the long tail of specialized or niche players. This clarity enables firms to focus on realistic strengths and strategic roles, avoiding costly missteps chasing superstar status when it may never be attainable.

However, not being a superstar does not imply insignificance. Firms that recognize their limits often thrive by leveraging and integrating into ecosystems dominated by superstar firms. Just as in sports, where an athlete who does not reach superstar status may still contribute as a coach, trainer, or investor, companies can create substantial value by playing complementary roles in the broader economic ecosystem.

Superstar firms anchor expansive ecosystems filled with suppliers, technology providers, service firms, investors, and talent developers. Firms acting as ecosystem specialists or strategic partners secure stable, ongoing revenue streams and opportunities for innovation by filling vital roles in these networks.


Examples include:

  • Service providers and suppliers, offering indispensable capabilities that superstar firms rely on.
  • Investors and strategic partners, capitalizing on growth indirectly by financing or partnering with high-potential players.
  • Talent and knowledge developers, facilitating human capital growth by training and consulting the ecosystem’s participants.
  • Innovation incubators, remaining agile and experimental, developing niche solutions that superstar firms may adopt or acquire.

The Lifecycle of Superstar Firms. If you have superstar ambitions, there are four elements of the recipe that give you a chance to be in the game.

  • The Role of Luck, Persistence, and Agility. CEO strategies must balance the element of luck—a critical early catalyst highlighted by Adler, where small initial advantages snowball into market dominance—with persistence and agility, which are indispensable for surviving and thriving past the early phases of high uncertainty and volatile growth. Luck opens doors, but it is sustained persistence through continuous innovation and agility that determines whether a firm transitions from startup to enduring superstar. Firms that fail to adapt quickly as technologies, competitors, and consumer behaviors evolve risk falling off the top-rank leaderboard—echoing the 30-40% churn seen among superstar firms.
  • The Metabolic Rate of Change: Continuous Reinvention. Superstar firms do not rest on past laurels. Instead, they maintain a high metabolic rate of change, continuously reinventing products, platforms, and business models. This requires a culture and leadership that institutionalize innovation beyond founder vision, balancing exploration with disciplined resource allocation. The CEO’s role is to embed mechanisms for sensing and integrating emerging technologies (e.g., AI, cloud computing) and new business models rapidly, while simultaneously harnessing scale advantages from existing ecosystems and intellectual property.
  • Time and the Lifecycle: Building Through Stages. Superstar status is a trajectory, not an instant achievement. Typical timelines show that leading tech firms reach peak maturity between 20 and 35 years after founding, with unicorn or IPO status often achieved around 6-8 years. CEOs must therefore resist short-termism, focusing instead on long-term capability build-up, institutional memory, and ecosystem leadership.
  • Anchoring vs. Reinvention: Navigating Existing Plays. A core strategic tension in the CEO journey is whether to anchor the firm in existing plays (products, markets, technologies) or disrupt from within by reinventing the business model. This is nuanced: anchoring creates short-term stability and leverages accumulated assets but risks ossification; reinvention demands risk-taking and can cause internal upheaval but is essential for circumventing disruption. Superstar CEOs typically manage this tension by integrating disruptive innovation at the edges, acquiring or financing startups, and creating internal innovation units, building guardrails that allow transformation without destabilizing the core.

Five checks for CEOs in the Superstar Era

  • Superstars arise from a combination of luck, strategic persistence, and exceptional agility.
  • Success is a long game—rapid scale is possible, but enduring dominance takes decades.
  • The metabolic rate of change—continuous reinvention—must be institutionalized, not left to heroic founders.
  • CEOs must carefully balance anchoring in existing plays with internal disruption, using ecosystems and acquisitions as critical tools.
  • Leadership maturity parallels the firm’s lifecycle: early risk tolerance evolves into disciplined enterprise governance and ecosystem stewardship.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

The Generative AI Gold Rush Revisited

By Jacques Bughin

The generative AI landscape has rapidly evolved, turning emerging concepts into proven drivers of business value. Jacques Bughin revisits ten key AI investment opportunities and examines how advances such as agentic orchestration, synthetic data, and verticalized LLMs are reshaping competitive strategies. He provides practical insights to help executives capture AI’s full potential.


One year ago, in July 2024, we provided a visionary framework identifying ten key generative AI investment and impact opportunities for enterprises. The landscape has since evolved profoundly, maturing some areas into proven profit centers, introducing new AI paradigms like agentic orchestration and verticalized LLMs, and anchoring AI as a fundamental driver of competitive advantage.

This article revisits and expands on those opportunities, weaving in recent advances in agentic AI orchestration, multi-agent coordination, and robotic augmentation. It provides an integrated perspective on how C-suite leaders should act decisively to capture AI’s full value and competitive edge.

The List

1. AI embedded deeply in business processes

AI’s value translates when integrated end-to-end into workflows. Deutsche Telekom’s “Ask Magenta” chatbot, an AI-powered assistant, offloads 70% of fiber-optic customer support queries, boosting customer satisfaction scores by 15 percentage points and reducing operational costs significantly. Similarly, Walmart’s European logistics AI enhances inventory forecasting and route planning, achieving a 30% cut in stock-outs and millions in annual savings.

Management insight: Recent experience of rolling out AI shows that only cross-functional AI operating models are able to deliver on AI’s promises.

2. The Rise of Agentic AI Orchestration: autonomous yet coordinated AI workforces

Agentic AI—AI systems capable of independent decision-making, planning, and goal-directed execution—is rapidly scaling in enterprises. Ampcome, a European logistics AI platform, has demonstrated how multi-agent systems can autonomously coordinate routing, dispatching, and inventory management, achieving operational cost cuts of over 40%. Their agents combine Retrieval-Augmented Generation (RAG), pulling real-time data from complex sources, with autonomous decision-making, showcasing how agentic AI elevates from a reactive tool to a proactive orchestration framework.

Wells Fargo’s corporate banking divisions implemented custom AI agents using Google’s Agentspace to unlock new efficiencies—bankers now spend significantly less time hunting for contract clauses or foreign exchange policies. Agents query hundreds of thousands of documents in seconds, enabling client-facing staff to focus on relationships and advisory. Their success underscores the necessity of deep integration with up-to-date internal data and human oversight for high-risk decisions.

In manufacturing, Siemens embodies agentic orchestration’s physical extension. Their “Industrial Copilots” coordinate AI agents managing product design, production planning, real-time plant analytics, and robotic task execution, forming an intelligent operational swarm. Pilot factories report up to 50% productivity gains and improved machine uptime, thanks to modular agent orchestration layers that coordinate human and robot collaboration. This architecture allows seamless integration of third-party agents, laying a foundation for scalable AI ecosystems.

A 2024 global survey involving 1,650 senior execs revealed 94% acknowledge process orchestration as crucial for AI success, highlighting that without this nervous system, agentic AI deployments often fail or stall. Governance frameworks mandating explainability and audit trails per the EU AI Act further emphasize the human oversight required in agentic ecosystems.

Management insight: Hence, agents are here to stay and expand, but executives must prioritize investing in agent orchestration platforms, employee reskilling to manage AI interaction, red-teaming AI systems for risk, and establishing compliance protocols to unlock agentic AI’s full potential.

3. Synthetic Data

European leaders, at least, face stark regulatory constraints on data use. Synthetic data has emerged as a powerful solution to accelerate AI innovation without compromising privacy. Pfizer harnesses synthetic patient datasets to accelerate drug discovery timelines by 15%, sidestepping patient-identifying information. European fintech startups achieve 30% better fraud-detection model accuracy using synthetic customer profiles while maintaining GDPR compliance.

Top e-commerce companies are now using synthetic customer data to offer personalized shopping experiences. This method is changing the retail world as retailers struggle with the challenge of offering personalization while still protecting customer privacy. Synthetic data solves this by creating detailed customer profiles without invading privacy. Big retailers such as Target have seen large boosts in sales with synthetic customer data, through a radical change in their marketing.
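For teams exploring this, a synthetic-data pipeline can be prototyped in a few lines with the open-source SDV library, one option alongside the vendors named in the management insight below. A minimal sketch, assuming SDV 1.x and an invented customer table:

```python
# A minimal synthetic-data sketch using the open-source SDV library.
# API details follow SDV 1.x and may change; the customer table is invented.
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

real = pd.DataFrame({
    "age": [34, 45, 29, 52, 41],
    "monthly_spend": [120.5, 300.0, 85.2, 410.9, 220.1],
    "segment": ["A", "B", "A", "C", "B"],
})

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real)

# Fit a copula model to the real data, then sample privacy-friendlier rows
# that preserve the joint statistics rather than any individual record.
synth = GaussianCopulaSynthesizer(metadata)
synth.fit(real)
synthetic = synth.sample(num_rows=1000)
print(synthetic.head())
```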

Management insight: Companies must embed synthetic data into their data strategies, engaging domain-focused vendors like MOSTLY AI and Hazy, while collaborating across legal and data science teams to ensure scalable and compliant synthetic data pipelines.

4. Responsible AI as a governance and trust lever

The EU AI Act’s regulatory regime makes automated AI fairness, transparency, and auditability a competitive boundary in sectors such as banking and energy. European banks employing AI auditing tools reduced regulatory compliance costs by 75%, signaling that responsible AI directly impacts enterprise efficiency. Iberdrola streamlines regulatory workflows with AI-enhanced monitoring, both accelerating internal processing and promoting customer trust.

Management insight: Leadership mandates are shifting toward establishing dedicated AI ethics and compliance functions, integrating AI transparency by design, and proactively communicating responsible practices externally.

5. Sustainable AI

Reducing AI’s energy footprint has become urgent amid EU Green Deal commitments. Nordic cloud providers lead by cutting AI compute energy consumption by half using custom silicon and renewable power. Mercedes-Benz integrates AI for eco-driving assistance, tightly aligning vehicle AI with sustainability goals.

Management insight: Top management teams must demand energy transparency, embed green compute into procurement criteria, and align AI infrastructure strategies with corporate ESG objectives.

6. Multi-Modal & Industry-Specific LLMs

Sanofi’s drug discovery harnesses unique vertical LLMs trained on clinical, chemical, and genomic data, trimming development phases by roughly 20%. Similarly, AI start-ups such as LegalFly are fine-tuning LLMs for lawyers, boosting document analysis speed and accuracy by 35%.

Management insight: Forward-looking firms invest in domain-specific data assets and collaborate openly with academic and industry partners to continuously evolve their vertical AI capabilities.

7. MLOps—The Backbone for Reliable AI Deployment at Scale

Many organizations suffered a high rate of AI pilot failures until MLOps tools matured. Maersk’s MLOps infrastructure now drives near 90% success on production AI deployment, a leap from under 20%. Renault slashed model retraining costs by over 60% through rigorous ML governance.[9]

Management insight: Governance that unifies IT, data science, and business teams around model monitoring, drift detection, and remediation is now a board-level imperative.
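To ground the monitoring vocabulary, here is a small, self-contained sketch of one drift-detection building block commonly wired into MLOps pipelines: the population stability index (PSI) between a feature's training distribution and live traffic. The thresholds used are industry conventions (roughly 0.1 to watch, 0.25 to act), not universal rules.

```python
# A self-contained sketch of drift detection via the population stability
# index (PSI). Thresholds (~0.1 watch, ~0.25 act) are conventions only.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of a numeric feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) on empty buckets.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)   # feature distribution at training time
live = rng.normal(0.4, 1.2, 10_000)    # shifted live distribution

score = psi(train, live)
print(f"PSI = {score:.3f} -> {'drift: review model' if score > 0.25 else 'stable'}")
```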

8. AI Cybersecurity: Defending and advancing with AI

Vodafone leverages AI to shrink cyber incident response times fourfold, cutting false alerts by 30%. Dutch financial institutions use generative AI to accelerate phishing detection and regulatory compliance, tripling incident handling speed.

Management insight: Senior leaders must fund AI-augmented cyber defense programs and conduct regular threat simulation exercises.

9. Robotic Augmentation

The boundary between digital and physical is dissolving. Siemens’ copilot factories, GE Healthcare’s autonomously calibrated devices, and Bavaria’s robotic logistics fleets show how agentic orchestration is extending into robotics—fusing multi-agent ecosystems with physical action.

Management insight: Best practices prioritize pilot sites where robotic augmentation can deliver compounded gains—productivity, uptime, and regulatory assurance.

10. Data, Talent, and Ecosystem as Strategic Assets


An AI moat will depend on orchestrating three scarce resources: domain data, partner ecosystems, and reskilled workforces. Without serious investment in these complements, no AI strategy can sustain competitive advantage.

Management insight: Build European data consortia, scale workforce reskilling, and establish venture-style partnerships to access external AI innovation at speed.

Beyond the List – The Recipe?

The best AI adoption journey for companies in 2025—beyond just focusing mechanically on the “10 opportunities”—is about strategic selectivity, speed, and bold reinvention. 

Strategic focus: own some, partner for others

Top-performing firms do not try to own all AI capabilities. Instead, they:

  • Prioritize building proprietary AI where it creates a unique competitive advantage, especially domain-specific AI and core orchestration platforms.
  • Leverage third-party technologies and platforms for commoditized AI functions (e.g., infrastructure, foundation LLMs, synthetic data vendors).
  • Adopt a hybrid build-buy-partner model to accelerate value capture and manage risk.

Speed and front-loading matter

Enterprises that acted swiftly are outpacing cautious wait-and-see approaches. Successful adopters move rapidly from pilots to scaled deployment, investing early in data infrastructure and MLOps to avoid costly retrofits.

Conclusions

The best journey is selective ownership combined with strategic partnerships, rapid—but disciplined—scaling, and organizational transformation. Companies that own only critical AI capabilities, integrate AI deeply into business processes, and reskill their workforce while front-loading governance and infrastructure investments will lead. Those who delay or attempt to do everything internally risk lagging.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

Valuation of AI Unicorns: Is it a Ponzi Scheme or, Rather, a Genuine Growth Option?

By Jacques Bughin

AI unicorn valuations have sparked debate over whether they reflect unsustainable speculation or credible growth opportunities. Jacques Bughin examines this phenomenon using real option theory, highlighting uncertainty, strategic positioning, and ecosystem orchestration as key drivers shaping how these companies can unlock enduring value in the fast-evolving artificial intelligence sector. 

1. Introduction

The surge in the valuation of AI unicorns (private companies valued at over US$1 billion) in 2024–2025 has been unprecedented (see Figure 1), mirroring the stock price increases of major publicly quoted AI firms such as Nvidia or Palantir. But it has also sparked a fundamental question in boardrooms: are these valuations speculative bubbles?

Top AI unicorn companies such as Safe Superintelligence, Anysphere, Cyberhaven, Supabase and Harvey have recently experienced valuation surges in under 18 months. Safe Superintelligence, a foundation model developer that emphasises safe AGI development and has an advanced, research-first methodology, crossed a $30 billion valuation in under a year after launching in 2024. Anysphere, a developer toolkit, is valued at $1.8 billion. Supabase, a Postgres-native open-source software (OSS) backend platform, has surpassed $2 billion. Glean, a knowledge orchestration engine for enterprises, was valued at $3.5 billion in its latest funding round, and ClickHouse, a high-performance OLAP database, is now valued at over $6 billion. Scale AI, for its part, recently saw Meta Platforms take a 49% stake in the company.

Figure 1: Great AI unicorn returns.

Compared to other non-AI unicorns at the same level of funding development (from Series C to F), AI unicorns have slightly better economics (in terms of both top and bottom lines), but their valuation multiple is nevertheless twice as high in the late funding stage (see Figure 2)[1].

Figure 2: The AI unicorn premium.

This raises legitimate scepticism among experienced managers and investors. Are these startups really worth that much? Or is this another bubble ready to burst? From a distance, the temptation may be to shrug it off. Some executives say that these AI companies aren’t generating significant revenue, and older ones say, ‘We saw this in 1999.’

But that may be exactly the danger. Because buried in those valuations is something else: the answer may lie not in discounting, but in real option theory, a probabilistic bet on market-shaping technology where optionality matters more than current margins.

2. Pricing real options

While it is true that many of these companies do not justify their valuations through current cash flows, as with many technology projects involving significant uncertainty and a long timeframe, they can generate substantial returns if a few key strategic assumptions prove correct. This is the principle of real options: the price paid today reflects not only the present value of expected profits, but also the value embedded in growth opportunities, capability scaling, and first-mover positioning.

The price paid today reflects not only the present value of expected profits, but also the value embedded in growth opportunities, capability scaling and first-mover positioning.

For example, recent academic research shows that the options embedded in Tesla's price can be as large as 65% of its value. Amazon's valuation in the early 2000s was also criticised for being bubble-like. However, a decade later, these growth options materialised in the form of AWS, logistics leadership and platform dominance, turning what initially seemed like overvaluation into significant value creation for shareholders. Amazon went from a valuation of less than $500 million at IPO to over $2 trillion, with investors tolerating 15 years of losses. That wasn't a bubble. That was strategic optionality priced correctly. Indeed, super firms are those that keep building new growth options: the option value embedded in Amazon's stock price remains high, at 35% to 60% of the current market price, depending on the assumptions made about the uncertainty attached to those options, such as AI's significantly larger data load compared with typical online activity and its impact on AWS profitability, or the effect of agentic AI on the performance of Amazon's logistics and e-commerce businesses.
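To make that logic concrete, here is a minimal sketch that prices a hypothetical growth option with the standard Black-Scholes formula, a common proxy in the real-options literature. Every input (a $5 billion opportunity, $4 billion of required investment, a seven-year horizon, 90% volatility) is an illustrative assumption, not a figure from the research cited above.

```python
# A minimal real-options sketch: at venture-grade volatility, the growth
# option can plausibly account for roughly two-thirds of total firm value.
# All inputs are illustrative assumptions.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call, read here as a growth option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical unicorn: an expansion worth $5bn today (S) if seized,
# requiring $4bn of investment (K), exercisable over 7 years (T), at the
# high volatility (sigma) typical of frontier AI markets.
option_value = bs_call(S=5.0, K=4.0, T=7.0, r=0.04, sigma=0.9)
dcf_value = 2.0  # assumed present value of existing cash flows, in $bn

total = dcf_value + option_value
print(f"Growth option: ${option_value:.1f}bn, "
      f"or {option_value / total:.0%} of total value")
```

Under these assumptions the option alone is worth about $4.1 billion, roughly two-thirds of the firm's total value, in the same range as the Tesla estimate above.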

So, how should today’s AI unicorn valuations be interpreted through this lens? There are five reasons to believe that this is not just a bubble, but rather a reflection of the deep uncertainty surrounding major disruptions that have been priced via strategic options.

3. AI unicorn real options are real

Firstly, the velocity of value creation has increased dramatically. It’s not just the valuation jumps; it’s also the speed at which companies reach maturity. Companies reach Series D in less than 24 months. Safe Superintelligence was non-existent in mid-2024, but became a global benchmark within 12 months. The cycle of infrastructure deployment, developer tool maturity and API integration has shortened from years to quarters. Funding rounds that used to take 12–18 months are now compressed into six-month periods, with follow-on rounds being preempted by global investors.

Secondly, these start-ups are not just thinly spread SaaS clones. They often achieve growth rates of over 150% year on year, have strong gross margins and operate in capital-light, API-first infrastructures. More importantly, they target large market segments with weak competition. Tools such as Anysphere and Decagon are redefining the capabilities of agents, making significant inroads into areas of knowledge work that were previously dominated by humans, such as legal case management, dev sprints and compliance reporting. The appeal lies in margin expansion and customer lock-in: these tools reduce costs and fundamentally change workflow architecture.

Thirdly, these companies are concentrated in the infrastructure and orchestration layers — places where control over data, models and system logic can produce ecosystem-level lock-in. Consider Supabase and ClickHouse as the new Firebase and Snowflake, but built natively for the AI era. These companies are not just developing apps; they are developing the operating systems of a post-human productivity stack. Much as Android or AWS once did, these AI infrastructure firms are enabling the next generation of software companies.

Fourthly, what is emerging is not just another wave of products—it is a new platform logic. These AI-native models introduce orchestrators, agentic design, transparency layers (such as sandboxing and explainability) and human-AI hybrid architectures. These are not mere extensions of GenAI — they are the beginning of a new digital operating system. We know from past cycles (iOS, Android, AWS) that platform positioning can generate significant growth. The agentic shift also introduces new consumption patterns, such as vibe coding, auto-completing work and multi-agent collaboration, that transform not just efficiency, but also the very structure of work.

Fifthly, the market remains concentrated. Despite the existence of thousands of AI start-ups globally, only a few dozen AI unicorns have emerged with elite backing, deep verticalization, and scalable models. This asymmetry reflects what Schumpeter, Christensen, and recent research on firm power suggest: disruption is rarely democratized. Winners emerge early, create flywheel dynamics, and absorb most of the optionality value.

This was true for Amazon and Google, and it is likely to be true for companies such as Safe Superintelligence (SSI)—SSI is not just another large language model (LLM) builder; it is designed to be AGI-safe from the outset. Its clear goal is to pre-empt future AI alignment concerns. If successful, SSI could define the norms and safety standards for AGI deployment, becoming a central node in the governance of intelligence. ClickHouse is a high-performance OLAP infrastructure offering blazing-fast analytics performance optimised for AI inference and real-time telemetry. It serves as a data execution engine for AI apps across sectors. Just as AWS powers cloud apps, ClickHouse could become the backend layer that AI apps depend on to scale insights and decisions. Finally, just as Google leveraged its early leadership in search and ads to expand into email, cloud and mobile operating systems, many AI unicorns today are already expanding into additional verticals — for example, Anysphere is expanding from developer tools to search.

4. How real options guide the top management AI journey

Firstly, resist the temptation to dismiss AI unicorn valuations as unrealistic. Instead, consider them to be pricing in future asymmetric outcomes in a high-uncertainty environment.

Secondly, options only have value if they are realised. If you are a startup CEO, don’t chase the valuation per se; rather, chase the capability set that unlocks optionality. In particular, the above AI unicorns clearly demonstrate that significant potential is linked to fast orchestration, AI-native stack design and customer-side workflow integration.

Thirdly, AI is not just a technology sector — it is a horizontal meta-capability. This means your firm must invest, adopt and re-architect. Invest in core AI infrastructure (internal or via ecosystem partnerships). Adopt AI in the workflows that matter, such as legal, sales, development, compliance and research. Finally, rethink how you approach software, shifting from deterministic apps to probabilistic, learning and adaptive systems that blend agentic AI and co-pilots with back-end automation.

Value creation will not only come from technology, but also from the ability to orchestrate trust, transparency, and speed.

Fourthly, value creation will not only come from technology, but also from the ability to orchestrate trust, transparency, and speed. This involves building in feedback loops, interpretability and workflow anchoring, as well as minimising the latency between insight and execution. It also means training your people to not only use AI, but also to supervise, shape, and evolve with it.

Fifthly, recognise that much of what appears to be excess is actually the cost of not missing out on the next dominant platform. Back in 1999, investing $1 million in Amazon was considered risky. However, by 2020, it was worth more than $500 million. That was real optionality, and it played out over time. Today, the equivalent might be Harvey, Supabase or Safe Superintelligence. The outcome may not be guaranteed, but the logic remains: small probabilities of significant outcomes can transform the value curve.
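The arithmetic behind that logic is worth making explicit. In the illustrative calculation below, the probabilities and payoff multiples are assumptions chosen only to show how one low-probability, extreme outcome can dominate the expected value of a venture bet.

```python
# Illustrative only: a small probability of an extreme outcome dominates
# the expected value. Probabilities and multiples are assumptions, not
# estimates from the article.
scenarios = [
    (0.02, 500.0),  # 2% chance of an Amazon-like 500x outcome
    (0.18, 5.0),    # 18% chance of a modest 5x outcome
    (0.80, 0.0),    # 80% chance of a total write-off
]

expected_multiple = sum(p * payoff for p, payoff in scenarios)
print(f"Expected multiple on capital: {expected_multiple:.1f}x")  # 10.9x
```

Here the single long-shot scenario contributes 10 of the 10.9x expected return, which is exactly why such bets look irrational when judged on the most likely outcome alone.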

Finally, whether you are a tech founder, a senior executive at a large enterprise, or a leader in private equity, don’t confuse valuation multiples with intrinsic value. What we are seeing is the market pricing in scenarios, not certainties. In such environments, the smartest move is not to bet on precision, but to buy access to trajectories and structure your firm to move quickly when the opportunity arises.

The AI pyramid is being built quickly. Its top may look rather speculative. However, history tells us that underlying markets beyond that speculation will clearly define the next economic landscape. So the question is not whether you believe the valuations are rational, but whether you are prepared to recognise that AI is a new platform for your enterprise’s future.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

Reference
[1] Source: author, based on Forbes and company investment reports; the sample includes 50 AI and 50 non-AI unicorns.

The post Valuation of AI Unicorns: Is it a Ponzi Scheme or rather, a Genuine Growth Option? appeared first on The European Business Review.

How Vibe and Agentic Coding Signal the Next (not only Software) Revolution https://www.europeanbusinessreview.com/how-vibe-and-agentic-coding-signal-the-next-not-only-software-revolution/ https://www.europeanbusinessreview.com/how-vibe-and-agentic-coding-signal-the-next-not-only-software-revolution/#respond Thu, 31 Jul 2025 08:27:08 +0000 https://www.europeanbusinessreview.com/?p=233297 By Jacques Bughin The rapid rise of large language models and generative AI is redefining how software is built and scaled. Jacques Bughin explores how vibe and agentic coding shift […]

By Jacques Bughin

The rapid rise of large language models and generative AI is redefining how software is built and scaled. Jacques Bughin explores how vibe and agentic coding shift value from writing code to orchestrating intelligent systems, signalling a transformation that extends beyond software into broader business strategy and competitive advantage.

The emergence of large language models (LLMs) and generative AI programming tools has triggered a paradigm shift in software development.

Today, a large portion of programming is done with AI assistants, with claims of significant productivity and quality gains. Adoption is spreading unusually fast, with the share of AI-generated code doubling every year, driven also by new practices such as vibe coding: a prompt-driven approach to coding that rapidly turns ideas into working prototypes or features.

The consequence of this paradigm shift is not only that the software engineer may soon be replaced by a vibe-driven user. It also means that software barriers to entry will plummet and code will commoditize. Competitive advantage will have to come from outside the code: from trust, distribution, domain expertise and execution. Software also becomes a consumer good, as it can be adapted on demand, and possibly automatically through agentic coding.

Software also becomes a consumer good, as it can be adapted on demand, and possibly automatically through agentic coding.

Finally, even for companies and people with limited exposure to software, the consequence is that complexity is reframed as a temporarily unsolved problem, awaiting its own “AI agent and vibes”. Domains like law, medicine, logistics, or financial engineering—all with long-standing barriers of expertise—may equally face disruption from tools that automate away their trickiest challenges. In this new world, value moves from knowing “how” (implementation) to knowing “what” and “why” (problem selection). But is this new world really as close as it seems? Here is what to keep in mind from now on.

The democratization of software development

Historically, software development has been the domain of technically trained professionals, constrained by steep learning curves and the need for mastery in programming languages and environments. Early innovations such as graphical integrated development environments (IDEs) eased usability compared to command-line interfaces, but they retained the core requirement: users still had to code.

No-code and low-code

To address the exclusionary nature of traditional software engineering, successive waves of “democratization” emerged. No-code and low-code platforms introduced visual, drag-and-drop interfaces with pre-built logic components, aiming to empower non-programmers. However, despite enabling broader participation, these tools were limited in flexibility, customization, and scalability. This “visual programming” era still required users to work within rigid templates and faced challenges when applications grew in complexity or required deeper logic control.

Recent breakthroughs in Generative AI, particularly Large Language Models (LLMs) trained on code (e.g., Codex, CodeGen), have radically changed this landscape. These models can now generate functional software components from natural language descriptions—blurring the boundary between technical and non-technical contributors. This capability marks a significant leap from procedural or visual programming to intention-driven development.
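To see what intention-driven development looks like in practice, here is a minimal sketch using the OpenAI Python client as one concrete example. The model name and the prompt are illustrative assumptions, and any code-capable LLM could be substituted.

```python
# A minimal sketch of intention-driven development: a natural-language
# description goes in, a functional software component comes out.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

spec = (
    "Write a Python function that takes a list of invoice dicts with "
    "'amount' and 'currency' keys and returns totals per currency."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any code-capable model works here
    messages=[{"role": "user", "content": spec}],
)

print(response.choices[0].message.content)  # generated, not hand-written
```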

Vibe coding

The emerging concept of “vibe coding”, popularized in early 2025 by AI researcher Andrej Karpathy, exemplifies this shift: humans guide powerful AI models—mostly using natural language—to generate, debug, and refine code. Instead of writing lines of code, users express their desired outcome in natural language, and AI systems translate these “vibes” into functional applications.

The recent hype stems from an explosion in AI tooling: recent advances in LLM-powered coding agents—such as Cursor Composer, JetBrains, and GitHub Copilot—have made it possible to create entire applications by talking to the AI, rather than manually coding every feature. Startups and industry are quickly buying into this new trend. Prominent tech industry figures and accelerators like Y Combinator have publicly declared vibe coding “the new dominant way to code,” with 25% of startups reporting more than 95% of their codebase generated by AI.

Platforms like Replit saw astonishing growth (from $10M to $100M ARR in six months), signaling massive adoption and excitement in both consumer and business spaces. Around 75% of Replit users never write a single line of code. The CEO of Robinhood recently declared that nearly 100% of their developers use AI code editors, and around 50% of new production code originates from AI.

On the enterprise side, AWS has entered the ring with Kiro, an agentic IDE in preview that breaks natural-language prompts into structured blueprints, enforces steering policies, and auto-generates verification tests—effectively embedding production-grade rigour directly into vibe-driven flows. Meanwhile, Replit, Visa, Vanguard, and Choice Hotels are already piloting vibe-centric systems, reporting 40% faster UI development cycles and enhanced collaboration between engineers and non-technical stakeholders.

Figure: Vibe coding maturity cycle

The messiness of vibe coding

The market for vibe/AI code generators was valued at US$4.5 billion in 2023 and is projected to grow tenfold by 2030. In aggregate, companies from Windsurf to Lovable and GitHub are already worth more than US$50 billion. But while vibe coding promises broader participation in software creation, adoption varies by maturity. Some companies treat vibe coding as an innovation engine; others warn that hallucinations and package-misnaming errors—coined “slopsquatting”—pose severe risks.

In fact, as seen in the early deployments of LLMs, LLM-generated code can inherit biases, security vulnerabilities, and inefficiencies from the training data. Users often lack visibility into how code is generated, making transparency and explainability key challenges—especially in high-stakes or regulated environments.

There is also a looming risk of vendor dependency and AI system lock-in, as organizations integrate proprietary generative AI tools into their workflows. This necessitates robust governance frameworks, including technical review protocols, domain-specific risk classifications, and phased adoption strategies. For instance, internal applications with low business risk can serve as testbeds before expanding to customer-facing or mission-critical systems.

Christensen’s theory of disruption

But disruption rarely begins with elegance. It starts with functional access, then layers on reliability. Vibe coding mirrors this: ugly at the start, but full of potential for those who see the upgrade path from inferiority to disruption, as elegantly described by Clay Christensen.

In the recent past, GUIs were seen as slow, cluttered alternatives to the command line — until they unlocked mass adoption. APIs began as fragile connectors, only to become the scaffolding of platforms and microservices. Cloud computing started as an ops headache — then ushered in a DevOps revolution. Promoters like Microsoft, Apple, Amazon, and Google were critical to scaling these interfaces beyond their early mess. Similarly, today’s big cloud players and AI-first startups are the primary promoters of vibe and agentic coding. Their infrastructure, standard-setting, and distribution power will be decisive in whether the chaos resolves into dominance.

The five conditions for vibe coding to evolve into disruption are slowly being met

  1. Determinism and auditability: Prompt logs, version control, and reproducibility must mature. Without these, vibe remains chaos. Leading platforms such as GitHub Copilot and Cursor now incorporate prompt logs and code history features, and are rolling out reproducibility safeguards (a minimal sketch of such a log follows this list).
  2. Agentic refactoring & testing: Tools like Qodo and Devin must become reliable backstops that ensure code correctness. Reports highlight a “burgeoning ecosystem” of agentic tools (e.g., Replit, Codeium, and the agentic capabilities in Cursor) acting as automated reviewers and testers. While reliability is not perfect, dependency on these agents for maintaining codebase correctness is increasing rapidly, especially among tech-first startups and digital agencies.
  3. Agent orchestration IDEs: IDEs like Cursor and Kiro must become full orchestration hubs — blending prompt history, test logs, and repo memory. The IDE landscape is already shifting: developers are flocking to environments (Cursor, Replit, Kiro) that integrate prompt histories, live testing, repo memory, and agent “marketplaces.” Cursor, for example, saw its user base grow to over 1 million—with one-third paying—largely due to its orchestration features.
  4. Traceability & governance: Integration with CI/CD pipelines and audit systems is essential for enterprise scale. Uptake is strongest in startups and digital natives, but more conservative sectors (health, finance) are already piloting integrations with CI/CD and audit pipelines.
  5. Swarm agents: Specialist bots must handle packaging, performance, security, and compliance automatically.
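As flagged in condition 1, here is a minimal sketch of what determinism and auditability imply in practice: every generation event is logged with enough metadata to reproduce and audit it. The field names and the JSONL storage format are hypothetical, not any vendor's actual API.

```python
# A minimal audit log for AI code generation: one reproducible record per
# event. Field names and storage format are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def log_generation(prompt: str, model: str, params: dict, output: str,
                   logfile: str = "prompt_log.jsonl") -> str:
    """Append one auditable record per AI code-generation event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "params": params,  # temperature, seed, etc., for reproducibility
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["output_sha256"]  # pin this hash in code review

digest = log_generation(
    prompt="Refactor the billing module to use Decimal",
    model="gpt-4o",  # illustrative model name
    params={"temperature": 0.0, "seed": 42},
    output="def bill(...): ...",  # stand-in for the generated code
)
```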

These conditions echo the preconditions of earlier disruptions such as the GUI and the cloud: a layer of abstraction must normalize chaos, professionalize interfaces, and absorb complexity. Cloud matured into the structured IaaS/PaaS/SaaS stack, creating standardized infrastructure and service layers for deploying, managing, and consuming software. The GUI matured into standardized interaction paradigms and design metaphors.

Strategic lens

The advent of vibe coding and agentic AI represents more than a technical advance—it signals a foundational shift in how software is created, maintained, and scaled.

The advent of vibe coding and agentic AI represents more than a technical advance—it signals a foundational shift in how software is created, maintained, and scaled. Much like previous technology revolutions, it begins with awkward prototypes, imperfect outputs, and early adopters experimenting at the fringes. But as Clayton Christensen taught, disruption never starts with elegance. It starts where incumbents are least likely to pay attention, then climbs the value chain as speed, reliability, and reach improve. What begins today as an experimental tool for developers may soon become the operating system of modern business.

CEOs should view this moment through that exact lens. The disruption is already underway. Code generation by AI is doubling annually. Major tech firms like GitHub, Amazon, and Replit are accelerating orchestration tooling that allows teams to go from idea to deployment through prompts and agent collaboration—cutting development time by up to 40%. The entry point may appear technical, but its impact is strategic: it redefines the source of competitive advantage. Software is no longer a scarce asset. It is becoming abundant, cheap to produce, and increasingly commoditized. In this new environment, value shifts from owning the code to orchestrating how it’s created, validated, and governed. As such, your organization must transition from being a software builder to either a producer of orchestration systems or a sophisticated consumer of AI-produced software.

If your firm is in the business of software, this is the time to reposition. AI is collapsing the cost and time required to build features, apps, and services. Traditional R&D pipelines will become less defensible unless they’re coupled with governance architectures, agent supervision, and trust layers that make AI output reliable, auditable, and tailored to critical domains. The product is no longer the code—it is the environment that allows teams to safely and rapidly produce code that works. Investment must shift toward agentic orchestration, intelligent testing, and cross-agent collaboration environments. You’re not just competing on features anymore—you’re competing on how fast, how safely, and how intelligently your agents can adapt code to user needs.

If your firm is not in the software business, this shift is no less consequential. Historically, creating software required skilled developers, long cycles, and large budgets. But with vibe coding, natural language prompts can drive application development. The bottleneck of coding disappears, replaced by a new frontier: selecting the right problems, framing them effectively, and validating outcomes. In this world, your organization can become a software producer without traditional engineering teams—or remain a consumer of standardized tools, at risk of being locked out of differentiated capabilities. The choice is strategic. To lead, firms must cultivate internal orchestration capacity—not to write code per se, but to shape how AI agents do. Teams in product, legal, finance, or logistics may become creators by guiding agents. But without a coherent governance model and cross-functional prompt fluency, this capability will remain fragmented and underutilized.

Beyond the technical and strategic shifts lies a deeper organizational question: who in your company needs to be empowered to build, orchestrate, and validate digital systems? As AI takes over the “how,” success will depend on those who best define the “what” and the “why.” Companies must cultivate prompt designers, orchestration leads, and agent supervisors across all business units. This change affects training, recruitment, incentives, and culture. It also alters who holds influence: the ability to ask the right question and steer agentic systems will matter more than knowledge of syntax or frameworks. If previously only software engineers built software, now every domain expert can become a producer—provided they are equipped with the right interfaces and governance layers.

This transformation will not happen all at once. But the signals are clear. Firms like Robinhood report nearly 100% of developers using AI editors, and over half of new production code being AI-generated. Replit’s user base is now majority non-coder. Cursor and Kiro are embedding orchestration, traceability, and agent marketplaces directly into development environments. The tooling is catching up. The maturity conditions Christensen warned about—reliability, auditability, governance, ecosystem—are being met faster than most expect.

What matters now is whether your organization learns to orchestrate AI agents effectively, govern their outputs safely, and redeploy software capabilities in every domain—not just in IT.

As CEO, your role is not to manage code. It is to recognize that code has ceased to be the strategic asset it once was. What matters now is whether your organization learns to orchestrate AI agents effectively, govern their outputs safely, and redeploy software capabilities in every domain—not just in IT. This is not the time to wait. It is the time to learn, pilot, and reposition. Because soon, software won’t be something your company builds or buys. It will be something it behaves through—constantly adapted, agentically maintained, and invisibly embedded in how you operate.

The question is no longer whether vibe and agentic coding will arrive. It is: when they do, will your company be leading the orchestration—or watching from the edge, unable to control what it consumes? AI is full of surprises, but one thing is sure – the disruption is about to start.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

The post How Vibe and Agentic Coding Signal the Next (not only Software) Revolution appeared first on The European Business Review.

The Real AI Battle: OS is the New Prize https://www.europeanbusinessreview.com/the-real-ai-battle-os-is-the-new-prize/ https://www.europeanbusinessreview.com/the-real-ai-battle-os-is-the-new-prize/#respond Fri, 18 Jul 2025 07:39:20 +0000 https://www.europeanbusinessreview.com/?p=232631 By Jacques Bughin The third wave of AI is shifting focus from generative outputs to agentic orchestration. This article explains how the real competitive edge lies in building AI operating […]

By Jacques Bughin

The third wave of AI is shifting focus from generative outputs to agentic orchestration. This article explains how the real competitive edge lies in building AI operating systems that coordinate autonomous agents, workflows, and tools. Control of this orchestration layer will determine which platforms lead the future of enterprise AI.

We are entering the third wave of artificial intelligence. The first wave was predictive, driven by pattern recognition and analytics: while confined to data science analysts and those mastering ML techniques, it proved that AI can deliver strong value. The second, generative AI, dazzled many of us with its ability to produce human-like text, code, and imagery. And, in hindsight, its killer app is how it could make coding and software development a near-commodity.

But the third wave, now gathering dominance, is agentic AI: systems that don’t just generate, but act autonomously, plan, and reflect. What makes them transformative is not just intelligence, but orchestration: the ability to coordinate goals, tools, workflows, and learning loops across complex digital environments.

While agentic systems are still early in development, agentic augmentation has leapfrogged raw model upgrades in less than a year.

While agentic systems are still early in development, agentic augmentation has leapfrogged raw model upgrades in less than a year. Agentic systems are being deployed in production, showing strong executional advantages. Moveworks, acquired by ServiceNow in 2025 for $2.9B, uses agents to resolve IT tickets, HR queries, and access requests. In one municipal deployment, over 3,000 hours of human work were offloaded. Their agentic enterprise search tool reduced lookup time by over one hour per employee per day. Success rates exceed 95% for core workflows. Other agents, like OpenAI’s “Operator” and “Deep Research,” show real-world task execution: browsing websites, booking meetings, summarizing reports, and citing live sources.

In this context, and as seen in the past for any new platform (PC, mobile, and others), the new war is about who will control the AI-native OS (operating system) — not merely the models, but the substrate that governs agency. Orchestration — both technically and strategically — is the defining axis of this battle.

Agentic AI needs an OS

Unlike predictive or generative AI, agentic systems pursue goals with minimal supervision. They break objectives into steps, select and invoke tools, execute actions, and reflect on outcomes. These systems are inherently asynchronous, interleaved, multi-agent, and multi-modal. As such, they require a dedicated orchestration layer (a minimal sketch follows the list below) to manage:

  1. Workflow memory and context
  2. Tool access and chaining
  3. Secure action execution
  4. Role separation between agents
  5. Compliance and guardrails
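The sketch below shows, in deliberately simplified form, what such a layer does with points 1 to 5: it keeps workflow memory, mediates tool access, enforces role separation, and applies guardrails before any action executes. All class, tool, and method names are hypothetical.

```python
# A deliberately simplified orchestration layer. All names are hypothetical.
from typing import Callable

class Orchestrator:
    def __init__(self, tools: dict[str, Callable[[str], str]],
                 allowed: set[str]):
        self.tools = tools       # tool access and chaining (point 2)
        self.allowed = allowed   # role separation and guardrails (points 4, 5)
        self.memory: list[tuple[str, str]] = []  # workflow memory (point 1)

    def execute(self, tool_name: str, payload: str) -> str:
        if tool_name not in self.allowed:  # secure action execution (point 3)
            raise PermissionError(f"{tool_name} is not permitted")
        result = self.tools[tool_name](payload)
        self.memory.append((tool_name, result))  # context for later steps
        return result

# Two stub tools standing in for real enterprise systems.
orch = Orchestrator(
    tools={"crm_update": lambda p: f"CRM updated: {p}",
           "send_mail": lambda p: f"mail sent: {p}"},
    allowed={"crm_update"},  # mail is outside this agent's role
)
print(orch.execute("crm_update", "renewal date = 2026-01-01"))
```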

This orchestration requirement distinguishes the AI OS from traditional operating systems. It is more akin to a cloud-native runtime or a distributed coordination protocol. Without this layer, agents are brittle, untrustworthy, or confined to isolated domains. With it, they become scalable, enterprise-grade workhorses.

Orchestration also underpins economic control. Drawing on platform economics, the AI OS acts as a multi-sided platform: coordinating agents (supply), enterprise users (demand), data and workflow providers (complementors), and infrastructure (foundation). Whoever controls the OS controls pricing, access, monetization, and feedback loops.

The race is on

In this new universe, LLMs may become commoditized. The differentiation would lie in the orchestration stack—how agents are chained, how tools are invoked, and how memory is structured. The OS owner will set the standards, extract value, and control distribution across the ecosystem. The OS also might become the trusted intermediary for sensitive data, workflows, and compliance.

The race for the (agentic) AI platform is on. It includes Microsoft: with Copilot+ now integrated into Windows and M365, Microsoft controls the agentic layer for over 400 million enterprise users, and its Graph API and Semantic Kernel are orchestration initiatives. Through ChatGPT Team/Enterprise and the Operator browser, OpenAI is building a Chromium-based OS for agents, complete with memory, an app store, and execution capability, while Perplexity’s Comet is building a vertical agentic stack focused on search and information tasks.

ServiceNow is the most mature enterprise agentic platform, embedding AI OS logic across ITSM workflows. And while not an OS vendor, Nvidia’s control of the orchestration runtime (GPUs + microservices) makes it foundational. Nvidia launched NeMo Retriever, NIM microservices, and the AI Workbench SDK to let developers orchestrate agents across devices and clouds. Google Vertex AI Extensions now support tool use, agent scheduling, and dynamic memory.

Call Out from the Roads: Driverless Cars as a Case of OS Deployment

To illustrate the stakes and logic of orchestration, consider the battle for control in autonomous vehicles. The race is not just about whose AI sees pedestrians better. It’s about who orchestrates decision-making, safety layers, navigation, and compliance in real time. Tesla, Waymo, and NVIDIA aren’t just shipping hardware—they are building autonomous operating systems like Tesla’s Dojo or Waymo’s Chauffeur. These AI OS layers integrate real-time sensor fusion, traffic-aware planning, edge computing, and failover strategies. They turn intelligence into coordinated, accountable action.

That orchestration logic is what enterprise AI now faces. Like AVs, agents in business must integrate signals, invoke APIs, adapt in real time, and remain compliant. Whoever owns this AI OS stack—whether Microsoft with Copilot, OpenAI’s Operator, or Perplexity’s vertical browser stack—controls not just intelligence but execution. The AV sector teaches that the biggest prize is not perception, but control of the logic layer where risk, data, and performance converge. The same is unfolding in the AI software stack.

Sorting out winners from losers

The battle for Agentic AI OS dominance must take into account more than firm assets. This should include the effects of:

  1. Open source: From Hugging Face to Warmwind OS, a wave of open-source and cloud-native platforms is challenging closed ecosystems, promising transparency and customization. LangChain’s 90k+ GitHub stars (as of July 2025), its 500+ available plugins, and its developer traction indicate that open composability is outpacing closed agent stacks in early adoption.
  2. Geopolitics: With Huawei’s HarmonyOS Next, China is making a strategic move to build a homegrown, sovereign digital ecosystem, while Europe pushes for open frameworks and trusted execution environments. Gaia-X, the EU AI Act, and TEEs (Trusted Execution Environments) show a strong preference for auditable, privacy-preserving OS layers. Sovereign LLM efforts (Mistral, Aleph Alpha, Luminous) are all building toward agent readiness.
  3. Legal and ethical battles: As AI OSs become central, legal disputes (such as OpenAI’s recent trademark and IP controversies) and regulatory scrutiny are likely to intensify.

Lessons from the past

If history is any guide—especially the PC, mobile, and cloud eras—it teaches a few lessons.

Control of the orchestration layer consistently decides platform dominance: During the PC era, Microsoft didn’t just build an OS—it orchestrated an entire ecosystem. Windows provided standardized APIs, development tools, backward compatibility, and distribution agreements with hardware partners. This made it the default platform for developers, pushing network effects that strengthened its position. The mobile war of the 2000s saw iOS and Android reach dominance through platform orchestration. Apple iOS used vertical integration—hardware, OS, App Store, and SDKs—to guarantee performance, security, and quality. Android, by contrast, leveraged openness and broad adoption across device manufacturers (Samsung, Huawei, Xiaomi). Platform scholarship emphasizes this dual model: “open enough” to scale, “controlled enough” to monetize—exactly as platforms like Uber or Airbnb balance openness with control. During the cloud era, cloud platforms moved orchestration into the data center. Amazon Web Services, Microsoft Azure, and Google Cloud converged on offering not only virtual machines but also dev toolchains, APIs, and serverless runtimes, creating a “programmable infrastructure” in its own right.

Ecosystems—not features—drive platform lock-in. The most successful platforms created massive developer flywheels. Apple did this through its iOS SDK and App Store, offering developers monetization, distribution, and quality control in a single stack. Android scaled globally by opening its OS to device manufacturers while anchoring control through Google Play Services. These ecosystems created positive feedback loops: more developers meant more apps, which attracted more users, which drew in even more developers. In the age of agentic AI, SDKs for building agents, marketplaces for composable tools, and developer-facing orchestration libraries will be the new engine rooms of platform lock-in.

The agentic AI OS must offer composability and extensibility while securing monetization layers such as memory state management, compliance APIs, and runtime governance.

Openness and control must be carefully balanced. Platforms that were too open often failed to capture value, while those too closed risked stagnation. Android succeeded because it was open enough to drive adoption by OEMs, yet retained control through proprietary services and APIs. Kubernetes, an open orchestration framework, became dominant only after managed services by cloud vendors (like GKE or EKS) wrapped it in enterprise-grade compliance and support. The agentic AI OS must offer composability and extensibility while securing monetization layers such as memory state management, compliance APIs, and runtime governance.

Open source shapes the stack but rarely captures the profit. The rise of Linux, PyTorch, and TensorFlow illustrates how open frameworks often define developer standards. However, value capture shifted to those who offered hosted infrastructure, tooling, and compliance. Red Hat, AWS, and Microsoft Azure monetized these ecosystems more effectively than the communities that created them. In the agentic AI context, LangChain, Hugging Face, and LangGraph are winning early adoption, but unless they wrap their offerings in enterprise-grade orchestration and compliance, they risk becoming commoditized.

Regulation is both a constraint and an accelerator of platform consolidation. Past platform giants faced significant regulatory hurdles: Microsoft endured antitrust litigation, Facebook faced data privacy crackdowns, and the GDPR redefined platform responsibility in Europe. In the agentic AI era, regulation will go even further. The EU AI Act classifies agentic systems as “high-risk,” requiring explainability, override mechanisms, and auditable memory. Compliance will not be optional. The platforms that embed safety, audit, and governance into their orchestration layers will gain both trust and a competitive moat.

The futures

These five strategic learnings also lead to three important tensions for the future of the agentic AI OS. The first tension is between centralization and decentralization. Orchestration layers tend to centralize over time due to network effects, but open source and geopolitical forces may resist this. The second tension lies in regulatory burden: platforms may need to slow down or redesign systems to satisfy compliance requirements, or they may embed governance so effectively that regulation becomes a moat. The third tension is the modularity of agentic systems: if agents are portable and composable, they may run across platforms; if not, vertical stacks may emerge.

Crossing those tension lines, three scenarios emerge.

  1. “Power of the few”. Orchestration is bundled into enterprise stacks by a few dominant players. Here, Microsoft and Nvidia extend their lead. Microsoft integrates agent orchestration into every Office workflow, into Azure, and into its developer tools. Nvidia supplies the runtime SDKs, model deployment frameworks, and infrastructure to host the entire lifecycle. This scenario is marked by tight vertical integration and high lock-in. Innovation continues, but within controlled environments. It is the natural continuation of what worked in the cloud and productivity eras.
  2. “Open federations.” Open-source tools and frameworks like LangGraph, Hugging Face, and LangChain converge to form a standard for portable agents and composable toolchains. Agentic orchestration becomes like Kubernetes: modular, standardized, and wrapped in enterprise offerings by vendors. This scenario reflects the success of Linux. Here, no one controls everything, but value accrues to those who provide the best wrappers, managed services, or domain-specific platforms.
  3. “Localized sovereignty”. This is a future defined by political fragmentation and regional regulatory divergence. In this world, China advances its closed HarmonyOS Next stack; Europe mandates sovereign AI stacks that comply with Gaia-X, local data residency, and explainability laws. The US becomes a dual-track ecosystem with Big Tech controlling commercial agent systems and a parallel open-source movement serving developers.

Making sense of those futures

While the outcomes across these scenarios may vary dramatically, they offer a few important constants for CEOs.

The first lesson for CEOs is to move fast. The battle of AI OS means that big players are doubling down on innovations regarding Agentic AI. As a consequence, Agentic AI is evolving at a rapid pace, with new tools and features rolling out every few months. Companies that start early will build up valuable experience and know-how, making it much harder for slower competitors to catch up. Early adoption means your team learns how to automate, adapt, and improve processes, while waiting means you’ll need to spend more time and resources just to close the gap.

The second lesson is that control lies in how you organize and manage work, not in the tasks themselves. The real power is in setting up the flow of work—deciding how agents interact, what data they use, and who checks their work. If you let outside vendors control these rules, you risk losing oversight and flexibility. By designing your own rules and keeping a grip on how agents work together, you can switch tools more easily, protect your data, and stay in charge of your business processes.

The real advantage comes from making agents that can be reused and improved, encouraging teams to share what works, and tracking how much of your work is being handled by agents.

The third lesson is that the new way to compete is by using agents well, not just by having the best technology. Companies with libraries of reusable agent workflows can solve problems faster and adapt to change more easily. Each time you use an agent, you learn and improve, building up a base of knowledge that keeps you ahead. The real advantage comes from making agents that can be reused and improved, encouraging teams to share what works, and tracking how much of your work is being handled by agents.

In this new environment, you should review your current tools to see if they help you control workflows or if they take control away from you. Assign someone to lead your efforts in building and improving agent workflows. Start with small tests, learn quickly, and expand what works. Set clear rules for managing and checking agents, and regularly measure your progress.

Agentic AI is not just another tool—it’s a new way to run your business. Move fast, keep control, and focus on building flexible, reusable systems to stay ahead.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

The post The Real AI Battle: OS is the New Prize appeared first on The European Business Review.

Google or Gemini? A Framework for Navigating Agentic AI Confusion https://www.europeanbusinessreview.com/google-or-gemini-a-framework-for-navigating-agentic-ai-confusion/ https://www.europeanbusinessreview.com/google-or-gemini-a-framework-for-navigating-agentic-ai-confusion/#respond Tue, 27 May 2025 09:58:56 +0000 https://www.europeanbusinessreview.com/?p=230026 By Jacques Bughin  Agentic AI is transforming the digital economy, replacing traditional search with intelligent execution. In this article, Dr. J Bughin presents a five-step framework that challenges binary narratives and […]

By Jacques Bughin 

Agentic AI is transforming the digital economy, replacing traditional search with intelligent execution. In this article, Dr. J Bughin presents a five-step framework that challenges binary narratives and reveals how businesses can adapt strategically. The future of monetization depends on navigating this shift with clarity, precision, and economic insight.

Summary

In the age of agentic AI, where artificial intelligences no longer simply respond but execute actions, traditional business models – such as Google’s – are being profoundly challenged. Managers need to find a clear, unambiguous answer to this question. This paper proposes a five-step analytical framework for understanding this rupture and deriving well-founded strategic decisions from it. Applied to the case of Google, this process reveals that:

  1. The current model is based on monetizing traffic via the SERP; however, it is structurally fragile. If agents bypass the SERP, disintermediate search and, above all, reduce the value of the click, they could undermine the whole system.
  2. On the demand side, agents promise a growing search market by improving conversion rates and making previously ignored queries monetizable. This attracts new entrants and allows Google to cannibalize itself.
  3. Competition is evolving: according to game theory, a new equilibrium should quickly emerge between Gemini (Google) and the integrated advertising of LLMs, and at a pace faster than that driven by the adoption of agentic AI.
  4. The value will shift to execution. Google must therefore become an orchestrator of agents, not just a search engine.
  5. An interesting game balance is not an all-out battle, but a differentiation model in which agents focus on industries (verticalization) while Google becomes more integrated, from Google Cloud and Chrome to Google Workspace and Gmail.

This framework makes it possible to move beyond binary reactions and approach transformation in a structured, rigorous, and economically sound way.

Introduction

The rise of large language models (LLMs) and agentic AI has catalyzed a wave of speculation about the end of search as we know it.

While popular discourse is dominated by two opposing conjectures (“Google will be wiped out” versus “LLMs are not profitable”), the future is more complex and requires a structured analysis of how search has been monetized, as well as a theoretical assessment of the evolution of search and monetization in the context of the evolution of AI.

Using models based on the microeconomics of search, as well as on the type of strategic interaction (static and repeated games) between Google and attackers such as OpenAI, Perplexity and others, we try to offer a more powerful framework that not only explains the transformation underway but also debunks simplistic narratives (Table 1). Managers may find this framework important when they are looking for more solid answers about what to do in the AI transformation.

The Five-Step Framework

Table 1: Navigating AI confusion

Step 1: Understand the business model. Action: analyze the current revenue model of the dominant incumbents. Objective: establish the economic base and structural dependencies.
Step 2: Evaluate the actual disturbances. Action: identify how attackers modify the monetization channels. Objective: determine the depth and extent of the disturbance.
Step 3: Understand the economics of demand. Action: understand how the new players change demand. Objective: assess the market’s future, up or down.
Step 4: Add supply-side economics. Action: understand the logic of the new equilibrium, dynamically. Objective: assess the intensity, stability and type of the new competition.
Step 5: Rebuild with the aggregates. Action: analyze supply and demand together. Objective: derive the new outcomes and deduce the actions, key assets and playing fields.

1. Understanding the business model

Google’s sponsored links, which manifest themselves primarily through search ads, are the cornerstone of its revenue model. In 2024, Google’s advertising revenues reached around $240 billion, with search ads contributing around $175 billion, or 57% of the company’s total revenues.

While these figures underline the significant value of sponsored links within the Google ecosystem, Google has other revenue streams, such as Google Cloud, which will benefit from the deployment of AI. In addition, advertising revenue is driven by three fundamental levers: the immense volume of global search queries, the subset of high-intent queries that trigger paid ad auctions, and the Google platform’s control over the search engine results page (SERP). By dictating page structure and bidding rules, Google effectively monetizes attention and intent on a massive scale.

However, this dominance comes with inherent vulnerabilities. Firstly, the vast majority of queries – around 80% – are not commercially monetizable. They respond to needs for information, navigation, or exploration. Secondly, SERPs themselves are saturated and increasingly commoditized, with search engine manipulation diluting the value. Thirdly, the user must always act outside the Google interface to accomplish tasks, creating friction in the user experience. These limitations constitute the structural exposure of Google’s traditional model.

2. Assessing real disturbances

  • The impact of AI: GenAI, but above all, agentic AI

LLMs clearly change the structure of search by reducing the need for links (direct answers) and reducing navigation (multi-click paths become a single prompt). With LLMs, over 60% of queries are now informative or intent-driven, which is ideal for AI-generated answers. Users interact with summaries and don’t click on links, reducing click volume for Google.

The other danger is the collapse of traditional ranking logic, as the concept of “#1 ranking” is replaced by being quoted, summarized, or cited by LLMs. The implication is that the ranking value that increased cost-per-click disappears and pricing power is reduced.

Although initially limited to synthesis and dialogue, the integration of agentic AI considerably broadens the scope of disruption. With the emergence of single-agent systems, a single AI entity can autonomously perform discrete tasks – for example, booking a restaurant, sending an e-mail, or initiating the drafting of a document – without human intervention. Multi-agent systems go further: they break down complex workflows into sub-tasks, coordinate APIs, and execute a sequence of decisions on the user’s behalf. In both cases, the agent not only interprets the user’s intention but acts on it, transforming traditional requests into executable commands.

On a large scale, this transition is transforming the very nature of digital search. It replaces the advertising-funded discovery layer with agent-based orchestration, increasing the potential economic value of each query, but also reshaping who controls that value and how it is monetized.

  • Advertising value chain

This evolution is turning the structural microeconomics of search on its head, by orienting it towards the delivery of results. This shift replaces the monetization of navigation (selling advertising space along the way) with the monetization of execution (capturing value at the result level).

But the rise of agentic AI isn’t limited to disrupting search. It’s putting systematic pressure on Google’s broader monetization engine – including display advertising, YouTube content monetization, and even, eventually, a large number of B2B SaaS intermediaries. In display advertising, AI agents bypass banner placement logic by performing tasks directly from the user’s prompt or workflow. In enterprise contexts, agentic AI increasingly disintermediates SaaS categories for which Google (via Workspace, Ads Manager, or Analytics) has monetized coordination or knowledge. When agents plan campaigns, manage CRM entries, or optimize user journeys, they bypass several layers of existing SaaS infrastructure. This creates downward pressure on margins and squeezes the space for traditional marketing and advertising technology.

Ultimately, Google and its AI competitors are converging on a new high-value node: the orchestration layer. This is where decisions are made, actions are initiated, and margins can be captured. Whether powered by Gemini, OpenAI, or specific vertical agents, this layer holds the key to monetization in the age of agentic AI. What search was for information, orchestration is becoming for execution: the critical control point in digital value chains.

3. Understanding the “demand” side of change

An important unknown is how agentic AI will affect the profit pool. However, microeconomics tells us that the pool will be larger due to three factors. Firstly, agentic execution improves the quality and relevance of interactions. Unlike the current model, where most ads are shown to users who are not yet ready to convert, agentic ads can be integrated directly into high-intent workflows. Secondly, agents reduce transaction friction. By shortening the funnel, they accelerate the passage to action. This reduces waste in sales channels and increases the results attributable to advertising. This supply-side efficiency encourages brands to bid higher for access to agent-driven engagement.

Thirdly, the long tail of non-monetized queries – previously low-intent, informative searches – can now be captured and transformed into valuable transactions.

These effects are, in principle, multiplicative at the level of return on (search) advertising spend, so no single effect needs to be large: a smaller but combined impact is the real crux of whether the market will grow. As these three effects are likely to combine with agentic AI, it is reasonable to think that the market will be bigger, not smaller, as the technology evolves towards agentic AI.
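A quick calculation makes the multiplicative point visible. The individual lift figures below are assumptions chosen purely for illustration; the takeaway is that three modest effects compound into a materially larger market.

```python
# Illustrative arithmetic: three modest, independent lifts compound into a
# much larger ad market. Lift figures are assumptions, not estimates.
conversion_lift = 1.15  # better-targeted, high-intent placement
friction_lift = 1.10    # shorter funnel, more attributable outcomes
long_tail_lift = 1.20   # previously unmonetized queries now captured

combined = conversion_lift * friction_lift * long_tail_lift
print(f"Combined market effect: +{combined - 1:.0%}")  # -> +52%
```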

4. Add the supply side of change

  • Why LLMs will establish advertising as an additional source of revenue

If agentic AI increases the value per query, it threatens to cannibalize the very mechanisms that fund today’s search giants. For Google, the main concern is that agentic systems will bypass the SERP entirely, cutting off its advertising supply chain. Gemini, Google’s counter-offensive, seeks to preserve monetization while adapting the interface to a query-driven future.

On the other hand, players like OpenAI and Perplexity face an entirely different challenge: most of their users are free. OpenAI, for example, is said to have over 100 million weekly active users, but less than 5% pay for ChatGPT Plus. To cover the high costs of LLM inference and GPU-intensive infrastructure, these platforms need to monetize the remaining 95% of users.

The strategic logic behind LLM advertising monetization is therefore simple but unavoidable. First, inference costs at scale require offsetting cash flows. Secondly, user payment models are reaching a ceiling – most users won’t pay for general-purpose chat. Thirdly, verticals such as procurement, local services, and SaaS recommendations are rich in intent and ripe for monetized orchestration.

  • Game-theoretic perspectives: Modeling competition between LLM and Google advertising

    • Pure strategy equilibrium (Nash solution)

When several suppliers compete, it is important to know whether it is possible to categorize the type of competition that is likely to occur. Here, the tools of game theory, which examine the payoffs to each player based on the movements of the others, are uniquely valuable in assessing possible behavior, now and in the future, based on repeated interactions.

Suppose we model the interaction between Google and LLM challengers first as a static game and then as a repeated one, with the payoffs of the static game (including LLM subscription revenue) as follows (in billions of dollars by 2030):

Table 2: Game theory payoff matrix (illustrative)[i]

(Google payoff, LLM payoff)        LLM: No ad monetization    LLM: Monetizing advertising
Google: Do nothing                 (144, 30)                  (108, 75)
Google: Reinvention by Gemini      (150, 60)                  (161, 44.5)

The payoff matrix (Table 2) shows that reinventing itself through Gemini is a dominant strategy for Google, and that LLMs have an incentive to engage in advertising for at least some of Google’s choices. The central idea from game theory is the emergence of a stable equilibrium (the Nash solution) in which the players’ strategies converge on LLM-mediated advertising, and the total market has grown.
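A minimal best-response check makes the structure of the one-shot game explicit. The payoffs below are the illustrative figures from Table 2 and can be varied to test the sensitivity of the outcome; note that this static check does not capture the repeated-game dynamics discussed next, under which the market itself grows and the payoffs shift.

```python
# Best-response check for the 2x2 game in Table 2. Payoffs are the
# article's illustrative (Google, LLM) figures, given as inputs to vary.
payoffs = {  # (google_move, llm_move) -> (google_payoff, llm_payoff)
    ("do_nothing", "no_ads"): (144.0, 30.0),
    ("do_nothing", "ads"): (108.0, 75.0),
    ("gemini", "no_ads"): (150.0, 60.0),
    ("gemini", "ads"): (161.0, 44.5),
}
google_moves, llm_moves = ["do_nothing", "gemini"], ["no_ads", "ads"]

for llm_move in llm_moves:  # Google's best response to each LLM move
    best = max(google_moves, key=lambda g: payoffs[(g, llm_move)][0])
    print(f"Google's best response to LLM '{llm_move}': {best}")

for g_move in google_moves:  # LLM's best response to each Google move
    best = max(llm_moves, key=lambda a: payoffs[(g_move, a)][1])
    print(f"LLM's best response to Google '{g_move}': {best}")
```

Under these static figures, Gemini is Google's best response whatever the LLMs do, while advertising is the LLMs' best response only when Google stands still; it is the repeated game below, with its growing market and shifting payoffs, that drives the convergence on agent-mediated advertising.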

  • Mixed-strategy equilibrium (the repeated game)

These results apply only to the one-shot game. Let us assume a more realistic setting, in which there is uncertainty about the profitability and development of agentic AI, and in which the interactions between Google and the LLMs are repeated over 2024-2030. At this level, the dynamic changes: initially, LLMs stay away from advertising monetization, even as they experiment and gain the trust of users. Gemini is also partially deployed, but not in head-on confrontation. As the capabilities of LLMs improve, advertising enters their ecosystems. Google, faced with strong erosion, accelerates the Gemini deployment and integrates the new advertising logic into AI agent flows. In the end, both parties compete in the field of agent-based monetization.

This type of game is known as a mixed-strategy game, in which the players randomize across several strategies to test their best position and, of course, to hide their initial intentions (Table 3). This uncertainty eventually disappears, and the game converges towards the equilibrium shown in Table 2.

Table 3: Game frame evolution

  1. Mixed-strategy phase (2024-2026): Google’s dominant play: with 60-80% probability, deploy Gemini (to reinvent itself while avoiding total cannibalization of margins); with 20-40% probability, delay Gemini (observe user habits, avoid overreaction). The LLMs’ dominant play: with 40-70% probability, monetize advertising (capture initial value in verticals like travel); with 30-60% probability, grow their footprint instead (build trust).
  2. Iteration and feedback (2026-2028): update beliefs (Bayesian learning on payoff structures) and refine strategies.
  3. Convergence towards pure strategies (2028-2030): players commit to pure strategies, with Google fully integrating Gemini into search.
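The dynamics of Table 3’s phases can be mimicked with a simple learning simulation. The sketch below runs fictitious play – each player best-responds to the opponent’s empirical mix of past moves – on the illustrative Table 2 payoffs. Both the payoffs and the learning rule are assumptions, and where the simulation settles depends entirely on them.

```python
# Fictitious play on the illustrative Table 2 payoffs: each round, each
# player best-responds to the opponent's observed strategy frequencies.
# Purely illustrative -- payoffs and learning rule are assumptions.
payoff = [
    [(144, 30), (108, 75)],   # Google: Do nothing
    [(150, 60), (161, 44.5)], # Google: Reinvent via Gemini
]

g_counts = [1, 1]  # prior: Google observed playing each strategy once
l_counts = [1, 1]  # prior: LLM observed playing each strategy once

for _ in range(500):
    # Google best-responds to the LLM's empirical frequencies
    l_freq = [c / sum(l_counts) for c in l_counts]
    g_values = [sum(l_freq[l] * payoff[g][l][0] for l in range(2)) for g in range(2)]
    g_move = g_values.index(max(g_values))

    # LLM best-responds to Google's empirical frequencies
    g_freq = [c / sum(g_counts) for c in g_counts]
    l_values = [sum(g_freq[g] * payoff[g][l][1] for g in range(2)) for l in range(2)]
    l_move = l_values.index(max(l_values))

    g_counts[g_move] += 1
    l_counts[l_move] += 1

print("Google empirical mix (Do nothing, Gemini):",
      [round(c / sum(g_counts), 2) for c in g_counts])
print("LLM empirical mix (No ads, Ads):",
      [round(c / sum(l_counts), 2) for c in l_counts])
```

Early rounds mix strategies (the experimentation phase), after which play locks into pure strategies – the same qualitative path as Table 3, even though the exact end point is driven by the illustrative numbers.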

This evolutionary path, derived from game theory, is not innocuous:

  1. First, it means that rational logic should lead to an equilibrium in which the new business model becomes dominant for each player.
  2. The model is evolutionary not because Google has difficulty executing, but because a mixed strategy is more strategically optimal. The mixed phase creates space for experimentation without open conflict. Each party sends strategic signals (e.g., Gemini integration in Android but not on the search home page; OpenAI testing sponsored suggestions in Pro mode only).
  3. Even if the game is evolutionary, it is fast: from the outset, there is already more than a 50% chance that Google launches into LLMs, and a marginally lower – but far from zero – probability that LLMs launch into advertising. Within 3-4 years, the strategies lead to a reversal of the dominant business model, even while agentic AI’s penetration of advertising and search is not yet dominant – 30 to 40% of customers use it.

This dynamic is the result of a positive feedback loop. Increased usage generates better feedback on the user interface and improves agent quality. Better agent quality reinforces trust and attracts more commercial queries. And as more resources become available, LLMs invest more in model optimization.
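The flywheel can be made concrete with a toy difference-equation sketch; every coefficient below is invented purely to illustrate the compounding, not estimated from data.

```python
# Toy sketch of the usage -> feedback -> quality -> trust -> usage loop.
# Coefficients are invented for illustration only.
usage, quality = 1.0, 1.0
for year in range(2024, 2031):
    quality += 0.10 * usage          # more usage -> better feedback -> better agents
    usage *= 1.0 + 0.05 * quality    # better agents -> more trust -> more queries
    print(year, f"usage={usage:.2f}", f"quality={quality:.2f}")
```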

This loop has other implications: it favors whoever is first to own a closed-loop infrastructure – so we can expect Google to integrate Gemini into Android, Chrome, Maps and Gmail. New LLM attackers such as OpenAI or Perplexity could then choose to secure their position as agents in the key workflows of other players competing with Google (such as Salesforce’s Slack, Microsoft Teams, or Zoom), thus creating multiple distinct ecosystems in which the absence of aggressive competition favors the extraction of ROI from customers.

5. Bringing together all the elements of microeconomics

5.1. The metamorphosis of online search

From this perspective, the future of online search is not one of extinction or a struggle for survival. It’s about a metamorphosis where the revenue model will evolve from advertising around discovery to monetization around execution.

Google’s dominance depends on its ability to maintain trust, share relevance, and user flow. LLMs, meanwhile, are set to evolve from high-cost, low-revenue utilities to sustainable platforms. This will require a diversification of revenue sources from subscription to advertising, but advertising that is integrated, not imposed.

5.2. News of Google’s death is greatly exaggerated – but Google needs a boost

Google’s destiny is not binary – death or survival – but it is clear that the business model is set to shift towards agent-based execution, and that this dynamic will force Google to reinvent itself. The success of this reinvention will depend on several interdependent factors.

The demand effect shows that the transition can be profitable. The loop effect clearly shows that Google must also remain a major player if it is to make a successful transition. The loss of more than 25% of classic search users, who turn instead to LLMs (outside Gemini) for their searches, means that it may be difficult for Google to maintain its price levels (CPC). Gemini’s reinvention path is also about achieving a leadership position, but primarily in the search-agent (not LLM) arena. Google’s current platforms will thus remain its best assets, while Gemini becomes the journey Google must execute well to secure a rosy future.

Final Thought

Ultimately, the application of the above approach can be summarized in a single table (Table 4).

Table 4: Summary of results

Step – Applied to search and Google

1. Understand the business income: Google earns around $175 billion a year from search ads (57% of total revenue); monetization = query volume × CTR × CPC; only 10-20% of queries are monetized; power lies in the platform’s control over SERPs and bidding rules.
2. Evaluate the actual disruption: LLMs respond directly, bypassing links and SERPs; agentic AI performs tasks, eliminating navigation steps; traditional CPC logic weakens and ranking power erodes; platforms like OpenAI/Perplexity intercept high-intent queries.
3. Understand the economics of demand: agentic AI improves performance through better targeting and task integration; long-tail queries become monetizable; funnel friction is reduced, capturing higher intent; result: the market expands through improved advertising results.
4. Add supply-side economics: LLMs must monetize to cover inference costs (the subscription ceiling is reached); game theory shows that LLMs adopt advertising while Google launches Gemini; competition shifts to agent orchestration (Gemini, Copilot, etc.); result: coexistence in multi-agent ecosystems, no monopoly.
5. Aggregate reconstruction: execution becomes the new monetization layer; Google needs to integrate deeply (Gemini in Android, Chrome, Gmail); the new value lies in agent control, task execution and orchestration infrastructure.

Note: the speed of the business-model changeover is rapid – faster than customer adoption – because competition takes place at the margin, to secure growth.
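Step 1’s monetization identity (revenue = query volume × monetized share × CTR × CPC) can be sanity-checked with a toy calculation; every input below is a hypothetical placeholder, chosen only so the output lands near the roughly $175 billion cited in the table.

```python
# Toy sanity check of Step 1: revenue = queries x monetized share x CTR x CPC.
# All inputs are hypothetical placeholders, not disclosed Google figures.
queries_per_year = 3.3e12   # hypothetical annual query volume
monetized_share = 0.15      # midpoint of the 10-20% of queries carrying ads
ctr = 0.05                  # hypothetical click-through rate on ads
cpc = 7.0                   # hypothetical average cost per click, $

revenue = queries_per_year * monetized_share * ctr * cpc
print(f"Implied ad revenue: ${revenue / 1e9:.0f}bn per year")  # ~ $173bn
```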

Although this synthesis may seem simple, its “tour de force” lies in the fact that it is the result of a comprehensive and detailed microeconomic analysis. In times of disruptive technological transformation – such as the rise of agentic AI – success doesn’t depend on intuition alone, and even less on fear. In times of disruption, the first task is to make sense of the change and build the knowledge needed for a clear and persistent path through it. The time has come to establish a discipline aimed at building a solid foundation of strategic data. Business leaders and policy-makers need to rigorously model technological trajectories, changes in user behavior, and competitive dynamics. This five-step framework should enable more decisive and credible action.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

References
Acharya, D., Kuppan, K., & Divya, B. (2025). Agentic AI: Autonomous Intelligence for Complex Goals – A Comprehensive Survey. IEEE Access, 13, 18912-18936.
Bornet, P., Wirtz, J., Davenport, T. H., De Cremer, D., Evergreen, B., Fersht, P., … & Mullakara, N. (2025). Agentic Artificial Intelligence: Harnessing AI Agents to Reinvent Business, Work and Life. Irreplaceable Publishing.
Bughin, J., & Remy, P. (2025). Guiding Agentic AI. The European Business Review, April.
Hosseini, S., & Seilani, H. (2025). The Role of Agentic AI in Shaping a Smart Future: A Systematic Review. Array, 100399.
Li, M., Nguyen, B., & Yu, X. (2016). Competition vs. collaboration in the generation and adoption of a sequence of new technologies: A game theory approach. Technology Analysis & Strategic Management, 28(3), 348-379.
Yuskevich, I., Smirnova, K., Vingerhoeds, R., & Golkar, A. (2021). Model-based approaches for technology planning and roadmapping: Technology forecasting and game-theoretic modeling. Technological Forecasting and Social Change, 168, 120761. core.ac.uk/download/pdf/478947916.pdf
[i] The model is illustrative and based on the following hypotheses, anchored in case studies: (a) 20-30% monetization of the long tail of keywords, thanks to direct execution by AI; (b) a 50% reduction in execution time and a 50-100% increase in conversion; (c) the value created split 50/50 between customers and the executing platform. The LLM either owns its orchestration or pays 20% of revenues to other distributors. Agent penetration is in the order of 30-40% for marketing and sales. Game values are based on the highest density obtained from Monte Carlo simulation over the key variable intervals. Figures are in real terms, excluding inflation.
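A minimal sketch of the Monte Carlo exercise this note describes: sample the stated intervals and inspect the resulting distribution. Only the intervals come from the hypotheses above; the combining formula and the $100bn base are assumptions added for illustration.

```python
# Sample the footnote's intervals and look at the distribution of an
# illustrative incremental-value figure. The combining formula and the
# base value are assumptions; only the intervals come from the note.
import random

def draw():
    longtail = random.uniform(0.20, 0.30)     # long-tail monetization
    uplift = random.uniform(0.50, 1.00)       # conversion uplift
    penetration = random.uniform(0.30, 0.40)  # agent penetration
    split = 0.50                              # 50/50 value split with customers
    base = 100.0                              # hypothetical addressable value, $bn
    return base * penetration * (longtail + uplift) * split

samples = sorted(draw() for _ in range(100_000))
print("Median:", round(samples[len(samples) // 2], 1), "$bn")
print("P10-P90:", round(samples[10_000], 1), "-", round(samples[90_000], 1), "$bn")
```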

The post Google or Gemini? A Framework for Navigating Agentic AI Confusion appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/google-or-gemini-a-framework-for-navigating-agentic-ai-confusion/feed/ 0
Reclaiming SaaS Value in the Agentic AI Era https://www.europeanbusinessreview.com/reclaiming-saas-value-in-the-agentic-ai-era/ https://www.europeanbusinessreview.com/reclaiming-saas-value-in-the-agentic-ai-era/#respond Tue, 27 May 2025 09:04:51 +0000 https://www.europeanbusinessreview.com/?p=230020 By Jacques Bughin  As agentic AI takes over task execution, the foundations of traditional SaaS begin to crumble. Interfaces lose value, orchestration gains power, and control over workflows becomes the […]

The post Reclaiming SaaS Value in the Agentic AI Era appeared first on The European Business Review.

]]>
By Jacques Bughin 

As agentic AI takes over task execution, the foundations of traditional SaaS begin to crumble. Interfaces lose value, orchestration gains power, and control over workflows becomes the new battleground. Dr. Jacques Bughin reveals how companies must pivot fast or risk fading into irrelevance in this rapidly shifting landscape.

1. The End of SaaS as We Know It

The traditional Software as a Service (SaaS) model, characterized by user interfaces, per-seat pricing, and feature sets, is undergoing a significant transformation. The catalyst for this change is the emergence of agentic AI—autonomous digital workers capable of executing tasks on behalf of users. By 2030, it’s projected that 30% of current B2B SaaS revenue is at risk from orchestration-driven compression. Users will delegate tasks to agents, potentially bypassing multiple software interfaces. This shift changes the pricing paradigm from “software access” to “outcome fulfillment”. SaaS companies, especially those offering horizontal, mid-layer, or UI-heavy solutions, face significant disruption. Products like dashboards, CRMs, schedulers, or project tools without vertical integration risk obsolescence within 2–4 years.

Who Owns the Stack Now – and How It Will Change

In the agentic era, value accrues to those controlling the stack: infrastructure providers like Azure, AWS, and GCP; model developers such as OpenAI, Claude, Gemini, and Mistral; orchestrators including Copilot, Gemini 1.5, Dust, and LangGraph; and finally, the SaaS layer, which is increasingly reduced to an API endpoint. Control over the user interface no longer ensures monetization. Agents are UX-agnostic; what matters is control over intent, memory, context, and orchestration flow. The entity that owns the agent effectively owns the user.

Traditional SaaS captures value through accounts, permissions, UIs, and reports. Agentic AI, however, derives value from workflow dominance, automation logic, and autonomous action. This represents a shift from user-driven interfaces to autonomous decision-making architectures. Instead of users navigating multiple tools, agents pull data, query APIs, make decisions, and inform users, rendering entire layers of the traditional stack obsolete.
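A minimal sketch of that inversion, with hypothetical stand-ins for two SaaS APIs: the agent pulls data, applies decision logic, and informs the user, with no dashboard in between.

```python
# Minimal sketch: an agent replaces manual navigation across SaaS UIs.
# Both "APIs" and the decision rule are hypothetical stand-ins.
def crm_api():        # stand-in for a SaaS API endpoint
    return {"open_deals": 12, "stalled_deals": 5}

def calendar_api():   # stand-in for a second SaaS tool
    return {"free_slots": ["Tue 10:00", "Wed 14:00"]}

def agent_run():
    crm = crm_api()
    cal = calendar_api()
    # decision logic replaces the user clicking through dashboards
    if crm["stalled_deals"] > 3:
        slot = cal["free_slots"][0]
        return f"Booked pipeline review at {slot} ({crm['stalled_deals']} stalled deals)."
    return "No action needed."

print(agent_run())
```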

Moreover, the transition is accelerating:

  • Adobe has integrated agentic AI into its suite, enhancing user experiences across its platforms. Salesforce has expanded its family of large action models, designed to predict and perform next actions, powering AI agents across its ecosystem. ServiceNow has introduced autonomous AI agents capable of executing complex tasks, differentiating from traditional generative AI copilots. Startups like Sana (SE), Otherside AI, and Deepop are building orchestrators as core products. Additionally, tools like AutoGen, CrewAI, and LangGraph are rapidly maturing, facilitating agent deployment
  • Major tech companies are aligning around task flow control. Microsoft is transforming M365 into an orchestration hub via Copilot. Google utilizes Gemini to defend search and expand SaaS offerings. Amazon connects agents to transactions through Bedrock and Alexa. Meta develops social/consumer agents to protect advertising revenue. NVIDIA and AMD drive demand for compute through orchestration. Each stands to gain from cloud revenue, LLM licensing, chip sales, or agent UX control.

In parallel, the cost per 1,000 tokens for LLMs has plummeted from ~$10 (GPT-4, 2023) to less than $0.01. Task orchestration costs are decreasing by 80–90% annually, while the number of addressable agentic workflows grows exponentially. By 2026, many SaaS workflows will be more cost-effective and efficient when executed by agentic systems rather than traditional applications.
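A back-of-the-envelope sketch of what that cost collapse means for a single workflow, using the two per-1,000-token prices cited above; the workflow size is a hypothetical placeholder.

```python
# Cost collapse per workflow at the two cited token prices.
# The workflow size is a hypothetical placeholder.
tokens_per_workflow = 50_000                        # hypothetical agentic workflow

cost_2023 = tokens_per_workflow / 1_000 * 10.00     # ~$10 per 1,000 tokens (GPT-4, 2023)
cost_now = tokens_per_workflow / 1_000 * 0.01       # ~$0.01 per 1,000 tokens today

print(f"2023 cost per workflow: ${cost_2023:,.2f}")        # $500.00
print(f"Current cost per workflow: ${cost_now:,.4f}")      # $0.5000
print(f"Reduction factor: {cost_2023 / cost_now:,.0f}x")   # 1,000x
```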

2. The SaaS future is different

Cutting Costs Isn’t a Strategy – You Need to Move

Some SaaS firms may respond by reducing R&D, narrowing product scope, or halting innovation. While this might preserve short-term EBITDA, it jeopardizes long-term viability. Agents will outperform pared-down tools, leading to user attrition and increased churn. Eventually, these firms risk becoming mere wrappers around others’ orchestration logic.

Reality bites: Klarna, a fintech leader, has undertaken a significant transformation by reducing its reliance on over 1,200 SaaS tools, opting for internally developed AI-powered solutions. This strategic shift led to annual savings exceeding $10 million and streamlined operations. Notably, Klarna severed ties with major SaaS providers like Salesforce and Workday, replacing them with internal systems built on AI infrastructure, including OpenAI’s technologies. The company’s AI assistant, powered by OpenAI, managed two-thirds of customer service chats in its first month, effectively performing the work of 700 full-time agents. This move not only improved efficiency but also enhanced customer satisfaction, with errands resolved in less than 2 minutes compared to 11 minutes previously.

Embracing Strategic Pivots

In the dynamic landscape of SaaS and AI, the ability to pivot strategically is crucial. Many successful companies have undergone significant pivots in their business models to adapt to market changes and achieve growth. For instance, Twitter originated as a podcast service called Odeo before pivoting to a microblogging platform. Similarly, Shopify transitioned from an online snowboarding equipment store to a comprehensive e-commerce platform. Flickr began as an online multiplayer game before becoming a photo-sharing site. Pinterest started as a mobile shopping app named Tote before evolving into a visual discovery platform. These pivots often involve redefining core business assumptions and engaging new resources, technologies, and leadership. Such examples underscore the importance of flexibility and responsiveness in business strategy, especially in the face of technological advancements like agentic AI.

Pivots for SaaS include:

  1. Build Embedded Agents: Integrate an agentic UX within your product. Employ intent-based UI, context memory, and internal Retrieval-Augmented Generation (RAG) – see the sketch after this list.
  2. Attack via Vertical Orchestration: Control agents across specific vertical domains (e.g., construction, legal, compliance). Examples include Procore, ServiceNow, and Toast.
  3. Own the Model Logic: While not necessarily owning the LLM itself, manage your RAG, fine-tuning, and abstraction layers. Utilize tools like CoreWeave, Mistral, and LangGraph for efficient development.
  4. Pick the Best Ecosystems to Build In: key ecosystems for development include infrastructure providers like CoreWeave, Lambda, and RunPod; model developers such as Mistral and LLaMA 3; orchestration tools like AutoGen, CrewAI, and LangGraph; SaaS innovators including Notion, Intercom, and Deepop; and VC/PE firms like EQT, Point Nine, and Index.
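As referenced in pivot 1, here is a minimal sketch of an embedded agent with intent-based routing and naive context memory. The intents, keyword router and canned replies are all hypothetical placeholders; a real product would route with an LLM or a trained classifier.

```python
# Minimal embedded agent: intent-based routing plus naive context memory.
# Intents, router and replies are hypothetical placeholders.
memory = []  # naive context memory: past (intent, request) pairs

def classify_intent(request: str) -> str:
    # toy keyword router; a real product would use an LLM or classifier
    if "invoice" in request.lower():
        return "billing"
    if "report" in request.lower():
        return "analytics"
    return "general"

def handle(request: str) -> str:
    intent = classify_intent(request)
    memory.append((intent, request))   # remember the interaction
    handlers = {
        "billing": lambda: "Drafted invoice workflow.",
        "analytics": lambda: "Generated report from internal data.",
        "general": lambda: "Escalated to human support.",
    }
    return handlers[intent]()

print(handle("Please send the March invoice"))
print(handle("I need last quarter's report"))
print("Context memory:", memory)
```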

Are you ready to embrace the pivot?

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

The post Reclaiming SaaS Value in the Agentic AI Era appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/reclaiming-saas-value-in-the-agentic-ai-era/feed/ 0
Guiding Agentic AI https://www.europeanbusinessreview.com/guiding-agentic-ai/ https://www.europeanbusinessreview.com/guiding-agentic-ai/#respond Tue, 08 Apr 2025 15:02:26 +0000 https://www.europeanbusinessreview.com/?p=225876 By Jacques Bughin and Philipp Remy Agentic AI is a new class of AI systems that combine goal-driven autonomy and represents the third wave of AI, moving beyond prediction and […]

The post Guiding Agentic AI appeared first on The European Business Review.

]]>
By Jacques Bughin and Philipp Remy

Agentic AI is a new class of AI systems that combine goal-driven autonomy and represents the third wave of AI, moving beyond prediction and generation to execution. While early use cases already show value generation in many sectors, challenges concerning its reliability, governance and accuracy remain. Enterprise adoption is on the rise, but the way forward requires careful design, clear use cases and human oversight (humans in the loop). Business leaders must act with realism and urgency, preparing systems, teams and strategies for AI-powered collaboration.

1. Introduction

Agentic AI refers to intelligent systems capable of pursuing goals through multi-step reasoning, dynamic decision-making and the use of tools. Unlike earlier forms of AI that required constant human intervention, these agents operate semi-autonomously or fully autonomously, relying on APIs and web interfaces to take real action on behalf of users and workers.

Agentic AI uses large language models as its core reasoning engine, but wraps them in agents capable of planning, acting, using tools, remembering context and learning from results. The result is not just increased automation, it’s a new paradigm: an intelligent digital workforce that executes, adapts and collaborates in ways hitherto reserved for humans.
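A minimal sketch of that wrapper pattern: an LLM as the reasoning engine inside a plan, act, observe, remember loop. The llm() function and the tool are stubs; no real model or API is called.

```python
# Minimal agent loop: plan -> act through a tool -> observe -> remember.
# llm() and the tool are stubs standing in for a real model and real APIs.
def llm(prompt: str) -> str:
    # stub: a real system would call a language model here
    return "done" if "found" in prompt else "search"

TOOLS = {"search": lambda: "found 3 supplier quotes"}
memory = []

goal = "compare supplier quotes"
for _ in range(5):                        # bounded loop as a safety rail
    action = llm(f"Goal: {goal}. Memory: {memory}. Next step?")
    if action == "done" or action not in TOOLS:
        break
    observation = TOOLS[action]()         # act through a tool
    memory.append((action, observation))  # remember the result

print("Trace:", memory)
```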

As the next frontier of the AI-enabled enterprise, the key question is no longer what’s possible, but what works – and how quickly it will become the norm.

2. Five Unknowns…to Know

Fact 1: Agentic AI is the third wave of AI

Much confusion still surrounds the notion of agentic AI (AAI). A useful way to understand it is to examine its relationship with previous waves of AI.

Predictive AI (PAI) – the first wave – focused on automating routine tasks such as forecasting, classification and pattern detection from structured or semi-structured data. Think remote sensing, machine translation and speech recognition.

Generative AI (GAI) – the second wave, exemplified by OpenAI – introduced the ability to create new outputs such as text, images or code. It revolutionized content generation, but remained dependent on user prompts and limited to its training data.

Agentic AIs don’t just assist human action; they enhance it by taking on tasks that require high involvement and multi-tasking without constant human intervention.

Today, agentic AI (AAI) – the third wave – goes a step further. It is based on large language models (LLMs), but extends their use by integrating them into autonomous agents that reason, act and adapt. These systems can make decisions, perform tasks and learn continuously, giving them added value beyond ad hoc content generation. Thus, one of the factors driving the design of agentic AI is the need for tools able to operate in richer but more complex real-world conditions, with plenty of room for maneuver. Agentic AIs don’t just assist human action; they enhance it by taking on tasks that require high involvement and multi-tasking without constant human intervention.

Fact 2: Agentic AI – the tip of the iceberg

Agentic AI has long been the stuff of dreams. Like the first generation of AI, which relied on the cloud and machine-learning algorithms, agentic AI is not a “breakthrough ex nihilo”, but rather the result of six enabling forces.

First, LLMs such as GPT-4, Claude 3, Mistral and others can now reason, understand and adapt across domains, representing a significant advance over fragile rule-based systems. Second, we now have scalable computing infrastructure, with access to AI-optimized GPUs (NVIDIA H100s), edge computing and containerized deployment frameworks that enable high-performance real-time agents. Third, open-source frameworks such as LangChain and commercial SaaS APIs enable agents to interact with software as actors rather than observers.

Fourth, with retrieval-augmented generation (RAG), vector stores and context windows of over 100,000 tokens, agents can reflect, learn and remember over time. Finally, natural language interfaces offer code-free delegation and can be used easily by end-users and non-technical teams alike.
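A minimal sketch of the RAG pattern with a toy vector store: bag-of-words vectors stand in for real embeddings, and the generation step is plain string assembly. Everything here is illustrative.

```python
# Toy RAG: embed documents, retrieve the most similar one by cosine
# similarity, and ground the answer in it. Bag-of-words vectors stand in
# for real embeddings; the "generation" step is just string assembly.
from collections import Counter
from math import sqrt

DOCS = [
    "Refund policy: customers may return goods within 30 days.",
    "Shipping: standard delivery takes 3-5 business days.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

store = [(embed(d), d) for d in DOCS]  # the "vector store"

def answer(question: str) -> str:
    q = embed(question)
    best = max(store, key=lambda item: cosine(q, item[0]))[1]  # retrieve top-1
    return f"Context: {best}\nAnswer grounded in the retrieved context."

print(answer("how many days for a refund"))
```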

Fact 3: Combining six unique capabilities

If agentic AI is the result of a set of converging technical forces, it also has a set of capabilities that totally differentiate it from previous AI approaches.

One of the hallmarks of agentic AI is autonomy; whereas generative systems wait for instructions, agentic agents can operate independently once given a goal. For example, Adept’s ACT-1 system allows users to enter high-level instructions such as “Plan sales meeting and prepare presentation”, and the agent autonomously takes care of the details: checking calendars, creating slides and sending invitations, all without additional intervention. This is in stark contrast to traditional tools, which require users to orchestrate each step manually.

Agentic AI is also proactive. Unlike generative systems that respond to prompts without awareness of broader goals, agentic agents pursue objectives. They initiate tasks, monitor progress and correct course if necessary. Shopify’s Sidekick, for example, can take into account a merchant’s overall goal – such as “increase my store’s conversion rate” – and proactively suggest price changes, rewrite product descriptions and launch A/B tests. In the consumer field, Amazon’s Returns Assistant demonstrates similar capabilities by accessing a customer’s order history, determining the eligibility of a return and proposing personalized solutions in real time, all without human escalation.

Crucially, agentic systems make complex decisions. They evaluate trade-offs, prioritize actions and adapt according to the results. In sales and marketing, platforms like Relevance AI deploy agents that don’t just send emails, but actively generate and test lead generation strategies by combining data on ideal customer profiles, historical campaign performance and channel effectiveness. Similarly, Composer – formerly MindStudio – uses agents that orchestrate multi-channel campaigns, optimize budget allocations and iterate on messages to boost performance, making real-time decisions at every stage.

Learning and adaptation are another characteristic feature. While generative AI can improve through retraining, agentic systems are designed to constantly evolve from their own experiences.

Learning and adaptation are another characteristic feature. While generative AI can improve through retraining, agentic systems are designed to constantly evolve from their own experiences. Spotter.ai, for example, improves its interactions with customer support over time by learning from feedback and results. The agent aligns itself more closely with a brand’s tone, escalates fewer tickets and anticipates customer needs based on previous cases. Cognosys goes a step further by enabling agents to “learn” an organization’s internal documentation and tools, then autonomously integrate new employees by generating guides and answering domain-specific questions without starting from scratch every time.

Tool integration is another key capability. Agentic AI is not limited to conversation or content generation – it acts in the real world. Thanks to APIs, databases, form submissions and software interfaces, agents can perform end-to-end workflows. AutoGPT and Superagent, for example, can be connected to platforms such as CRMs, messaging services and payment processors to perform real-world business processes such as greeting customers, updating records, sending invoices and tracking transactions – autonomously and on a massive scale. This autonomy is not limited to single-step actions. Agentic systems reason and plan in several stages, adapting as they get closer to a goal. Let’s take the example of an agent built with LangChain, which is asked to analyze the prices charged by competitors. It might start by researching websites, then compare prices with internal margins, recommend markdowns, and finally draft an internal memo for the sales team. Each of these steps depends on the results of the previous one, requiring continuous reasoning and adjustment.
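The dependency between steps can be sketched as a simple pipeline; this illustrates the pattern, not the actual LangChain API, and all prices and thresholds are invented placeholders.

```python
# Multi-step pattern: each stage consumes the previous stage's output.
# Data and thresholds are invented placeholders, not real market figures.
def research_competitor_prices():
    return {"widget": 9.50}           # stand-in for web research

def compare_with_margins(prices):
    our_cost = {"widget": 7.00}       # hypothetical internal data
    return {k: prices[k] - our_cost[k] for k in prices}

def recommend(margins):
    return [k for k, m in margins.items() if m < 3.00]  # thin-margin items

def draft_memo(items):
    return f"Memo to sales: consider markdowns on {', '.join(items)}."

# each stage depends on the result of the previous one
prices = research_competitor_prices()
margins = compare_with_margins(prices)
memo = draft_memo(recommend(margins))
print(memo)
```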

Memory is another cornerstone. Agentic agents retain knowledge of previous interactions, user preferences and the relevant organizational context. This enables them to deliver consistent, context-sensitive performance over time. Tools such as Personal AI store and recall information about how a person writes, thinks and communicates, which allows the agent to become more personalized with every interaction.

It is the combination of these capabilities that makes agentic AI an active collaborator in business processes; autonomy and multi-stage planning enable agents to take on entire tasks (such as qualifying potential customers or reprogramming logistics) that would otherwise require many hours of human labor. Proactivity and complex reasoning enable agents not only to respond to problems, but also to predict and optimize them (avoiding lost sales or supply disruptions). Continuous learning ensures that these agents don’t stagnate – a marketing agent gets smarter with each campaign, a supply chain agent adapts to new market data – delivering cumulative returns over time. Tool integration enables agents to execute decisions directly within enterprise systems, reducing the time between understanding and action. Memory gives agents the rich context needed for personalization and consistency, turning interactions into relationships (customers feel that AI “knows” them, employees find that AI remembers corporate knowledge). Last but not least, human supervision closes the loop by providing the governance that allows all stakeholders to feel comfortable deploying these powerful autonomous systems in mission-critical operations.

Fact 4: Agentic AI is still in its infancy, but is already delivering value.

There are already a large number of use cases for agentic AI technologies across sectors, from healthcare to industry. In healthcare, an agentic AI-based monitoring system can independently identify deterioration in a patient’s condition through regular monitoring of vital signs. In Germany, MediTech AI’s diagnostic system actively highlights areas of concern and suggests diagnoses, resulting in a 30% improvement in diagnostic accuracy and a 50% reduction in diagnosis time.

In the financial sector, WorldQuant exploits agentic AI, and more specifically reinforcement learning, to create trading algorithms capable of adapting in real time to changing market conditions. These algorithms analyze large amounts of financial data, identify patterns and execute trades autonomously. AI is designed to continuously learn from its successes and failures, adjusting its strategies to optimize performance. A key element is that AI identifies and exploits market inefficiencies, rather than simply reacting to pre-programmed rules. It works like an agent making decisions.

In sales, Salesforce’s Agentforce 2.0 has introduced agent skills such as sales development and sales coaching, as well as lead development and personal buying. The sales coaching agent, for example, uses AI and CRM data to analyze sales pitches and role-play sessions, providing personalized feedback to help sales reps close deals more effectively. Other AI agents help with marketing campaigns, merchant management and service planning. These additions enable companies to nurture leads, participate in prospecting calls, provide feedback and tailor skills to various use cases, including field service work. Skills can be customized to meet specific business needs

In the field of enterprise automation, players such as UiPath have deployed AI-powered process agents that manage workflows in finance, procurement and IT. In a factory, agentic AI calculates the expected time to machine failure and remaining useful life, as well as when to perform maintenance activities to maximize operational availability. This system exploits data from a cluster of machines to proactively predict future breakdowns and optimize resource distribution, which in turn boosts production as part of the company’s automation.

In the contact center sector, Cresta AI has implemented AI coaching agents that operate in real time, suggesting optimal responses, highlighting best next actions and automatically evaluating call quality. Fortune 500 customers using Cresta have reported a 25-30% reduction in handling time and a 20% increase in conversion rates. The AI-based QA system now replaces over 80% of manual QA work.  Another example is cable media company Comcast, which has implemented a search-augmented AI agent called AMA (“Ask Me Anything”) to support its customer service teams. The agent provides real-time answers to questions posed by agents during customer calls, reducing the average handling time of support conversations involving complex issues by more than 10%. The company also reported millions of dollars in annual savings and over 80% positive feedback from human agents using the system.

In the field of human resources and recruitment, Paradox’s “Olivia” agent automates high-volume recruitment for companies such as McDonald’s and Unilever. The agent selects candidates, schedules interviews and answers applicants’ questions. Case studies show that Olivia reduces the time spent by the recruiter on each candidate by over 90%, and cuts the time to hire by a factor of four, while maintaining a candidate satisfaction rate of over 95%.

Agentic cybersecurity systems such as Exaforce, Legion or Aptori are also built to make human experts more efficient by automating certain aspects of their work. They can detect attacks autonomously and produce reports, improving system security and reducing the workload of human experts by up to 90%. Agentic AI can also help software development teams detect vulnerabilities in new code. It can run tests and communicate directly with developers to explain how to solve a problem – something that human engineers have to do manually today.

Fact 5: Rome wasn’t built in a day.

These case studies highlight one constant: agentic AI is not a fad, and can deliver operational benefits. But behind the scenes, AI agents need to work with great precision to go mainstream.

With regard to the above cases, we can deduce that agentic AI works well with (read: requires) defined task domains, clean structured data and tool integration. Otherwise, the benefits of agentic AI remain speculative.

If we look closely at the experience of AI agents today, we have to admit that success rates for agentic AI are still low. Even the most successful AI agents achieve success rates as low as 24% in benchmarks involving realistic tasks such as those encountered in software engineering roles. Tasks requiring long-term planning or multi-stage execution amplify failure rates due to cumulative errors. The most successful models (e.g. Claude 3.5 Sonnet) are also the most expensive, while cheaper models such as GPT-4o achieve significantly lower success rates and require more steps to solve tasks, which is a strong indication of residual inefficiency.

Agents that rely on multiple API calls or attempt long chains of action can become slow or inconsistent, degrading the user experience.

On the other hand, agentic AI remains fragile. One of the most common problems of generative AI is hallucination, i.e. the model’s tendency to invent information when confronted with ambiguity. In agentic systems, hallucination becomes more dangerous because it can lead to actions, not just words. For example, an agent who misinterprets a vague instruction may cancel a customer’s account instead of pausing it. Worse still, AI agents seem to continue to struggle against adversarial inputs, hallucinations and cascading failures in high-risk use cases such as medical data processing or financial transactions. Another failure mode is over-autonomy. When agents are allowed to chain tool calls or operate on multiple systems without supervision, unforeseen results become more likely. Companies deploying these systems therefore need to put in place strong safeguards: action constraints, pre-approved APIs, audit logs and optional human intervention steps in the loop.
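A minimal sketch of such safeguards: an allow-list of pre-approved actions, an audit log, and a human-in-the-loop gate for risky operations. The action names and the risk rule are assumptions.

```python
# Minimal guardrails: allow-listed actions, an audit log, and a
# human-approval gate for risky operations. Names and rules are assumptions.
ALLOWED = {"pause_account", "send_reminder"}
RISKY = {"cancel_account"}
audit_log = []

def execute(action: str, approved_by_human: bool = False) -> str:
    audit_log.append(action)                 # every attempt is logged
    if action not in ALLOWED | RISKY:
        return f"BLOCKED: '{action}' is not pre-approved."
    if action in RISKY and not approved_by_human:
        return f"HELD: '{action}' awaits human approval."
    return f"EXECUTED: {action}"

print(execute("pause_account"))                          # allowed
print(execute("cancel_account"))                         # held for review
print(execute("cancel_account", approved_by_human=True)) # approved, executed
print("Audit log:", audit_log)
```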

Latency and complexity also pose problems. Agents that rely on multiple API calls or attempt long chains of action can become slow or inconsistent, degrading the user experience. Many vendors are now implementing optimization layers that prioritize speed and reliability, while others use fallback flows or escalations to avoid critical failures.

Finally, failures often occur during handovers between humans and agents, or when agents have to clarify ambiguous instructions.

The fragility of artificial intelligence agents has several consequences today.  While AI agents can handle routine tasks, they are far from ready for high-stakes or unsupervised applications. Today’s applications must also continue to rely heavily on “agentic workflows”, where AI agents operate semi-autonomously, but under close human supervision. Companies need to carefully assess where and how to deploy these systems to avoid costly mistakes.

Finally, multi-agent systems – where several AI agents collaborate seamlessly – are the new mantra of agentic AI, but they may remain largely experimental as they tend to multiply errors rather than solve them.  

3. Preparing for the Agentic AI Wave

So agentic AI works, but remains narrow, with human supervision in the loop. The way ahead is, like any new technological advance, somewhat uncertain, but some conjecture suggests that the future may develop more rapidly than previously thought.

Technology improvement curves accelerate

The first is that the technological evolution of AI agents could follow a rapid trajectory, similar to that of other transformative technologies such as autonomous cars and generative AI.

Self-driving cars faced similar early challenges: high costs, limited reliability in complex environments and regulatory hurdles. Over time, improvements in sensor technology, machine learning models and safety protocols have made autonomous vehicles more viable, even if they are not yet fully autonomous.

On the other hand, generative AI has seen rapid progress since its inception. In benchmark tests such as GLUE and MMLU, for example, GPT-3 scored 43.9% on MMLU in 2020, while Gemini 1.0 Ultra surpassed human performance with a 90% score in December 2023 – a doubling of accuracy in the space of three years. However, generative AI also shows performance degradation when applied outside its scope, a problem mirrored in AI agents.

Agentic AI also builds on the foundations of LLMs, leveraging their capabilities for autonomous decision-making, contextual understanding and continuous learning. For example, agentic AI systems use frameworks such as LangChain and LlamaIndex to connect LLMs to external tools and databases, enabling real-time data access and memory retention. This integration accelerates their ability to manage complex tasks autonomously. The research also shows that by refining LLMs for specific applications (e.g., marketing automation or supply chain management), agentic AI systems can rapidly achieve greater accuracy and efficiency in specialized tasks.

Based on the above, the most likely scenario is a pace faster than in the physical world of intelligent vehicles, and probably slightly faster than the improvement curve observed for LLMs. This suggests that in the not-too-distant future (three years or so), we can expect significant advances in reliability and cost-effectiveness, and probably the fusion of strong multi-agent collaboration capabilities.

Message to executives

Agentic AI represents a promising, albeit still evolving, technology class with a likely transformative impact on the horizon of the current strategic plan. For business leaders, the opportunity lies in experimenting with lucid realism – piloting where the value is clearest, developing organizational readiness and laying the foundations for responsible scale. We live in an age where pace and precision are important.

The general manager should therefore consider the following as soon as possible:

  1. Think about how agentic AI can change the way your organization creates and delivers value.
  2. Start exploring areas where semi-autonomous agents can generate incremental productivity gains, particularly in high-volume, repetitive workflows.
  3. Start identifying areas where workflows could be restructured to support the rise of AI. Align IT and operational teams with evolving system requirements.
  4. Set up internal AI governance, clean data orchestration and human-in-the-loop workforce capacity.
  5. Develop and prepare a vision of how your organization will work hand in glove with AI, and of the impact this will have on your HR strategy.

About the Authors

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

Philipp Remy is a Partner at Fortino Capital, a European PE and VC fund focused on B2B software companies. Philipp served on the board of Symbio, a provider of AI-driven business process management software, which was acquired by Celonis. He has an international track record in the enterprise AI software industry at leading companies such as C3.ai and Afiniti.

The post Guiding Agentic AI appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/guiding-agentic-ai/feed/ 0
Making Sense of DeepSeek https://www.europeanbusinessreview.com/making-sense-of-deepseek/ https://www.europeanbusinessreview.com/making-sense-of-deepseek/#respond Fri, 31 Jan 2025 11:35:37 +0000 https://www.europeanbusinessreview.com/?p=222223 By Jacques Bughin DeepSeek, a large language model built by China, has become both a sensation as well as a source of concern for the US AI ecosystem, causing the […]

The post Making Sense of DeepSeek appeared first on The European Business Review.

]]>
By Jacques Bughin

DeepSeek, a large language model built by China, has become both a sensation as well as a source of concern for the US AI ecosystem, causing the Nasdaq to tumble.

For many managers, DeepSeek adds struggle and turbulence to an already complex technology evolution attached to AI. It is thus rather important to eliminate the noise and sort out the facts from all the current fantasies about the emergence of DeepSeek and its true meaning and consequences for the AI revolution. Here are some crucial elements to make a more informed judgment.

DeepSeek and the AI revolution

1. LLMs are part of an ecosystem

One should always keep in mind that LLMs work on microprocessors and trained datasets. The strength of DeepSeek lies in its strategy of relying on open source to limit its own training costs, to easily access public data, and to leverage Nvidia chips despite US export restrictions.

Its reliance on open source datasets, including foundational models such as LLaMA or Falcon, has among others allowed DeepSeek to develop at a fraction of the cost of proprietary models such as OpenAI’s GPT-4.

However, DeepSeek’s strength is also its weakest link, given its reliance on the generative AI value chain and publicly available datasets. This dependency limits its ability to create a unique competitive advantage and exposes it to the risk of being overtaken by others using the same resources.

2. The new wave of LLM 2.0

The first generation of the LLM battle focused on “more is better” by scaling parameters and tokens. OpenAI’s GPT-4, for instance, uses approximately 175 billion parameters, representing a significant investment in training and computing resources. In this second iteration of LLM, the focus has shifted to quality data and verticalized, domain-specific models.

DeepSeek’s strategy of optimizing a 20-billion-parameter Mixture of Experts (MoE) architecture aligns with this shift, providing major cost efficiencies without sacrificing much in performance. This performance may look remarkable when compared with other high-profile models, such as open source LLaMa or proprietary models such as OpenAI (figure 1).
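The MoE idea can be sketched in a few lines: a gate routes each token to only the top-k experts, so the active parameter count is a fraction of the total. The sizes and random gating below are toys, not DeepSeek’s actual architecture.

```python
# Toy Mixture-of-Experts routing: only top-k experts run per token, so
# active parameters are a fraction of the total. Sizes and the random
# gate are illustrative, not DeepSeek's actual architecture.
import random

NUM_EXPERTS, TOP_K = 8, 2

def expert(i, x):
    return x * (i + 1)          # stand-in for a feed-forward expert network

def gate(x):
    # toy gate: random scores; a real model computes learned routing logits
    scores = [(random.random(), i) for i in range(NUM_EXPERTS)]
    return [i for _, i in sorted(scores, reverse=True)[:TOP_K]]

def moe_forward(x):
    chosen = gate(x)            # route the token to its top-k experts
    print(f"active experts: {chosen} ({TOP_K}/{NUM_EXPERTS} = "
          f"{TOP_K / NUM_EXPERTS:.0%} of parameters used)")
    return sum(expert(i, x) for i in chosen) / TOP_K

moe_forward(1.0)
```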

Notwithstanding some excellence, a deeper look also suggests the following:

  1. The cost per token of DeepSeek reflects its likely marginal cost, which is around 5–10 cents per 1,000 tokens, or a ten-times improvement versus OpenAI. This cost per token is, however, in the range of other open-source models.
  2. The LLM 2.0 trend, while demonstrating a better trade-off between cost and performance, does not mean that DeepSeek is an absolute performer. In terms of general knowledge, OpenAI and DeepSeek perform very similarly, but GPT-4o is better at coding tasks and outperforms DeepSeek significantly (75.9 per cent vs. 61.6 per cent) on mathematical reasoning. Finally, GPT-4o has multimodal support (69.1 per cent), while DeepSeek lacks this capability entirely, cutting it off from a large number of AI use cases based on multiple modalities, such as content generation (figure 1).

Figure 1: Performance benchmarks, DeepSeek version 3.0.


3. A low-cost LLM model limits its marketability

Note that DeepSeek may have deliberately avoided features such as multimodality and advanced mathematics for time-to-market and cost reasons. Attempting to estimate these cost and time effects, adding multimodal capabilities or improving coding and mathematical reasoning would add a multiple of four to eight times DeepSeek’s current development cost ($20m–$40m) and a significant time investment (12–24 months) (figure 2).

This will significantly offset any cost advantage for DeepSeek; meanwhile, it limits its marketability in domains where mathematical reasoning looms large (e.g. insurance and capital markets) and multimodal capabilities are a necessity (media, content generation).

 Figure 2: Adding multimodality and upgrading math / coding for DeepSeek.

Adding multimodality                           Upgrading math / coding
Component                  Cost estimate       Component                Cost estimate
Architectural redesign     $1m–$3m             Dataset acquisition      $2m–$7m
Dataset acquisition        $5m–$10m            Fine-tuning (coding)     $2m–$5m
Training compute           $2m–$5m             Fine-tuning (math)       $1m–$3m
Fine-tuning / deployment   $1m–$3m             Architectural tweaks     $1m–$3m
Evaluation frameworks      $500k–$1m
Total                      $9m–$21m            Total                    $6.5m–$19m

4. DeepSeek is one of many in the global AI race

DeepSeek is only one piece of the large puzzle in play between China and the US regarding AI dominance. The AI war already has quite a long history. Regarding global semiconductor dynamics, China is exploring alternatives in the global semiconductor value chain. By tapping into massive data pools from China, India, Africa and, potentially, Europe, China could diversify its reliance on US-based chip manufacturers. European players, such as ASML, may also see an opening to compete in this space, offering a potential shift in global semiconductor dynamics.

Regarding global semiconductor dynamics, China is exploring alternatives in the global semiconductor value chain.

In all cases, DeepSeek versus OpenAI or Nvidia is also a symptom of the AI war between China and the US. But this has two consequences. The first is that China represents the largest demand pool of semiconductors in the world – and US companies have long relied on the Chinese market for their success. China, for instance, represents 66 per cent of revenue for Qualcomm, 55 per cent for Texas Instruments, and 35 per cent for Broadcom. By comparison, China accounts for (only) 25 per cent of Nvidia’s revenue.

The second is that retaliation by China has often been based on “dumping” (pricing below marginal cost). While it is difficult to assess the marginal cost of the DeepSeek model, which relies on a clever MoE architecture that activates 20 billion parameters, or about 10 per cent of its total size, we have already noted that DeepSeek’s true cost per token is likely ten times lower than that of fully fledged models like OpenAI’s, which means that current pricing is still above marginal cost and cannot be challenged at the World Trade Organization. Still, China can only continue this as long as it relies on open-source data, models, and the work of others, as DeepSeek intends to do.

5. AI and the war for talent

The fact that Chinese engineers can match US counterparts is also well known – and skills are the main driver of LLM competition. Let’s remember Eric Schmidt’s remark a few years back: “Most Americans assume that their country’s lead in advanced technologies is unassailable. … China is already a full-spectrum peer competitor in terms of both commercial and national-security AI applications. China is not just trying to master AI; it is mastering AI.”

China’s AI giants, Baidu, Alibaba, and Tencent (BAT), have, in fact, demonstrated a head start in artificial intelligence. Despite claims of lagging behind US companies, BAT’s AI initiatives rival and often mirror those of their US counterparts, such as OpenAI and Google. Baidu’s early AI investments and projects like Apollo have redefined the auto industry, while Alibaba’s Tmall Genie competes with Amazon Echo, and Tencent’s WeChat-integrated smart devices rival Apple’s ecosystem.

BAT’s global ambitions are further hindered by limited access to foreign language data compared to US counterparts like OpenAI and Google.

However, as with DeepSeek, BAT’s performance is heavily reliant on foreign semiconductor supply chains. BAT’s global ambitions are further hindered by limited access to foreign language data compared to US counterparts like OpenAI and Google. This gap has driven BAT to pursue international partnerships to localize their technologies, but with the inherent risk of being sandboxed or outright rejected.

6. The long road ahead for LLMs

Despite impressive advancements, LLMs remain far from meeting the performance requirements of many sectors, such as financial services, precision manufacturing, and healthcare. Financial firms, for instance, require models with precise, real-time analysis capabilities, while manufacturers need robust AI for highly technical use cases.

The battle is just beginning, and models will need to be significantly upgraded to address enterprise-specific requirements beyond customer service and content generation. This opens up opportunities for proprietary players such as OpenAI, as well as emerging competitors such as Mistral AI, which aims to focus on specialised, enterprise-grade solutions.

A key insight comes from the SWE bench (software engineering): at 50 per cent verified accuracy, DeepSeek falls short for enterprise-grade software engineering tasks. This includes issues such as limited ability to generate correct, efficient and production-ready code, as well as challenges in understanding edge cases or debugging complex, real-world software problems.

Finally, even with strong benchmarks, most enterprise use cases demand factual accuracy and consistency, but LLMs often hallucinate, and DeepSeek is no exception, meaning that even with nearly 90 per cent MMLU performance, DeepSeek cannot guarantee correctness in high-stakes domains such as law or medicine. In regulated industries, hallucinated output could lead to compliance violations, legal liability, or even harm.

7. Open-source and proprietary models: A coexistence model

The open-source-versus-proprietary battle is not new in high-tech industries. Historical patterns – such as Linux in servers, ARM in mobile processors, or the GSM standard in telecom – suggest that open-source solutions will often coexist with proprietary models.

Linux fundamentally disrupted the proprietary server market but, rather than eliminating proprietary players, it expanded the size of the market.

The overall size of the server market grew significantly as Linux enabled widespread adoption of web hosting, cloud computing, and enterprise applications.

The adoption of Linux dramatically reduced costs for enterprises and start-ups, making server-based applications affordable for smaller businesses. The overall size of the server market grew significantly as Linux enabled widespread adoption of web hosting, cloud computing, and enterprise applications. In turn, proprietary systems such as Windows Server still exist and thrive in certain segments (e.g., enterprises that value integration with Microsoft’s ecosystem). Companies such as Red Hat have built profitable business models around open source Linux, providing enterprise support, consulting, and security patches.

Similarly, the adoption of GSM as an open standard in telecommunications was a critical factor in the explosive growth of the mobile market in both developed and emerging markets, significantly increasing the size of the global market.

While proprietary technologies such as CDMA initially competed with GSM, they eventually became niche players. Proprietary technologies struggled unless they offered clear advantages (e.g. better coverage or speed in certain scenarios).

As a result, the market can see explosive growth from open source, and proprietary players can still win if they segment the market and have a sufficiently attractive value proposition. Competition, in turn, leads to a significant acceleration of the market based on the attractiveness of cost reductions and quality improvements, to the benefit of all players.

DeepSeek’s reliance on open source tools such as PyTorch and its low-cost architecture are in line with this trend, but players such as OpenAI and Mistral AI are likely to focus on high-quality, domain-specific models.

8. Can Nvidia be the biggest beneficiary of AI growth, after all?

The rise of open-source AI, including DeepSeek, has significant implications for Nvidia:

  • Without open source: Nvidia would remain reliant on a few major players, with slower market expansion.
  • With open source: Open-source accelerates AI adoption, creating explosive demand for Nvidia GPUs across a broader customer base. Open-source AI fosters decentralization, driving GPU demand among smaller players, start-ups, and emerging markets.

Nvidia is already adapting to this landscape by broadening its offerings. Affordable GPUs like the Jetson Orin Nano and scalable cloud services like Nvidia AI Enterprise cater to cost-sensitive developers and enterprises. Furthermore, Nvidia actively supports open-source initiatives, including PyTorch and NVDLA, to ensure that its hardware remains central to the AI ecosystem. Emerging markets in Asia, Africa, and Latin America – where open-source adoption will likely dominate – present a significant growth opportunity for Nvidia. These moves solidify Nvidia’s role as the foundational provider of AI for the future.

Is DeepSeek changing the CEO journey?

The above has demonstrated that DeepSeek is part of LLM 2.0, leveraging the open-source boost, possibly in the context of an AI race between the US and China and of China’s strategic need to diversify its dependencies within the AI ecosystem. More broadly, it demonstrates that open source will likely be part of the outcome.

In these circumstances, what are the key action points for a CEO?

1. Accelerate AI adoption

  1. Affordable and accessible AI: The emergence of open-source models like DeepSeek and LLaMA, combined with increasingly affordable infrastructure costs, is democratizing AI. Enterprises can leverage these advancements to rapidly scale AI capabilities without incurring exorbitant expenses.
  2. Peer adoption drives network effects: The reduced barriers to entry foster widespread adoption across industries, creating momentum and enabling businesses to extract value from AI faster than ever before.
  3. Opportunity for business innovation: The balance between open-source and proprietary models allows enterprises to innovate while managing costs, ensuring that AI investments align with specific operational goals and industry needs.

Action point: Prioritize adopting AI to accelerate operational efficiencies, improve customer engagement, and gain a competitive edge in your sector. Look at open-source solutions to experiment and iterate quickly before scaling.

2. Infrastructure plays will dominate: align with key ecosystems

    1. Ecosystem centralization: Despite the rise of open source, the AI infrastructure market will likely remain concentrated in a few dominant players like Nvidia, OpenAI, Microsoft, and Google. These companies will continue to drive AI development through GPUs, APIs, and cloud services.
    2. Integration is key: Enterprises must align their strategies with the leading ecosystems to access state-of-the-art capabilities while minimizing friction. Building within these ecosystems ensures compatibility, scalability, and support for future innovations.

Action point: Integrate into dominant AI ecosystems early by partnering with key players, leveraging their APIs, cloud services, and hardware platforms. Focus on interoperability and modularity to avoid vendor lock-in while ensuring access to cutting-edge tools.

3. Mitigate risks: Navigate regulatory, political, and compliance hurdles

  • Regulatory uncertainty: Generative AI faces increasing scrutiny, with potential risks around data privacy, ethical concerns, and bias. Governments worldwide are racing to regulate AI, introducing compliance complexities for enterprises.
  • Geopolitical risks: AI is deeply intertwined with global political tensions, particularly between the US and China. Enterprises must be cautious of supply chain dependencies, export restrictions, and geopolitical shifts.
  • Hallucination and liability risks: Generative AI models often produce “hallucinated” outputs, raising risks for regulated industries like healthcare, banking, and legal services.

Action point: Build robust risk management frameworks by:

  • Establishing strong governance policies for AI use.
  • Ensuring compliance with relevant regulations (e.g., GDPR, HIPAA).
  • Diversifying supply chains to reduce dependency on any single region or vendor.
  • Designing contingency plans to respond to regulatory shifts or model failures.

Final message

AI adoption is no longer optional; it’s a necessity. The convergence of open-source accessibility, infrastructure dominance, and regulatory scrutiny creates a dynamic ecosystem. Enterprises that proactively embrace AI while managing risks will not only survive but thrive. The key is moving fast, collaborating smartly, and acting cautiously where necessary to capitalize on the opportunities AI offers, while safeguarding against its risks.

About the Author 

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC /PE firms, and serves on the board of several companies.

The post Making Sense of DeepSeek appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/making-sense-of-deepseek/feed/ 0
What Is Your Quantum (AI) Strategy?  https://www.europeanbusinessreview.com/what-is-your-quantum-ai-strategy/ https://www.europeanbusinessreview.com/what-is-your-quantum-ai-strategy/#respond Sun, 05 Jan 2025 10:40:33 +0000 https://www.europeanbusinessreview.com/?p=221925 By Jacques Bughin Introduction Our history is full of big, bold investments united by their ability to mobilize resources and inspire progress and dominance, – from the building of the […]

The post What Is Your Quantum (AI) Strategy?  appeared first on The European Business Review.

]]>
By Jacques Bughin

Introduction

Our history is full of big, bold investments united by their ability to mobilize resources and inspire progress and dominance – from the building of the pyramids in Egypt and the cathedrals across Europe in the Middle Ages, to the US NASA programme to land on the moon in the 1960s.

Humanity's tradition of bold investments is now seen in the pursuit of cutting-edge artificial intelligence and quantum computing, which have become the next frontier for economic and technological leadership. Investment in Artificial Intelligence (AI) has grown rapidly since the 2010s, with a boom due to generative AI and Large Language Models (LLMs).

While investments in Quantum Computing (QC) started a little later than those in AI, they are already comparable to historic projects such as the Apollo programme or the Human Genome Project, and promise a technology that can far surpass traditional binary computing as our generation knows it.

Governments have long taken notice: led by the classical suspects, China and the US, they may already have invested $55 billion in QC technology in recent years, with the European Union spending about 7 billion on its Quantum Flagship initiative, and China investing an estimated $15–20 billion in quantum research through initiatives such as the National Laboratory for Quantum Information Sciences in Hefei.

Multiple companies have piloted the technologies with good success, including the likes of Volkswagen, Bosch, Exxon and JP Morgan. A defining moment is the intersection of quantum with AI, opening new possibilities for industries while amplifying existing quantum computing applications. Another "aha" moment for the take-off of quantum technology is, without doubt, Google showcasing its Willow chip's ability to perform computations significantly beyond the capabilities of classical computers, with a spectacular reduction in error rates (Table 1).

Other FAANGs are not resting. Amazon Web Services, for example, offers Braket as a fully managed service platform that allows researchers and developers to design, test, and run quantum algorithms in a cloud environment; it also recently launched the Quantum Embark program, a consulting service aimed at preparing customers to integrate quantum computing into their operations. Over the years, NVIDIA has also expanded its technology solutions to include NVIDIA DGX Quantum for the development of hybrid quantum-classical computing systems.

Table 1: Quantum experiments 

Amidst this market evolution, CEOs face a critical question: what should their company's quantum strategy be? Should one dive in now, or wait for the technology to mature? How can quantum computing align with existing initiatives in artificial intelligence (AI) and other advanced technologies? The first imperative, before acting, is to come to grips with the carousel of this technology.

The Known Knowns: Quantum is Closer than you Think

1. The Quantum Leap

Quantum mechanics is related to theoretical physics, but it crossed over into engineering and computing with a couple of milestones.  

The first was a lecture by Richard Feynman in May 1981, entitled "Simulating Physics with Computers", which proposed the idea of using quantum computers to simulate many-body systems that are otherwise too difficult to manipulate with classical computers. The second milestone came with the 1985 study by David Deutsch, which formalised the concept of a quantum computer and described its potential advantage over classical computers in solving problems unrelated to physics. Finally, a third milestone came with the work of Peter Shor, who developed a quantum way of performing a Fourier transform and factoring large numbers that was exponentially more efficient than conventional computing.

These milestones helped define quantum computing as a technique beyond pure physics that exploits phenomena such as superposition, entanglement and quantum tunneling to process information in a fundamentally different way from classical computing. Most importantly, quantum computers use qubits, which can exist as superpositions of the two binary states (0 and 1) used in current computing paradigms. This unique property allows quantum systems to process large amounts of information simultaneously, with computational power growing exponentially with the number of qubits.
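For readers who want to see the mechanics, here is a minimal numerical sketch of these two properties – superposition and the exponentially growing state space – using plain Python and NumPy.

```python
# A minimal numerical sketch of the qubit idea: a qubit is a 2-component
# complex state vector, and n qubits need a 2**n-component vector, which is
# where the exponential state space comes from.
import numpy as np

# Equal superposition of |0> and |1>: measuring yields 0 or 1, each with
# probability |amplitude|^2 = 0.5.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(qubit) ** 2
print("P(0), P(1):", probs)  # [0.5, 0.5]

# An n-qubit register lives in a 2**n-dimensional space: the classical
# simulation cost explodes, which is exactly what quantum hardware exploits.
for n in (10, 30, 50):
    print(f"{n} qubits -> {2**n:,} amplitudes to track classically")
```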

2. Google and the Quantum Achilles heel 

But however intellectually powerful the idea, a natural question for quantum AI is whether such large-scale quantum computers can be successfully built.

Remember that, unlike classical bits, entangled qubits are interdependent, so the state of one qubit immediately affects others, and measuring a qubit probabilistically collapses its superposition into a single state, permanently altering the system. These properties make quantum computation challenging: superpositions tend to be quite fragile and decay easily ("decoherence"), while quantum gates are themselves imprecise, so a computation must tolerate a certain level of gate inaccuracy before useful quantum computers can be built at all.

While Peter Shor went on to discover quantum error-correcting codes and fault-tolerant methods for reliable quantum computation, this "accuracy threshold theorem" for quantum computing remained a practical challenge for years, until recent efforts by companies such as IBM and Google. Following major advances at its AI lab and promising early deployment of its Sycamore chip, Google has recently achieved the "below threshold" milestone, where the error rate begins to shrink exponentially with scale. This means that performance goes hand in hand with a much higher quality of quantum computing – opening the way to a quantum computing market play.
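To see what "below threshold" means in practice, here is a stylised sketch based on the standard surface-code scaling heuristic; the prefactor, threshold and error-rate values are illustrative assumptions, not Google's figures.

```python
# A stylised sketch of the "below threshold" regime, using the standard
# surface-code heuristic p_logical ~ A * (p / p_th)**((d + 1) / 2): once the
# physical error rate p is below the threshold p_th, the logical error rate
# shrinks exponentially as the code distance d grows. All numbers illustrative.
A, p_th = 0.1, 0.01          # illustrative prefactor and threshold
p = 0.005                    # physical error rate, below threshold

for d in (3, 5, 7, 9, 11):   # growing code distance (more physical qubits)
    p_logical = A * (p / p_th) ** ((d + 1) / 2)
    print(f"distance {d:2d}: logical error rate ~ {p_logical:.2e}")
```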

3. AI can be the killer app

Not only was the error rate low – an improvement many times over Willow's predecessor, Sycamore, a few years ago – but Google's 105-physical-qubit Willow processor is also a sign of unique power. In fact, the experiment delivered solutions in five minutes to problems that would otherwise take the world's fastest supercomputer 10 septillion years to solve (Table 2).

Table 2 : Google Quantum Lab zoom

Source: Author, based on Nature and press releases

This exponential gain in computing power could be a potential game changer in an age of energy-guzzling computing needs and LLM data limitations for effective AI.  

At present, one can either praise and pay the price of Nvidia GPUs for their massively parallel architecture, or bet on the potential of quantum for another big leap in computing. In this context, one can anticipate the potential of quantum artificial intelligence (QAI), where quantum machine learning, among other things, could significantly reduce the time required for tasks such as neural network training or combinatorial optimization, offering exponential speedups for major complex applications: finance, portfolio optimization and risk analysis; traffic flow management and vehicle routing in logistics and automated cars; drug discovery and the simulation of molecular interactions in biotech and pharma; or weather prediction and material design.

Furthermore, AI can support quantum progress. For example, ML can help further reduce quantum errors, as AI's pattern recognition can be used to detect anomalies in qubit behaviour, predict noise patterns in quantum systems, and optimize quantum error-correction codes.
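As a toy illustration of that pattern-recognition idea, the sketch below flags anomalous readings in a simulated stream of qubit error rates; a production system would use far richer models, but the detection logic is the same, and all data here is simulated.

```python
# A toy sketch of ML-style anomaly detection on qubit error readings: flag
# values that sit far outside the recent statistical behaviour of the stream.
import numpy as np

rng = np.random.default_rng(0)
error_rates = rng.normal(0.005, 0.0005, size=200)   # normal drift
error_rates[120:125] += 0.004                       # injected decoherence burst

window = 50
for t in range(window, len(error_rates)):
    history = error_rates[t - window:t]
    z = (error_rates[t] - history.mean()) / history.std()
    if z > 4:  # reading far outside recent behaviour -> likely anomaly
        print(f"step {t}: anomalous error rate {error_rates[t]:.4f} (z={z:.1f})")
```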

The Known Unknowns: Show Me the Economics

1. Market potential  

The current market value of QAI applications is still essentially non-existent, with the money currently spent on QAI going to those building the infrastructure and software platforms, such as Amazon or Nvidia, and to quantum-native players such as Rigetti and D-Wave, who are at the forefront of building quantum processors.

Some analysts have ventured to estimate the market potential, with an applications market (representing 15-20% of total spending) worth between USD 5-20 billion in 5 years, but accelerating dramatically thereafter. While the current cost of quantum systems and their operational complexity make them economically viable in the short term mainly for niche applications, the above estimates are likely to be inadequate for a number of reasons.  

First, these estimates did not anticipate Google's recent breakthrough with Willow, which is likely to bring the market to fruition much faster than their base case. Second, the difference between classical and quantum computing is that the latter will work much faster, reducing the time required by AI engineers. As such, the cost/performance of quantum will be driven primarily by technology, while classical AI will still have to deal with expensive engineering work, so the economics will tilt in favour of quantum AI for complex tasks rather quickly. Third, it has often been assumed that quantum AI will follow AI in adoption – but recent evidence suggests that the new technology will have accelerated adoption due to the capabilities already invested in AI and genAI.

Finally, quantum AI and classical AI can work together to great market benefit in the short term if offered as a hybrid solution. Nvidia is offering such a platform, partly to protect its GPU market, but also because quantum/classical are likely to be used differently by industries.  

2. Market dynamics 

Besides the technological aspects of quantum, the quantum computing market is in its formative stages, and its market dynamics will depend on a mix of demand and supply factors, including regulation.

On the demand side, the most obvious cases for quantum AI are in complex areas such as supply chain management, finance, energy and healthcare (drug discovery). While these sectors have the most to gain, industrial sectors such as energy have historically been late adopters of digital technologies.   

On the supply side, the future dynamics among key players—startups, incumbents, and ecosystem facilitators—will shape how the technology is adopted and how its provision evolves. 

Major incumbents such as IBM, Google, Amazon, Nvidia and Microsoft will likely roll out quantum platforms (such as IBM's Q Network, Amazon's Braket, or Microsoft Azure Quantum) that democratize quantum computing by offering SaaS-style access to quantum resources via cloud services. They are likely to interplay among themselves and with regulators to define standards. Finally, those players are likely to offer only hybrid quantum-classical solutions (as in the case of NVIDIA's DGX Quantum) that blend GPU-based AI with quantum processing, as a way to control the market's evolution and arbitrage their legacy assets.

Beyond that, the complete market dynamics are likely to be shaped by a series of innovative technology pushes from quantum-native start-ups. Specific discoveries include quantum annealing (D-Wave), quantum NLP and cybersecurity (Quantinuum), and hybrid quantum-classical models (Rigetti).

These focused innovations boost the market in niche verticals (e.g., D-Wave in transport, through the optimisation of logistics and scheduling), build synergies with other technology markets such as genAI (e.g., Quantinuum, which applies quantum NLP to language-based AI systems and cybersecurity), or shape ecosystems through cooperation (e.g., IonQ's partnerships with AWS and Microsoft Azure).

Finally, a unique aspect of the quantum landscape is the proliferation of collaborative ventures. The field is characterized by extensive collaboration among universities, research institutions, and private companies, fostering significant advancements. These partnerships combine diverse expertise and resources, with the likely consequence of accelerating the development of quantum technologies.

3. Market tipping points  

Ultimately, the market will be shaped by a number of tipping points, for which one can already detect some positive, if still very noisy, signals at this stage (Table 3).

In terms of economic viability, cost reductions in quantum hardware and platforms are essential for companies to adopt quantum AI. The cost per AI task for quantum is still an order of magnitude higher than for classical computing, assuming the technology enables a robust use case. However, the cost is also falling relatively quickly, suggesting solid performance for quantum AI in the coming years.  

In terms of technology breakthroughs, Google's recent Willow chip shows early signs of quantum systems achieving better scaling and error reduction.

Regarding arbitrage opportunities, quantum AI needs to focus on solving problems where classical AI is slow, expensive or infeasible, such as highly complex optimisation and simulation. As for business transition strategies, the existence of hybrid classical-quantum offerings will allow businesses to experiment with quantum AI without abandoning existing AI investments.

Finally, governments will have a critical role to play in funding and enabling the quantum ecosystem. The regulatory landscapes for Artificial Intelligence (AI), including Quantum AI, differ significantly across China, the European Union (EU), and the United States (US), each reflecting distinct cultural, political, and economic priorities. 

A reasonable hypothesis about the likely dynamics is that the combination of substantial government support and agile regulatory adjustments is likely to accelerate quantum AI progress in China. However, strict government control may limit international collaboration and the integration of different perspectives, potentially stifling creative approaches. 

While the EU emphasises ethical standards that could lead to responsible quantum AI development, strict compliance requirements could slow innovation and make it difficult for startups and smaller companies to compete. 

The US’s flexible, innovation-focused approach may facilitate the rapid development and commercialisation of quantum AI technologies. However, the lack of a consistent regulatory framework could lead to ethical dilemmas, security vulnerabilities and public distrust if not adequately addressed. 

The Unknown Unknowns

“Unknown unknowns” refer to uncertainties or challenges that one does not yet realise exist – They will often emerge as quantum and AI technologies evolve. Here are some key potential unknown unknowns in the case of quantum AI: 

  1. Emergent phenomena from quantum and AI synergy: Combining quantum computing with AI could lead to unexpected emergent behaviours or capabilities. We don’t yet fully understand how quantum algorithms could fundamentally reshape AI systems beyond classical limits.
  2. Quantum data and quantum noise: Classical AI operates on classical data, but we don’t yet know the implications of dealing with “quantum data” generated by quantum systems.
  3. Quantum black-box problems: Current AI models (such as deep neural networks) are already black boxes, and adding quantum mechanics could further obscure how decisions or predictions are made.
  4. New quantum AI security risks: Quantum systems and AI models may introduce unforeseen security vulnerabilities that could be exploited in ways we don’t yet understand. There could be unknown methods of adversarial attacks in quantum-augmented AI systems, where entangled or quantum algorithms make systems uniquely vulnerable.
  5. Ethical and societal unknowns: Quantum randomness could exacerbate existing ethical problems in AI, or create unexpected new forms of bias.

Table 3 : The quantum AI tipping points


Framing Quantum (AI) Strategy

Let us now summarise the five key findings, from the most certain to the least certain: 

  1. Quantum is real. The promise of quantum computing has been built on seminal milestones such as Feynman’s vision in 1981, Deutsch’s formalisation of quantum computing in 1985, and Shor’s algorithms in the mid-1990s. Since then, companies such as Google and IBM have demonstrated the power of quantum systems, with Google’s “Willow” processor solving problems unattainable by classical supercomputers.  
  2. Quantum markets are just emerging, but may come sooner than expected. The economic viability of quantum computing is not yet proven, but the economics could quickly become attractive in the medium term.  
  3. Transition, more than a complete shift.  Hybrid quantum-classical computing is likely to be the transition model to full quantum in the coming years. 
  4. Your AI is your quantum strategy. Advances in quantum capabilities have transformative potential in optimisation, simulation and most AI and machine learning.  
  5. Anticipate responsible/ethical play. The interaction between quantum computing and AI could lead to unexpected breakthroughs, (cyber) risks or even ethical dilemmas.  

Many factors traditionally shape a strategy, but the above five elements should serve as a canvas for no-regret moves.

Based on the above, corporations should at least build internal quantum literacy, educating employees in AI and quantum and their related synergies, while investing more in hybrid quantum-classical platforms to avoid full reliance on quantum technology at this stage.

Meanwhile, corporations should think about smart experimentation projects, both in AI and in specific vertical projects where quantum can surpass the benefits of classical computing, such as financial optimisation, logistics or drug discovery. Finally, quantum is an entirely new paradigm shift that requires a collaborative approach to ensure the right ecosystem development, as well as regulatory commitment to address the disruptive socio-economic and cybersecurity risks.

About the Author 

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC /PE firms, and serves on the board of several companies.

Sources for Table 3: The quantum AI tipping points
1. Taherdoost, Hamed, and Mitra Madanchian. "AI Advancements: Comparison of Innovative Techniques." AI 5.1 (2023): 38-54.
2. Coccia, M. (2024). Converging Artificial Intelligence and Quantum Technologies: Accelerated Growth Effects in Technological Evolution. Technologies, 12(5), 66.
3. Avramouli, M., Savvas, I. K., Vasilaki, A., & Garani, G. (2023). Unlocking the potential of quantum machine learning to advance drug discovery. Electronics, 12(11), 2402.
4. Zeguendry, A., Jarir, Z., & Quafafou, M. (2023). Quantum machine learning: A review and case studies. Entropy, 25(2), 287; Sagingalieva, A., Kordzanganeh, M., Kenbayev, N., Kosichkina, D., Tomashuk, T., & Melnikov, A. (2023). Hybrid quantum neural network for drug response prediction. Cancers, 15(10), 2705.
5. Jiang, W., Xiong, J., & Shi, Y. (2021, January). When machine learning meets quantum computers: A case study. In Proceedings of the 26th Asia and South Pacific Design Automation Conference (pp. 593-598).
6. Khan, M. M., Bari, I., Khan, O. U., Akbar, A., Jeehan, S., & Ullah, N. Quantum AI: Uniting the future of smart technologies. In Artificial Intelligence for Intelligent Systems (pp. 86-102). CRC Press.
7. Kumar, S., Simran, S., & Singh, M. (2024, March). Quantum intelligence: Merging AI and quantum computing for unprecedented power. In 2024 International Conference on Trends in Quantum Computing and Emerging Business Technologies (pp. 1-7). IEEE; Google Quantum AI. Quantum error correction below the surface code threshold. Nature.
8. Acharya, R., Aghababaie-Beni, L., Aleiner, I., Andersen, T. I., Ansmann, M., Arute, F., … & Malone, F. D. (2024). Quantum error correction below the surface code threshold. arXiv preprint arXiv:2408.13687.

The post What Is Your Quantum (AI) Strategy?  appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/what-is-your-quantum-ai-strategy/feed/ 0
The Road Ahead for Large Language Models is Brighter than Claimed https://www.europeanbusinessreview.com/the-road-ahead-for-large-language-models-is-brighter-than-claimed/ https://www.europeanbusinessreview.com/the-road-ahead-for-large-language-models-is-brighter-than-claimed/#respond Thu, 05 Dec 2024 14:45:30 +0000 https://www.europeanbusinessreview.com/?p=219372 By Jacques Bughin 1. LLM is dead, long live LLMs Large Language Models (LLMs) have been the talk of the town for the past two years, demonstrating remarkable capabilities from […]

The post The Road Ahead for Large Language Models is Brighter than Claimed appeared first on The European Business Review.

]]>
By Jacques Bughin

1. LLM is dead, long live LLMs

Large Language Models (LLMs) have been the talk of the town for the past two years, demonstrating remarkable capabilities from natural language processing to creative tasks. But if their potential demonstrates large productivity gains (see Table 1), the recent hype is now being followed by significant voices of skepticism – including claims that LLM performance improvements are plateauing.

For many CEOs, these technology shortcomings, if real, add some stress. They add to other issues such as LLMs' so-called hallucinations and the organisational challenge of developing new generative AI capabilities on top of the make-or-buy decision on LLMs. So what should they do? Slow down their investments, at the mercy of competitors doubling down on GenAI, or push hard and commit to the technology despite its current shortcomings?

If history is any guide, there seems to be no turning back from LLMs: academic studies reveal that users very much like the technology and are prepared to delegate many cognitive and creative tasks to generative AI. This appeal is also visible in enterprise settings, with GenAI usage nearly doubling in one year in large enterprises and already reaching more than 70% of the workforce. If CEOs take the time to look back at the early waves of digital technologies, they would also draw a parallel between LLM and Internet adoption. They would look back at early Internet deployments and major technical improvements: Internet access went from dial-up at 50Mb/hour in 1995 to 1Gb/second 25 years later – more than 1,000 times faster – creating a unique General Purpose Technology platform which has brought the FAANGs, social media, platforms and ecosystems, and a major softwarisation of our economies.

2. LLM so far

The journey of LLMs such as OpenAI's GPT, Meta's Llama, or Anthropic's Claude began with foundational models which leveraged transformer architectures to process and generate human-like text. On top came innovations in data volumetry and hardware.

Unlike earlier models trained on single-domain text (e.g., news or Wikipedia), modern LLMs utilize heterogeneous sources such as Common Crawl, GitHub, Wikipedia, and forums.

The largest public text data sets, such as RedPajama or RefinedWeb, now contain tens of trillions of words collected from web pages. Finally, the exponential growth in computing requirements has driven innovation, particularly in hardware such as GPUs, TPUs and NPUs from players such as Nvidia. While training GPT-3 on a single NVIDIA Tesla V100 GPU would take more than 3 centuries, a cluster of 1,024 A100s would now do it some 3,600 times faster – on the order of a month.

Table 1  – case examples of LLM based productivity gains.

Use Case | Industry | LLM Used | Performance in Productivity
Code Completion | Software Development | GitHub Copilot | Increased coding speed by 55% and reduced cognitive load for developers
Enterprise Information Tasks | Various Enterprises | Microsoft Copilot | Boosted task execution speed by 30% without quality loss
Wireless Communication System | Telecommunications | Custom LLM | Achieved 40% productivity gains in code refactoring and validation
Marketing Campaigns | Marketing | GPT-4 | Reduced content creation time by 50% and increased personalization
Customer Service Automation | Retail | ChatGPT | Improved response times by 60% and customer satisfaction by 20%
Clinical Diagnosis Assistance | Healthcare | Custom LLM | Reduced diagnostic errors by 20% and shortened patient waiting times by 30%
Cybersecurity Regulation Mapping | Financial Services | Custom LLM | Reduced task completion time from months to days, achieving a 90% time reduction
Claims Processing | Insurance | Custom LLM | Expedited processing by 70% and enhanced customer experience
Product Review Analysis | E-commerce | Custom LLM | Enabled data-driven product development and marketing decisions, increasing efficiency by 35%
Innovation and Idea Generation | Various Industries | GPT-4 | Accelerated innovation processes by 40% and improved idea generation

Source: Disguised cases; arXiv; McKinsey; Accenture; Alto, V. (2023). Modern Generative AI with ChatGPT and OpenAI Models: Leverage the capabilities of OpenAI's LLM for productivity and innovation with GPT3 and GPT4. Packt Publishing Ltd; Coutinho, M., Marques, L., Santos, A., Dahia, M., França, C., & de Souza Santos, R. (2024, July). The role of generative AI in software development productivity: A pilot case study. In Proceedings of the 1st ACM International Conference on AI-Powered Software (pp. 131-138); Reshmi, L. B., Vipin Raj, R., & Balasubramaniam, S. (2024). 12 Generative AI and LLM: Case Study in Finance. Generative AI and LLMs: Natural Language Processing and Generative Adversarial Networks, 231; Bughin, J. (2024). The role of firm AI capabilities in generative AI-pair coding. Journal of Decision Systems, 1-22; Cambon, A., Hecht, B., Edelman, B., Ngwe, D., Jaffe, S., Heger, A., … & Teevan, J. (2023). Early LLM-based Tools for Enterprise Information Workers Likely Provide Meaningful Boosts to Productivity. Microsoft Research. MSR-TR-2023-43.

However, as the adoption of LLM rapidly spreads across industries, challenges to its use are mounting. Much of this involves the need for responsible AI, as large datasets can contain sensitive and private information, and LLMs trained as next-word predictors sometimes produce incorrect or fabricated information (‘hallucinations’), as their probabilistic nature makes it impossible to verify outputs or retract errors.

Finally, models are not easily understood and lack causality, raising suspicions about their reliability, fairness and scalability. In cybercrime, models such as FraudGPT and WormGPT are being developed to simulate cyber-attacks. All of this is being addressed by regulatory and more stringent cybersecurity frameworks, as well as techniques that reverse-engineer model outputs, such as SHAP methods and others.

Another real danger is the economic and value sustainability of LLMs. Training data is inherently static, leading to rapidly outdated knowledge, and adapting LLMs to new tasks often requires resource-intensive fine-tuning. More recently, scaling models to trillions of parameters has shown diminishing returns, with marginal performance improvements coming at exponential increases in computational costs, and the fear that high-quality text data may soon run out.[1]

There have been some clear warnings, such as the recent statement by Ilya Sutskever, co-founder of OpenAI and SSI, who claimed that the "bigger (data model) is better" mantra, followed with massive success by the LLM market in recent years, is likely no longer tenable – due to diminishing returns to scale and the limitations of data from the public web.

3. LLM Breakthrough Innovations

It is easy to see why LLMs might plateau when their performance has been tied to massive data, and the supply of the latter is an order of magnitude lower than the push in scale of new versions of LLMs.

But does this mean that LLMs will die? First, it says that in a situation of scarce resources, other sources of data can become extremely profitable – such as those embedded in Facebook or Instagram, or in Google. Second, it also makes clear that the exploitation of data multimodalities may become attractive. In general, history is a good guide to how industries turn to innovation to solve bottlenecks and expand. LLMs are likely to be no exception.

A first example is the car industry. The automobile industry experienced an explosion of demand in the early 20th century, driven by Henry Ford’s introduction of the assembly line and mass production. But bottlenecks soon became apparent, from high production costs and inconsistent manufacturing processes to unpaved and inadequate road infrastructure.  Finally, petrol availability and engine efficiency were major concerns.

Systematic innovation helped to transform the nascent industry into what it is today. Innovations such as the moving assembly line drastically reduced production costs, making cars affordable. Governments invested in building motorways and roads to support car use. Seatbelts, airbags, ABS and crash testing standards were developed, increasing confidence in vehicles, and modern vehicles now incorporate AI, sensors and IoT, turning them into intelligent systems.

Another example is the pharmaceutical industry. In the early 20th century, groundbreaking drug discoveries (such as penicillin) led to rapid expansion of the industry, but scaling up drug production to meet growing demand posed manufacturing hurdles, while developing and testing new drugs required immense investment and could take decades.

Systematic innovations included advances in computational modelling and chemistry that enabled targeted drug development, reducing the reliance on trial and error. Techniques such as fermentation for antibiotics and synthetic production methods scaled up manufacturing, while biotechnology revolutionised treatments with the development of monoclonal antibodies, gene therapies or mRNA vaccines.

4. What is in the current bag for LLMs 

LLMs, too, are on the verge of large innovations (Table 2). We had already highlighted a set of key innovations in the making for the future of LLMs in a recent article in this Review. We had also presented the evolution towards hybrid AI systems such as neurosymbolic AI, which combines symbolic reasoning with LLMs to improve cost and interpretability. We finally warned about the rise of multimodal models, and the attractive path of Liquid Foundation Models (LFMs), which bypass the transformer architecture to achieve strong performance across various scales while maintaining smaller memory footprints and more efficient inference.

Here we add a critical set of five further recent innovations:

  1. Retrieval-Augmented Generation (RAG). It is often mentioned that 40–65% of AI-driven decision-making processes fail because the data is either too old or inadequate. RAG bridges the gap between static training data and dynamic, real-time knowledge. By retrieving information from external sources like databases and the web, RAG ensures responses are accurate and up-to-date. It pairs this retrieval with the generative capabilities of LLMs, offering tailored and trustworthy answers (a minimal sketch of the pattern follows this list). In tasks like diagnosis support, RAG can boost precision by up to 30% over traditional LLMs, as it directly integrates real-world data retrieval.

Table 2  – Innovation parallelism

Dimension | Automobile Industry | Pharmaceutical Industry | Large Language Models (LLMs)
Initial Bottlenecks | High costs, poor infrastructure, safety concerns, fuel | Discovery inefficiencies, production challenges, safety | Computational costs, bias, ethical concerns, scalability
Growth Challenges | Scaling production, building roads, environmental issues | Long R&D timelines, regulatory delays, global access | Understanding context, managing biases, reducing misuse
Key Innovations | Assembly lines, safety features, IoT-enabled vehicles | Rational drug design, mRNA technology, mass production | Parameter-efficient models, multimodal systems, fine-tuning
Scaling Efficiency | Moving assembly line, synthetic materials, mass adoption | Synthetic chemistry, fermentation, generics | Efficient training algorithms, model distillation
Safety and Trust | Crash tests, airbags, safety regulations | Clinical trials, regulatory frameworks (e.g., FDA) | AI alignment, ethical AI guidelines, transparency
Infrastructure | Highways, fuel stations, urban redesign | Global supply chains, public-private partnerships | Cloud infrastructure, edge computing
Global Access | Mass production reduced costs | Generic drugs, global access programs (e.g., Gavi) | Open-source models, localization efforts

Source: Aggeri, F., Elmquist, M., & Pohl, H. (2009). Managing learning in the automotive industry – the innovation race for electric vehicles. International Journal of Automotive Technology and Management, 9(2), 123-147; Shimokawa, K. (2010). Japan and the global automotive industry. Cambridge University Press; Schlie, E., & Yip, G. (2000). Regional follows global: Strategy mixes in the world automotive industry. European Management Journal, 18(4), 343-354; Cusumano, M. A. (1988). Manufacturing innovation: lessons from the Japanese auto industry. MIT Sloan Management Review; Malerba, F., & Orsenigo, L. (2001). Towards a history friendly model of innovation, market structure and regulation in the dynamics of the pharmaceutical industry: the age of random screening. Roma, Italy: CESPRI-Centro Studi sui Processi di Internazionalizzazione; Kean, M. A. (2004). Biotech and the pharmaceutical industry: Back to the future. Organisation for Economic Cooperation and Development. The OECD Observer, (243), 21.

2. In-Context Learning (ICL): ICL eliminates the need for fine-tuning by providing examples within the prompt. Innovations like many-shot ICL have improved model performance on complex tasks by increasing the number and quality of examples.[2]

3. Hallucination Detection and Mitigation. Techniques like SAPLMA (Statement Accuracy Prediction based on Language Model Activations) analyze internal model activations to detect inaccuracies. These methods enable real-time error correction, reducing reliance on human validation.

4. Test-Time Compute. This paradigm shifts computational focus from the training phase to inference. By allocating more resources during task execution, test-time compute enables deeper reasoning and more accurate responses without escalating training costs. Studies in poker AI research show that giving a model 20 seconds to process can improve decision-making performance as much as scaling the model up by 100,000x. Other studies demonstrate similarly large improvements, with test-time compute used to outperform a model 14x larger. This is particularly useful in real-time applications where better output quality is desired but must be balanced with cost efficiency.

5. LLM2Vec. LLM2Vec refines language representation by introducing bidirectional attention and advanced training techniques like Masked Next Token Prediction (MNTP) and SimCSE. These methods enhance the model’s ability to understand and represent text for clustering, semantic search, and context-sensitive applications. The ability to process all context in both directions significantly enhances performance for tasks such as content summarization or text classification. With models like Mistral-7B using LLM2Vec, accuracy on semantic search tasks has reached over 90%, with efficiency gains of 25-30% over traditional unidirectional models.
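To make the first of these five innovations concrete, here is a minimal sketch of the RAG pattern: retrieve the most relevant snippets for a query, then hand them to a generator as grounded context. The bag-of-words retriever and the `generate` stub are deliberate simplifications of what a real embedding model and LLM call would do.

```python
# A minimal sketch of the RAG pattern: retrieve, then generate with context.
from collections import Counter
import math

documents = [
    "Policy X was updated in March and now covers remote work equipment.",
    "Quarterly revenue grew 12% driven by the subscription business.",
    "The data-retention period for customer records is five years.",
]

def vectorize(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would use an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = vectorize(query)
    return sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    # Stand-in for an LLM call: a real system would pass `context` in the
    # prompt so the answer is grounded in retrieved, up-to-date sources.
    return f"Answer to '{query}' grounded in: {context}"

question = "How long are customer records kept?"
print(generate(question, retrieve(question)))
```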

5. Will the new LLM innovations pay off?

The above clearly suggests that the future of LLMs is far from bleak (Table 3). In terms of cost efficiency, innovations such as quantization and now test-time compute are critical to reducing computational costs, especially for large enterprises or cloud-based solutions.

Techniques such as RAG, LLM2Vec and many-shot ICL significantly improve the accuracy of any LLM, especially for specialised or task-oriented applications. This makes them ideal for industries that require high accuracy, such as healthcare, legal or finance. Also, retrieval-based systems (RAG) and reinforcement learning from human feedback (RLHF) can greatly reduce hallucination problems by grounding responses in real-world, verified data or human feedback.

Table 3 : Impact examples of new LLM innovations

Innovation | Case | Efficiency | Accuracy
RAG | Healthcare knowledge assistance (e.g., ElasticSearch + GPT) | Reduced time to information retrieval by 30%, improving staff efficiency | Improved diagnosis support with 30% precision improvement
LLM2Vec | Legal document analysis using Mistral-7B + LLM2Vec | 25% more efficient compared to standard causal models | Achieved near 90% accuracy for document relevance
Test-Time Compute | Streaming services (dynamic subtitle generation) | Dynamic scaling reduced compute costs by 30% | Improved accuracy on complex tasks with more time to process
ICL | Real-time essay feedback systems in education (e.g., GPT-3 with few-shot prompts) | 15-25% more efficient by reducing retraining needs | Performance improved for complex tasks with 5-10 examples per prompt

Source: BehnamGhader, P., Adlakha, V., Mosbach, M., Bahdanau, D., Chapados, N., & Reddy, S. (2024). LLM2Vec: Large language models are secretly powerful text encoders. arXiv preprint arXiv:2404.05961; Wang, F., Lin, C., Cao, Y., & Kang, Y. (2024). Benchmarking general purpose in-context learning. arXiv preprint arXiv:2405.17234; Xu, P., Ping, W., Wu, X., Xu, C., Liu, Z., Shoeybi, M., & Catanzaro, B. (2024). ChatQA 2: Bridging the gap to proprietary LLMs in long context and RAG capabilities. arXiv preprint arXiv:2407.14482; Snell, C., Lee, J., Xu, K., & Kumar, A. (2024). Scaling LLM test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314; Yue, Z., Zhuang, H., Bai, A., Hui, K., Jagerman, R., Zeng, H., … & Bendersky, M. (2024). Inference scaling for long-context retrieval augmented generation. arXiv preprint arXiv:2410.04343.

Over time, therefore, companies that exploit the innovation potential of the above for LLMs/LFMs can be expected to improve significantly as more of these innovations (e.g. RAG, LLM2Vec, sparse models) are rolled out and integrated.

In the short term, innovations such as Test-Time Compute and Sparse Models provide significant improvements in cost efficiency, with savings of up to 40-50%. Performance improvements are also noticeable, with a 5-10% increase in accuracy. These early-stage innovations enable organisations to perform common tasks more cost-effectively.

In the medium term, we are likely to see a stronger push from innovations such as LLM2Vec and Many-Shot In-Context Learning, which improve model performance for complex tasks, before companies embrace a large combination of advanced sparse models, adaptive retrieval systems and advanced in-context learning, which will significantly improve the performance of LLMs.

In fact, using benchmarks of how these innovations will be used and the synergies between them, we have computed an expected 25-50% gain in cost per query, a 15-30% improvement in accuracy, and a 30-45% gain in data usage, so that the cost per accurate output should drop by 16-30% per year over 5 years – a very significant cumulative drop.

So, if LLMs initially show moderate improvement in the short term (1 year), performance will more than double over the next 3 years and be up to 6 times better in 5 years, with compounding gains. For businesses, this also means that in the short term, LLMs will be good for tasks that require less computation, but in the medium and long term, the economics and performance of LLMs will make them accessible for a far wider range of tasks, with a strong ROI.
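The compounding arithmetic behind that trajectory can be checked in a few lines, under the stated assumption of a 16-30% annual drop in cost per accurate output (the figures are this article's estimates, not measurements).

```python
# A quick check of the compounding logic above, under the article's assumed
# 16-30% annual drop in cost per accurate output.
for annual_drop in (0.16, 0.30):
    for years in (1, 3, 5):
        remaining_cost = (1 - annual_drop) ** years
        gain = 1 / remaining_cost  # cost-performance multiple vs. today
        print(f"drop {annual_drop:.0%}/yr, {years}y: {gain:.1f}x better")
# At 30%/yr the multiple is ~2.9x after 3 years and ~6x after 5 years,
# matching the "more than double ... up to 6 times better" trajectory.
```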

6. The CEO journey ahead

Generative AI has truly exploded in the last two years, following the breakthrough of transformers as the basis for LLM platforms. In turn, the exploitation of massive amounts of public data for training has set a unique performance-uplift trajectory, which, however, now appears likely to plateau, with public text data entering a period of scarcity in the next 5 years.

The messages for the astute executive are thus relatively clear:

  1. Be AI-ready. The AI journey is not over – with advances such as RAG, ICL, SAPLMA, test-time compute and LLM2Vec, the next generation of LLMs or liquid ones promise to be more efficient, reliable and context-aware. As the field evolves, the balance between effectiveness, cost and accuracy will shape the role of AI in business, but the quality/cost/performance triad will continue to tilt towards mass customisation of AI.
  2. Scale AI. However, the potential of AI improvements depends on embedding all these current innovations across all business systems to achieve significant gains. This includes embedding AI in customer service, sales, marketing, operations and G&A. There is no place to hide.
  3. Your data is your AI. In addition to innovative AI-based LLMs, complementary data will be key. While training data and algorithmic techniques are improving every year, data scarcity looms, forcing companies to use their own private data outside the indexed web. Encouraging dialogue to generate a variety of data from LLMs (synthetic data) and from customer conversations and insights will also be key. This data, when integrated, provides uniquely private and high-quality material that may be needed to drive innovation in LLMs. As such, this balance of privacy and high quality is the bargaining power of any CEO's organisation against a future driven solely by the LLM. The best companies will be those that are both AI-savvy and the best orchestrators between their closed knowledge and privacy and the need for GenAI automation, for an ethical and bright future of their business.

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC /PE firms, and serves on the board of several companies.

References

  1. [1] The year 2032 is the median year in which the stock of high-quality public text data is expected to be exhausted. For computer vision, however, data will be exhausted at a much slower rate, averaging 2045, partly because the data size of computer vision models has so far grown three times more slowly than text. It remains to be seen whether there will be a shift as computer vision becomes less complex and less expensive.

  2. [2] Note that the technique must optimize example ordering for maximal efficiency; for example, in problem-solving tasks, presenting the examples in the most logical order can help the model perform better.

The post The Road Ahead for Large Language Models is Brighter than Claimed appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/the-road-ahead-for-large-language-models-is-brighter-than-claimed/feed/ 0
Beyond Bitcoins: The Future of Real Assets Tokenisation https://www.europeanbusinessreview.com/beyond-bitcoins-the-future-of-real-assets-tokenisation/ https://www.europeanbusinessreview.com/beyond-bitcoins-the-future-of-real-assets-tokenisation/#respond Thu, 05 Dec 2024 14:09:21 +0000 https://www.europeanbusinessreview.com/?p=219363 By Jacques Bughin The recent past has been turbulent for digital assets, including many Web 3.0 bankruptcies, a variety of scams and attacks, and new regulations. But while some media […]

The post Beyond Bitcoins: The Future of Real Assets Tokenisation appeared first on The European Business Review.

]]>
By Jacques Bughin

The recent past has been turbulent for digital assets, including many Web 3.0 bankruptcies, a variety of scams and attacks, and new regulations. But while some media have proclaimed its death, recent news – such as the rise of bitcoin flirting with USD 100,000 by the end of November 2024 – may have created a unique "Mark Twain" moment: reports of the death of Web 3.0 are greatly exaggerated.

In fact, many companies in industries such as financial services, retail, media, or real estate have continued their foray into Web 3.0, possibly outside the crypto labelling, with moves such as tokenized loyalty programs or luxury goods (see Table 1). Furthermore, more and more companies are becoming ambassadors of the potential of "tokenization", that is, the process of converting assets into digital tokens that represent ownership through the blockchain. Finally, although highly uncertain and subject to frequent revision in recent years, tokenisation has been projected to grow into a $10 trillion global market by the end of the decade. For example, Citibank recently estimated that tokenised real estate could pass the $1.5 trillion mark by 2030, while equity and venture capital could be close to $1 trillion, for $4 trillion to $5 trillion of tokenised digital securities such as mutual funds, debt and equity securities.

The above leaves no choice for executives and managers but to actively reconsider their strategy for Web 3.0. While a lot is said about the tokenisation of financial assets, this article focuses on real assets, in particular real estate – many companies own or lease some, and it is potentially one of the largest vertical opportunities in real-asset tokenisation.

Show me the money

The real estate industry is one of the most important and oldest asset classes in the world, valued at close to USD 300 trillion. But it should be much bigger: the actual worldwide asset allocation is currently 10%, while, given its risk/return profile, as much as 30-60% should be allocated to real estate in an optimal institutional portfolio.

Why this is so is the result of three sins:

  • Accessibility: marketability should definitely have increased, as many countries have passed REIT (real estate investment trust) laws over the past 50 years as a means by which the investing public can gain exposure to real estate. In thirty years, real change has come from that vehicle, from 120 listed REITs in two countries to 940 listed REITs in 42 countries and regions. Yet today, 80-90% of the value of the overall underlying real estate market is still not listed.
  • Liquidity: the typical holding period for real estate debt seems to be 4-7 years, while private equity real estate funds have lock-up periods of up to seven years. Public and private REITs, as an improvement, deliver a more liquid channel for both retail and institutional investors, but virtually all markets have struggled to build recognized REIT markets and still face regulatory and tax-code hurdles in gaining access to the international investment community.
  • Costs and inefficiency: real estate deals require several parties and significant amounts of manually generated paperwork. The process of structuring an offering, arranging financing, and gathering the necessary due-diligence items often takes weeks or months.

God Bless Real Estate Tokenisation

On top of its sins, the real estate industry has been rather immune to technologies. But the development of blockchain technologies (plus AI) may be the set of technologies that revolutionizes everything. The blockchain is a powerful technology that safely stores transaction records on a distributed peer-to-peer computer network. Among (many) use cases, blockchain can link real estate administration and title registration systems, ensuring transparent and immutable records of ownership; can facilitate secure and efficient transactions; can offer new ways to manage estate assets, with proposals for concepts such as rental platforms, real estate data storage solutions, and multiple listing services; or, finally, can build a system of real estate as blockchain-based tokens. All these effects can be further turbocharged by AI.

In particular, blockchain tokenisation offers specific advantages to counter the three sins (a minimal sketch of the underlying bookkeeping follows this list). Among them:

  1. Accessibility: Tokenization lowers the entry barrier for real estate investment, allowing smaller investors to participate in high-value real estate projects. This democratizes access to real estate investment opportunities. Blockchain technology ensures that all transactions are recorded on an immutable ledger, as smart contracts, enhancing transparency and reducing the risk of fraud for participants.
  2. Enhanced liquidity: Traditional real estate investments are typically illiquid. Tokenization allows for fractional ownership, making it easier to buy and sell small portions of a property without the need for a full property sale.
  3. Fast and lean transactions: Blockchain eliminates intermediaries, streamlining the real estate transaction process through efficient peer-to-peer (P2P) transactions, automation, and smart contracts.
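As promised above, here is a minimal sketch of what tokenised fractional ownership means in bookkeeping terms, mimicking what an ERC-20-style smart contract enforces on-chain; the valuation and unit counts are purely illustrative, not the terms of any real deal.

```python
# A minimal sketch of fractional-ownership bookkeeping, approximating what an
# ERC-20-style token contract does on-chain. All values are illustrative.
class PropertyToken:
    def __init__(self, valuation_eur: float, total_units: int):
        self.unit_price = valuation_eur / total_units
        self.balances = {"issuer": total_units}  # issuer starts with all units

    def transfer(self, sender: str, receiver: str, units: int) -> None:
        # A smart contract enforces this check without an escrow agent.
        if self.balances.get(sender, 0) < units:
            raise ValueError("insufficient units")
        self.balances[sender] -= units
        self.balances[receiver] = self.balances.get(receiver, 0) + units

# A hypothetical EUR 6.5M property split into 1,000,000 fractional units:
token = PropertyToken(valuation_eur=6_500_000, total_units=1_000_000)
token.transfer("issuer", "small_investor", units=100)  # a EUR 650 stake
print(f"unit price: EUR {token.unit_price:.2f}")
print(token.balances)
```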

The excitement about real estate tokenization emerged about a decade ago. The St. Regis Aspen Resort in Colorado was tokenized, allowing investors to buy shares in the luxury hotel. Similarly, the luxury Manhattan condo at 436 & 442 East 13th Street was tokenized by the real estate firm Propellr and the tokenization platform Fluidity. In Europe, the AnnA Villa in Paris became the first property to be sold fully through a blockchain transaction. The luxury building, located in the city's Boulogne-Billancourt district, was valued at €6.5 million and sold to French real estate companies Sapeb Immobilier and Valorcim. The procedure began with the transfer of ownership of the building to a joint-stock company (SAPEB AnnA), followed by the division of the firm into 100 tokens powered on Ethereum, each subdivided into 100,000 units, with each share traded for €6.50.

Since then, dozens of tokenization projects have seen the light of day in cities around the world. Furthermore, platforms like RealT and Polymath offer tokenized real estate investment opportunities, attracting new investors.

Back (or Crash?) To Earth

But behind those attempts to initialize the market, the market remains tiny: by 2022, the market size for real estate tokenization was $2.7 billion. A few elements also appear as headwinds.

  1. The first is regulation: despite progress, regulatory environments for tokenized real estate vary widely across jurisdictions. Navigating these regulations can be complex and costly.  Regulatory bodies are however beginning to recognize and approve tokenized real estate securities. For example, in the United States, the Securities and Exchange Commission (SEC) has approved several tokenized real estate projects, providing a legal framework for their operation.
  2. Another issue is that while tokenization aims to increase liquidity, the market for real estate tokens is still developing, and actual liquidity might not match expectations initially. Arguments heard are:

“Crowdfunding did not work”. Tokenisation may face the crowdfunding liquidity issue. The latter was supposed to add liquidity to the commercial real estate space, but it never took off. In fact, while user experience and manual processes were cited as flaws within most crowdfunding platforms, a key issue was that investments could only be traded on one specific platform. Tokenization on more widely adopted networks would be a solution, introducing a much more global audience that is not confined to a single platform. Tokenization could also reduce the number of third parties involved (i.e., brokers, escrow agents, etc.) in the typical investment process. Reducing third-party involvement naturally adds fluidity to the real estate investment process.

“Goodbye IPSX”. Liquidity is challenging nevertheless, even for blockchain. The most famous case is IPSX, which was listed in the UK as a market maker for listings on the blockchain. In the end, there were only three listings – all of which were linked to one of the founding IPSX shareholders, M7 Real Estate. These were also wholesale listings, i.e. not open to retail investors. Perhaps IPSX was unlucky with its timing: Brexit was unfolding and Covid lay ahead.

Proving the asset class: tokenisation's value and differentiators versus traditional REITs

Swinkels (2023) considers a sample of 58 real estate tokens in the USA. He finds that a token has on average 254 owners, which shows that tokenization can improve risk-sharing across households. Kreppmeier et al. (2023) provide the first empirical evidence on real estate tokenization by analyzing a data set of 173 real estate tokens in the USA with more than 200,000 blockchain transactions. They find that the ownership of properties is not concentrated in a small number of investors, which confirms that tokenization can provide broad access to real estate for many small investors.

Liquidity yes, but does it come at the expense of profitability? On RealT, where investments are available through the Ethereum blockchain, the answer seems to be no: in fact, various studies have demonstrated that tokenization generated returns of 15%, compared to a -15% return on traditional Real Estate Investment Trusts (REITs) in 2022.

But there as well, there might be a catch: in the 1990s and early 2000s, participants in the property investment market became fascinated by the potential for the securitisation or unitisation of real estate. REITs became popular, and it also became easier to raise – and offload – debt. The result was a financial crash and the insolvency of many banks, driven by downside volatility in real estate prices.

Arguably, therefore, illiquidity is a necessary evil in justifying the defensive role of real estate. As theory suggests that the illiquidity of property means its required – and expected – return is higher than it would otherwise be, introducing liquidity to property may damage returns, as the illiquidity premium may be eroded. Nevertheless, real estate market participants have a curious belief in the superiority of an 'off-market' transaction. In fact, it is strongly believed that a wider secondary market will also increase effective demand in the primary market and improve the perceived quality of the asset.

An executive journey to real estate tokenisation success

Given the above, the optionality of playing in tokenisation cannot be dismissed, let alone the risk of disruption by competitors and entrants. In our experience, a highly valuable strategy roadmap is based on the following five steps:

Prospection

Step 1: Define segments to play. For example, larger assets already held in fund structures may eventually be tokenized successfully; there may also be an alternative market for tokenized residential, social impact, or community assets where investment regulation and risk/return are not the main drivers of behavior. The mass market for the tokenization of single commercial real estate assets, however, may be some way down the road.

Step 2: Define the conditions for market take-off. A few developments are necessary for the digital tokenization of single real estate assets: (a) an expressed demand for the fractionalization of single real estate assets; (b) market participants need to be comfortable with blockchain, the digital underpinning of tokenization; and (c) in many land markets, fractionalization requires an intermediate structure to be established, because the direct ownership of land cannot be shared among many, increasing the cost of tokenization.

These two steps provide a view of the marketability of tokenization which, coupled with competitive intelligence on attackers and peers launching their projects, gives guidance for an early-mover or fast-follower strategy in active tokenization.

Capability building

Independently of sensing market readiness, executives should not delay investing in the capabilities needed to pilot quickly and, especially, to scale opportunities. They must:

Step 3: Pilot. Given the large uncertainty, but also the ecosystem play in tokenization, it is imperative that executives get resources ready to pilot and better crystallize the use cases for their business plan. In particular, the following needs to be assessed:

  1. Tokenization type. Not all tokens are equal. The perspective on tokens is already shifting from cryptocurrency towards more comprehensive token standards that set the basis for much larger enterprise use. The International Token Standardization Association (ITSA) has been establishing a taxonomy along four dimensions: purpose, industry, technological setup, and legal claim.
  2. Token interoperability. Interoperability is known to be critical to scaling the market, but it also induces a more common, and possibly more competitive, playing field. Institutions such as the InterWork Alliance drive standards for interoperability.
  3. Token compliance. Different countries have developed different infrastructures. In the US, real estate tokens often trade as securities and have to be registered or qualify for an exemption such as Regulation D or Regulation A+, while tokenization platforms must also comply with SEC regulations. Europe also has its recent MiCA framework, and real estate tokens are considered securities.
  4. Token technology platform. Blockchain alone is not enough: integrating it with advanced technologies such as AI and IoT can further streamline property management and investment decisions.
  5. Token economics. It is really important to test the assumptions of the underlying economics and the likely competitive reactions (a toy sketch follows this list). Who are the main entities involved in the value chain of the asset? How competitive is the market for the asset, and what is the interest of existing real estate players in playing coopetition and coordinating around a tokenization model? What is the appetite of end customers, and their willingness to pay?
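To make the token-economics test concrete, here is a minimal sketch of the arithmetic one would stress in a pilot. Every figure (property value, token count, rent, fee, premium erosion) is a hypothetical assumption for illustration, not data from any platform cited above.

```python
# Toy token-economics model for a single tokenized property.
# Every number below is a hypothetical assumption for illustration.

PROPERTY_VALUE = 500_000     # asset acquisition price, USD (assumed)
TOKEN_COUNT = 10_000         # fractional tokens issued (assumed)
NET_ANNUAL_RENT = 30_000     # rent net of operating costs, USD (assumed)
PLATFORM_FEE = 0.02          # annual platform fee on distributions (assumed)

token_price = PROPERTY_VALUE / TOKEN_COUNT
distributable = NET_ANNUAL_RENT * (1 - PLATFORM_FEE)
base_yield = distributable / PROPERTY_VALUE

print(f"token price:      ${token_price:,.2f}")
print(f"income per token: ${distributable / TOKEN_COUNT:.2f}/year")
print(f"running yield:    {base_yield:.2%}")

# Liquidity cuts both ways (see the REIT discussion above): if tokenization
# erodes the illiquidity premium, required returns fall, prices get bid up,
# and the yield earned by later entrants compresses.
for eroded in (0.005, 0.01, 0.02):          # assumed premium erosion
    implied_price = distributable / (base_yield - eroded)
    print(f"premium -{eroded:.1%}: implied price ${implied_price:,.0f}, "
          f"yield {distributable / implied_price:.2%}")
```

The point of such a sketch is not precision; it is to force explicit assumptions about fees, yields, and the illiquidity premium before committing to a business plan.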

Take off

Step 4: Commercial launch and scale. A typical mistake in digital projects, of which tokenisation is one, is to scale slowly instead of aiming to ‘hyperscale’. Hyperscaling is a must in tokenisation, as most business models have a platform architecture component, with a possible winner-take-all structure. It is also the time to explore DeFi integration for broader utility of tokens.

Step 5: Evolve. Remember that tokenisation remains uncertain. You must remain agile and pivot if necessary. In the meantime, you need to build the foundations for brand recognition. In particular, at this stage, it is high time to leverage AI and other cutting-edge technologies to enhance the value proposition (e.g., real-time property monitoring) and position the company as a leader in Web 3.0-driven asset management.

Ready to go?

Table 1: Case examples of tokenization projects

| Sector | Case Example | Year of Launch | Revenue Generated | Business Model |
| --- | --- | --- | --- | --- |
| Real estate | Aspen Coin (St. Regis Aspen Resort) | 2018 | $18M raised (initial offering) | Fractional ownership of luxury real estate via security tokens, traded on secondary markets like tZERO. |
| Gaming | Axie Infinity | 2018 | $1.3B annual revenue (2021) | Play-to-earn gaming model using NFT-based characters and in-game assets tradable on blockchain platforms. |
| Luxury goods | LVMH Aura Blockchain | 2021 | Enhanced brand value; internal system savings | Authenticity verification for luxury goods using blockchain to track provenance and fight counterfeiting. |
| Retail | Walmart (VeChain) | 2019 | Efficiency improvements in supply chain | Blockchain for product tracking and transparency in supply chains to enhance trust with consumers. |
| Financial services | JPM Coin | 2019 | Indirect via service efficiencies | Digital token for interbank payments, settlement efficiency, and liquidity management. |
| Healthcare | Medicalchain | 2016 | N/A | Blockchain for secure and accessible patient health records and interoperability between providers. |
| Heavy industry | MineHub | 2021 | Efficiency improvements in global supply chains | Tokenization and blockchain for tracking mineral sourcing and trade documentation, enhancing ESG compliance. |

About the Author

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC /PE firms, and serves on the board of several companies.

The post Beyond Bitcoins: The Future of Real Assets Tokenisation appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/beyond-bitcoins-the-future-of-real-assets-tokenisation/feed/ 0
Symbian or Uber? Thriving or Collapsing in Digital Business Ecosystems https://www.europeanbusinessreview.com/symbian-or-uber-thriving-or-collapsing-in-digital-business-ecosystems/ https://www.europeanbusinessreview.com/symbian-or-uber-thriving-or-collapsing-in-digital-business-ecosystems/#respond Fri, 27 Sep 2024 15:28:00 +0000 https://www.europeanbusinessreview.com/?p=214279 By Jacques Bughin “In an avalanche, no snowflake ever felt accountable,” – Voltairei What makes successful digital business ecosystems? Beyond a common evolutionary vision, we discuss the success factors successful […]

The post Symbian or Uber? Thriving or Collapsing in Digital Business Ecosystems appeared first on The European Business Review.

]]>

By Jacques Bughin

“In an avalanche, no snowflake ever felt accountable,”
– Voltaire

What makes digital business ecosystems successful? Beyond a common evolutionary vision, we discuss the factors that successful ecosystems have working for them.

The rise of business ecosystems 

What do Apple, Amazon, Alibaba, Netflix, Uber, TradeLens, and SAP have in common? In the contemporary digital economy, all are platform-based. In turn, platforms are the tip of the iceberg of a new market organization, as the main orchestrators of so-called business ecosystems, in which organizations interact co-dependently to increase, share, and maintain value.

Ecosystems can facilitate collaboration and knowledge sharing, accelerating innovation.

In order to thrive, ecosystems have a shared evolutionary vision, and each player contributes to the growth and robustness of the network. To paraphrase Moore, strong ecosystems are dynamic systems of diverse interactions, arrangements, and partnerships between the leading center (the orchestrator or platform), the network, and peripheral players, built to exploit complementarities. Through its strategic vision and ecosystemic relational skills, the orchestrator firm is key, as it builds and governs ‘the most collaborative path possible around a strategic intention that allows the greatest number of people to adhere to the project’. In that perspective, the orchestrator aims to attract complements in order to grow, while constituents are there to maximize the benefits of participation through critical mass and network effects. Other players (hub landlords or niche players) control localized nodes, and create the mass and diversity needed to secure network effects.

Digital changes everything 

Historically, ecosystems have been limited in number, but they have been observed in multiple fields, such as ATM and payment services, telecoms, and aircraft manufacturing (Boeing and Airbus in the early 2000s). However, with the rapid evolution of digital and AI technologies, ecosystems have begun to “rule the world”. They have become increasingly important for several reasons:

    1. Technology complexity: Developing and deploying tech solutions is complex and often requires specialized expertise and resources. Ecosystems can provide the necessary infrastructure, tools, and talent.
    2. Innovation speed: Digital technologies (think AI) are evolving rapidly, and businesses must be able to adapt quickly to new technologies and market trends. Ecosystems can facilitate collaboration and knowledge sharing, accelerating innovation.
    3. Scale and scope: Ecosystems can provide businesses with access to a wider range of customers and markets, enabling them to scale their operations and grow their revenue. In the context of digital technologies, marginal economics are often close to zero, making scale a concentration game.

In addition, digital architecture and the virtualization of practices have boosted the opportunities for large network effects, and hence ecosystems, namely: 

    1. Modularity: A classic example of the power of modularity was when PCs were built from standardized modules and semiconductor chips could be designed by one firm and produced by another. Companies that designed chips contributed to a substantial increase in semiconductor patenting and have since ushered in new technologies and new client industries, building large ecosystems around connected cars, wearables, implants, cloud computing, and Industry 4.0.
    2. Data and digital transformation: The increasing volume of data is transforming business practices and supports network economics and interconnected-relationship business models.
    3. New appropriability mechanisms: Mechanisms such as patents/licences and SaaS models have come to life and sustain players' interest in remaining in the ecosystem.

A look at ecosystem performance 

Before the digital age, business ecosystems were considered a strange but powerful idea, especially if they could impose standardization. For instance, IBM, HP, and Seagate created a business ecosystem for a new open format of linear tape technology in 2000 by using standardization, and succeeded in expanding the market for Linear Tape-Open (LTO) drives. Media manufacturers such as Fujifilm, Sony, Hitachi, Maxell, and others manufactured and sold their LTO media as complementary business ecosystem members. The LTO format's share increased from 12% in 2001 to 77% in 2008 in the backup market for midrange and low-end servers.

In digital, ecosystems have also become king. The Apple App Store, which launched in 2008 with 500 apps, now hosts more than 2 million apps, a compound growth rate of more than 20% a year, for an ecosystem generating $1 trillion of yearly revenue, bigger than the GDP of the Netherlands.

Digging deeper into the evidence on how business ecosystems drive corporate performance, ecosystems are clearly influential in two areas. First, innovation acceleration: 50% faster throughput, and 100% faster when it concerns radical innovation. Second, market access and extended customer reach: through partnerships and collaborations, companies can tap into new markets, customer segments, and distribution channels, driving business expansion. The typical effect is large, in the range of 20% to 30% of sales. In a study of the enterprise sales ecosystem around SAP, small software players enjoyed a 26 percent increase in sales after they became SAP certified. A study by this author has shown that platform ecosystem play may boost participants' sales and profit growth by as much as many successful private R&D programs do.

Dependency on specific ecosystem partners for critical resources or capabilities may make a firm vulnerable to disruptions if those partners face challenges or exit the ecosystem.

Yet, there is never a free lunch in business. Against business ecosystems come along dependency risks. For instance, overreliance on an ecosystem can pose risks to firms. Dependency on specific ecosystem partners for critical resources or capabilities may make a firm vulnerable to disruptions if those partners face challenges or exit the ecosystem. Also, sharing knowledge and intellectual property within an ecosystem may lead to concerns about the protection of proprietary information. Finally, ecosystems also can be highly competitive, making it difficult for businesses to differentiate themselves.  

More crucially, recent studies have demonstrated that barely 1 out of 7 ecosystem plays remains alive and flourishing after 15 years. Worse, from Symbian to eBay, ecosystems may collapse when unsuccessful.

So what should you do?  

To play or not to play in digital ecosystems? 

To answer the question, the recent 15th G20 Y summit at Évian-les-Bains, in which this author is a leading member of the leadership group, welcomed business leaders from Tokyo, Kuala Lumpur, and Riyadh to Johannesburg, Chicago, and Los Angeles, to discuss how best to play in business ecosystems where network effects have become the management foundation of the 21st-century organization.

To tilt toward success, and not collapse, in business ecosystems, we have synthesized a checklist of 10 key success factors to monitor:

    1. User value creation. Users ultimately define value creation in ecosystems: technology can help firms go direct to customers, while data and AI can build prediction and personalization. Ecosystems that thrive are obsessed with end users, luring them into becoming prosumers, as in games and video ecosystems such as YouTube or Netflix.
    2. Role play. Don't be obsessed with being the orchestrator: the main platform tends to be the one taking that role, given its end-user reach “without mass”. Being a node in an ecosystem can be just as rewarding if you specialize, without the obligations and the integration and maintenance requirements of the orchestrator.
    3. Coopetition mindset. The traditional principles of strategy are rooted in concepts of competition (winning at the expense of others, etc.). In ecosystems, codependence and value sharing are as important as competition.
    4. Beyond the industry concept. In ecosystems, industry boundaries no longer hold: most digital platforms blur payments, products, and services together. Uber leverages a mobile network, payment applications, and location services, and offers end-to-end delivery of mobility services, including transport but also food.
    5. Creative versus operational capabilities. Ecosystems can be transactional, but also innovative. In an ecosystem, creativity, risk-taking, agility, and exploration are core capabilities.
    6. Governance is king. A large portion of ecosystems fail because of weak governance design. In particular, governance rules must, among other things, ensure that they:
      a) Limit free-riding in favor of fair contribution
      b) Favor open rather than closed forms and vertical integration
      c) Favor extensive “flywheels” to densify network effects
      d) Support transparency about the benefits and duties of members in the ecosystem, in order to build trust
      e) Develop mechanisms of conflict resolution
    7. Control versus market. Control is a typical reflex as a way to ensure rents and participation, but in most digital business ecosystems, players often find that excess demand prevails in markets for assets such as skills, data, or models. In that case, sourcing through the network may be more agile, and the capability to integrate such market access may create more participatory value than owning the assets. Most platforms are actually asset-light, “without mass”.
    8. Participation, but with a hedge. The risk of ecosystem collapse invites caution. Firms should consider multihoming, both because it provides a credible threat to leave the ecosystem if it does not grow, and because multihoming allows niche players to maximize their reach.
    9. Ecosystem, not self-efficiency. Nature shows how ecosystems maximize efficiency. For instance, drafting allows geese to fly about 70 percent farther than they could on their own. Each goose takes the lead slot for about the same amount of time, and then spends the rest of its journey drafting behind other birds.
    10. Network economics. In ecosystems, network economics plays a crucial role. Two points should be analyzed to understand the minimal economics of an ecosystem (see the sketch after this section): first, check the strength of weak ties; second, understand the players at the edge. In a network, the key is not the direct links but the relays of those direct links, the indirect “weak” links. Players must develop deep interdependence and dense links to support the viability of the network and its stability and robustness. In general, companies only look at players in their direct vicinity; the reflex should, however, be to look at higher-order interdependence.

Finally, the edge defines the structure of the network and the economics of belonging or not. By looking at edge players and how their performance is supported by the ecosystem, companies can get a very good hint of the ecosystem's minimal attractiveness.
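As a toy illustration of points 10 and the edge argument above, a small graph sketch can contrast direct links with two-hop reach (the “weak” ties) and pick out the peripheral players. The edge list below is invented for illustration and uses the networkx package.

```python
# Minimal sketch: probing weak ties and higher-order interdependence in an
# ecosystem graph. The toy edge list is illustrative only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("orchestrator", "supplier_a"), ("orchestrator", "supplier_b"),
    ("orchestrator", "dev_platform"), ("dev_platform", "niche_app_1"),
    ("dev_platform", "niche_app_2"), ("niche_app_2", "edge_partner"),
])

for node in G.nodes:
    direct = G.degree[node]
    # second-order reach: all nodes within two hops, minus the node itself
    within_two = len(nx.single_source_shortest_path_length(G, node, cutoff=2)) - 1
    print(f"{node:>13}: direct links={direct}, reach within 2 hops={within_two}")

# Peripheral ("edge") players are the degree-1 nodes; their performance is a
# cheap signal of whether belonging to the ecosystem pays off at the margin.
edge_players = [n for n, d in G.degree() if d == 1]
print("edge players:", edge_players)
```

Nodes with modest direct degree but large two-hop reach are exactly the weak-tie relays the checklist warns companies not to overlook.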

About the Author 

Jacques Bughin

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC /PE firms, and serves on the board of several companies. 

References
1. Moore, J. F. (1993). “Predators and Prey: A New Ecology of Competition”. Harvard Business Review, May-June 1993. https://hbr.org/1993/05/predators-and-prey-a-new-ecology-of-competition.
2. Jose Jr, L. A., Brintrup, A., & Salonitis, K. (2020). “Analysing the evolution of aerospace ecosystem development”. PLOS ONE, 15(4), e0231985.
3. Kuan, J., & West, J. (2023). “Interfaces, modularity and ecosystem emergence: How DARPA modularized the semiconductor ecosystem”. Research Policy.
4. Libert, B., Beck, M., & Wind, J. (2016). The Network Imperative: How to Survive and Grow in the Age of Digital Business Models. Harvard Business Review Press.
5. Awano, H., & Tsujimoto, M. (2021). “The mechanisms for business ecosystem members to capture part of a business ecosystem's joint created value”. Sustainability, 13(8), 4573.
6. Guzman, J., Murray, F., Stern, S., & Williams, H. (2024). “Accelerating innovation ecosystems: The promise and challenges of regional innovation engines”. Entrepreneurship and Innovation Policy and the Economy, 3(1), 9-75.
7. Ceccagnoli, M., et al. (2012). “Cocreation of value in a platform ecosystem: The case of enterprise software”. MIS Quarterly, 263-290.
8. Jacobides, M. G., Cennamo, C., & Gawer, A. (2018). “Towards a theory of ecosystems”. Strategic Management Journal, 39(8), 2255-2276.
9. “Why Do Most Business Ecosystems Fail?” June 22, 2020. BCG. https://www.bcg.com/publications/2020/why-do-most-business-ecosystems-fail.

The post Symbian or Uber? Thriving or Collapsing in Digital Business Ecosystems appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/symbian-or-uber-thriving-or-collapsing-in-digital-business-ecosystems/feed/ 0
Sorting Out the AI Gold Rush? Ten Opportunities in the Generative AI Value Chain https://www.europeanbusinessreview.com/sorting-out-the-ai-gold-rush-ten-opportunities-in-the-generative-ai-value-chain-2/ https://www.europeanbusinessreview.com/sorting-out-the-ai-gold-rush-ten-opportunities-in-the-generative-ai-value-chain-2/#respond Wed, 11 Sep 2024 08:08:13 +0000 https://www.europeanbusinessreview.com/?p=212067 By Jacques Bughin and Duco Sickinghe As the AI boom continues to accelerate, investors are keenly eyeing the generative AI sector for potential breakthroughs. Jacques Bughin and Duco Sickinghe explore […]

The post Sorting Out the AI Gold Rush? Ten Opportunities in the Generative AI Value Chain appeared first on The European Business Review.

]]>

By Jacques Bughin and Duco Sickinghe

As the AI boom continues to accelerate, investors are keenly eyeing the generative AI sector for potential breakthroughs. Jacques Bughin and Duco Sickinghe explore ten key opportunities within the AI value chain that promise to address user challenges and unlock significant market potential.

Since our article of March 2022 (“AI Inside? Five Tipping Points for a New AI-based Business World”),1 which warned about the big boom in AI, artificial intelligence has indeed become red hot, with the birth of many large language models (LLMs), the launch of Apple Intelligence,2 and the booming demand for GPUs.

In this mania, investors are not only pouring money into the public AI companies (Nvidia is one of the most valuable public companies, exceeding $3 trillion in market value),3 they are also fighting to invest in the new private AI darlings, with private-equity-backed investment multiplied tenfold in one year.4 But the key, as a wise investor, is not to follow the herd as in a gold rush, but to anticipate the most important future opportunities in the AI value chain. The best opportunities are those that ultimately solve user issues and create a major buy-side market. Here are 10 examples.

Ten Investment Opportunities


1. Low-bit quantization

Why? The high computational and memory requirements of modern LLMs are costing generative AI users a lot, consuming large amounts of energy and generating significant pollution.

To give a sense of this, training a single 200 billion parameter LLM on AWS p4d instances consumes more energy than a thousand households for a year.5 In some dense areas in the US, datacenters consume 20 per cent of the grid power, endangering its reliability. Based on current GPU usage, the associated gigatons of CO2 emissions, according to figures laid out by the ACM, could well be the equivalent of 5 billion US cross-country flights.6

Any player that optimizes sustainable energy in the context of AI will be a big winner.

The opportunity: If these numbers are correct, any player that optimizes sustainable energy in the context of AI will be a big winner, as energy consumption will also be closely watched by sensitive users and regulators, who are already pushing for much better optimization and transparency in this area (cf. the EU AI Act).7 Further, such a reduction in energy consumption might open the low end of the enterprise users market, easily doubling the demand for generative AI.

Examples: Low-bit quantization corresponds to the idea that fixed LLM weighting parameters could be optimized instead of being kept at their current “standard” of 32/16 bits in traditional LLMs. Players in low-bit quantization include big names such as Nvidia, with its TensorRT and cuDNN software, and Google TensorFlow Lite, which offers support for quantization-aware training and post-training low-bit quantization. More recently, Microsoft has unveiled its BitNet b1.58 (alluding to a 1.58-bit LLM where every single weight is ternary {-1, 0, 1}, yet able to match full 16-bit precision), demonstrating strong gains in energy consumption, down by as much as 70 per cent compared with traditional LLM models.
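For intuition, here is a hedged numpy sketch of the two ideas just described: standard symmetric int8 quantization and a BitNet-style ternary scheme using an absolute-mean scale. It is a toy illustration of the arithmetic only, not any vendor's kernel or Microsoft's implementation, and the weight matrix is random.

```python
# Toy post-training quantization of a fake weight matrix.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.02, size=(256, 256)).astype(np.float32)  # stand-in weights

# --- symmetric int8 quantization ---
scale8 = np.abs(W).max() / 127.0
W_int8 = np.clip(np.round(W / scale8), -127, 127).astype(np.int8)
W_deq8 = W_int8.astype(np.float32) * scale8

# --- ternary ("1.58-bit") quantization with an absmean scale ---
gamma = np.abs(W).mean()
W_tern = np.clip(np.round(W / (gamma + 1e-8)), -1, 1).astype(np.int8)
W_deqt = W_tern.astype(np.float32) * gamma

for name, W_hat in [("int8", W_deq8), ("ternary", W_deqt)]:
    err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
    print(f"{name:>7}: relative reconstruction error = {err:.3f}")
```

The trade-off the section describes is visible here in miniature: fewer bits mean a smaller memory and energy footprint at the cost of higher reconstruction error, which quantization-aware training then works to claw back.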

The world of startups is also pushing ahead, with low-bit quantization startups including DeepAI, OctoML, and SambaNova Systems, which has established a strong moat through its innovative Reconfigurable Dataflow Architecture, and an integrated hardware-software solutions platform for low-bit quantisation to deliver high-performance AI applications.

2. Liquid neural networks

Why? Artificial neural networks (ANNs) have been driving the LLM revolution because of their parallel processing capabilities and their ability to model complex relationships, even with unstructured data. The drawback of ANNs is that they require large amounts of data and computing power, exhibit low explainability, and are still locked in a race toward bigger and larger models.

The issue with ANNs is thus a mix of energy cost (see above), a race toward concentration whereby LLMs become a key bottleneck resource, and poor reach, as LLMs cannot be incorporated into thin client layers.

The opportunity: Enter liquid neural networks (LNN), a novel type of neural network architecture inspired by neuroscience, which rely on dynamic connections and weights between neurons.

While LNN technology is potentially revolutionary according to its MIT proponents,8 its value lies in shifting the existing “big is better” paradigm to smaller models, where LNNs have the potential to replicate fixed ANN performance with at least a 1,000-times-smaller number of weights. For instance, drones can be guided by a small 20,000-parameter LNN model9 that navigates previously unseen environments better than other neural networks with millions of neurons.

Two other features of liquid neural networks are their ability to adapt in real time to new and evolving data without requiring extensive retraining, and their ability to perform continuous learning. This means they can integrate new information on the fly, which is particularly useful in real-world applications where data patterns can change rapidly, such as automotive (autonomous driving), industrial robotics (logistics), healthcare (real-time monitoring of patients), and customer service (chat interactions).
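The core mechanism can be sketched in a few lines. Below is a simplified Euler-integrated update in the spirit of liquid time-constant networks, where the effective time constant of each neuron depends on the current input; the sizes, random weights, and the exact equation form are illustrative assumptions rather than the MIT implementation.

```python
# Toy liquid-time-constant-style cell: input-dependent dynamics.
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_in = 8, 3
W = rng.normal(0, 0.5, (n_hidden, n_hidden))   # recurrent weights (assumed init)
U = rng.normal(0, 0.5, (n_hidden, n_in))       # input weights (assumed init)
b = np.zeros(n_hidden)
tau = np.ones(n_hidden)                        # base time constants
A = np.ones(n_hidden)                          # equilibrium term

def ltc_step(x, u, dt=0.05):
    """One Euler step of dx/dt = -x/tau + f(Wx + Uu + b) * (A - x)."""
    f = np.tanh(W @ x + U @ u + b)             # input-dependent gate
    dx = -x / tau + f * (A - x)                # state-dependent time constant
    return x + dt * dx

x = np.zeros(n_hidden)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])  # toy input stream
    x = ltc_step(x, u)
print("final hidden state:", np.round(x, 3))
```

Because the gate `f` multiplies the state's decay, the dynamics themselves change with the input stream, which is the property behind the continuous, retraining-free adaptation described above.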

Examples: Startup companies such as Liquid AI10 and Vicarious11 are already experimenting with liquid neural networks. The latter aims to create more adaptable and efficient robotic control systems.

3. Quantum computing

Why? Many problems remain too complex to resolve with the current state of generative AI. In parallel, generative AI has boosted a major cybersecurity risk,12 resulting in a majority of enterprises resisting leveraging generative AI.

Meanwhile, superconducting technologies are now reaching over 120 qubits with IBM’s latest Eagle processor.

The opportunity: The promise of quantum computing lies in its combination with generative AI to solve problems previously thought to be intractable. The combination has the potential to solve a wide range of optimization problems, from logistics to financial portfolios, with unprecedented efficiency.

Quantum computing could not only break classical encryption methods, but also help develop new, quantum-resistant cryptographic algorithms. In addition, quantum computing can be used to improve the efficiency and effectiveness of cybersecurity solutions.

Examples: Rigetti Computing is an example of a company exploring quantum computing applications in drug discovery. A competitor, Pasqal, is also redefining energy-efficiency standards in quantum computing through neutral-atom technology.

4. Knowledge distillation


Why? Besides the energy issue, the size of LLMs makes companies desperate for GPUs. In this evolution, Nvidia is definitely the big winner, with massive excess demand waiting in the wings. In the long run, however, this slows down the evolution of the market and makes the cost of generative AI too high for a large number of companies.

Opportunities: These lie in optimizing training and usage to reduce the number of GPUs. As we said earlier, liquid neural networks are a major breakthrough if they deliver on their promise, as training will be much easier and models will be much less expensive.

Meanwhile, in the short term, knowledge distillation is a technique that makes ANNs run much more efficiently through model shrinking. First introduced by Geoffrey Hinton in 2015,13 the technique transfers knowledge from a teacher LLM to a much lighter student model. During the learning process, a complex neural network is taught to generate meaningful and helpful representations of data. The distillation process is based on these thorough representations of data, or knowledge, stored by the teacher network in its hidden layers.
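A minimal sketch of the Hinton-style distillation objective, assuming PyTorch and stand-in linear layers in place of a real teacher/student pair: the student is trained to match the teacher's temperature-softened output distribution as well as the hard labels.

```python
# Toy knowledge-distillation loss: soft targets from the teacher + hard labels.
import torch
import torch.nn.functional as F

T = 4.0       # temperature: softens logits so small probabilities are visible
alpha = 0.7   # weight on the distillation term (assumed)

teacher = torch.nn.Linear(32, 10)   # stand-in for the large teacher model
student = torch.nn.Linear(32, 10)   # stand-in for the compact student

x = torch.randn(16, 32)             # a toy batch
labels = torch.randint(0, 10, (16,))

with torch.no_grad():
    t_logits = teacher(x)           # teacher is frozen during distillation
s_logits = student(x)

# KL between softened distributions; the T**2 factor keeps the gradient scale
# comparable across temperatures, as in the original paper.
kd = F.kl_div(
    F.log_softmax(s_logits / T, dim=-1),
    F.softmax(t_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)
ce = F.cross_entropy(s_logits, labels)
loss = alpha * kd + (1 - alpha) * ce
loss.backward()                     # gradients flow only into the student
print(f"distillation loss: {loss.item():.3f}")
```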

Example: DistilBERT is a lightweight, compressed counterpart of the larger BERT language model, at 60 per cent of the size of the original BERT model while retaining 97 per cent of its performance and running 60 per cent faster.

5. Synthetic data

Why? There are many problems with data: not only are data increasingly scarce for AI training, but data quality can also be questionable.

The opportunity: Synthetic data. Synthetically trained models are becoming quite powerful and can even outperform models trained on real data.14 Successful applications are emerging in financial services, healthcare, and retail. In addition, the advantage of synthetic data is its privacy clearance.

Examples:  We have already reported on the Nvidia simulator application15 in its industrial metaverse that successfully leverages synthetic data to train robots. In general, synthetic data can also be used to rebalance samples when the required prediction concerns rare events such as financial fraud or manufacturing defects.
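As a hedged illustration of that rebalancing idea, the sketch below generates SMOTE-style synthetic minority samples by interpolating between a rare-event observation and one of its nearest same-class neighbours; the “fraud” points are random toy data, not a real dataset.

```python
# Toy SMOTE-style oversampling of a rare class (e.g., fraud events).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
fraud = rng.normal(loc=3.0, scale=0.5, size=(20, 4))   # rare class (toy data)

nn = NearestNeighbors(n_neighbors=4).fit(fraud)
_, idx = nn.kneighbors(fraud)                          # idx[:, 0] is the point itself

synthetic = []
for _ in range(200):                                   # generate 200 synthetic frauds
    i = rng.integers(len(fraud))
    j = rng.choice(idx[i, 1:])                         # a random true neighbour
    lam = rng.random()
    synthetic.append(fraud[i] + lam * (fraud[j] - fraud[i]))
synthetic = np.array(synthetic)
print("synthetic minority samples:", synthetic.shape)  # (200, 4)
```

Interpolated points stay inside the minority class's region of feature space, which is why this simple trick helps classifiers see enough rare events to learn from them.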

Startups that rely on the power of synthetic data include SBX Robotics,16 whose synthetic data, generated 10 times faster and more cheaply than via annotation services, teach robots to see; and MOSTLY AI, a leading synthetic data platform whose proprietary GenAI model architecture produces highly accurate synthetic data and enables sophisticated AI/ML use cases, including multivariate time-series data and relational databases.


6. AI agents

Why? In our current work environment, there has been a lot of talk about how AI can steal our work.17 In a typical AI model, tasks are well described, and automation happens when the value productivity of humans is lower than that of the AI performing the task. However, by the end of 2023, the most capable generative AI could learn many other skills through the process of next-token prediction – for example, translation between languages, math and reasoning skills, and much more.

But the most interesting capability is the ability of LLMs to use software tools. ChatGPT, for example, can now browse the web, use a code interpreter plugin to run code, or perform other actions enabled by a developer.18

Opportunities: In this new environment, AI agents represent a leap from traditional automation in that they are designed to think, adapt, and act independently, rather than simply follow a set of instructions. The assertion is that agentic AI systems could dramatically increase users’ abilities to get more done in their lives with less effort,19 and with significantly better effectiveness, especially in complex dynamic tasks.

The second opportunity coming from AI agents is real-time data. We are creating more data every year than has been accumulated in the past. One can indeed argue that the collection of data by AI agents will sit at the center of the task workflows of enterprise processes.
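To make the loop concrete, here is a deliberately minimal single-agent sketch. `call_llm` and the tool registry are placeholders invented for illustration; a real system would sit on a provider's function-calling API and add guardrails around each action.

```python
# Minimal tool-use agent loop: decide, act, observe, repeat.
import json

def call_llm(messages):
    """Placeholder for a model call. Returns either a tool request or a final
    answer; hard-coded here so the sketch runs without any vendor API."""
    if any(m["role"] == "tool" for m in messages):
        return {"answer": "It will rain in Brussels: pack an umbrella."}
    return {"tool": "get_weather", "args": {"city": "Brussels"}}

TOOLS = {  # the action surface the agent is allowed to touch (assumed)
    "get_weather": lambda city: {"city": city, "forecast": "rain"},
}

messages = [{"role": "user", "content": "What should I pack for Brussels?"}]
for _ in range(5):                   # hard cap on autonomous steps
    decision = call_llm(messages)
    if "answer" in decision:         # the agent decides it is done
        print(decision["answer"])
        break
    result = TOOLS[decision["tool"]](**decision["args"])  # act on the world
    messages.append({"role": "tool", "content": json.dumps(result)})
```

The step cap and the explicit tool whitelist are the two simplest governance levers; production frameworks such as the one cited next layer far richer orchestration on the same basic loop.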

The rush to adopt AI should not be reckless; the use of AI technologies leads to significant risks.

Examples: Microsoft’s Project AutoGen20 demonstrates a multi-agent framework that simplifies building workflows and applications with LLMs. It features specialized agents that can be configured with different LLMs and enables seamless human interaction. Another prime case of the value of AI agents relates to traffic management: AI agents analyze real-time traffic data from numerous sources, including street cameras, in-car GPS, and social media, to dynamically optimize traffic flow. In Hangzhou, China, the system built by Alibaba has reduced traffic congestion by 15 per cent and sped up emergency response times by 49 per cent.21

7. Responsible AI


Why? The rush to adopt AI should not be reckless. In particular, the use of AI technologies leads to significant risks due to inaccurate data, privacy and copyright violations, algorithmic bias, or even fake and harmful content. Moreover, agents have specifically been shown to be less robust, prone to more harmful behaviors,22 and capable of generating stealthier content than LLMs, highlighting significant safety challenges. Finally, Europe is clearly moving ahead by passing its EU AI Act, which is rather stringent regarding trustworthy AI compliance.

The risks attached to generative AI are such that the largest bottleneck to date in enterprises adopting generative AI is misbehaving models and agents,23 on top of cybersecurity risks.

Opportunities: The EU AI Act will continue to require companies to comply with a large number of responsible practices, for which many of them are not ready. In terms of metrics, the auditing of practices is a significant regtech opportunity, as it is in other regulatory push areas, such as sustainability.

In addition, the intersection of generative AI and regtech presents significant opportunities to improve regulatory compliance processes. The ability to automate, analyze, and predict through AI provides significant value in terms of cost savings, accuracy, and efficiency.
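As a toy flavour of what such regtech automation does at its simplest, the sketch below screens messages against illustrative risk patterns; production systems such as those mentioned next rely on trained language models rather than hand-written rules, and the phrases here are invented.

```python
# Toy compliance screen: flag messages that match illustrative risk patterns.
import re

RISK_PATTERNS = {  # illustrative phrases only, not a real lexicon
    "market_abuse": re.compile(r"\b(front[- ]?run|inside(r)? info)\b", re.I),
    "secrecy":      re.compile(r"\b(delete this|off the record|burner)\b", re.I),
}

def screen(message: str) -> list[str]:
    """Return the list of risk categories a message trips."""
    return [name for name, pat in RISK_PATTERNS.items() if pat.search(message)]

for msg in ["Let's keep this off the record.", "Lunch at noon?"]:
    flags = screen(msg)
    print(f"{msg!r} -> {flags or 'clean'}")
```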

Examples: Key startups in this space, such as Behavox, are pioneering generative AI solutions to help organizations monitor and manage compliance risks. Their flagship product, Behavox Quantum AI, includes monitoring of text and voice communications, reduction of alert volumes and false positives, and high recall rates for detecting compliance violations. In a recent test, ChatGPT detected less than 20 per cent of intentionally planted phrases, compared with more than 80 per cent24 for Behavox Quantum AI.

8. ML ops

Why? The majority of generative AI bottlenecks inside firms adopting the technology occur at the transition from prototype to production. In fact, research25 has found that 87 per cent of AI projects never make it into production.

Opportunities: MLOps (machine learning operations) is essential for managing the complex life cycle of generative AI models, and hence delivers multiple advantages when scaling. By standardizing workflows and automating repetitive tasks, MLOps ensures consistent performance and reliability of models at a time of strong compliance requirements linked to generative AI.

MLOps tools provide robust capabilities for monitoring model performance, detecting drifts, and triggering alerts for anomalies. This ensures that generative AI models remain accurate and effective over time. Finally, MLOps fosters better collaboration between data scientists, developers, and operations teams by providing a unified framework and tools for managing the entire machine learning life cycle.
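A minimal sketch of one such monitoring check, assuming scipy: a two-sample Kolmogorov-Smirnov test compares a feature's training-time distribution with live traffic and raises an alert when they diverge. The data and the alert threshold are illustrative assumptions.

```python
# Toy drift detector: compare a feature's training vs. live distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
train_feature = rng.normal(0.0, 1.0, 5_000)   # distribution at training time
live_feature = rng.normal(0.4, 1.2, 1_000)    # production data has drifted

stat, p_value = ks_2samp(train_feature, live_feature)
ALERT_P = 0.01                                # alert threshold (assumed policy)
if p_value < ALERT_P:
    print(f"DRIFT ALERT: KS={stat:.3f}, p={p_value:.2e} -> review/retrain")
else:
    print("no significant drift detected")
```

In an MLOps pipeline, a check like this runs on a schedule per feature and per model output, feeding the alerting and retraining workflows the paragraph above describes.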

Examples: Robust Intelligence is an automated intelligence platform that integrates with ML models to “stress test” AI models prior to deployment, detect vulnerabilities, highlight erroneous data, and identify data issues that could compromise ML integrity.

Comet’s machine learning platform integrates with your existing infrastructure and tools, allowing you to manage, visualize, and optimize models – from training runs to production monitoring.

9. Cybersecurity

While generative AI is a problem, it is also a powerful solution for cybersecurity development.

Why? From former star Cisco buying Splunk to SentinelOne buying Attivo, the M&A race in cybersecurity is a symptom of the rush to provide a global platform for all cybersecurity issues. These issues have grown exponentially in the last few years, and they are experiencing a major uptick through LLMs. Today this mostly takes the form of email phishing attacks, but with the rise of multimodal LLMs, the threats will only expand and diversify. With quantum AI, cryptographic elements may also be at risk.

Opportunities: While generative AI is a problem, it is also a powerful solution for cybersecurity development. Some specific opportunities in cybersecurity software and management that leverage generative AI are: a) automated threat detection and response (AI models can generate threat signatures and response strategies in real time, enabling faster and more effective incident response, while AI-powered tools can automatically filter out phishing emails and generate alerts for suspicious messages, reducing the risk of successful phishing attacks); b) vulnerability management (AI models can generate reports on potential vulnerabilities and suggest patches or mitigations, helping organizations to proactively address security weaknesses); c) malware analysis and generation (AI can assist in creating honeypots and decoys that mimic vulnerabilities, attracting and analyzing malware to better understand and mitigate threats).
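For flavour, here is a hedged toy of point (a): an IsolationForest flags network flows whose feature profile deviates from a learned baseline. The feature values are invented, and real deployments fuse far richer signals than four columns.

```python
# Toy automated threat detection: anomaly scoring of network flows.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# columns: bytes sent, bytes received, session duration (s), failed logins
normal = rng.normal([5e4, 8e4, 120, 0.1], [1e4, 2e4, 40, 0.3], size=(500, 4))
attack = np.array([[9e5, 1e3, 4, 12.0]])     # exfiltration-like outlier (toy)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print("normal flow :", model.predict(normal[:1]))   # 1 = inlier
print("suspect flow:", model.predict(attack))       # -1 = anomaly
```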

Examples: A number of startups, from Jericho Security onwards, are looking at better security, from governance to security solutions such as AI firewalls and threat detection and response tools that spot malicious behavior targeting generative AI models. Vectra AI, for example, uses AI to identify and stop cyberattacks by analyzing network traffic and detecting malicious behavior.

While there are many companies in the space, opportunities remain, especially for more real-time risk detection and more sophisticated detection models.


10. Thin-client generative AI

Why? By 2024, barely 10 per cent of smartphones are GenAI-capable.26 But, as in the first wave of the internet, when the market truly exploded once access migrated to mobile and became everyone's reference access point, the same dynamic is now in play. The battle is on,27 with Samsung taking the lead in the GenAI smartphone market and Apple recently entering the artificial intelligence space.

Opportunities: Generative AI is not yet widespread on thin clients, due to significant computational, storage, and latency challenges, on top of the complexity of mobile network infrastructure interfacing with other IP networks. Finally, the user experience on mobile is tied to the multimodal experience.

Regarding mobile networks, many standardization activities already exist for AI functionality in the RAN and core parts of mobile networks.28 Yet the user experience may require the roll-out of 5G or beyond. The opportunity will thus manifest itself as a mix of the model and architecture optimizations touched upon earlier, such as low-bit quantization and LNNs, on top of advances in edge computing, for more feasible deployment of generative AI on mobile and other lightweight devices. Opportunities are not only consumer-based, but also enterprise-based. Digital twinning29 for training and mobility is, for instance, a major opportunity under inspection by many players in the mobile ecosystem.
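The arithmetic behind that optimization push is easy to sketch. The back-of-envelope below shows why bit width largely decides whether a model fits on a thin client at all; the parameter counts and the device memory budget are assumptions for illustration.

```python
# Back-of-envelope: model weight footprint vs. a phone's memory budget.
PARAMS = {"7B assistant": 7e9, "1B on-device model": 1e9}  # assumed sizes
BITS = [16, 8, 4, 1.58]          # fp16, int8, int4, BitNet-style ternary
DEVICE_BUDGET_GB = 4             # plausible RAM slice on a phone (assumed)

for name, n in PARAMS.items():
    for bits in BITS:
        gb = n * bits / 8 / 1e9  # weights only; ignores KV cache and runtime
        fits = "fits" if gb <= DEVICE_BUDGET_GB else "too big"
        print(f"{name:>18} @ {bits:>5}-bit: {gb:5.2f} GB -> {fits}")
```

A 7-billion-parameter model at 16-bit weights needs roughly 14 GB, hopeless on a phone, while the same model at 4-bit or ternary precision drops under the assumed budget, which is exactly where the quantization and edge-computing opportunities above converge.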

Examples: Chipset manufacturers like Arm and Qualcomm have already made significant strides in launching thin-client GenAI-powered chipsets. Startups are also making inroads in this large market: Syntiant, for example, develops ultra-low-power AI chips for edge devices that use advanced quantization techniques to enable efficient AI processing in power-constrained environments.

Conclusions


The world of generative AI is only just opening up. It is still complex and faces major hurdles to deployment. Still, these hurdles imply a major opportunity: fixing them can scale the market significantly. Investors should pay close attention to how solutions will impact the market, and whether solution providers offer exposure to a large total addressable market (TAM) in the making.

About the Authors

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC / PE firms, and serves on the board of several companies.

Duco Sickinghe founded Fortino Capital in December 2013 and has overseen Fortino’s growth to a recognized technology VC firm. Before Fortino, Duco was CEO of Telenet, INED of CME, and is currently Chairman at KPN. Other positions that Duco held are General Manager at Wolters Kluwer, founder of Software Direct, Product & Channel Manager at HP, and VP Marketing & General Management at NeXT Computer, where he was a contemporary of Steve Jobs. He holds a degree in Civil and Commercial Law and obtained an MBA from Columbia Business School.

References

1. AI Inside? Five Tipping Points for a New AI-based Business World. March 2, 2022. The European Business Review. https://www.europeanbusinessreview.com/ai-inside-five-tipping-points-for-a-new-ai-based-business-world/.

2. Introducing Apple Intelligence, the personal intelligence system that puts powerful generative models at the core of iPhone, iPad, and Mac. June 10, 2024. Apple. https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/.

3. Everybody Loves … Nvidia. March 19, 2024. The European Business Review. https://www.europeanbusinessreview.com/everybody-loves-nvidia/.

4. Private equity-backed investment surge in generative AI defies 2023 deal slump. March 1, 2024. S&P Global. https://www.spglobal.com/marketintelligence/en/news-insights/latest-news-headlines/private-equity-backed-investment-surge-in-generative-ai-defies-2023-deal-slump-80625128.

5. Measuring GPU Energy: Best Practices. July 24, 2023. ML.ENERGY Blog. https://ml.energy/blog/energy/measurement/measuring-gpu-energy-best-practices/.

6. GenAI: Giga$$$, TeraWatt-Hours, and GigaTons of CO2. ACM Digital Library. August 2023. https://dl.acm.org/doi/pdf/10.1145/3606254.

7. The EU Coordinated Plan on AI: Expectations for the Energy Transition. August 12, 2022. EEIP. https://ee-ip.org/en/article/the-eu-coordinated-plan-on-ai-expectations-for-the-energy-transition-1-5799.

8. “Liquid” machine-learning system adapts to changing conditions. January 28, 2021. MIT CSAIL. https://www.csail.mit.edu/news/liquid-machine-learning-system-adapts-changing-conditions.

9. Robust flight navigation out of distribution with liquid neural networks. April 19, 2023. Science Robotics. https://www.science.org/doi/10.1126/scirobotics.adc8892.

10. “Liquid” machine-learning system adapts to changing conditions. January 28, 2021. MIT News. https://news.mit.edu/2021/machine-learning-adapts-0128.

11. MIT Technology Review. https://www.technologyreview.com/video/604683/ais-next-leap-forward-artificial-intelligence-at-work/.

12. Cybersecurity in the age of generative AI. September 10, 2023. McKinsey & Company. https://www.mckinsey.com/featured-insights/themes/cybersecurity-in-the-age-of-generative-ai.

13. Distilling the Knowledge in a Neural Network. March 9, 2015. arXiv. https://arxiv.org/abs/1503.02531.

14. James, S., Harbron, C., Branson, J., & Sundler, M. (2021). Synthetic data use: exploring use cases to optimise data utility. Discover Artificial Intelligence, 1(1), 15.

15. 2024: What is the Near Future of Generative AI? December 28, 2023. The European Business Review. https://www.europeanbusinessreview.com/2024-what-is-the-near-future-of-generative-ai/.

16. SBX Robotics. Y Combinator. https://www.ycombinator.com/companies/sbx-robotics.

17. Does artificial intelligence kill employment growth: the missing link of corporate AI posture. November 17, 2023. Frontiers. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.1239466/full.

18. CodePori: Large Scale Model for Autonomous Software Development by Using Multi-Agents. February 2, 2024. arXiv. https://arxiv.org/pdf/2402.01411.

19. Practices for Governing Agentic AI Systems. https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf.

20. AutoGen. Microsoft. https://www.microsoft.com/en-us/research/project/autogen/.

21. Hangzhou Smart Traffic. Bing. https://www.bing.com/search?q=Hangzhou+Smart+Traffic&cvid=2c83c1abd93d4190b810da24f94c5d2f&gs_lcrp=EgZjaHJvbWUyBggAEEUYOdIBBzQzMGowajSoAgiwAgE&FORM=ANAB01&PC=LCTS.

22. The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling: A Survey. April 17, 2024. arXiv. https://arxiv.org/html/2404.11584v1.

23. Adoption and impacts of generative artificial intelligence: Theoretical underpinnings and research agenda. April 2024. Science Direct. https://www.sciencedirect.com/science/article/pii/S2667096824000211.

24. AI Showdown: Behavox AI Outperforms ChatGPT in Compliance. April 4, 2023. Behavox. https://www.behavox.com/blog/behavox-outperforms-chatgpt-compliance/.

25. Why do 87% of data science projects never make it into production? July 19, 2019. Venture Beat. https://venturebeat.com/ai/why-do-87-of-data-science-projects-never-make-it-into-production/.

26. The Future of GenAI Smartphones: A New Era of Personalized Experiences. April 16, 2024. Smartphone Magazine. https://smartphonemagazine.nl/en/2024/04/16/the-future-of-genai-smartphones-a-new-era-of-personalized-experiences/#:~:text=By%202024%2C%20it%20is%20estimated%20that%2011%20percent,total%20of%20over%20550%20million%20units%20(Counterpoint%20Research).

27. The race to bring generative AI to mobile devices. March 15, 2023. Financial Times. https://www.ft.com/content/6579591d-4469-4b28-81a2-64d1196b44ab.

28. Generative AI in mobile networks: a survey. August 17, 2023. Springer Link. https://link.springer.com/article/10.1007/s12243-023-00980-9.

29. Demo: Scalable Digital Twin System for Mobile Networks with Generative AI. https://drive.google.com/file/d/1g8im3L7nLBu0TJXUtSAQK0ze0CJyYeg9/view.

The post Sorting Out the AI Gold Rush? Ten Opportunities in the Generative AI Value Chain appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/sorting-out-the-ai-gold-rush-ten-opportunities-in-the-generative-ai-value-chain-2/feed/ 0
Sorting Out the AI Gold Rush? Ten Opportunities in the Generative AI Value Chain https://www.europeanbusinessreview.com/sorting-out-the-ai-gold-rush-ten-opportunities-in-the-generative-ai-value-chain/ https://www.europeanbusinessreview.com/sorting-out-the-ai-gold-rush-ten-opportunities-in-the-generative-ai-value-chain/#respond Mon, 01 Jul 2024 09:26:11 +0000 https://www.europeanbusinessreview.com/?p=208582 By Jacques Bughin and Duco Sickinghe As the AI boom continues to accelerate, investors are keenly eyeing the generative AI sector for potential breakthroughs. Jacques Bughin and Duco Sickinghe explore […]

The post Sorting Out the AI Gold Rush? Ten Opportunities in the Generative AI Value Chain appeared first on The European Business Review.

]]>
By Jacques Bughin and Duco Sickinghe

As the AI boom continues to accelerate, investors are keenly eyeing the generative AI sector for potential breakthroughs. Jacques Bughin and Duco Sickinghe explore ten key opportunities within the AI value chain that promise to address user challenges and unlock significant market potential.

Introduction

Since our article of March 2022 (“AI Inside? Five Tipping Points for a New AI-based Business World”), which warned about the big boom in AI, artificial intelligence has indeed become red hot, with the birth of many large language models (LLMs), the launch of Apple AI Intelligence, and the booming demand for GPUs.

In this mania, investors are not only pouring money into the public AI companies (Nvidia is one of the most valuable public companies, exceeding $3 trillion in market value), they are also fighting to invest in the new private AI darlings, with private-equity-backed investment multiplied tenfold in one year. But the key, as a wise investor, is not to follow the herd as in a gold rush, but to anticipate the most important future opportunities in the AI value chain. The best opportunities are those that ultimately solve user issues and create a major buy-side market. Here are 10 examples.

Ten Investment Opportunities

1. Low-bit quantization

Why? The high computational and memory requirements of modern LLMs is costing generative AI users a lot by consuming a large amount of energy, in addition to generating significant pollution.

To give a sense of this, training a single 200 billion parameter LLM on AWS p4d instances consumes more energy than a thousand households for a year. In some dense areas in the US, datacenters consume 20 per cent of the grid power, endangering its reliability. Based on current GPU usage, the associated gigatons of CO2 emissions, according to figures laid out by the ACM, could well be the equivalent of 5 billion US cross-country flights.

The opportunity: If these numbers are correct, any player that optimizes sustainable energy in the context of AI will be a big winner, as energy consumption will also be closely watched by sensitive users and regulators, who are already pushing for much better optimization and transparency in this area (c.f. the EU AI Act). Further, such a reduction in energy consumption might open the low end of the enterprise users market, easily doubling the demand for generative AI.

Any player that optimizes sustainable energy in the context of AI will be a big winner.

Examples: Low-bit quantization corresponds to the idea that fixed LLM weighting parameters could be optimized instead of their current “standard” of 32/16 bits in traditional LLMs. Players in low-bit quantization include the big names such as Nvidia and its software TensorRT and cuDNN, or Google TensorFlow Lite, which offers support for quantization-aware training and post-training low-bit quantization. More recently, Microsoft has unveiled its BitNet b1.58 (alluding to a 1.58 bit LLM where every single weight of the LLM is ternary {-1, 0, 1} and is able to match the full precision of 16 bits), demonstrating strong gains, up to 70 per cent down, in energy consumption over traditional LLM models.

The world of startups is also pushing ahead, with low-bit quantization startups including DeepAI, OctoML, and SambaNova Systems, which has established a strong moat through its innovative Reconfigurable Dataflow Architecture, and an integrated hardware-software solutions platform for low-bit quantisation to deliver high-performance AI applications.

2. Liquid neural networks

Why? Neural networks (ANN) have been driving the LLM revolution because of their parallel processing capabilities and their ability to model complex relationships, even with unstructured data. The drawback of ANNs is that they require large amounts of data and computing power, exhibit low explainability, and are still in a race of bigger and larger models.

The issue with ANN is thus a mix of energy cost (see above), a race of concentration whereby LLM models become a key bottleneck resource, and a poor reach, as LLM models cannot be incorporated in thin client layers

The opportunity: Enter liquid neural networks (LNN), a novel type of neural network architecture inspired by neuroscience, which rely on dynamic connections and weights between neurons.

While LLN technology is potentially revolutionary according to its MIT proponents, its value is in shifting “the existing big is better“ paradigm to smaller models, where LLN has a potential to replicate fixed ANN performance with at least a 1,000 times lower number of weights. For instance, drones can be guided by a small 20,000-parameter LNN model that performs better in navigating previously unseen environments than other neural networks with millions of neurons.

Two other features of liquid neural networks are their ability to adapt in real time to new and evolving data without requiring extensive retraining, and their ability to perform continuous learning. This means they can integrate new information on the fly, which is particularly useful in real-world applications where data patterns can change rapidly, such as automotive (autonomous driving), industrial robotics (logistics), healthcare (real-time monitoring of patients), and customer service (chat interactions).

Examples: Startup companies such as Liquid AI and Vicarious are already experimenting with liquid neural networks. The latter aims to create more adaptable and efficient robotic control systems.

3. Quantum computing

Why? Many problems remain too complex to resolve with the current state of generative AI. In parallel, generative AI has boosted a major cybersecurity risk, resulting in the majority of enterprises resisting leveraging generative AI.

Meanwhile, superconducting technologies are now reaching over 120 qubits with IBM’s latest Eagle processor.

The promise of quantum computing lies in its combination with generative AI to solve problems previously thought to be intractable. 

The opportunity: The promise of quantum computing lies in its combination with generative AI to solve problems previously thought to be intractable. The combination has the potential to solve a wide range of optimization problems, from logistics to financial portfolios, with unprecedented efficiency.

Quantum computing can not only destroy classical encryption methods, but also develop new, quantum-resistant cryptographic algorithms. In addition, quantum computing can be used to improve the efficiency and effectiveness of cybersecurity solutions.

Examples: Rigetti Computing is an example of a company exploring quantum computing applications in drug discovery. A competitor, PasQal, is also redefining energy efficiency standards in quantum computing through neutral atomic quantum.

4. Knowledge distillation

Why? Besides the energy power issue, the size of LLMs makes companies desperate for GPUs. In this evolution, Nvidia is definitely the big winner, with massive excess demand waiting in the wings. However, in the long run, this slows down the evolution of the market and makes the cost of generative AI too high for a large number of companies.

Opportunities: These lie in optimizing training and usage to reduce the number of GPUs. As we said earlier, liquid neural networks are a major breakthrough if they deliver on their promise, as training will be much easier and models will be much less expensive.

Meanwhile, in the short term, knowledge distillation is a technique that makes ANN run much more efficiently through model shrinking. As first introduced by Geoffrey Hinton in 2015, the latter technique transfers knowledge from a teacher LLM model to a much lighter student model. During the learning process, a complex neural network is taught to generate meaningful and helpful representations of data. The distillation processes are based on these thorough representations of data, or knowledge, stored by the teacher network in its hidden layers.

Example: distilBERT is a lightweight compressed counterpart of a larger BERT language model, with 60 per cent of the size of the original BERT model while retaining 97 per cent of its performance and being 60 per cent faster.

5. Synthetic data

Why? There are many problems with data, not only because they are increasingly scarce for AI training, but also because data quality can be questionable. There are two possibilities.

The opportunity: Synthetic data. We witness that synthetically trained models are becoming quite powerful and can even outperform models trained on real data. Successful applications are emerging in financial services, healthcare, and retail. In addition, the advantage of synthetic data is its privacy clearance.

Examples:  We have already reported on the Nvidia simulator application in its industrial metaverse that successfully leverages synthetic data to train robots. In general, synthetic data can also be used to rebalance samples when the required prediction concerns rare events such as financial fraud or manufacturing defects.

Startups that rely on the power of synthetic data include SBX Robotics, whose generated synthetic data are built 10 times faster and more cheaply than annotation services teaches robots to see; or MOSTLY AI, a leading synthetic data platform with a proprietary GenAI model architecture that results in the highest-accuracy synthetic data, and enables sophisticated AI/ML use cases, including multi-variate time series data and relational databases

6. AI agents

Why? In our current work environment, there has been a lot of talk about how AI can steal our work. In a typical AI model, tasks are well described and automation can be done if the value productivity of humans was lower than the task performed by AI. However, by the end of 2023, the most capable Generative AI could learn many other skills through this process of next token prediction – for example, translation between languages, math and reasoning skills, and much more.

But the most interesting capability is the ability of LLMs to use software tools. ChatGPT, for example, can now browse the web, use a code interpreter plugin to run code, or perform other actions enabled by a developer.

Opportunities: In this new environment, AI agents represent a leap from traditional automation in that they are designed to think, adapt, and act independently, rather than simply follow a set of instructions. The assertion is that agentic AI systems could dramatically increase users’ abilities to get more done in their lives with less effort, and with significantly better effectiveness, especially in complex dynamic tasks.

The second opportunity coming from AI agents is real-time data. We are creating more data every year than has been accumulated in the past. One can argue indeed that the collection of data from AI agents will play at the center of task workflows of enterprise processes.

Examples: Microsoft’s Project AutoGen demonstrates a multi-agent framework that simplifies building workflows and applications with LLMs. It features specialized agents that can be configured with different LLMs and enables seamless human interaction. Another prime case of the value of AI agents through generative AI is the dynamic optimization of traffic flow: these agents analyze real-time traffic data from numerous sources, including street cameras, in-car GPSs, and social media. In Hangzhou, China, the system built by Alibaba has reduced traffic congestion by 15 per cent and sped up emergency response times by 49 per cent.
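Under the hood, such agents run a plan-act-observe loop around an LLM. The sketch below is library-free and purely illustrative: the tool names, the junction ID, and the canned model replies are placeholders standing in for a real LLM call and real traffic APIs, not AutoGen's or Alibaba's actual interfaces.

```python
import json

# Hypothetical tool registry: every name here is a placeholder, not a real API.
TOOLS = {
    "get_traffic": lambda junction: {"junction": junction, "congestion": 0.72},
    "set_signal":  lambda junction, green_secs: f"{junction}: green for {green_secs}s",
}

def call_llm(history):
    """Placeholder for a model call. A real system would send `history`
    to an LLM and parse a structured action from its reply."""
    return {"tool": "get_traffic", "args": {"junction": "J42"}} \
        if len(history) == 1 else {"finish": "Extended green phase at J42."}

def agent_loop(goal, max_steps=5):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = call_llm(history)                        # 1. plan next step
        if "finish" in action:                            # 2. stop when done
            return action["finish"]
        result = TOOLS[action["tool"]](**action["args"])  # 3. execute a tool
        history.append({"role": "tool", "content": json.dumps(result)})  # 4. observe
    return "step budget exhausted"

print(agent_loop("Reduce congestion at junction J42"))
```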

7. Responsible AI

The rush to adopt AI should not be reckless. In particular, the use of AI technologies leads to significant risks.

Why? The rush to adopt AI should not be reckless. In particular, the use of AI technologies leads to significant risks due to inaccurate data, privacy and copyright violations, algorithmic bias, or even fake and harmful content. Moreover, agents have specifically been shown to be less robust, prone to more harmful behaviors, and capable of generating stealthier content than LLMs, highlighting significant safety challenges. Finally, Europe is clearly moving ahead by passing its EU AI Act, which is rather stringent regarding trustworthy AI compliance.

The risks attached to generative AI are such that the largest bottleneck to date in enterprises adopting generative AI is misbehaving models and agents, on top of cybersecurity risks.

Opportunities: The EU AI Act will require companies to comply with a large set of responsible-AI practices, for which many of them are not ready. On the metrics side, auditing those practices is a significant regtech opportunity, as it is in other regulatory push areas such as sustainability.

In addition, the intersection of generative AI and regtech presents significant opportunities to improve regulatory compliance processes. The ability to automate, analyze, and predict through AI provides significant value in terms of cost savings, accuracy, and efficiency.

Examples: Key startups in this space, such as Behavox, are pioneering generative AI solutions to help organizations monitor and manage compliance risks. Their flagship product, Behavox Quantum AI, includes monitoring of text and voice communications, reduction of alert volumes and false positives, and high recall rates for detecting compliance violations. In a recent test, ChatGPT detected less than 20 per cent of the intentionally planted phrases, compared with more than 80 per cent for Behavox Quantum AI.
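To pin down what “high recall” and “fewer false positives” mean operationally, here is a small self-contained Python sketch; the message counts are invented purely to echo the roughly 80 versus 20 per cent detection rates quoted above.

```python
def detection_metrics(flagged, planted, total_messages):
    """Recall and precision for a compliance detector, given sets of message IDs."""
    tp = len(flagged & planted)                 # planted phrases correctly caught
    recall = tp / len(planted) if planted else 0.0
    precision = tp / len(flagged) if flagged else 0.0
    fpr = len(flagged - planted) / (total_messages - len(planted))
    return recall, precision, fpr

# Hypothetical benchmark: 100 planted violations among 10,000 messages
planted = set(range(100))
strong = set(range(82)) | {500, 501}            # catches 82, raises 2 false alerts
weak = set(range(18)) | set(range(400, 460))    # catches 18, raises 60 false alerts
print(detection_metrics(strong, planted, total_messages=10_000))  # recall 0.82
print(detection_metrics(weak, planted, total_messages=10_000))    # recall 0.18
```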

8. ML ops

Why? The majority of generative AI bottlenecks inside firms adopting the technology occur at the transition from prototype to production. In fact, research has found that 87 per cent of AI projects never make it into production.

Opportunities: MLOps (machine learning operations) is essential for managing the complex life cycle of generative AI models, and hence delivers multiple advantages when scaling. By standardizing workflows and automating repetitive tasks, MLOps ensures consistent performance and reliability of models at a time of strong compliance demands linked to generative AI.

MLOps tools provide robust capabilities for monitoring model performance, detecting drifts, and triggering alerts for anomalies. This ensures that generative AI models remain accurate and effective over time. Finally, MLOps fosters better collaboration between data scientists, developers, and operations teams by providing a unified framework and tools for managing the entire machine learning life cycle.
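As a concrete example of one widely used drift check, here is a population stability index (PSI) sketch in Python/NumPy. The 0.1/0.25 thresholds are industry rules of thumb; real MLOps platforms wrap tests like this in scheduled monitoring jobs and alerting.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time feature sample and a production sample.
    Rule of thumb often used in practice: < 0.1 stable, > 0.25 drifted."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf               # catch out-of-range values
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, 50_000)    # feature distribution at training time
prod = rng.normal(0.4, 1.2, 50_000)     # shifted distribution seen in production
psi = population_stability_index(train, prod)
print(f"PSI = {psi:.3f} -> "
      f"{'drift alert: consider retraining' if psi > 0.25 else 'stable'}")
```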

Examples: Robust Intelligence is an automated intelligence platform that integrates with ML models to “stress test” AI models prior to deployment, detect vulnerabilities, highlight erroneous data, and identify data issues that could compromise ML integrity.

Comet’s machine learning platform integrates with your existing infrastructure and tools, allowing you to manage, visualize, and optimize models – from training runs to production monitoring.

9. Cybersecurity

Why? From former star Cisco buying Splunk to SentinelOne buying Attivo, the M&A race in cybersecurity is a symptom of the rush to provide a global platform for all cybersecurity issues. These issues have grown exponentially in the last few years, and they are experiencing a major uptick with LLMs. Today this mostly takes the form of email phishing attacks but, with the rise of multimodal LLMs, the threats will only expand and diversify. With quantum computing, cryptographic elements may also be at risk.

Opportunities: While generative AI is a problem, it is also a powerful solution for cybersecurity development. Specific opportunities in cybersecurity software and management that leverage generative AI include:

  • Automated threat detection and response: AI models can generate threat signatures and response strategies in real time, enabling faster and more effective incident response, while AI-powered tools can automatically filter out phishing emails and generate alerts for suspicious messages, reducing the risk of successful phishing attacks.
  • Vulnerability management: AI models can generate reports on potential vulnerabilities and suggest patches or mitigations, helping organizations to proactively address security weaknesses.
  • Malware analysis and generation: AI can assist in creating honeypots and decoys that mimic vulnerabilities, attracting and analyzing malware to better understand and mitigate threats.

While generative AI is a problem, it is also a powerful solution for cybersecurity development.

Examples: A number of startups emerging from Jericho are looking at better security, covering everything from governance to solutions such as AI firewalls and threat detection and response tools that spot malicious behavior targeting generative AI models. For example, Vectra AI uses AI to identify and stop cyberattacks by analyzing network traffic and detecting malicious behavior.

While there are many companies in the space, opportunities remain, especially for more real-time risk detection and more sophisticated detection models.

10. Thin-client generative AI

Why? By 2024, barely 10 per cent of smartphones were GenAI-capable but, as in the first wave of the internet, the market only truly exploded when access migrated to mobile and became the default for everyone. The battle is on, with Samsung taking the lead in the GenAI smartphone market and Apple recently entering the artificial intelligence space.

Opportunities: Generative AI is not yet widespread on thin clients, due to significant computational, storage, and latency challenges, on top of the difficulty of interfacing mobile network infrastructure with other IP networks. Finally, the user experience on mobile is tied to the multimodal experience.

Regarding mobile networks, many standardization activities are currently under way for AI functionality in the RAN and core parts of mobile networks. Yet the user experience may require the roll-out of 5G or beyond. The opportunity will thus manifest itself as a mix of the model and architecture optimization we touched upon earlier, such as low-bit quantization and liquid neural networks (LNNs), on top of advances in edge computing, making the deployment of generative AI on mobile and other lightweight devices more feasible. Opportunities are not only consumer-based, but also enterprise-based. Digital twinning for training and mobility, for instance, is a major opportunity under inspection by many players in the mobile ecosystem.
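To see why low-bit quantization matters on thin clients, consider this minimal NumPy sketch of symmetric 8-bit weight quantization: a 4x memory cut for a small reconstruction error. Production schemes (4-bit weights, per-channel scales, quantization-aware training) are considerably more elaborate.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit affine quantization of a weight tensor."""
    scale = np.abs(w).max() / 127.0               # map [-max, max] onto [-127, 127]
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=(256, 256)).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).mean()
print(f"storage: {w.nbytes} -> {q.nbytes} bytes, mean abs error {error:.6f}")
```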

Examples: Chipset manufacturers like ARM and Qualcomm have already made significant strides in launching thin-client GenAI-powered chipsets. But startups are also making inroads in this large market: Syntiant, for example, develops ultra-low-power AI chips for edge devices that use advanced quantization techniques to enable efficient AI processing in power-constrained environments.

Conclusions

The world of generative AI is only just opening up. It is still complex and faces major hurdles to deployment. Still, these hurdles imply a major fix-it opportunity that could scale the market significantly. Investors should pay close attention to how solutions will impact the market, and to whether solution providers offer a way to capture a share of the large total addressable market (TAM) in the making.

About the Authors

Jacques Bughin

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC / PE firms, and serves on the board of several companies.

Duco Sickinghe

Duco Sickinghe founded Fortino Capital in December 2013 and has overseen Fortino’s growth to a recognized technology VC firm. Before Fortino, Duco was CEO of Telenet, INED of CME, and is currently Chairman at KPN. Other positions that Duco held are General Manager at Wolters Kluwer, founder of Software Direct, Product & Channel Manager at HP, and VP Marketing & General Management at NeXT Computer, where he was a contemporary of Steve Jobs. He holds a degree in Civil and Commercial Law and obtained an MBA from Columbia Business School.

The post Sorting Out the AI Gold Rush? Ten Opportunities in the Generative AI Value Chain appeared first on The European Business Review.

How AI Liberates the Transition to a Skill-Based Organisation https://www.europeanbusinessreview.com/how-ai-liberates-the-transition-to-a-skill-based-organisation/ https://www.europeanbusinessreview.com/how-ai-liberates-the-transition-to-a-skill-based-organisation/#respond Mon, 13 May 2024 02:21:35 +0000 https://www.europeanbusinessreview.com/?p=205955 By Jacques Bughin and Jeroen Van Hautte What do companies as diverse as Booking.com, Accenture, Revolut, GSK, Walmart, or Unilever have in common? Answer: they have been on a common journey […]



By Jacques Bughin and Jeroen Van Hautte

What do companies as diverse as Booking.com, Accenture, Revolut, GSK, Walmart, or Unilever have in common? Answer: they have been on a common journey to migrate their organisation to a skills-based organisation (SBO).

At a time when the skills mix of workers is exploding and the future of work is dramatically changing with work-from-home, automation, and AI, front-running companies are moving their organisations to a world of better skill use and continuous learning. But what they mostly find is that AI skill tech is a critical software solution to support the journey to an SBO.

1. The Causes and Rewards of SBO

The concept of the SBO represents a paradigm shift in the traditional working model. Instead of rigid job-centric structures, these organisations prioritise human skills, defining work by breaking down roles into tasks and activities based on required competencies. This transformative approach fosters an environment that values employee expertise, continuous learning, and adaptability over traditional siloed structures.

The trend towards skills-based organisations (SBO) is now well established and is inevitable for at least three reasons.

The first is that digitisation and other trends are shifting the skill set needs for the workforce. One would argue that skill shift has always been there. For example, coal miners in the past used to carry out heavy physical and manual tasks requiring gross motor skills and physical strength. Today, they increasingly operate machines that do the heavy and dangerous toil, and need to apply more complex skills by monitoring equipment and problem solving. Nurses in 1957 were required to administer medicines, monitor patients by taking their pulse and temperature, and help with therapeutic tasks, including bathing, massaging, and feeding patients. Today, they still administer medicines to patients but also help perform diagnostic tests and can analyse the results, employing skills and filling roles that were more common to doctors half a century ago. But our research with Nobel Prize recipient Chris Pissarides shows that the skill shift has been accelerating in recent years, and the skill obsolescence rate has been doubling in the last decade.

Digitisation and other trends are shifting the skill set needs for the workforce.

Second, the number of skills the workforce needs to master is only getting larger per individual, not smaller. The skill set is shifting towards soft skills and, under a digital lens, towards skills that are also notoriously absent from the main scope of traditional educational systems. The result is an increasing mismatch, where workers' skills are badly utilised. As a case example, consider taxi drivers. While in 1970 fewer than 1 per cent of US taxi drivers had a college degree (meaning they had mastered some clear cognitive skills), the proportion had risen to nearly 15 per cent by 2013 and is now reaching 17 per cent, with close to 10 per cent of them holding a business or engineering degree. Surely those skill sets could be put to better use elsewhere.

Other research by the OECD and academic labour market scholars, using the PIAAC skill taxonomy, concluded that skill mismatch affects 30 per cent of workers in each of the 34 countries analysed.

Third, AI itself is radically driving a major skill shift and a new organisational model of the workforce, where workers must augment their skills with technology while seeing tasks automated. Finally, using the catalyst of the COVID-19 crisis, a lot of organisations have been testing and promoting new work models, such as remote work. What we recently found is that, in general, the difference between adopting and avoiding a fully agile work environment has been associated over the last three years with 3.1 points of extra annual revenue growth for large companies worldwide.

Given those trends, companies which have adopted an SBO are demonstrating some clear rewards. A plethora of research mentions, among other things, that SBOs are 52 per cent more likely to innovate, 57 per cent more likely to anticipate and respond effectively to change, and 98 per cent more likely to retain their top talent.

2. AI Tech Is a Must-Have to Power the SBO Transition

From the above, pivoting to an SBO is one of the most robust proven ROI cases. In fact, a skills mismatch of 20-30 per cent at the level of the firm is not unusual, and may translate into a gap (versus a perfect match) of more than 5-6 per cent in lost productivity. On a global basis, this is a US$5 trillion GDP gap linked to the misuse of labour skills, according to consultancy BCG. And this does not take into account the fact that employees may feel frustrated, especially high performers.

Evidently, the organisational pivot is a fantastic opportunity for the CHRO, but it is nevertheless a massive enterprise-wide task for which the CHRO may have operational accountability. Fortunately, this is where AI tech comes to the rescue.

AI itself is radically building a major skill shift and a new organizational model of the workforce.

While AI adoption has seen a staggering 70 per cent increase across businesses over the past five years, the spotlight has often been on supporting customer service and supply chain optimisation; it is now also moving into key untapped potential in the field of so-called “skills tech”. This emerging market, with pioneers like TechWolf, Workday, or Skillate, is at the forefront of delivering AI solutions and tools to unleash the power of the skills-based organisation: storing and defining skills, inferring competencies across the workforce, and predicting and recommending training needs using AI and machine learning.

Powered by machine learning and data-driven tools, companies can then exhaustively map the current skills of their workforce. This involves creating skills matrices to identify existing skill sets within the organisation, highlighting strengths and weaknesses. This information then enables strategic planning of skills enhancement and renewal initiatives, ensuring that employees remain relevant and equipped for the jobs of tomorrow. AI skills technology acts as a visionary force, predicting future workforce needs and identifying areas where additional training or recruitment may be required to meet the demands of future work.
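A toy Python sketch of this skills-matrix logic is shown below; the skill taxonomy, proficiency scale, and target profile are invented for illustration, whereas real skills-tech platforms infer them from HR and work data.

```python
import numpy as np

skills = ["python", "sql", "cloud", "ml_ops", "stakeholder_mgmt"]
# Hypothetical skills matrix: rows = employees, values = proficiency 0-3
current = np.array([
    [3, 2, 1, 0, 2],   # employee A
    [1, 3, 2, 1, 1],   # employee B
    [0, 1, 0, 0, 3],   # employee C
])
target = np.array([2, 2, 2, 2, 2])   # profile required by the roles of tomorrow

gap = np.clip(target - current, 0, None)   # shortfall per person and skill
print("Largest organisation-wide gaps:")
for skill, g in sorted(zip(skills, gap.sum(axis=0)), key=lambda x: -x[1]):
    print(f"  {skill}: {g} proficiency points to close")
# Route each employee to training on their single biggest shortfall
for name, row in zip("ABC", gap):
    print(f"Employee {name}: prioritise {skills[int(row.argmax())]}")
```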

3. Mapping the HR Tech Journey


If a company has not already embarked on the journey, here are five key steps.

Step 1: Get ready

SBO is a pivotal change. Hence, only a CHRO who has the drive, the vision, and the support of the board can make it happen.

Step 2: Select your skill tech partner

As discussed, SBO must be implemented and facilitated by skill tech. While some large providers will claim to have the right solutions, the best of these are coming from AI-native firms that can support a comprehensive AI factory for HR, including cleaned and accurate skill data, AI algorithms that fit HR needs, and AI Ops that make AI easy to use by employers and employees. On top of those qualities, (mostly cloud) app-based solutions allow ease of integration and access, and providers must have strong proof of data security.

Step 3: Clarify  

One major issue with the SBO transition is that CHROs and organisations jump in without having built the definition and taxonomy of skills they want. Having one unique definition of skills is a critical first step. The second is to clarify the main objective of the transition, whether that is resolving mismatches and speeding up hiring, building a skills-based internal labour market within the enterprise, or something else.

Step 4: Implement and build workflows around skills

Step 5: Move from employer-centric to employee-centric: develop the best skill portfolio for each employee

If all this is clear, it is time to launch that SBO.

What are you waiting for?

About the Authors

Jacques Bughin

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC / PE firms, and serves on the board of several companies.

Jeroen van Hautte

Jeroen van Hautte is a co-founder and CTO of TechWolf, one of the fastest-growing AI companies in Europe. The company leverages AI to help organisations understand the skills of their workforce and works with some of the world’s biggest brands. Jeroen developed his expertise in AI at Cambridge University and is recognised by the World Economic Forum and Forbes Under 30 as a leader in technology.


The post How AI Liberates the Transition to a Skill-Based Organisation appeared first on The European Business Review.

Strategising GenAI: Open Source or Proprietary LLMs? https://www.europeanbusinessreview.com/strategising-genai-open-source-or-proprietary-llms/ https://www.europeanbusinessreview.com/strategising-genai-open-source-or-proprietary-llms/#respond Thu, 11 Apr 2024 13:54:10 +0000 https://www.europeanbusinessreview.com/?p=204487 By Jacques Bughin Generative AI and large language models (LLMs) are in the process of defining a future where the battlegrounds of business innovation and competitiveness are radically changed. As […]


By Jacques Bughin

Generative AI and large language models (LLMs) are in the process of defining a future where the battlegrounds of business innovation and competitiveness are radically changed. As these technologies revolutionise the way businesses operate, the ongoing tussle between proprietary and open-source models shows no signs of abating.  

1. Introduction  

The sudden explosion of generative AI owes much to large language models (LLMs), whose “self-attention” architecture allows for massive data parallelisation. 

From retrieval augmented generation (RAG) to personalised automated customer support and new autonomous agents, the variety of business use cases for LLMs is already vast, offering an exciting source of business competitiveness and productivity gains rarely seen in the past. In less than one year, the market for LLMs was estimated to be close to US$5 billion, and it is projected to grow at a compound annual growth rate (CAGR) of 35.9 per cent from 2024 to 2030.

Along with this explosion, the first AI-as-a-service providers (Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Bard) were all proprietary. But if you look at the history of information technology, the introduction of proprietary software has rapidly been matched by the development of so-called “open source” (OSS) alternatives. This is also happening in the LLM space, even if today ChatGPT is the overwhelming winner, with about a three-quarters share of usage across the globe (figure 1).

Figure 1: ChatGPT versus LLaMA, last 12 months (source: author’s own computation, based on Google Trends)

How will this evolve going forward? And what should companies do in current times? 

2. A brief history of open source 

2.1. Kick-starting a collaborative movement 

In the 1960s, software was typically bundled with hardware but, as software grew in complexity with the advent of operating systems, databases, and high-level programming languages, IT companies began charging separately for software, which also became copyrightable. The 1980s marked a pivotal moment with the creation of the GNU project by Richard Stallman, aimed at bypassing the proprietary nature of Unix. The term “open source” itself was coined later, in 1998, at a meeting hosted by the Foresight Institute, to reflect the collaborative nature of OSS.

By the end of the 1990s, the term “open source” gained traction within communities like Linux, Perl, and Python, as well as among companies such as Netscape and Red Hat. This period also saw the establishment of the non-profit Open Source Initiative (OSI) in 1998, inspired by Netscape’s decision to open-source Netscape Communicator, and the founding of the Apache Software Foundation by developers of the Apache web server in 1999. 

The launch of SourceForge.com in 1999 provided developers with a platform to easily share and develop source code, further fuelling the growth of open source software. In 2000, the Linux Foundation was founded, solidifying its position as one of the largest and most influential open source software foundations. 

2.2. Traction 

Since the introduction of the Linux operating system in 1991, OSS has steadily gained momentum, challenging proprietary market leaders such as OS/2 and SunOS. Linux owns 4 per cent of the desktop market and 14 per cent of the server market, and is an especially dominant player in cloud and web servers to date.

The browser wars of the early Internet era, notably between Netscape and Microsoft, highlighted the importance of open source projects like Mozilla, which gave rise to the Firefox browser. Today, open source software dominates various sectors, with applications like Android, WordPress, and Apache leading the way. In databases, OSS such as MySQL and MongoDB have become industry standards. Similarly, in e-commerce, the open source WooCommerce and OpenCart are often ahead of, or at least on a par with, proprietary software such as Shopify and Wix. 

2.3. Revenue model 

While OSS is typically freely available for anyone to use, modify, and redistribute, this removes a large pool of value capture in terms of traditional licensing and SaaS revenue.

So is OSS doomed to fail? Not necessarily. 

For one thing, an open source model can be a pure deterrent; for example, when IBM blessed Linux, it also blocked Microsoft from the Blue Giant’s bread and butter. Google did the same trick when it pushed Android to block Apple in the mobile ecosystem.  Secondly, open source is also a smart “attacker” strategy. It allows you to build critical mass quickly and always be at the top of the development cycle – two aspects (performance and scalability) that make all the difference in software. The OSS development model also eliminates the need for significant up-front investment in the development of new products, and is often cheap because the work is done by passionate volunteers. Open source development is also free from vendor lock-in, giving organisations greater flexibility and control over their technology stack.  

In a nutshell, then, the OSS strategy is to create a large pool of value for a possible small appropriation of value, but the multiplication makes it larger than if the company had chosen a high proprietary source and high price. The key is nevertheless to find multiple complementary revenue streams. Red Hat, an open source pioneer before it was bought by IBM, generated most of its revenue by selling service contracts and complementary software applications for the Linux operating system. Typical OSS revenue streams include selling support and consulting services, offering hosted or managed services, and creating proprietary add-ons or plug-ins that extend the functionality of open source software.  

3. The era of open source LLMs 

Open source LLMs follow the same trend, with the advent of the likes of Mistral AI, GPT-NeoX-20B built by EleutherAI, or LLaMA by Meta AI, among others. From the use cases that have emerged, these models also seem to be as revolutionary as their proprietary counterparts (exhibit 2).

Exhibit 2: OS LLM use case examples (source: literature review, author)

The future of open source is therefore assured. And this raises the final question: should one choose OSS over proprietary and, if so, which one? 

The choice between open and closed software depends on a number of factors. As shown in exhibit 3, typical criteria include security, cost, performance, scalability, etc. In general, proprietary systems win on security and reliability, but this has been changing in recent years in favour of OSS. 

Exhibit 3: Comparing OSS and proprietary software

Regarding LLMs, some important points have been emerging for consideration:  

  1. Open source LLMs have significantly narrowed the quality gap with proprietary closed LLMs in a variety of tasks, such as chatbots, etc. Performance metrics such as token task completion and accuracy rate are now on a par with proprietary models. 
  2. OS LLMs provide access not only to the source code, but also to the model architecture and training data. This facilitates rigorous testing and customisation of models (see the sketch after this list). 
  3. Furthermore, the use cases for LLMs are only now emerging and are often being created by the users themselves. Thus, open source can drive innovation by harnessing diverse expertise, creativity, and ideas. 
  4. Licensing fees associated with proprietary LLMs can be a significant financial burden. However, open source does not mean free. Organisations using open source LLMs should expect to pay for operational costs such as infrastructure and cloud services. 
  5. While transparency is a core advantage of open source LLMs, security and privacy may be their Achilles’ heel. Cases of data breaches and unauthorised access to sensitive information characterise LLMs. On the one hand, open source LLMs place the responsibility for data protection on the user, and thus the user can ensure better security and privacy measures. However, newer tool-augmented LLMs mostly rely on closed LLM APIs, exposing internal company workflows and information to those APIs. Finally, the issue may not be privacy per se, but maliciousness. This risk may be much higher in the case of OSS, as third parties may have access to the source code. As cybersecurity risks and ethical AI are the biggest issue for many companies, open source may create a lot of uncertainty, even more than proprietary LLMs. 
  6. Over time, generative AI technology may rely on more standardised and modular building blocks within software libraries (such as prompt templates that allow easier adoption and customisation in downstream applications). The interoperability of pre-trained models across platforms should then dramatically reduce the need to retrain large models and make LLMs a natural software element of many business cases. The speed of this standardisation and modularisation will also determine how open source LLMs will be used in the future. 
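To make points 2 and 4 concrete, here is a minimal sketch of self-hosting an open source model with the Hugging Face transformers pipeline. The model name is illustrative; there are no licensing fees, but the operational cost shifts to your own compute infrastructure.

```python
# pip install transformers torch
from transformers import pipeline

# Model name is illustrative: any permissively licensed open source
# chat model on the Hugging Face Hub would do. Note the multi-gigabyte
# download and the GPU you will want for acceptable latency.
generator = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

prompt = "Summarise the trade-offs between open source and proprietary LLMs."
out = generator(prompt, max_new_tokens=120, do_sample=False)
print(out[0]["generated_text"])
```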

About the Author

Jacques Bughin

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He is retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC / PE firms, and serves on the board of several companies.  

Additional References  

  1. Ahmed, T., Bird, C., Devanbu, P., & Chakraborty, S. (2024). “Studying LLM performance on closed and open source data”. arXiv preprint arXiv:2402.15100. 
  2. Blancaflor, E. B., & Samonte, S. A. (2023). “An analysis and comparison of proprietary and open source software for building an e-commerce website: A case study”. Journal of Advances in Information Technology, 14(3), 426-30.  
  3. Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). “Generative AI at work” (No. w31161). National Bureau of Economic Research. 
  4. Bughin, J (2023),  “Is the impact of generative AI overhyped? Insights from one hundred AI business success stories”. Medium. 
  5. Finlayson, M., Swayamdipta, S., & Ren, X. (2024). “Logits of API-protected LLMs leak proprietary information”. arXiv preprint arXiv:2403.09539. 
  6. Kogut, B. & Anca, Metiu (2001). “Open-Source software development and distributed innovation”, Oxford Review of Economic Policy, volume 17, number 2, pp. 248-64. 
  7. Irshar, M., Ali, A., & Ibrahim, S. A. (2019). “Comparative Analysis Between Open Source And Closed Source Software in Terms of Complexity and Quality Factors”. 
  8. Lerner, J., & Tirole, J. (2002). “Some simple economics of open source”, Journal of Industrial Economics, volume 50, number 2, pp. 197-234. 
  9. Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). “The impact of AI on developer productivity: Evidence from GitHub Copilot”. arXiv preprint arXiv:2302.06590. 

The post Strategising GenAI: Open Source or Proprietary LLMs? appeared first on The European Business Review.

The EU AI Act: Trustworthy or Reckless AI? https://www.europeanbusinessreview.com/the-eu-ai-act-trustworthy-or-reckless-ai/ https://www.europeanbusinessreview.com/the-eu-ai-act-trustworthy-or-reckless-ai/#respond Mon, 01 Apr 2024 23:07:04 +0000 https://www.europeanbusinessreview.com/?p=203993 By Jacques Bughin Alongside the lightning-paced advancements in AI, a number of ethical quandaries have emerged, evidenced for example by deep fakes and incidences of bias. Here, Jacques Bughin puts […]


By Jacques Bughin

Alongside the lightning-paced advancements in AI, a number of ethical quandaries have emerged, evidenced for example by deep fakes and incidences of bias. Here, Jacques Bughin puts the case for regulation combined with responsible implementation. 

Ethical AI on the move 

In recent years, the rapid development of artificial intelligence technologies has led to growing concerns about their ethical implications. There have been numerous cases of AI technologies being misused, including the recent deep fake linked to Taylor Swift on X, or the fake video of Ukrainian President Volodymyr Zelenskyy surrendering. In fact, a report by the Stanford Institute for Human-Centered AI found that AI incidents and controversies have increased 26-fold since 2012.  

The fundamental question, then, is whether the technology should be released from sandboxes without careful rules associated with the use of the technology. The problem is that some people may be malevolent; others, such as private companies, may simply be reckless – forgetting to internalise major social risks and opting for the AI race at all costs, in the hope of market leadership. 

In this vein, the New York Times published an article on the race for AI and the risk of reckless AI. The paper referred to how two Google employees, mimicking a similar attempt by employees at rival Microsoft 10 months earlier, “tried to stop Google from launching an AI chatbot that was likely to generate inaccurate and dangerous statements”. As the NYT article also noted, “both companies released their chatbots anyway. In the race to lead generative AI, it’s better to be first and worry about things that can be fixed later.” Note that while Elon Musk criticised Microsoft / Google for their arms race and called for a pause in AI development, he has since changed his mind and launched Grok.

AI ethics: to regulate or not to regulate 

Obviously, we should not kill the golden goose of AI. AI innovations are already at the heart of major gains in productivity, with AI so embedded in many of our daily activities (from taxi hailing, Google search, and call centres, to product recommendations and weather forecasts) that we easily forget how powerful it is.  

However, the issue with AI is that it combines multiple problems, from misuse to the opacity of its own algorithms (the “black box” problem), or major biases of AI models induced by the type of data collected.

The AI Act, which will become law around April 2024, is a clear test of business ethics, balancing the need for technological progress with the notion of doing the right thing.

One solution to fix those issues is regulation to impose a level playing field. In addition to its General Data Protection Regulation (GDPR), the European Union has emerged as a frontrunner in addressing the ethical challenges posed by AI through regulatory measures such as the European Artificial Intelligence (AI) Act. Spearheaded by EU Vice President Margrethe Vestager, the AI Act, which will become law around April 2024, is a clear test of business ethics, balancing the need for technological progress with the notion of doing the right thing. 

Everyone knows that regulation is not always optimal. For instance, there are clear drawbacks, we believe, in the EU AI Act; for example, the fact that it is essentially an ex ante regulation means that some small companies may struggle to bear the full compliance costs, as also argued elsewhere. Second, regulation may not be needed per se if firms can develop an awareness that the wrong kind of AI will create huge risks, for example brand reputation damage or worse (remember the Cambridge Analytica data scandal, which led the FTC to fine Facebook US$5 billion and drove CA into bankruptcy).

However, the main issue with regulating AI at the firm level is that ethical AI is not the common type of business ethics, as we have seen for issues such as implementation of governance board independence, corporate social responsibility, or inclusiveness. In fact, it is way harder to implement ethical AI. Not only does it require the AI foundation in terms of technical AI infrastructure, data and skills, but it also requires the AI models to be closely audited and monitored, with accountability assigned across the organisation. This is not a small task. 

Ethical AI operations in practice 

We have recently tried to assess the adoption of responsible AI (RAI) practices among listed firms in major European markets. According to the survey, 94 per cent of European listed firms are expected to have developed AI principles in line with their AI strategy by 2024, more than double the 2021 rate. However, only 41 per cent of firms feel that RAI is sufficiently embedded in the daily work of all employees, highlighting the need for further integration and for methods to operationalise RAI practices across organisational functions.

Based on multiple discussions, scholarly reviews, case studies, and our own experience, the key to companies’ success in day-to-day AI ethics rests on two pillars: a) a complete journey, and b) an adequate staffing level.

The journey 

Following Eitel-Porter’s work, the journey consists of: 1) the organisation’s choice of ethical principles; 2) the setting up of an AI ethics board; 3) a robust governance process that clarifies how decisions around responsible AI are made and documented; 4) company-wide training on responsibilities and usage; and 5) a strong stress test of the AI practices.

While the journey may seem common sense, a few tips are that firms should select ethical principles that are crucial in order to develop trust and reputation in their industry, that the ethical board should be composed of true external experts in both ethics and AI, that the governance process should be staged, for example starting from data accuracy / traceability, etc. to model development before moving to production. Also, training is a must-have in order to ensure that corporate AI principles are applied at all levels of the organisation. One approach of stress testing is to establish “red teams“, as in the world of cybersecurity, and relates to the use of “white hat” hackers to test enterprise defences: “Applied to responsible AI, Red Teams constitute data scientists charged with reviewing algorithms and outcomes for signs of bias or the risk of unintended consequences.” 
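One concrete screen such a red team might script is a selection-rate disparity check across demographic groups. The sketch below uses invented audit data and a deliberately crude metric; real reviews combine several fairness metrics with qualitative analysis.

```python
import numpy as np

def selection_rate_gap(decisions, groups):
    """Disparity in positive-decision rates across demographic groups --
    one simple screen a red team can run on model outputs."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical audit sample: 1 = model approved, 0 = model rejected
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups    = np.array(["a", "a", "a", "a", "a", "a",
                      "b", "b", "b", "b", "b", "b"])
rates, gap = selection_rate_gap(decisions, groups)
print(rates, f"gap={gap:.2f}")  # flag for review if the gap exceeds a set threshold
```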

Resources 

There is no hard rule here. As an example of full regulatory compliance, banks spend about 0.5 to 1 per cent of their total costs on compliance risk monitoring. Typical high-tech companies seem to have an ethical AI team of about 20-40 people.

But, just as importantly, teams tasked with designing AI should be multicultural, as a guard against unconscious bias. Likewise, the ethics board should be high-profile to demonstrate commitment – and be responsible and accountable for its actions.

About the Author 

Jacques Bughin

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of Management. He is retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC / PE firms, and serves on the board of several companies. 

The post The EU AI Act: Trustworthy or Reckless AI? appeared first on The European Business Review.

Everybody Loves … Nvidia https://www.europeanbusinessreview.com/everybody-loves-nvidia/ https://www.europeanbusinessreview.com/everybody-loves-nvidia/#respond Tue, 19 Mar 2024 14:37:37 +0000 https://www.europeanbusinessreview.com/?p=203057 By Dr. Jacques Bughin Semiconductors are positioned at the forefront of innovation and digital transformation. With companies like Nvidia leading the charge and the semiconductor sector potentially reaching a staggering […]


By Dr. Jacques Bughin

Semiconductors are positioned at the forefront of innovation and digital transformation. With companies like Nvidia leading the charge and the semiconductor sector potentially reaching a staggering valuation of $1 trillion in the next five years, we contemplate the bright future and emerging trends shaping this critical industry. 

The Future Looks Bright for Semiconductors 

For those who may not remember, “Everybody Loves Raymond” ran for nine seasons on CBS in the US, and was (apparently) voted the 35th-best sitcom of all time by Rolling Stone magazine. Borrowing the title to include Nvidia is not far-fetched. As the Financial Times recently reported, Nvidia ranks 5th in terms of the number of hedge funds holding shares and, most importantly, it is the stock that has added the most this year. 

This interest is clearly linked to the fact that the digital revolution is finally putting all the pieces together, with the cloud, big data and AI. But for this to run, one needs semiconductors. 

The role of semiconductors in electronic circuits and lasers demonstrates their undeniable importance in our modern world.

Since their inception, semiconductors have radically changed the course of technology, with the successful demonstration of the first transistor in the 1940s. The use of semiconductors as the base material for optical fibres was then widely introduced in 2000. The role of semiconductors in electronic circuits and lasers demonstrates their undeniable importance in our modern world. As the world moves into the next phase of digitalisation and the Web 3.0 era, semiconductors are once again at a crucial inflection point. This is not just due to geopolitical factors such as TSMC-Taiwan and China, or supply chain disruptions caused by the COVID-19 pandemic, which led to delays in various industries, including automotive. Instead, it is being driven by the shift towards electronic and electric vehicles, the transition to 5G/6G wireless networks and, above all, artificial intelligence.

The recent surge in the share price of Nvidia testifies to the enthusiasm for semiconductors. The SMH index of 25 industry leaders is up around 25 per cent this year, with a lower beta than most technology and artificial intelligence stocks. These trends point to a bright future for semiconductors, with forecasts suggesting that the sector could reach a valuation of US$1,000 billion over the next five years. 

Bubble or beyond Moore? Five trends to consider 

However, this optimism begs the question: is this growth sustainable, or is it simply a bubble? For some players, such as Nvidia, their share price performance is closely linked to their return on assets (ROA) and return on equity (ROE), which have expanded significantly in recent months. Nevertheless, as the semiconductor industry continues to evolve at a rapid pace, it is important to identify and manage new dynamics around at least five emerging trends. 

  • Trend 1: Hello, (generative) AI. How will demand evolve, especially with the emergence of generative AI models driving semiconductor demand? 
  • Trend 2: Product evolution. Is silicon still the reigning champion, or will compounds such as gallium nitride (GaN) dominate the landscape, thanks to their superior electrical properties and energy efficiency? 
  • Trend 3: Dual transformation. Can sustainability and digitalisation coexist harmoniously, or is the energy-hungry nature of digital technologies a stumbling block? 
  • Trend 4: Hyper-competition. How will the competitive landscape evolve as technology giants increasingly design their own chips? 
  • Trend 5: Battle of the platforms. The rise of the ARM architecture is challenging the dominance of the x86 architecture. How will this reshape the semiconductor ecosystem, particularly in terms of chip architectures and supplier dynamics? 

Trend 1: Generative AI 

One of the key drivers of semiconductor demand in recent months has been the development of powerful generative AI models to complement already-burgeoning AI applications such as deep learning, computer vision, robotics, and Internet of Things (IoT). 

For these models to work, a special type of chip – the AI accelerator (or deep learning processor) – is needed to speed up AI computations, making them significantly faster and more energy-efficient than general-purpose processors. These AI accelerators often have multiple cores, focus on low-precision arithmetic with new dataflow architectures, and process data efficiently through specialised pipelines.

AI accelerator chips (or deep learning processors) are needed to speed up AI computations, making them significantly faster and more energy-efficient than general-purpose processors.

Key offerings include Google’s Tensor Processing Units (TPUs) and AMD’s Radeon Instinct accelerators. These specialised chips are optimised for deep learning tasks and are widely used in data centres for AI inference and training. Nvidia’s Tesla GPUs, for example, are powering AI applications in sectors ranging from healthcare to autonomous vehicles, demonstrating the important role of semiconductor companies in meeting the evolving demand for AI.

The uncertainty does not come from the surge in AI demand; it comes from the fact that there is no dominant design (yet?), and the evolution of AI in terms of horizontal / vertical LLMs is not yet defined. 

Trend 2: Is silicon here to stay? The rise of gallium nitride (GaN) 

GaN is a compound semiconductor with superior electrical properties that will usher in a new era of energy-efficient electronics. GaN has a very hard crystalline structure and a wide bandgap, making it more suitable for high-power, high-frequency optoelectronic applications such as blue LEDs, microwave power amplifiers, and space applications (e.g., solar panels on satellites). 

However, it is increasingly being used in power supplies for electronic devices, converting alternating current from the grid into low-voltage direct current. GaN technology can handle larger electric fields in a much smaller form factor than silicon, while offering much faster switching. GaN is becoming indispensable, for example, in power conversion platforms, where silicon has reached its limits, or in the transition from mobile computing to Web 3.0. GaN chips are also easier and faster to manufacture than silicon chips – manufacturing capacity being a major industry bottleneck in the recent past – so companies are turning to GaN for smaller, more efficient electronic devices.

Trend 3: The promise of dual transformation 

Dual transformation is the hope that sustainability and digitalisation are highly complementary. For example, digital technologies can enable people to work efficiently from home, reducing the environmental impact of commuting. At present, however, the vision of dual transformation is not justified, because digitalisation is an energy-intensive process. A single semiconductor factory can consume up to 1 TWh of energy per year and 2 to 4 million gallons of ultra-pure water per day. Semiconductor manufacturers have understood the challenge and, like digital-native players, are unveiling their sustainable development initiatives. These include moving cloud workloads to data centres with access to renewable energy, or improving semiconductor design. Moving to GaN, however, is a game changer, because it radically reduces energy consumption.

Trend 4: Hyper-competition? 

Until now, the AI mega-users (e.g., Google, etc.) have outsourced the value chain, buying chips from third parties. But this is changing. Many tech giants, such as Apple, Tesla, Google, and Amazon, are now making their own chips, designed specifically for their products. Google has unveiled its Pixel 6 and Pixel 6 Pro phones, which use Tensor, the first chip designed by Google to bring AI capabilities to its range of mobile phones. Apple’s 2021 MacBook Pros are based on the company’s own M1 chips. Importantly, this development could challenge the current horizontal model of AI players such as Nvidia.

This move towards in-house chip development could challenge the current model of outsourcing semiconductor production to third-party manufacturers. In particular, the trend could disrupt the dominance of companies such as Nvidia in the horizontal AI accelerator market, as technology giants seek greater control over their semiconductor supply chains. 

Trend 5: Platform battle: chip architectures 

The x86 architecture has dominated the microprocessor industry for more than 50 years. However, this is changing with the growing popularity of the ARM architecture. While this architecture was born out of the need for low-power chips for vertical applications, it is beginning to establish itself not only as a low-power solution, but also as a high-performance competitor to the established x86 players.

Google and AWS have decided to build their own chips, choosing the ARM architecture for its performance and low power consumption, which has become so important for power-hungry data centres, consumer products, and sustainability efforts. This growing shift to the ARM architecture is changing the dynamics of the semiconductor ecosystem. Unlike the x86 platform, where companies can buy from one or two suppliers, ARM has become a broker, making its intellectual property available to multiple companies. 

About the Author

Jacques Bughin

Dr. Jacques Bughin is the CEO of Machaon Advisory and a professor of Management. He retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

The post Everybody Loves … Nvidia appeared first on The European Business Review.

The Key Success Factors of a Powerful AI Factory  https://www.europeanbusinessreview.com/the-key-success-factors-of-a-powerful-ai-factory/ https://www.europeanbusinessreview.com/the-key-success-factors-of-a-powerful-ai-factory/#respond Wed, 10 Jan 2024 08:48:02 +0000 https://www.europeanbusinessreview.com/?p=199181 By Jacques Bughin More and more companies are leveraging data and deep machine learning algorithms, leading to the emergence of the “AI factory” model. As with any transformation however, knowing […]


By Jacques Bughin

More and more companies are leveraging data and deep machine learning algorithms, leading to the emergence of the “AI factory” model. As with any transformation however, knowing the keys to success makes the difference. This article explores those factors. 

1. The AI factory paradigm shift 

The beginnings of artificial intelligence (AI) can be traced back to Alan Turing’s visionary ideas in the 1950s. Today, AI drives many successful businesses, powering Netflix’s video recommendations1, Airbnb’s rental matching, Google search, and GitHub Copilot-style software code generation2.

Although often used interchangeably, AI and digital technologies differ, with AI representing a more powerful subset focused on tasks requiring human intelligence. With these unique characteristics, the emergence of the “AI factory” model has marked a significant paradigm shift, where companies are leveraging data and deep machine learning algorithms at their core. 

Many companies are moving towards this new AI factory model, including incumbents. Companies such as Lockheed Martin3, BBVA4, Cleary Gottlieb5 and EQT Ventures6 present a variety of applications, from transactional legal processes to the reorganisation of venture capital deal flows. The current evolution of AI is evident, particularly in the high-tech and B2C sectors.  

Digital transformation has paid off handsomely when incumbent companies have linked their transformation to a major renewal of their strategy.

There are also reports of significant gains for companies leveraging AI. UK-based Ocado Retail customers are using Ocado’s flagship deep learning model for inventory replenishment7 to boost product availability from 90% to over 98%. Webhelp, a French company founded in 2000 at the dawn of the internet age as an online customer support interface, has transformed its model into an AI factory. Using an architecture based on UiPath Orchestrator, the data analytics factory constantly provides new, factual lead generation data and enriches information from the corporate universe, while workers can directly and seamlessly obtain and share information through a set of powerful user interfaces, providing feedback to continuously improve the platform. Webhelp reports productivity increases of up to 40%8 in some lead generation projects that were originally conducted blind, outside its AI factory model. Moderna used AI algorithms9 to cut the development time of its COVID-19 vaccine by a factor of 20, to just 65 days, a process that would previously have taken years. Chinese company ByteDance10 has adopted the AI factory approach with its flagship TikTok product, which automatically delivers a stream of short, personalised videos to users instead of relying on explicit recommendations. This shift made the platform the market leader in short videos, despite YouTube’s leadership for at least a decade.

But as with the first era of digital transformation, AI transformation doesn’t seem so easy11. What are the key elements for success in building the right AI factory?

2. Five Key Success Factors

1. The art of strategic renewal

Digital technologies are “strategic” in nature12. When it comes to strategic change, digital transformation has paid off handsomely when incumbent companies have linked their transformation to a major renewal of their strategy. The same should be true of AI transformation. A clear case study from the media industry illustrates this point: the New York Times successfully migrated to a “reader-first paradigm”13 with its online paywall model. As part of this strategic shift, the paywall is both a billing mechanism and an elaborate readership personalisation system that attracts and retains readers over the long term with the compelling content they seek. Personalisation is in turn continuously triggered by open data collection and additive microsegmentation, and leveraged by the move to an agile AI factory model.


2. Data is the new oil, but it needs a refinery

The supply chain of your data is essential in the AI factory – you need to understand where it comes from, how it is stored, its traceability, and how it is used. DataOps is the crucial methodology for managing this data supply chain, but it is lacking in the majority of companies.
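A minimal sketch of the kind of lineage record a DataOps catalogue keeps per dataset version is shown below; the names, URIs, and fields are purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class DatasetRecord:
    """Minimal lineage entry for one dataset version in a DataOps catalogue."""
    name: str
    source: str                      # where the data comes from
    storage_uri: str                 # where it is stored
    content_hash: str                # traceability: fingerprint of the payload
    derived_from: list = field(default_factory=list)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def fingerprint(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()[:16]

raw = DatasetRecord("orders_raw", "crm_export", "s3://lake/orders/raw",
                    fingerprint(b"...raw bytes..."))
clean = DatasetRecord("orders_clean", "pipeline:dedupe_v2",
                      "s3://lake/orders/clean",
                      fingerprint(b"...clean bytes..."),
                      derived_from=[raw.name])
print(clean)  # one auditable entry per dataset version
```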

3. People have a professional future

One of the paradoxes of AI has been the emphasis on a work-less future14, in which AI will replace much human activity. Contrary to this prevailing thinking, the success of the AI factory still depends on many human skills. As an example, BBVA’s AI Factory Center15 counts 150 people among its 1,000-strong analytics community and incorporates multidisciplinary profiles, including data scientists, software engineers and developers, data architects, as well as “business translators” who define the technical and commercial role of AI. 

In fact, a comprehensive analysis of people skills and AI that we recently conducted with Accenture Research16 concluded that successful AI adoption requires the skills of the entire workforce to be adjusted. Adapting those skills to the new AI operating model, and training leaders to master that model, are two additional crucial elements in leveraging AI.

4. Your AI must be more trustworthy than your current brand

Whether it’s beneficial AI, responsible AI, or trustworthy AI17, the various terminological variants always remind us that AI needs fundamental trust to thrive18. As AI automates decisions in more and more mission-critical use cases, it is important to add accountability to the whole system19. After all, most reputable companies have faced serious problems when using AI – for example, when Microsoft’s chatbot started spreading hate speech, or when Amazon’s online recruitment tool tended to favour a certain gender or race. In general, the issue of trust can only grow with the use of generative AI, with Large Language Models (LLMs) being mostly opaque and relying on possibly biased data.


5. Innovation is radical, not incremental 

Innovation relies on establishing and embedding a culture of innovation, enriched by experimentation. Especially in the context of AI, innovation is a question of business model. The top quartile of AI companies at scale often engage in a platform strategy and collaborate with ecosystem partners. In 2020, for example, American Express partnered with the Indian Institute of Technology Madras to create a “Data Analytics, Risk and Technology” lab. Microsoft has opened its AI Factory to startups in Europe in fields as varied as healthcare, green energies and agri-food.  

A large set of new business models is emerging thanks to AI, such as immersive AI, global AI marketplaces, AI computing, and AI as a service, plus a plethora of newer models, such as copilots attached to generative AI. In general, the companies mastering the AI factory are at the frontier of experimenting with these models. Are you ready to follow suit?

About the Author 

Jacques Bughin is the CEO of MachaonAdvisory and a former professor of management. He retired from McKinsey as a senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of several companies.

References 

  1. Artificial Intelligence at Netflix – Two Current Use-Cases. 10 January 2022. Emerj. https://emerj.com/ai-sector-overviews/artificial-intelligence-at-netflix/
  2. GitHub Copilot: Everything You Need to Know. 31 October 2023. HubSpot. https://blog.hubspot.com/website/github-copilot
  3. Accelerating Artificial Intelligence (AI) at Scale. 05 May 2022. Lockheed Martin. https://www.lockheedmartin.com/en-us/news/features/2022/accelerating-artificial-intelligence-ai-at-scale.html
  4. How we drive innovation at AI Factory?. BBVA. https://www.bbvaaifactory.com/
  5. Cleary Gottlieb Launches ClearyX, A Platform for Highly Efficient, AI and Data-Driven Legal Services. 23 June 2022. ClearyX. https://clearyx.legal/2022/06/23/cleary-gottlieb-launches-clearyx-a-platform-for-highly-efficient-ai-and-data-driven-legal-services/
  6. EQT’s AI platform Motherbrain pushes the boundaries of the private markets with novel algorithm for better decision making. 05 November 2021. EQT Group. https://eqtgroup.com/news/2021/eqt-s-ai-platform-motherbrain-pushes-the-boundaries-of-the-private-markets-with-novel-algorithm-for-better-decision-making/
  7. What are Stockouts? (+ How to Prevent Out of Stocks in 2024). 13 June 2023. Shopify. https://www.shopify.com/retail/what-causes-a-stockout
  8. HOW WEBHELP ENTERPRISE OPTIMISED LEAD GENERATION IN THE B2B SECTOR. Artefact. https://www.artefact.com/cases/how-webhelp-enterprise-optimised-lead-generation-in-the-b2b-sector/
  9. Moderna leveraging its ‘AI factory’ to revolutionise the way diseases are treated. 17 May 2021. ZDNET. https://www.zdnet.com/article/moderna-leveraging-its-ai-factory-to-revolutionise-the-way-diseases-are-treated/
  10. Artificial Intelligence Factory, Data Risk, and VCs’ Mediation: The Case of ByteDance, an AI-Powered Startup. 02 May 2021. MDPI. https://www.mdpi.com/1911-8074/14/5/203
  11. Now It is Time for AI Transformation. 19 September 2023. The European Business Review. https://www.europeanbusinessreview.com/now-it-is-time-for-ai-transformation/
  12. 6 Digital Strategies, and Why Some Work Better than Others. 31 July 2017. Harvard Business Review. https://hbr.org/2017/07/6-digital-strategies-and-why-some-work-better-than-others
  13. Disruptive Innovations and Paradigm Shifts in Journalism as a Business: From Advertisers First to Readers First and Traditional Operational Models to the AI Factory. 2022. Sage Pub. https://journals.sagepub.com/doi/pdf/10.1177/21582440221094819
  14. Artificial Intelligence, Its Corporate Use and How It Will Affect the Future of Work. 27 June 2020. SpringerLink. https://link.springer.com/chapter/10.1007/978-3-030-46143-0_14
  15. BBVA AI Factory, among the world’s best financial innovation labs, according to Global Finance. 18 May 2021. BBVA. https://www.bbva.com/en/innovation/bbva-ai-factory-among-the-worlds-best-financial-innovation-labs-according-to-global-finance/
  16. The art of AI maturity. Accenture. https://www.accenture.com/us-en/insights/artificial-intelligence/ai-maturity-and-transformation
  17. NRI 2023: Translating AI Principles Into Industry Practice. 2023. Portulans Institute. https://portulansinstitute.org/translating-ai-principles-into-industry-practice/
  18. A Unified Framework of Five Principles for AI in Society. 02 July 2019. HDSR. https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8
  19. How can we ensure trustworthy AI?. 22 August 2022. Technology Magazine. https://technologymagazine.com/ai-and-machine-learning/how-can-we-ensure-trustworthy-ai

The post The Key Success Factors of a Powerful AI Factory  appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/the-key-success-factors-of-a-powerful-ai-factory/feed/ 0
Bitcoin is 15: Towards Adulthood or Speculative Bubble? https://www.europeanbusinessreview.com/bitcoin-is-15-towards-adulthood-or-speculative-bubble/ https://www.europeanbusinessreview.com/bitcoin-is-15-towards-adulthood-or-speculative-bubble/#respond Mon, 08 Jan 2024 08:45:45 +0000 https://www.europeanbusinessreview.com/?p=198946 By Jacques Bughin Globally, more than 15,000 businesses accept Bitcoin. To date, high flyers such as Microsoft, AT&T, Starbucks, Gucci, and Shopify accept Bitcoin in one form or another. As […]

The post Bitcoin is 15: Towards Adulthood or Speculative Bubble? appeared first on The European Business Review.

]]>
By Jacques Bughin

Globally, more than 15,000 businesses accept Bitcoin. To date, high flyers such as Microsoft, AT&T, Starbucks, Gucci, and Shopify accept Bitcoin in one form or another. As an executive or, for that matter, as an individual and investor, should you embrace crypto or shy away from it? 

“Happy Birthday, Mr Bitcoin”

Bitcoin, and cryptocurrencies in general, are peer-to-peer payment networks in which the verification of transactions is decentralised, outside of a third “official” party such as a central bank. Bitcoin has just passed the 15-year milestone, its principles having been disseminated in a white paper by Satoshi Nakamoto in late October 2008. In those 15 years, the story of Bitcoin has been nothing short of a saga (see figure 1), with the price moving to US$12 by 2012 and to US$40,000 more than 10 years later.

Is Bitcoin only a difficult teenager, or will it move into adulthood, beyond speculation and bubbles? The fact that countries are slowly adopting Bitcoin as a legal currency, and that massive, high-profile funds such as Ark Invest and BlackRock are applying for a spot Bitcoin ETF, are possible testaments to its legitimacy.

Figure 1: Bitcoin’s journey

The jury is still out. Bubbles have clearly occurred in Bitcoin’s short lifetime, and they are likely to coexist with fundamentals, depending on the timeline. On the sceptical side, 2022 saw the failure of Celsius, Voyager and, foremost, FTX, after Bitcoin crashed by 80 per cent. Even though it recovered and rose by 150 per cent in 2023, the current price is still about US$25,000 below its November 2021 peak.

On the optimistic side, though, we have remarked before that demand for Bitcoin is getting stronger and correlates with economic factors linked to fundamentals, such as inflation expectations. On the supply side, the Bitcoin ecosystem has now been around for years and, while it has seen very large dips (such as after 2013 and 2017), it has always bounced back stronger. This resilience is all the more remarkable given how exposed miners are: their activities are closely tied to the price of Bitcoin, since they hold a large stock of it to ensure liquidity and to profit from a significant rise should the market turn bullish again. Even during the 2022 crash, the gross margins of mining players such as Argo and Marathon remained largely healthy, above 80 per cent (at the top of the range compared with, say, SaaS businesses), with operating margins higher than 50 per cent in a sharp downturn scenario, significantly above most types of business.

The relevance of Bitcoin: three filters

So are cryptocurrencies “useless” bits or are they really useful? We propose three lenses as a way to make sense of the crypto mania. 

Empirical evidence 

Question: Is there a link between the price of Bitcoin and the potential intrinsic value of the Bitcoin economy? 

Let’s start with a few points about how Bitcoin works. First, the supply of Bitcoin is totally inelastic: it is determined by protocol, with a fixed issuance schedule whose block reward halves every four years, up to a final cap of 21 million Bitcoins. Second, a new Bitcoin is issued with a block that is valid only by virtue of a protocol such as proof of work, demonstrating that someone (the miner) has committed a certain amount of computing power, measured in hashes per second. Agents on the Bitcoin network compete to have their transactions included in the next blocks.
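
The 21 million cap follows directly from this halving schedule, as the short calculation below shows. It is a stylised version that ignores the protocol’s integer satoshi rounding: the block subsidy starts at 50 BTC and halves every 210,000 blocks, roughly every four years.

```python
# Cumulative Bitcoin issuance under the halving schedule described above.
subsidy = 50.0            # initial block reward, in BTC
blocks_per_era = 210_000  # blocks between halvings (~4 years)
total, eras = 0.0, 0
while subsidy >= 1e-8:    # stop below 1 satoshi, the smallest unit
    total += subsidy * blocks_per_era
    subsidy /= 2
    eras += 1
print(f"{eras} halving eras, total supply ≈ {total:,.4f} BTC")
# -> 33 halving eras, total ≈ 20,999,999.9976 BTC: just under the 21m cap
```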

These two characteristics of the Bitcoin economy imply that: 

  1. the limited supply of Bitcoin can be anti-inflationary compared with the supply of money from central banks (just look at the massive inflation that followed post-COVID, after governments issued large amounts of money to finance their economies), and the Bitcoin price should therefore be positively correlated with the money supply;
  2. if the Bitcoin price is a rational representation of the monetary utility of a medium of exchange, store of value or unit of account, we should observe some relationship between the Bitcoin price and the token velocity (which represents its exchange value) and the stake ratio (which represents its value as a store for long-term holders);
  3. if Bitcoin is a valuable network, its price should be linked to its user base and the hash rate.

As far as point 1 is concerned, one only has to look at the close dynamic between money-supply growth and Bitcoin price growth; the correlation, however, is not causal. With regard to points 2 and 3, a large body of academic research shows that Bitcoin prices evolve as a value network and Granger-cause hash rates. Similarly, Bitcoin prices are correlated with the stake and velocity ratios.
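
For readers who want to see what such a test looks like, here is a hedged sketch of a Granger-causality check using the statsmodels package, run on simulated stationary series standing in for price returns and hash-rate growth; the literature cited above uses real market data, not this toy setup.

```python
# Toy Granger-causality test: does the price series help predict hash rate?
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
price_ret = pd.Series(rng.normal(size=300))                # stand-in price returns
hash_growth = 0.6 * price_ret.shift(2) + 0.4 * rng.normal(size=300)

# statsmodels convention: test whether the SECOND column Granger-causes the first.
data = pd.DataFrame({"hash_growth": hash_growth, "price_ret": price_ret}).dropna()
results = grangercausalitytests(data, maxlag=4)            # small p-values at lag 2
```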

Socioeconomic data 

Question: Can Bitcoin become a powerful social network, even if it’s backed by limited original value? 

In fact, and contrary to popular belief, economic history shows many cases where money has been created even in the absence of an intrinsic source of value. The vast majority of these currencies were born of the codification of pre-existing, shared interpretations of debt and credit relations within societies.

Lynette Shaw’s recent work is fascinating, because it shows that, even in the absence of a shared, explicit understanding of Bitcoin’s value, the practical necessity of achieving widespread adoption has finally laid the groundwork for a confrontation with the inevitably social demands of establishing a new currency. Nor should we forget that Bitcoin is unprecedented in its leverage of digital communities, giving it global reach. This is also what my former McKinsey colleagues Arthur Armstrong and John Hagel (1996) had long anticipated: online communities as the most powerful social force of the internet.

Logical proof 

Question: Can Bitcoin prices be uniquely defined as an equilibrium resulting from the behaviour of a rational agent? 

Bitcoin has been the subject of numerous economic models. One strand of research has argued that the Bitcoin community can effectively act as a rational force against discretionary government seigniorage. As a private currency, Bitcoin acts strategically as a tool to prevent governments from abusing their citizens.  

Another line of research has been to reproduce Lucas’s rational-expectations equilibrium in models where the currency would be, as some claim of Bitcoin, necessarily worthless. These economists show that such a unique rational equilibrium can exist, providing theoretical support for the claim that cryptocurrencies are forms of money. Furthermore, the logic may also imply that a central bank, government or active political intervention is not required to stabilise the value of the “supposedly worthless” cryptocurrency, if the protocol can be designed to support a unique equilibrium, in effect replicating the Fisher equation of money.

What does the jury have to say? 

The above arguments in favour of Bitcoin have the merit of existing. 

But are those arguments convincing? Taken separately, they all carry caveats. For example, when it comes to logical proofs, models are just models: the reproduction of Lucas’s general equilibrium is an elegant piece of mathematical work, but it may not take into account the fact that cryptocurrencies are far more volatile than these types of models suggest. According to our own research, long-term cointegration seems to take 5-7 years, implying that equilibrium is at best latent and not visible in a dynamic market such as Bitcoin’s. Secondly, the use values of cryptocurrencies, such as inflation hedges, always depend on the prior establishment of Bitcoin’s value by others.

In general, however, the three lenses are highly complementary and demonstrate the strength of the argument that Bitcoin is probably much more than a bubble. In particular, 

  1. cryptocurrency is just one case of blockchain. Blockchain itself is a meta-technology that could democratise finance, for example for real estate, just as cryptocurrency could do in developing markets. 
  2. more importantly, the fact that cryptocurrencies can indeed provide a counterweight to excessive state seigniorage puts governments on a quest for excellence. If this is all that Bitcoin manages to do, it does meet its original aim of ensuring that we are less at the mercy of poor inflationary policy measures. 

As an executive, the above suggests that crypto may be more than thin air. There is arguably still much to be seen before fully embracing crypto, as some companies have started to do for transactions; after all, those companies represent barely 0.01 per cent of all companies worldwide. Nevertheless, taking an active role in crypto may allow a company to learn and experiment with blockchain, creating strong optionality for a future of even more decentralised ecosystems.

 

About the Author 

Jacques Bughin is CEO of MachaonAdvisory and a former professor of management who retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of a number of companies. 

The post Bitcoin is 15: Towards Adulthood or Speculative Bubble? appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/bitcoin-is-15-towards-adulthood-or-speculative-bubble/feed/ 0
2024: What is the Near Future of Generative AI? https://www.europeanbusinessreview.com/2024-what-is-the-near-future-of-generative-ai/ https://www.europeanbusinessreview.com/2024-what-is-the-near-future-of-generative-ai/#respond Thu, 28 Dec 2023 13:28:39 +0000 https://www.europeanbusinessreview.com/?p=198617 By Jacques Bughin Generative AI has already made a big splash, but just how wet we’re all going to get in the future is still a matter for speculation. Nevertheless, […]

The post 2024: What is the Near Future of Generative AI? appeared first on The European Business Review.

]]>
By Jacques Bughin

Generative AI has already made a big splash, but just how wet we’re all going to get in the future is still a matter for speculation. Nevertheless, as you reach for your towel, Jacques Bughin has a few solid predictions for you.  

Large language models (LLMs), exemplified by OpenAI’s Generative Pre-trained Transformer 4 (GPT-4), defined the year 2023, ushering in a new revolution in AI and machine learning. While traditional machine learning models often rely on single-source data sets, LLMs, built on transformer neural network architectures, are trained at an unprecedented scale of compute and data. The result is a set of impressive capabilities, encompassing tasks that were once exclusive to humans, such as reasoning, abstraction, and projection.

As we explore the potential of these models, it’s essential to contemplate the future. Here are five predictions. 

LLMs will become (even) better skilled 

In less than a year, LLMs have delivered an impressive evolution, expanding from text-based generative AI to incorporating voice and vision in models like GPT-4. Looking back, the original GPT of 2018 could not generally produce coherent text, and a few months later GPT-2 could only follow simple instructions. GPT-3 and now GPT-4 can perform a wide range of language tasks on a par with humans.

Other models, such as Google’s Gemini, TII’s Falcon, and Anthropic’s Claude, have also been enhancing their performance to compete with OpenAI’s products.

Over the past five years, LLMs have, on average, improved their accuracy on the massive multitask language understanding (MMLU) scale, reaching human expert-level accuracy[1]. This performance has scaled closely with computational resources: larger models such as GPT-3 showcase significantly enhanced capabilities compared with their predecessors, leveraging roughly 20,000 times more data, computation and parameters.

Another reason LLMs chase size is that they have demonstrated massive bursts in abilities such as programming or arithmetic beyond certain model-size thresholds. In general, performance improves roughly gradually and predictably with scale when the basis is knowledge or memorisation, but it can exhibit “breakthrough” behaviour when much more complex reasoning is required. Those breakthroughs are only now becoming visible, and the race for performance among large LLM providers competing for share, from Google to Microsoft and others, suggests that performance will continue to accelerate in the months to come.

Performance is “more with less” 

Based on estimations by DeepMind and Meta, we estimate that performance improves by roughly a factor of 10 for a model 10 times larger in both parameters and training tokens. This also means that the scale needed for continued improvement is challenging, and that “big is beautiful” may need a twist going forward. On this path, building LLMs is computationally costly and inevitably raises sustainability issues, not to mention that LLMs will soon have consumed all the public data available to train and tailor their models.
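
To make the scale economics concrete, here is a sketch of the compute-optimal scaling law fitted by Hoffmann et al. (2022), cited in the references below: loss L(N, D) = E + A/N^α + B/D^β for N parameters and D training tokens. The coefficients are the paper’s published fits; the “10x model, 10x data” comparison is purely illustrative.

```python
# Chinchilla-style scaling law with Hoffmann et al. (2022) coefficients.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss L(N, D) = E + A/N^alpha + B/D^beta."""
    return E + A / n_params**alpha + B / n_tokens**beta

base = loss(7e9, 140e9)      # a 7B-parameter model on 140B tokens
big = loss(70e9, 1.4e12)     # 10x the parameters AND 10x the tokens
print(f"loss {base:.2f} -> {big:.2f}")   # ~2.18 -> ~1.94: real but diminishing gains
```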

How, then, will performance increase in a paradigm shift involving “more with less”? We already see a degree of segmentation between models of around 10 billion parameters or more, such as Llama, Alpaca and Falcon, and the very big ones with hundreds of billions of parameters. The 10-billion-parameter models are, more often than not, open-source, but they are also experimenting with new methods to break away from high computing costs.

Those models are exploring new approaches to data, such as data repetition, data augmentation, and synthetic data, on top of merging generative AI with reinforcement learning. Repeated data can help an LLM learn content structure and patterns more effectively, but there is clearly a risk that duplicate data overwhelms the training process and degrades the model’s capabilities. The challenge will thus be to find the right balance between learning and degradation.

Reinforcement learning techniques have proven extremely successful in the past, from Tesauro’s system of 1995, which learned to play backgammon at a very strong master level, to Crites and Barto’s (1996) techniques for optimally dispatching lifts in a multi-storey building. They are quickly being adopted for refining LLMs and supporting their accuracy. The customisation of GPT is one trick to make people more likely to use LLMs and provide reinforcement clues; even if the model’s performance is not necessarily better today, in the medium term models are sure to learn extensively from the feedback loops of intensive and diversified usage.
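
One standard ingredient of this feedback loop, sketched below under simplifying assumptions, is a reward model trained on pairwise human preferences with a Bradley-Terry-style logistic loss; `reward_model` here is a toy placeholder for any network mapping a response representation to a scalar score.

```python
# Hedged sketch of reward modelling from pairwise preference feedback.
import torch
import torch.nn.functional as F

def preference_loss(reward_model: torch.nn.Module,
                    chosen: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

reward_model = torch.nn.Linear(128, 1)   # toy scorer over response embeddings
chosen = torch.randn(8, 128)             # embeddings of human-preferred replies
rejected = torch.randn(8, 128)           # embeddings of rejected replies
loss = preference_loss(reward_model, chosen, rejected)
loss.backward()                          # gradients for an optimiser step
```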

A promising avenue is synthetic data, which has already proven its worth outside LLMs in many circumstances. A publicly known case is Nvidia’s simulator application in its industrial metaverse, which successfully leverages synthetic data to train robots. More generally, synthetic data can be used to rebalance samples when the required prediction concerns rare events such as financial fraud or manufacturing defects; generating synthetic instances of such events increases model accuracy, as illustrated below. Watch this space, but “more with less” is here to stay and expand.
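
The sketch below uses SMOTE from the imbalanced-learn package (one common choice among several) on simulated data with roughly 1 per cent “fraud” cases; the package and setup are assumptions of this illustration, not part of the original article.

```python
# Rebalancing rare events with synthetic minority-class samples (SMOTE).
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10_000, weights=[0.99, 0.01],
                           random_state=0)        # class 1 ~ rare "fraud" events
print(Counter(y))                                 # heavily imbalanced

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))                             # classes now balanced for training
```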

Cracking xAI for LLM 

Explainable AI, or xAI, is the capacity of AI models to elucidate the reasoning behind their actions in a manner comprehensible to humans. This transparency is paramount for user trust, particularly in safety-critical domains like healthcare, self-driving cars, or finance. AI providers also recognise that understanding the flaws and biases in their models is needed to enhance accuracy.

xAI has seen substantial progress outside LLMs, employing attribution techniques like gradient-based methods and SHAP values. For LLMs, however, the challenges are twofold. First, these techniques demand considerable computational power to explain billion-parameter models. Second, the sheer volume of content absorbed by LLMs surpasses human capacity for absorption, complicated by the high non-linearity of neural networks. The “black box” is an issue not only for users, but for the designers themselves. Remember that the model’s improvements could not be anticipated at the release of GPT-3; it took some time for OpenAI, among others, to realise GPT-3’s breakthrough capabilities of few-shot learning and chain-of-thought reasoning.
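
As a point of contrast, here is what the SHAP attribution mentioned above looks like on a small tree model, where it is tractable; the paragraph’s point is precisely that this kind of per-feature attribution becomes very hard at LLM scale. The sketch uses the open-source `shap` package.

```python
# SHAP attributions on a small, tractable model (not an LLM).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])
# shap_values attributes each prediction to individual input features,
# i.e. each feature's contribution relative to the model's base value.
```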

There is currently no robust technique providing a clear logic of how LLMs operate, owing to the complexity of dealing with billions of connections between artificial neurons. Despite claims to the contrary, recent research by OpenAI suggests that only about 2 per cent of individual neurons exhibit explanations scoring above 70 per cent.

The field of xAI is flourishing out of necessity for LLMs’ survival. Various tools, including geometry-based approaches, aim to make LLM models more transparent. Alternative solutions involve exploring techniques beyond pure neural-network-based models, such as merging with symbolic AI or employing synthetic data.

LLM goes to the edge 

Users have often experienced latency issues with products like ChatGPT, and concerns about the privacy and security of data ingested into LLMs have arisen due to their reliance on cloud computing. Edge AI presents a potential solution, offering connected intelligence with low latency, as well as privacy-preserving AI services at the network edge.

A virtuous cycle emerges where LLMs optimise last-mile network consumption, benefiting telecom providers through improved AI-based scheduling and resource allocation algorithms. This results in decreased latency and enhanced spectral efficiency.  

Monetary considerations come into play as well. The value of LLMs lies in their application, not just in their training sets. This suggests a shift towards smaller, versatile LLM models for simpler applications, such as human interactions and conversations. Techniques like parameter-efficient fine-tuning and QLoRA can tune smaller models to perform like state-of-the-art LLMs.
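
A hedged sketch of that QLoRA recipe, using the Hugging Face transformers and peft libraries: load a base model quantised to 4 bits, then train small low-rank adapters on top. The model identifier is a placeholder and the hyperparameters are illustrative, not a recommendation.

```python
# QLoRA-style setup: 4-bit base model + low-rank adapters (peft).
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "some-base-model",                    # placeholder model id
    quantization_config=bnb_config)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)       # only the adapters are trainable
model.print_trainable_parameters()        # typically well under 1% of weights
```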

This trend not only secures the demand side but also sees consumer tech players finding value in making their devices more intelligent, rather than relying solely on cloud providers. The intersection of 5G and LLMs is expected to create a small revolution, with companies like Apple, Samsung, and others actively working on it. 

Implications of this trend include the potential for LLM models to reside at the edge, such as on smartphones, enabling real-time conversational activities like image collection and chatbot discussions. In business contexts, tailored LLMs facilitate intelligent and private access to company-specific information, supporting the vision of widespread mobile and remote work. Additionally, robots with LLM capabilities could act in real-time collaboration with humans. 

LLM will kick off the age of AI transformation 

The hype aside, 2024 appears poised to witness the transition from testing to real competitive usage of generative AI by companies. At present, many business users engage ChatGPT for programming and marketing support.

However, the true industrialisation of LLM-based transformation is a gradual process. Several factors contribute to this, prominent among them the absence of established usage policies. Concerns related to security, hallucination, and other issues are valid; yet the underlying reality is that enterprises possess a valuable reserve of private data, which can fuel the development of domain-specific models and enable the integration of AI as a strategic advantage within the enterprise.

Major tech entities like Microsoft and Google are evidently targeting the enterprise markets, leveraging their wealth of data. Simultaneously, other players are entering the arena with new models as a service tailored for the enterprise space. Singapore-based WIZ.AI’s LLMs exemplify this trend, seizing the enterprise AI opportunity. 

The evolving value chain is swiftly organising around vertical application providers, some integrated with LLMs. Examples include Claude’s emphasis on education, FinGPT specialising in finance, Google’s focus on healthcare, Dramatron contributing to theatres and movie scripting, or Codex powering GitHub Copilot for software development. 

The era of digital transformation is now a part of history, making way for the era of AI transformation. The signs are that 2024 will be the year, with companies positioning themselves to harness the potential of generative AI, recognising it as a pivotal element in reshaping their business landscape.

About the Author 

Jacques Bughin is CEO of MachaonAdvisory and a former professor of management who retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of a number of companies. 

References

  • [1] Hendrycks and colleagues (2021) created the test from 57 multiple-choice tasks spanning topics from the humanities to the mathematical sciences, collected from a wide array of practice questions for tests such as the Graduate Record Examination, the United States Medical Licensing Examination and the Examination for Professional Practice in Psychology. According to the authors, human-level accuracy on the test varies. They report that “unspecialised humans from Amazon Mechanical Turk obtain 34.5 per cent accuracy on this test. (…) real-world test-taker human accuracy at the 95th percentile is around 87 per cent for US Medical Licensing Examinations”.
  • Balestriero, Randall, Romain Cosentino, and Sarath Shekkizhar. “Characterizing Large Language Model Geometry Solves Toxicity Detection and Generation.” arXiv preprint arXiv:2312.01648 (2023). 
  • Bughin, Jacques (2023, October 1). “To ChatGPT or not to ChatGPT: A note to marketing executives”. In Applied Marketing Analytics: The Peer-Reviewed Journal, Volume 9, Issue 2. 
  • Buchatskaya, E., Cai, T., Rutherford, E., de las Casas, D., Hendricks, L. A., Welbl, J., et al. (2022), “An empirical analysis of compute-optimal large language model training”. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. 
  • Chenghao Huang, Siyang Li, Ruohong Liu, Hao Wang, and Yize Che (2023), “Large Foundation Models for Power Systems.” ArXiv, December 2023. 
  • Dettmers, T., Pagnoni, A., Holtzman, A., and Zettlemoyer, L. (2023). “QLoRA: Efficient Finetuning of Quantized LLMs”, arXiv preprint arXiv:2305.14314, May 2023.
  • Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., & Steinhardt, J. (2020). “Measuring massive multitask language understanding.” arXiv preprint arXiv:2009.03300. 
  • Hoffmann, J., S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, and L. Sifre, “Training compute-optimal large language models,” vol. abs/2203.15556, 2022. 
  • Kaplan, J. S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei, “Scaling laws for neural language models,” CoRR, vol. abs/2001.08361, 2020.  
  • Kashefi, Rojina, et al. “Explainability of Vision Transformers: A Comprehensive Review and New Perspectives.” arXiv preprint arXiv:2311.06786 (2023).  
  • Letaief, Khaled B., et al. “Edge artificial intelligence for 6G: Vision, enabling technologies, and applications.” IEEE Journal on Selected Areas in Communications 40.1 (2021): 5-36. 
  • Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao, L., Wiegreffe, S., Alon, U., Dziri, N., Prabhumoye, S., Yang, Y. and Welleck, S. (2023). “Self-refine: Iterative refinement with self-feedback.” arXiv preprint arXiv:2303.17651. 
  • Mehra, Akshit (2023). “Data Collection and Preprocessing for Large Language Models,” June, Labellerr, accessed on 16 December. 
  • Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A., Garriga-Alonso, A., et al. “Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.” arXiv preprint 2206.04615, 2022. 
  • Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Roziere, B., Goyal, N., Hambro, E., Azhar, F., et al. (2023), “LLaMA: Open and efficient foundation language models“, arXiv preprint 2302.13971. 
  • Zhao, H., Chen, H., Yang, F., Liu, N., Deng, H., Cai, H., … & Du, M. (2023). “Explainability for large language models: A survey.” arXiv preprint arXiv:2309.01029. 
  • Zhao, Wayne Xin, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, et al. “A survey of large language models.” arXiv preprint arXiv:2303.18223 (2023). 
  • Zhou, D., Scharli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q. V., and Chi, E. H. (2023). “Least-to-most prompting enables complex reasoning in large language models.” In: The Eleventh International Conference on Learning Representations, 2023. https://openreview.net/forum?id=WZH7099tgfM

The post 2024: What is the Near Future of Generative AI? appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/2024-what-is-the-near-future-of-generative-ai/feed/ 0
Hacking Trustworthy AI https://www.europeanbusinessreview.com/hacking-trustworthy-ai/ https://www.europeanbusinessreview.com/hacking-trustworthy-ai/#respond Wed, 15 Nov 2023 02:18:37 +0000 https://www.europeanbusinessreview.com/?p=196102 By Jacques Bughin In the aftermath of the initial AI tidal wave that has rolled across business and industry, there are signs that the emphasis may now be shifting from […]

The post Hacking Trustworthy AI appeared first on The European Business Review.

]]>
By Jacques Bughin

In the aftermath of the initial AI tidal wave that has rolled across business and industry, there are signs that the emphasis may now be shifting from how best to apply the technology to how best to demonstrate its trustworthiness.

The term “artificial intelligence” (AI) was coined more than 50 years ago at Dartmouth College, in a workshop proposal which defined AI as “the upcoming science and engineering of making intelligent machines”1. Since then, the field has experienced ups and downs, but it re-emerged 10 years ago with the groundbreaking advances in deep learning, and now with the deployment of generative adversarial networks (GANs), variational auto-encoders, and transformers (Vaswani et al., 2017; Haenlein and Kaplan, 2019).

Those various techniques are now sophisticated enough to qualify AI for the ambition of smart machines stated at Dartmouth. A case in point is OpenAI. Based on transformer-based generative AI (“LLM”) models, OpenAI’s software suite of ChatGPT, DALL-E and Codex already provides the foundations for new drug discovery, replaces software engineers in writing code, conducts sophisticated market research (Bughin, 2023), and dramatically improves customer service (Nicolescu and Tudorache, 2022). Other recent studies also demonstrate significant productivity gains from generative AI (Noy and Zhang, 2023).

The crux

With all the surrounding buzz, the many benefits of AI may arouse suspicion in some. More appropriately, Gartner’s Hype Cycle may suggest that we are entering a phase of “inflated expectations”. In any case, AI’s benefits must be balanced against new social, trust, and ethical challenges.

As for every technology (McKnight, et al., 2011), its pervasive use will only happen if it is beneficial to society, rather than being used dangerously or abusively. From beneficial AI to responsible AI (Wiens et al., 2019), ethical AI (Floridi et al., 2018), trustworthy AI ( EU Experts Group2), or  explainable AI (Hagras, 2018), the variants in the terminology still remind us that one needs foundational trust for AI to thrive (Rudin, 2019).

Currently, many governments and international public organisations have developed their own frameworks. Private institutions are also taking a lead. Google’s AI principles, for example, include “Be socially beneficial”, “Avoid creating or reinforcing AI bias”, “Be built and tested for safety”, and “Be accountable to people”. SAP has set up an AI Ethics Steering Committee and an AI Ethics Advisory Panel. But the key question now is how actors are translating AI principles into real organisational practices. After all, most high-profile companies have run into serious challenges using AI, for example when Microsoft’s chatbot risked delivering hate speech, or when Amazon’s online recruiting tool showed bias towards a certain gender or race.

A glimpse at the journey

Even if getting such principles to work in practice is complex, businesses that get this translation right will create a win-win for society and for their business goals. We recently worked with a major consultancy to assess how large, mostly publicly quoted global companies are building their AI journeys, including how they have launched trustworthy-AI organisational practices.

Our sample is rather unique: more than 1,500 firms across 10 major countries, including the US, China and India, the top European countries, Brazil and South Africa, and extending well beyond the usual suspects of high-tech companies. Five messages stand out.

Table 1: Trustworthy AI practices
  1. Companies are not standing still. About 80 per cent of companies have launched some form of trustworthy AI.
  2. Companies are, however, rarely embracing the full range of practices that form the backbone of trustworthy AI. On average, firms use four of the 10 organisational practices reviewed, and only 3 per cent of companies are embracing all of them.
  3. Among the practices considered, none has yet been deployed at mainstream scale (table 1).
  4. Country and sector seem to play a role in the degree of operationalisation. Europe lags Asia (India and Japan), Canada leads the US, and high tech is not that far ahead of other sectors.
  5. Operationalising trustworthy AI is a top priority for about 30 per cent of companies, not necessarily to further expand their efforts, but to catch up with their peers.

Learning from leaders

There is to date no visible correlation between responsible-AI practices and the return on AI investment, whether one looks at revenue relative to investment or at AI project payback.

However, looking at the top 10 per cent of companies versus the 30 per cent not yet operationalising trustworthy AI, a glimpse of momentum is visible: trustworthy AI is more widely implemented, and given more priority over other actions, the more firms have exploited AI and the more they have spent on it. Judging by this pattern, trustworthy AI tends to be implemented after AI diffusion. The next evolution to watch for is trustworthy AI becoming a precondition of good AI transformation practice.

About the Author

Jacques Bughin is CEO of MachaonAdvisory and a former professor of management who retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of a number of companies.

References
  1. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  2. Ethics guidelines for trustworthy AI | Shaping Europe’s digital future (europa.eu)
  • Bughin, J. (2023a). “To ChatGPT or not to ChatGPT: A note to marketing executives”, Applied Marketing Analytics.
  • Bughin, J. (2023b). “How productive is generative AI?”, Medium, available at: https://blog.gopenai.com/how-productive-is-generative-ai-large-under-the-right-setting-29702eb2de89
  • Floridi, L., & Cowls, J. (2019). “A unified framework of five principles for AI in society”, Harvard Data Science Review, 1(1), 1-15.
  • Haenlein, M., & Kaplan, A. (2019). “A brief history of artificial intelligence: On the past, present, and future of artificial intelligence”, California Management Review, 61(4), 5-14.
  • Hagras, H. (2018). “Toward human-understandable, explainable AI”, Computer, 51(9), 28-36.
  • McKnight, D.H., Carter, M., Thatcher, J.B., & Clay, P.F. (2011). “Trust in a specific technology: An investigation of its components and measures”, ACM Transactions on Management Information Systems (TMIS), 2(2), 1-25.
  • Noy, S. and Zhang, W. (2023). “Experimental evidence on the productivity effects of generative artificial intelligence”, available at SSRN 4375283.
  • Rudin, C. (2019). “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead”, Nature Machine Intelligence, 1(5), 206-15.
  • Thiebes, S., Lins, S. & Sunyaev, A. (2021). “Trustworthy artificial intelligence”, Electronic Markets, 31, 447-64.
  • Wiens, J., Saria, S., Sendak, M., Ghassemi, M., Liu, V.X., Doshi-Velez, F., et al. (2019). “Do no harm: A roadmap for responsible machine learning for health care”, Nature Medicine, 25(9), 1337-40.

The post Hacking Trustworthy AI appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/hacking-trustworthy-ai/feed/ 0
Now It is Time for AI Transformation https://www.europeanbusinessreview.com/now-it-is-time-for-ai-transformation/ https://www.europeanbusinessreview.com/now-it-is-time-for-ai-transformation/#respond Tue, 19 Sep 2023 12:45:06 +0000 https://www.europeanbusinessreview.com/?p=191960 By Jacques Bughin and Ivan Gjepali Unsurprisingly, the focus in the application of AI in business and industry to date has been on the potential of the technology itself. However, […]

The post Now It is Time for AI Transformation appeared first on The European Business Review.

]]>
By Jacques Bughin and Ivan Gjepali

Unsurprisingly, the focus in the application of AI in business and industry to date has been on the potential of the technology itself. However, organisations seeking success in deploying AI need to understand that the technology is not the end of the story.

AI and the new promised land

The next approaching Kondratieff cycle has some as yet unknown characteristics. But, as asserted by Giulio Sapelli in “Beyond Capitalism”,1 it will entail the pervasive development of artificial intelligence (AI) technologies. In using the term “AI”, we are referring to a set of cutting-edge technologies that can learn from data and replicate human cognitive processes, with machine learning and deep learning algorithms enabling them to become increasingly sophisticated.

An early vision of this new Kondratieff cycle is already visible in a broad number of sectors. AI has powered 80 per cent of the views on the Netflix platform, and facilitated Moderna’s discovery, in record time, of its mRNA-based vaccine against COVID-19. Generative AI platforms such as Copilot, based on OpenAI’s Codex, increase programmers’ coding throughput by more than 50 per cent.

From an economic and managerial viewpoint, AI is thus projected to serve as a pivotal competitive asset. Thomas Davenport2 claims that AI represents the most crucial technology of our era, emphasising its capacity to have a profound impact on numerous domains and shape our collective future. Spyros Makridakis believes that the AI revolution could surpass the impact of both the industrial and digital revolutions.

Time for AI transformation if one wants to reach the promised land

But there is a small issue: notwithstanding the hype surrounding AI,3 the return on investment for the majority of companies launching their AI investment initiatives has been close to zero.4

Reasons have been put forward to interpret this phenomenon. Management scholars such as Fountaine et al. in the Harvard Business Review uncovered the necessity of redesigning the organisation in order to overcome the cultural and organisational barriers related to implementing AI. In fact, one of the main mistakes that leaders commit is to treat AI as a plug-and-play technology with fast-yielding benefits. They wrongly assume that technology is the biggest challenge, while in fact it is culture.

Others, such as professors Iansiti and Lakhani5, pinpoint the fact that companies must transform themselves from a typical factory with fixed assets and tangible processes to a new, mostly software-based factory with AI inside.

Our viewpoint is that AI will be revolutionary, but a clear-cut AI transformation effort is needed in order to benefit from its potential. We recently launched research to analyse this claim, following our close cooperation on a recent Accenture paper, “The art of AI maturity” (2021),6 which implies that AI should be part of a major transformation in order to realise its potential. The transformative potential can be gauged by the corporation’s level of AI maturity: the degree to which the organisation has mastered AI-related capabilities in the right combination to achieve high performance.

In practice, this means two important things. First, firms have to master and make massive synergies out of three complementary domains of capabilities: foundational capabilities (data, technology), augmentation capabilities (innovation and talent), and organisational capabilities. Second, it is time to shift from digital to AI transformation. The reason is that AI must fundamentally rely on transformational change, with a more extensive augmentation layer that will scale the benefit of AI. In fact, the three domains of AI capabilities can be described and justified as follows.

Foundational capabilities.

Regarding foundational, ICT-based capabilities, data plays a crucial role in enabling AI’s neural potential. There is nothing new in this; remember how The Economist7 once referred to data as the new oil. What must be recognised, though, is that, given the nature of AI systems, their potential depends as heavily on the quality and trustworthiness of the data as on the quantity available for training and for reinforcing the validity and accuracy of the algorithms.

The technological infrastructure is a critical component that enables organisations to leverage data effectively and implement machine learning solutions. Organisations need scalable data storage and high processing power to run complex algorithms. Particularly crucial is the availability of robust software tools and libraries for building and training AI models. MLOps, a set of practices and tools for managing and scaling machine learning systems, is a crucial component of AI performance in business: it entails integrating machine learning models into the software development lifecycle and establishing best practices for deployment and maintenance, which can significantly improve their reliability and scalability.
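
As a minimal sketch of what such MLOps discipline looks like in code, the example below uses MLflow (one widely used open-source option among several, assumed here for illustration) to turn a trained model into a versioned, reproducible artefact rather than a one-off script output.

```python
# Hedged MLOps sketch: log parameters, metrics, and a versioned model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run():                          # traceable experiment run
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_param("max_iter", 500)             # record the configuration
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")  # deployable artefact
```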

Augmentation capabilities. A major debate regarding AI is its efficiency, especially when it is geared towards displacing jobs. This workless future is intimidating, but not an adequate picture of the future with AI per se.

The main, harder-to-capture potential of AI is its capacity to augment: to support more innovative ideas and more potent work. Universal Robots, a Danish company,8 has used AI to design new forms of collaborative robots that augment the quality and speed of product manufacture by more than 50 per cent, while augmenting the workforce. Aidoc uses deep learning algorithms to analyse medical images and provide insights to radiologists, increasing the accuracy of their diagnoses by close to 40 per cent. NVIDIA GauGAN9 is a generative AI model that can take simple sketches by users and create lifelike images, with powerful applications in architecture, urban planning, and the entertainment industry.

As the fundamental nature of AI is to innovate, organisations that excel at AI typically must also have strong complementary talent management practices, including a clear talent strategy, a robust talent pipeline, and effective talent development programmes. Organisations also need to invest in building the skills and capabilities of their existing workforce to ensure that they can work effectively with AI. This requires a combination of upskilling and reskilling programmes, as well as new hiring practices to attract and retain talent with the right mix of technical and business skills.

Organisational capabilities. Organisational integration in the case of AI refers to the degree to which data and AI experts work hand in hand with the business to drive the strategic agenda of the organisation. Here, CEO sponsorship must be pervasive as it sends a clear message to the entire organisation that AI is not just a technology initiative, but a new and innovative operating model.

Showing the evidence

The above suggests that AI success lies in mastering all three types of capabilities, maximising the synergetic gains so that AI scales significantly into a competitive advantage.

Case studies can illustrate the above, but they lack systematic proof power; the evidence must be much broader and statistically robust. We therefore developed a machine learning algorithm on a sample derived from the Accenture Research on AI maturity, covering a few thousand of the most globalised large firms worldwide that have started to experiment with AI. We uncovered three crucial and statistically robust findings (see exhibit 1).
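
The article does not disclose the exact algorithm, but a stylised sketch of the general approach it describes, clustering firms on their scores across the three capability domains and profiling the resulting groups, might look like this (the data here is simulated purely for illustration):

```python
# Illustrative clustering of firms by capability scores (simulated data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: foundational, augmentation, organisational capability scores.
scores = rng.uniform(0, 100, size=(2000, 3))

X = StandardScaler().fit_transform(scores)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for k in range(4):                                # profile each cluster
    print(k, scores[labels == k].mean(axis=0).round(1))
```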

Exhibit 1. Note: cluster 1 is the reference.

Finding 1: A huge spread of AI experience

Firms’ level of AI transformation is best seen through four clusters, essentially reflecting the extent to which they have already invested in all three domains of capabilities. The difference across clusters is rather large, with one cluster already well ahead in the mastering of the portfolio of AI-based capabilities, while another is barely experimenting with AI and likely composed of frustrated companies failing to see how AI makes any difference.

Finding 2: AI transformation capability is way beyond the frontier

Two clusters are very weak, with a negative concentration of capabilities, meaning that their investments are towards components of capabilities which have the weakest impact on AI transformative potential (e.g., they have good, but unlabelled data sets; they have invested in good human skills, but lack digital, data-driven skills, etc.).

While two clusters have some solid basis for capabilities, only one (and, by the way, the smallest of all clusters, 13 per cent of companies) has the minimum scale on all three domains. Still, the cluster is at roughly 50 per cent of the total potential of AI transformation, or far from fully exploiting the entire set of capabilities to scale AI.

Finding 3: Even if off the frontier, AI is already game-changing

Even without exploiting the maximum potential, the best cluster already uses AI for 30 per cent of its revenue generation, an effect statistically larger than the impact achieved by any other cluster. The growth in AI’s top-line contribution for this highest-performing cluster is at least double that of the other firms, demonstrating the great importance of scale in AI.

The new AI transformation journey

Over and above the case-study evidence, the above analysis adds statistical evidence of AI’s potential. But it also shows that this potential can only be captured by a new kind of transformation, one that integrates the three domains of technology architecture, organisational change, and augmentation of capabilities.

The importance of AI transformation is that:

  1. each domain is more complex than digital transformation (e.g., data layer and new cybersecurity architecture for foundational capabilities; software-based innovation, and work skill augmentation for augmentation capabilities; and a CEO actively embedded in this new change);
  2. all capabilities must interwork to hope to secure any of the potential of AI;
  3. it must scale rapidly to reach the critical threshold to fully reap the rewards of AI.

Hence, to fully harness the potential of AI for business purposes, it is time to aim for the top and embrace a holistic and proactive AI-driven approach. In other words, the power of AI can be unlocked only when excellence is reached simultaneously in all the AI capabilities.

About the Authors

Jacques Bughin is CEO of MachaonAdvisory and a former professor of management, who retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of a number of companies.

Ivan Gjepali is a double-degree master’s student in business engineering at Solvay Business School in Brussels and the Polytechnic University of Milan. His interests focus on the intersection of new technologies and innovation strategies. He contributes to the acceleration of AI transformation in the Council of the EU.

References:

  1. Beyond Capitalism: Machines, Work and Property. 2019. Palgrave Macmillan Cham. https://link.springer.com/book/10.1007/978-3-030-20769-4.
  2. All-in On AI: How Smart Companies Win Big with Artificial Intelligence. 2023. Thomas H. Davenport and Nitin Mittal. https://www.tomdavenport.com/book/all-in-on-ai/.
  3. “The Business of Artificial Intelligence”. 18 July 2017. Harvard Business Review. https://hbr.org/2017/07/the-business-of-artificial-intelligence.
  4. “ROI from AI: The importance of strong foundations”. 20 October 2020. Deloitte. https://www.deloitte.com/global/en/our-thinking/insights/industry/technology/artificial-intelligence-roi.html?icid=top.
  5. “Competing in the Age of AI”. January-February 2020. Harvard Business Review. https://hbr.org/2020/01/competing-in-the-age-of-ai.
  6. “The Art of AI Maturity”. 1 January 2023. Harvard Business Publishing Education. https://hbsp.harvard.edu/product/ROT477-PDF-ENG?Ntt=.
  7.  “The world’s most valuable resource is no longer oil, but data”. 6 May 2017. The Economist. https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data.
  8. “Odense, Denmark: World´s Largest Collaborative Robots Hub”. 4 February 2020. Universal Robots. https://www.universal-robots.com/about-universal-robots/news-centre/world-s-largest-hub-for-collaborative-robots-opens-in-denmark/.
  9. “What Is GauGAN? How AI Turns Your Words and Pictures Into Stunning Art”. 1 March 2022. NVIDIA. https://blogs.nvidia.com/blog/2022/03/01/what-is-gaugan-ai-art-demo/

The post Now It is Time for AI Transformation appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/now-it-is-time-for-ai-transformation/feed/ 0
Does Generative AI Generate Jobs? https://www.europeanbusinessreview.com/does-generative-ai-generate-jobs/ https://www.europeanbusinessreview.com/does-generative-ai-generate-jobs/#respond Sat, 02 Sep 2023 12:48:42 +0000 https://www.europeanbusinessreview.com/?p=190894 By Jacques Bughin The evolution of AI and its quick adoption by organisations has predisposed many workers to the fear of losing their jobs to the technology. This fear might […]

The post Does Generative AI Generate Jobs? appeared first on The European Business Review.

]]>
By Jacques Bughin

The evolution of AI and its quick adoption by organisations has predisposed many workers to the fear of losing their jobs to the technology. This fear might be largely unfounded.

Are you afraid of losing your job to generative AI? The debate has raged since digital citizens began rushing to use generative AI software like MidJourney and ChatGPT. Yuval Noah Harari,1 the famous historian and author of the popular books Sapiens and Homo Deus, recently voiced his concern that, while generative AI is a game changer, the technology can develop its own content and ideas, build its own social culture, and replace an entire part of our abstract production. The media has jumped on this: CNBC recently ran a catchy headline claiming that “Thirty-seven percent of workers between the ages of 18 and 24 are worried about new technology eliminating their jobs”.2 The BBC made its own contribution with the headline “AI anxiety: The workers who fear losing their jobs to artificial intelligence”.3

Anxiety might be present, but the disturbing truth is that the media is building that anxiety rather than developing a sound perspective on the relationship between jobs and AI.

Looking backward

Looking back at how major tech innovations have played out, fear about how technology can deconstruct work and knowledge has always been present. In the Middle Ages, as reported by Le Point,4 the Benedictine monk Johannes Trithemius defended the work of copying texts by his peers, because the slowness of the work was synonymous with a process of contemplation and profound discovery of the contents. He opined that the printing of books was not only toxic but would trivialise and weaken learning and culture. Much later, renowned economists from John Stuart Mill to David Ricardo were suspicious of the idea that technology might create jobs. John Maynard Keynes5 went further, coining the term “technological unemployment” in his 1931 essay Economic Possibilities for Our Grandchildren, in which he predicted that, thanks to technology diffusion, the “most pressing problem in the developed economies would be how to fill our leisure time”.

Well, we still have jobs today despite Keynes’s prediction, and the academic literature has shown through a large set of studies6 that new jobs can be created in the wake of multiple technological developments.

Looking forward

The key question is whether this time can be different.7 Technologies are, no doubt, becoming more powerful, and their economic scope is getting larger by the day. But three important facts are usually not taken into account by techno-pessimists.

1. Technology is as much about augmenting jobs as about substituting for them.

Let’s provide a few examples regarding generative AI. Consider first how Reuters and Associated Press have experimented with AI-generated news articles for tasks like financial reporting and sports recaps. Artists and designers can also leverage generative AI tools to enhance their creative process; for instance, Adobe’s Project Scribbler uses AI to transform rough sketches into polished artwork, providing inspiration and saving time. A few years back, the musician and composer Holly Herndon8 released a music album relying on an AI system trained on her own voice and musical style. For the musician, AI complemented her exploration of new musical possibilities and pushed the boundaries of traditional composition for her own benefit, instead of separating the tasks of production and performance.

The rise of social media platforms has led to the need for professionals who can manage and enhance a company’s online presence, engage with customers, and create targeted social media strategies.

Another area where generative AI has made new inroads is coding. GitHub Copilot is an AI-powered code completion tool that generates code suggestions and snippets based on the context and the programming language being used. While this has replaced some coding tasks, programmers are using their freed-up time for higher value-added work; productivity follows, and more jobs, not fewer, are shifting towards complex software tasks. Finally, AI works on data, and the critical issue for many companies is not data processing but data generation. What generative AI is good at is building synthetic data.9 Such synthetic data, modelled on real data, can provide enough volume for more powerful analysis, justifying the jobs of data analysts, while adding a layer of privacy, since real user data is not used to power the models.
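To make the synthetic-data point concrete, here is a minimal sketch of the idea, assuming a simple tabular case: fit a statistical model to real (private) records, then sample artificial records with the same shape for analysts to work on. Every column name and figure below is illustrative, and production tools use far more sophisticated generators (GAN- or copula-based) than this simple Gaussian fit.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a real, private dataset: 500 customers described by
# (age, monthly spend). Purely illustrative numbers.
real = np.column_stack([
    rng.normal(45, 12, 500),          # age
    rng.lognormal(3.5, 0.4, 500),     # monthly spend
])

# Fit a simple model of the real data: its mean vector and covariance.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample as many synthetic records as needed; they mimic the joint
# distribution of the real data without copying any actual customer.
synthetic = rng.multivariate_normal(mu, cov, size=5000)

print("real means     :", np.round(mu, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```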

2. Technology is less often used for efficiency than for higher quality and stronger innovation — and great innovation expands output and employment.

Quality is often the result of technology and, in the long term, builds wealth. Users of Grammarly, a popular AI writing assistant, have reported a 76% increase in writing accuracy. Google Translate has improved translation quality by 60%, making it more accessible for users globally. Among marketers using generative AI, half report that the most important benefit is a faster creative output cycle and better creative design.

Innovation is another by-product of AI. A case in point is BioNTech, a biotechnology company that used generative AI and machine learning techniques during the drug discovery process and developed a COVID-19 vaccine in record time. With these innovations, BioNTech expanded its workforce not by 20%, not even by 100%: its employment grew fourfold in just three years. Adobe, a leading software company, has integrated generative AI capabilities into products such as Adobe Photoshop, Illustrator, and Premiere Pro, expanding its customer support and training teams to help users leverage these new capabilities effectively. Employment grew especially among trainers and documentation specialists who focus on supporting and educating customers on generative AI features.

3. Technology and employment are to be assessed not only at the firm level but across the entire ecosystem.

The catchy media titles we see today about AI and jobs tend to be about AI and one’s current job. But those articles miss the full macroeconomic effect. A new range of jobs will be created by virtue of these technologies, and further jobs will emerge across entire ecosystems, not least as the productivity gains from technology are reinvested in the economy.

Regarding new jobs, let’s first look at digital. The rise of social media platforms has led to the need for professionals who can manage and enhance a company’s online presence, engage with customers, and create targeted social media strategies. The proliferation of mobile applications has created demand for skilled app developers who can design, develop, and maintain mobile applications for various platforms. And as generative AI becomes more prevalent, demand for AI ethicists, responsible for examining the ethical implications and societal impact of AI systems, will surge.

We still have jobs today despite Keynes’s prediction—and the academic literature has proven with a large set of studies6 that new jobs were created in the wake of multiple technological developments in the recent past.

One other, much bigger, impact is that technology use often spills over to the entire ecosystem or the economy. In their academic research, Arntz and her colleagues studied the advent of robotics in the German automotive industry. While automation eliminated a few jobs at the start, it enabled the industry to compete with Japanese car exporters, ultimately saving and expanding the automotive industry; the net effect of rapid robotisation was positive employment growth. One notable corporate example of such ecosystem spillover is how NVIDIA’s GANs have significantly improved the visual quality and realism of computer-generated graphics in video games, creating job opportunities for artists, designers, developers, and animators specialising in immersive content. NVIDIA’s GANs have also been used to enhance medical imaging and to generate synthetic medical data for training and research purposes. The integration of generative AI in healthcare is already increasing the demand for medical imaging specialists, researchers, and professionals with expertise in AI-assisted diagnostics and medical data analysis.


One should think twice before claiming that technology kills jobs: it may kill some, but it has repeatedly built new ones and pushed up labour demand as major productivity gains are reinvested in the economy. Technology brings creative destruction that is probably inevitable, but by itself it does not paint a picture of a workless future. Maybe it is time to use generative AI for good and stop worrying about it.

About the Author

Dr. Jacques Bughin is the CEO of Machaon Advisory and a professor of Management. He retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

References

  1. Yuval Noah Harari. https://www.ynharari.com/.
  2. These American Workers are the Most Afraid of A.I. Taking Their Jobs. November 07, 2019. CNBC. https://www.cnbc.com/2019/11/07/these-american-workers-are-the-most-afraid-of-ai-taking-their-jobs.html#:~:text=Thirty-seven%20percent%20of%20workers%20between%20the%20ages%20of,highest%20levels%20of%20fear%20about%20technology-caused%20job%20losses.
  3. AI Anxiety: The Workers Who Fear Losing Their Jobs to Artificial Intelligence. April 19, 2023. BBC. https://www.bbc.com/worklife/article/20230418-ai-anxiety-artificial-intelligence-replace-jobs.
  4. Intelligence artificielle : le choc des cerveaux. May 11, 2023. Le Point. https://www.lepoint.fr/sciences-nature/intelligence-artificielle-le-debat-choc-et-inedit-harari-le-cun-11-05-2023-2519779_1924.php#11.
  5. Economic Possibilities for Our Grandchildren. 2010. Essays in Persuasion. https://link.springer.com/chapter/10.1007/978-1-349-59072-8_25/.
  6. Technology and Jobs. November 24, 2009. Economics of Innovation and New Technology. https://www.tandfonline.com/doi/abs/10.1080/10438590802469552.
  7. Is This Time Different? How Digitalisation Influence Job Creation and Destruction. 2019. Research Policy. https://orbilu.uni.lu/bitstream/10993/39183/1/Balsmeier_Woerter_Is%20this%20time%20different%20-%20RP%20-%202019_pub.pdf.
  8. Holly Herndon: The Musician Who Birthed an AI Baby. May 02, 2019. The Guardian. https://www.theguardian.com/music/2019/may/02/holly-herndon-on-her-musical-baby-spawn-i-wanted-to-find-a-new-sound.
  9. What is Synthetic Data? N.d. MOSTLY AI. https://mostly.ai/synthetic-data/what-is-synthetic-data.

The post Does Generative AI Generate Jobs? appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/does-generative-ai-generate-jobs/feed/ 0
How Productive Is Generative AI Really? https://www.europeanbusinessreview.com/how-productive-is-generative-ai-really/ https://www.europeanbusinessreview.com/how-productive-is-generative-ai-really/#respond Mon, 24 Jul 2023 22:02:53 +0000 https://www.europeanbusinessreview.com/?p=188333 By Jacques Bughin Generative artificial intelligence is invading the corporate suite and boardroom, surrounding the pros and cons of its use in the enterprise. The first fundamental question is whether […]

The post How Productive Is Generative AI Really? appeared first on The European Business Review.

]]>
By Jacques Bughin

Generative artificial intelligence is invading the corporate suite and boardroom, stirring debate about the pros and cons of its use in the enterprise. The first fundamental question is whether generative AI is truly a game changer, as evidenced by basic metrics such as productivity gains. We find that it is, but that the benefits vary considerably from case to case, suggesting that managers need to do their homework to define their most favourable position with respect to generative AI.

For Slidor, a French communication company created a few years ago, 50% of its corporate graphic marketing presentations are already produced with MidJourney.1 A similar proportion applies to its code, written with Copilot on GitHub.2

Needless to say, this extensive use of generative AI is invading the enterprise world at a rapid pace. It may also have created its “iPhone moment” in the process: while most digital technologies have focused on automating “routine” tasks, new generative AI systems such as MidJourney, Stable Diffusion, You, OpenAI’s ChatGPT, and DALL-E are automating creative tasks, such as image generation or software coding, that were thought to be largely insulated from the first generation of neural AI.

Technology automation is not a zero-sum game. Technology replaces tasks to improve productivity and ultimately generate a bigger pie to enjoy.

In scholarly articles published in 2018 in world-renowned journals such as the American Economic Review3 and the Journal of Human Capital,4 star economist Daron Acemoglu had already warned that the traditional assumption that “high-skilled workers are protected from automation because they specialise in more complex tasks requiring human judgement, problem-solving, and analytical skills” might be a dubious narrative. Powerful generative writing tools like ChatGPT could replace certain types of writing (such as press releases and blogs), translate easily into multiple languages, engage in powerful dialogues with customers, perform medical diagnostics, or debug software code.

Does this mean the end of work, even for highly skilled workers? We doubt it for a variety of reasons.5 But the most fundamental reason is that technology automation is not a zero-sum game. Technology replaces tasks to improve productivity and ultimately generate a bigger pie to enjoy. In fact, several studies have concluded that technology often creates more jobs6 than it eliminates and that companies that adopt technology can end up growing faster than their competitors.7 In other words, technology absorption is ultimately a win-win for workers and shareholders.

Despite all the hype around ChatGPT, there are few studies8 today that look at productivity improvements in companies that use these technologies. Here, we report on one such study and draw important insights for executives considering whether to invest in generative AI.

The Experiment 


The experiment is based on the use of an external tool: ChatGPT for context-based information and dialogue (NLP), and DALL-E or Stable Diffusion for content generation. By focusing on some of the major tools, we neutralise differences in tool choice as a factor in productivity differences. We also examined three contexts: coding (with sub-activities such as re-coding, debugging, and documentation), content generation (for media and advertising), and customer interactions (social media, blogging, email, and customer service). According to various observations,9 these activities are the most common uses so far (apart from activities such as medical diagnosis; others, such as translation and customer research, are not yet common), so we should see some benefits from using generative AI.

We test the productivity gain compared with NOT using generative AI. We also collect data on the level of experience in using these tools, age, occupation, perception of these tools (“enriches work”, “impoverishes work”, “neutral”), and the reason for use (“curiosity”, “peer pressure”, “fun”, “efficiency”).
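For readers who want to see the shape of such a comparison, here is a minimal sketch in Python. The column names, simulated data, and effect sizes are our own assumptions for illustration, not the survey’s actual data or model; the point is simply that the measured gain is estimated while controlling for the covariates listed above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Simulated stand-in for the survey: tool usage, age, and occupation.
df = pd.DataFrame({
    "uses_genai": rng.integers(0, 2, n),
    "age": rng.integers(20, 60, n),
    "occupation": rng.choice(["coding", "content", "customer"], n),
})
# Time saved (%) vs. not using the tool: an invented treatment effect plus noise.
df["gain"] = 40 * df["uses_genai"] - 0.2 * df["age"] + rng.normal(0, 8, n)

# OLS with controls, mirroring the with/without comparison the article describes.
model = smf.ols("gain ~ uses_genai + age + C(occupation)", data=df).fit()
print(model.params["uses_genai"])  # recovers the ~40-point simulated effect
```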

Three Main Insights


Three main results stand out:

1. Usage. “Business use is already relatively high and is beginning to take hold.”

Approximately 26% of respondents report using the technology as part of their job, and about 16% already use it routinely, at least once a day. Finally, 3% of respondents report having used and abandoned it.

Let’s not forget that these technologies are relatively new, and this pace of use at the enterprise level is relatively strong — in the range of three to five times faster than the previous wave of so-called Enterprise 2.0.10 Second, the conversion to an enterprise habit (16% of respondents out of the 26% who use it) is also very strong in a matter of months; this took years for most Enterprise 2.0 technologies, except for messaging and some collaboration tools.11

Adoption leaders. “Curious digital natives are the drivers of adoption.”
We correlated various indicators with whether workers used the tool or not. Although we can explain less than 50% of the variance in adoption, we found that the drivers of adoption are (a) age, (b) occupation, (c) curiosity, and (d) efficiency, in order of explanatory power.

Media and software professionals are more likely to benefit from the technology, as these industries have a history of disruption through digitisation and are, in all likelihood, the most exposed to both the threats and the benefits of generative AI. Let’s not forget that while low-code has simplified coding, coding is still a complex activity that requires time and effort. AI has been used for many years to try to automate code generation, from Amazon’s CodeWhisperer to IBM’s Wisdom. OpenAI’s Codex is the next evolution, building simple applications from natural language commands.12

Curiosity drives experimentation and then use, while more conservative workers tend to oppose technology, both because of the learning burden and the fear that technology could negatively impact their work and status.

Age is negatively correlated with usage, as it is with technology in general,13 while curiosity is a significant driver of usage. These two factors matter more than the functional scope of generative AI, and they are not without consequences. They mean that a digital divide can develop between young digital natives and other workers, and along workers’ psychographic traits: curiosity drives experimentation and then use, while more conservative workers tend to resist the technology, both because of the learning burden and for fear that it could negatively affect their work and status.

2. Productivity gains. “They are real, but they take time to unfold, both because of learning and incentives.”

The productivity impact is the product of (a) the share of workers using the technology, (b) the share of their activities where AI is applied, and (c) the productivity gains in those activities.

For (a), our results show that 16% of workers use it daily — and that the momentum for expansion is significant, and faster than for any previous technology. Regarding (b), we find that generative AI tools converge to account for about one third of the activities of workers who use the technology every day in their workflow. This rate ranges from 12% to 44% for marketing activities (including blogging and social media communication), and from about 23% to 36% for coding (including documentation, code testing, and debugging; incidentally, this usage rate matches the GitHub report that “46% of coding in Copilot-enabled languages is done by the AI Codex wizard”).14 Lastly, we find that the share of time spent on content production is between 23% and 41% (especially for special effects, branded content, etc.).

Finally, for (c), we find that the time reduction for AI-assisted tasks is in the range of 30% to 60%, which is consistent with some general experiments,8 as well as with specific studies of GitHub Copilot.15

Taking these three elements together, the total productivity gain at the firm level is at least 1% to 4% in the occupations covered by the survey. This may not sound like much but, again, it depends on how usage develops, and this productivity gain is already higher than the average labour productivity growth in Europe in recent years.16
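As a back-of-the-envelope check, multiplying the three survey figures reproduces that range; the sketch below simply uses the endpoints reported above.

```python
# Firm-level productivity gain = users x activity share x task-level gain
users = 0.16            # share of workers using generative AI daily
activity_share = 1 / 3  # share of their activities where AI is applied

for task_gain in (0.30, 0.60):  # reported range of time reduction per task
    print(f"task gain {task_gain:.0%} -> firm-level gain "
          f"{users * activity_share * task_gain:.1%}")
# task gain 30% -> firm-level gain 1.6%
# task gain 60% -> firm-level gain 3.2%
```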

Technology should augment human labour but is still far from replacing it — and that the quality and proper use of these technologies must be worked out at the organisational level.

Beyond usage, the productivity gains found vary by a factor of one to four, and correlation analysis suggests that productivity depends on the nature of the task (e.g., debugging versus writing new code, creating a special background effect versus creating full content, etc.). In addition, productivity accumulates over time: it takes an average of 6 to 8 weeks to achieve stable productivity gains with these tools (so-called learning effects), while productivity is higher for workers at companies that have been promoting the use of AI, including generative AI today, for some time.

3. Getting Started with Generative AI

We are at the point where enterprises are taking ownership of generative AI. Some leaders are deliberately choosing to limit its use, while other companies believe it should be unleashed and tested among employees (see, for example, JPMorgan vs. Morgan Stanley).

While this is one of the first (relatively basic) studies reporting on productivity related to generative AI, it confirms early studies of different types of AI17 showing that AI can be particularly powerful, and it adds to the parallel evidence that generative AI can provide significant gains, at least in some industries and for some tasks. As such, these technologies can deliver gains that make the returns on investment attractive enough for them to be considered part of any company’s technology portfolio.

However, as in Gartner’s hype cycle, the future may hold some setbacks, so companies need to weigh their play. We see at least three no-regret moves. The first is experimentation, as there is a learning effect in getting the right benefit from the technology. The second is to study use cases that are not problematic: these technologies can be used to improve the efficiency of internal human resources communication, to better predict customer reactions in commerce, to speed up information retrieval in service contracts, for the virtual representation of an architectural project or an advertising campaign, or for the redesign of a website. The third is to work on the right framework for using these technologies: they are still not transparent, they are not always accurate, and they may present biases and risks of copyright infringement. All of this suggests that the technology should augment human labour but is still far from replacing it, and that the quality and proper use of these technologies must be worked out at the organisational level. Finally, as general-purpose technologies have shown, technologies can disrupt workflow productivity; companies must study the disruptions and reinvent themselves accordingly.

This new AI moment may seem a bit chaotic, but the evidence suggests that early-adopter companies aren’t necessarily taking undue risks — they’re aware of the technology’s limitations, they’re working on more explainable AI and source transparency, and they’re working internally to comply with the recent AI legislation in Europe. They are also preparing for new competition and advantages — see how Salesforce launched Einstein GPT as part of its data cloud, improving the business insights provided to its customers and reacting to Microsoft’s investment in OpenAI.

About the Author

Jacques Bughin is the CEO of Machaon Advisory and a former professor of Management. He retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

References

  1. From ChatGPT to Midjourney, Generative Artificial Intelligences Are Taking Hold in Companies. April 25, 2023. Le Monde. https://www.lemonde.fr/economie/article/2023/04/25/de-chatgpt-a-midjourney-les-intelligences-artificielles-generatives-s-installent-dans-les-entreprises_6170873_3234.html.
  2. GitHub Copilot X: The AI-powered Developer Experience. March 22, 2023. GitHub. https://github.blog/2023-03-22-github-copilot-x-the-ai-powered-developer-experience/.
  3. Modeling Automation. May 2018. American Economic Association. https://www.aeaweb.org/articles?id=10.1257/pandp.20181020.
  4. Low-Skill and High-Skill Automation. 2018. The University of Chicago Press Journals. https://www.journals.uchicago.edu/doi/abs/10.1086/697242.
  5. Generative AI and Future of Work: Estimated Impact Distribution. 2023. https://www.linkedin.com/feed/update/urn:li:activity:7046096413016883201/.
  6. Labour-saving Technologies and Employment Levels. January 14, 2022. Organisation for Economic Co-operation and Development. https://www.oecd.org/publications/labour-saving-technologies-and-employment-levels-9ce86ca5-en.htm.
  7. Why AI Isn’t the Death of Jobs. May 24, 2018. MIT Sloan Management Review. https://sloanreview.mit.edu/article/why-ai-isnt-the-death-of-jobs/.
  8. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. March 6, 2023. Social Science Research Network. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4375283.
  9. Analytics and Data Science Career on LinkedIn. https://www.linkedin.com/groups/6744146/?highlightedUpdateUrn=urn%3Ali%3AgroupPost%3A6744146-7056580542439256064&q=highlightedFeedForGroups.
  10. Enterprise 2.0 Adoption in Italian Companies: Analysis of the Maturity Level. Politecnico Di Milano. https://www.politesi.polimi.it/bitstream/10589/6269/1/Thesina_thomas.PDF.
  11. The Rise of Enterprise 2.0. January 04, 2008. Journal of Direct, Data and Digital Marketing Practice. https://link.springer.com/article/10.1057/palgrave.dddmp.4350100.
  12. AI Rewrites Coding. n.d. Communications of the ACM. https://dl.acm.org/doi/fullHtml/10.1145/3583083.
  13. The Drivers of Employees’ Active Innovative Behaviour in Chinese High-Tech Enterprises. May 27, 2021. Sustainability 2021. https://www.mdpi.com/2071-1050/13/11/6032.
  14. GitHub Copilot now has a better AI model and new capabilities. February 14, 2023. GitHub. https://github.blog/2023-02-14-github-copilot-now-has-a-better-ai-model-and-new-capabilities/.
  15. Research: Quantifying GitHub Copilot’s Impact on Developer Productivity and Happiness. September 7, 2022. GitHub. https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/.
  16. European Union Labour Productivity Growth. n.d. CEIC Data. https://www.ceicdata.com/en/indicator/european-union/labour-productivity-growth.
  17. A Future that Works: AI, Automation, Employment, and Productivity. June 2017. McKinsey Global Institute Research. https://www.jbs.cam.ac.uk/wp-content/uploads/2020/08/170622-slides-manyika.pdf.

The post How Productive Is Generative AI Really? appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/how-productive-is-generative-ai-really/feed/ 0
Looking beyond ChatGPT: Why AI Reinforcement Learning is Set for Prime Time https://www.europeanbusinessreview.com/looking-beyond-chatgpt-why-ai-reinforcement-learning-is-set-for-prime-time/ https://www.europeanbusinessreview.com/looking-beyond-chatgpt-why-ai-reinforcement-learning-is-set-for-prime-time/#respond Fri, 21 Jul 2023 19:37:21 +0000 https://www.europeanbusinessreview.com/?p=181302 By Jacques Bughin Companies must master AI—and all techniques, whether it is (un)supervised learning or reinforcement learning as is set to revolutionise predictive powers and maximise chances of success in […]

The post Looking beyond ChatGPT: Why AI Reinforcement Learning is Set for Prime Time appeared first on The European Business Review.

]]>
By Jacques Bughin

Companies must master AI—and all its techniques, whether (un)supervised learning or reinforcement learning, which is set to revolutionise predictive power and maximise the chances of success in sports and other fields.

ChatGPT has revolutionised the field of conversational artificial intelligence (AI). Beyond its ability to generate convincingly human-like responses, the breakthrough has been to integrate big data (large language) models with deep reinforcement learning techniques enhanced by human feedback (RLHF).1 Reinforcement learning is an AI technique whereby the agents in the model learn from their actions through rewards and penalties as a function of good or bad actions—but in the case of OpenAI’s ChatGPT (and other high-profile AI applications like DeepMind’s Sparrow),2 rewards are specifically provided by a set of human interventions, allowing machines to grasp elements of decision-making distinctly embedded in human experience.

With big data, the best sports AI analysts now ingest thousands of games, with hundreds of features, to refine the predictive power of their analytics.

This feedback component, even if proven effective, may however be subject to biases if human interventions tend to cater to certain preferences. Still, the key generative AI technology here is reinforcement learning, which may become a critical technique of machine learning. Further, the use cases can be enlarged by adding gamification,3 where the reinforcement comes from winning rewards, for instance in education, training, and many other cases.
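To make the reward-and-penalty mechanics concrete, here is a minimal tabular Q-learning sketch on a toy five-cell corridor. This is the textbook version of reinforcement learning, not the deep RLHF pipeline behind ChatGPT, where human preference scores supply the reward signal; the environment and numbers are invented for illustration.

```python
import random

N_STATES, GOAL = 5, 4            # corridor cells 0..4; the goal is the right end
ACTIONS = (-1, +1)               # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):             # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 10.0 if s2 == GOAL else -1.0   # reward at the goal, penalty per wasted step
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)})
# Learned policy: move right (+1) from every cell.
```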

Beyond ChatGPT: the value of reinforcement learning in sport analytics


To understand the value of reinforcement learning, a case in point is big data sports analytics. Money in sports has become really big, and any possibility of leveraging data to better predict a team’s or player’s performance relative to its competitors may guarantee large payoffs.

In the 1980s, Bill Benter became one of the most profitable gamblers of all time by leveraging his self-built statistical prediction model for horse racing. Twenty years later, using stats in baseball, Oakland Athletics general manager Billy Beane built a winning team,4 one with cohesive strength that was not necessarily seen as the most talented by analytically untrained baseball insiders. What AI and reinforcement learning bring beyond those first data-predictive successes is a much broader set of big data and flexible predictive models that may ultimately provide even more powerful insights as to who will win a horse race or which team will win a championship.

Take football, the number one global sport. The issue with football is that the average number of goals per game is typically low (2.6 at the most recent World Cup in Qatar), as is the share of time spent “creating or conceding a goal” (in the range of 2% of total game time). Faced with this infrequency, it is rather difficult to predict performance except from past results or the prevalence of known scorers. But with big data, the best sports AI analysts now ingest thousands of games, with hundreds of features, to refine the predictive power of their analytics. These days, one can collect data on pretty much anything: the location of players, the chain of their moves, the timing of actions, the type of play (defence/offence), features such as shots and passes, the direction of play (lateral, backward, forward), the velocity of actions, and much more. All those actions, which make up the bulk of game time beyond scoring, should bring significant insight for better prediction and for maximising the chance of success.

Such reinforcement learning tools are now becoming the reference, used in ice hockey, rugby, basketball, and American (gridiron) football for purposes such as player scouting, player valuation, and field strategies. A decade after Beane’s success story, the director of data research and analytics of Liverpool Football Club convinced the leadership to acquire both Sadio Mané and Mohamed Salah, each for less than 40 million pounds. Those two players were instrumental in the club’s Champions League win and are still perceived today as top players in the English and European leagues.5 Needless to say, both players are now likely worth several multiples of their original transfer price.

As an example of the insights available, consider a model that evaluates each on-ball action of a player based on its probability of creating or conceding a goalscoring opportunity in the context in which the action occurred (a minimal sketch of this scoring logic follows the list below). This framework6 is much more complete than looking at goals only, as it considers all types of technical actions: passes, crosses, dribbles, take-ons, shots, interceptions, and tackles. The framework is only possible because of data tracking and sophisticated machine learning tools that can assess complex combinations of actions among different players. Looking at model results for the UK league, for example,7 a ton of new insights can be obtained, such as:

  1. 10% of Premier League players have negative value, that is, their actions help the competing team. Not good for sure.
  2. On average, the most valuable player is the keeper, not the central forward.
  3. Left-sided players tend to be slightly more valuable than right-sided players. No one is sure why, but the received wisdom is that left-sided players have higher velocity in sports (tennis is another example).
  4. Value goes slightly up the more forward a player is (remember that this is not tautological as each action is weighted by its probability of creating/conceding a goal).
  5. There is a real trade-off between quantity and quality of actions.
  6. But a group of 20% of players is also able to deviate from this trade-off and boost both the quantity and quality of actions. Needless to say, those are the most interesting players.
  7. In general, the value of a player varies from 1 to 2 for each position. This is really significant: imagine you have 11 players at the top end of the value range; you are sure to win.
  8. The most valuable player in the league is a Belgian midfielder, Kevin De Bruyne, whose value statistics reach up to five times the average of peers in the same midfield position.
  9. Liverpool and Manchester City have the largest pool of most valuable players.
  10. 95% of on-ball actions do not directly change the score but influence the game indirectly. This deep undercurrent of data is where the value of AI reinforcement learning lies.
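Here is the promised sketch of the action-valuation logic. It assumes we already have a fitted model that estimates scoring and conceding probabilities for any game state; the probability numbers below are invented for illustration, and real frameworks of this kind learn those probabilities from tracking data.

```python
from dataclasses import dataclass

@dataclass
class GameState:
    p_score: float    # model-estimated P(own team scores within the next few actions)
    p_concede: float  # model-estimated P(own team concedes within the next few actions)

def action_value(before: GameState, after: GameState) -> float:
    """Value of one on-ball action: how much it raised the chance of
    scoring, minus how much it raised the risk of conceding."""
    return (after.p_score - before.p_score) - (after.p_concede - before.p_concede)

# Invented numbers: a through-ball that creates a chance but risks a counter.
before = GameState(p_score=0.02, p_concede=0.01)
after = GameState(p_score=0.09, p_concede=0.02)
print(f"action value: {action_value(before, after):+.3f}")  # +0.060

# A player's rating is then the sum of action values over a season,
# which is how the 95% of non-scoring actions end up mattering.
```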

Reinforcement learning will become mainstream: get ready to use it

Companies must master AI—and all its techniques, whether (un)supervised learning or reinforcement learning. Besides sports, games, and chatbots, here are a few examples that prove its wide applicability. In healthcare, reinforcement learning has been used in treatment planning for lung cancer and epilepsy and in dosing erythropoiesis-stimulating agents (ESAs) for patients with chronic kidney disease. In industry, a large set of manufacturing companies are propelling the automation of their factories by using deep reinforcement learning on robots, teaching them to optimise tasks for the best efficacy, speed, and precision. In retail, the personalisation of product promotion is based largely on reinforcement learning algorithms.

We are just at the start of the AI revolution but managers should urgently be aware that new algorithms and techniques such as reinforcement learning are now set for prime time.

This article was originally published on May 4, 2023.

About the Author

Jacques Bughin is CEO of Machaon Advisory and a former professor of Management. He retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

References
  1. Learning from Human Feedback: Challenges for Real-World Reinforcement Learning in NLP. 2020. Challenges of Real-World RL Workshop at NeurIPS 2020. https://research.google/pubs/pub49732/.
  2. Perspectives on the Social Impacts of Reinforcement Learning with Human Feedback. 2023. arXiv. https://arxiv.org/pdf/2303.02891.pdf.
  3. Convergence of Gamification and Machine Learning: A Systematic Literature Review. 2021. Technology, Knowledge and Learning. https://link.springer.com/article/10.1007/s10758-020-09456-4.
  4. The Lessons of Moneyball for Big Data Analysis. 2011. Data Center Knowledge. https://www.datacenterknowledge.com/archives/2011/09/23/the-lessons-of-moneyball-for-big-data-analysis.
  5. Evaluating Soccer Player: from Live Camera to Deep Reinforcement Learning. 2021. arXiv. https://arxiv.org/abs/2101.05388.
  6. Actions Speak Louder Than Goals: Valuing Player Actions in Soccer. 2018. arXiv. https://arxiv.org/abs/1802.07127.
  7. Bringing objectivity and predictability to one of the most diverse and opiniated sports in the world by leveraging data. 2022. Repositório Universidade Nova. https://run.unl.pt/handle/10362/142482.

The post Looking beyond ChatGPT: Why AI Reinforcement Learning is Set for Prime Time appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/looking-beyond-chatgpt-why-ai-reinforcement-learning-is-set-for-prime-time/feed/ 0
ChatGPT Goes (not yet) to Hollywood https://www.europeanbusinessreview.com/chatgpt-goes-not-yet-to-hollywood/ https://www.europeanbusinessreview.com/chatgpt-goes-not-yet-to-hollywood/#respond Sun, 02 Jul 2023 23:31:04 +0000 https://www.europeanbusinessreview.com/?p=186119 By Jacques Bughin Will ChatGPT solve all of your organisation´s problems? Here are points to consider before you implement the seemingly magical AI tool. Since its official launch at the […]

The post ChatGPT Goes (not yet) to Hollywood appeared first on The European Business Review.

]]>
By Jacques Bughin

Will ChatGPT solve all of your organisation´s problems? Here are points to consider before you implement the seemingly magical AI tool.

Since its official launch at the end of 2022, ChatGPT has demonstrated how drastically artificial intelligence (AI) systems have improved. There is much excitement about the technology — which we definitely share — but it remains important to be clear about what it does and does not do. Here is a list of important considerations an executive should think about before implementing such a tool.

1. How strong is the performance curve?


While the combination of big data with AI led to major advancements in deep machine learning, it took only about a decade for AI to perform at roughly human capability in image, writing, and speech recognition. What ChatGPT further demonstrates is that the next step, reading and language understanding, could match human capabilities in only a matter of a few years.1 In fact, beyond the anecdotes, in a recent academic study from late January 2023, Choi and colleagues2 blindly graded ChatGPT’s answers to real exam questions from the University of Minnesota Law School; it achieved a low but passing C grade in all four courses. And this was GPT-3.5, not the new version, GPT-4.

That level of conversational quality for Large Language Models (LLMs) such as ChatGPT does not come for free. ChatGPT was trained on billions of data points, implying very large training costs. But here as well, things are quickly changing, with the training cost of a GPT-3 equivalent falling by more than 80% in 2.5 years.

ChatGPT is a predictive model. Its accuracy is not perfect and may fall quickly if it has not had enough training data around a topic.

Furthermore, shortcuts are being tested very successfully to democratise3 the cost of training more limited LLMs. For example, a colleague noted to me that Stanford researchers have built a conversational model with far fewer parameters, fine-tuned on a series of prompts sent in parallel to OpenAI’s GPT, with surprisingly good results and at a cost of less than one thousand dollars. While this remains to be verified, it implies a cost some 1,000 times lower than that of a typical enterprise model using ChatGPT directly.

2. Are all use cases/domains possible with ChatGPT?


One of the first applications of ChatGPT has been as a rival to search queries. The battle is on between Microsoft and Google.

This is not to say that Google is not ready with LLMs. The danger for Google is disruption: Google’s dominance in search obliges the company to have a near-perfect LLM to blend with search queries. To date, however, chat queries cost much more than search queries and can eat into Google’s comfortable margins. Microsoft, on the other hand, can afford an inferior (but already fascinating) product like ChatGPT to integrate into its search engine, Bing. For Microsoft, ChatGPT is hoped to be a clear way to rebalance the flow of queries in its favour.

Besides this evident case affecting the tech superstars, other use cases abound for ChatGPT and other types of LLMs in enterprises. One is education and information intelligence aggregated from digital sources such as the web, which is typically not yet structured for direct, valuable insights (which ChatGPT can then deliver). Another is virtual assistance for managerial and organisational tasks, or even creative tasks like developing a marketing tagline or writing code.

Still, one thing must remain clear: ChatGPT is a predictive model. Its accuracy is not perfect and may fall quickly if it has not had enough training data around a topic. As a statistical model, it also may not deliver the same answer to the same prompt. The model is only as good as the data it has collected, so it should be constantly retrained to be accurate in real time. Finally, even though it is trained on billions of data points, a large part of the world’s data remains strictly private, so ChatGPT is blind behind enterprise closed doors.
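The non-determinism is easy to see in miniature: an LLM picks each next token by sampling from a probability distribution, so the same prompt can legitimately yield different outputs. A toy sketch, with an invented vocabulary and invented scores:

```python
import numpy as np

rng = np.random.default_rng()

# Invented next-token scores for a prompt like "The outlook for margins is ..."
vocab = ["strong", "weak", "uncertain", "improving"]
logits = np.array([2.0, 1.2, 1.0, 0.4])

def sample(temperature: float) -> str:
    # Softmax with temperature: higher T flattens the distribution,
    # making the model's answers more varied from run to run.
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(vocab, p=p)

for t in (0.2, 1.0):
    print(f"T={t}:", [sample(t) for _ in range(5)])
# At low temperature the answer is nearly always "strong";
# at T=1.0 the same prompt produces a mix.
```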

Those limitations are rather critical and should be clearly taken into account when using GPT. For example, in a sector like private equity, where I advise Antler and Fortino Capital, ChatGPT may have a hard time generating a proper deal flow of newly founded companies if it is not trained on real-time data. Private sources may also limit its capacity to find interesting bootstrapped companies, for instance. Likewise, the answers provided may not be fully accurate (so-called hallucination).4

3. Is Artificial intelligence really human intelligence?


Finally, artificial intelligence does not mean that AI, in its current form of language models, matches all tasks of human intelligence, especially reasoning. The shortcut made by some5 is the false claim that ChatGPT may have acquired simple reasoning by learning from a massive amount of real-world data. OpenAI itself is aware of many limitations of ChatGPT, as posted on its website and as recognised in public by OpenAI’s CEO.

While reinforcement learning techniques may make LLMs better at reasoning, they are not there yet for a large number of reasoning tasks.

In fact, in line with OpenAI’s cautions, and despite those drumbeat claims, most recent work testing ChatGPT’s reasoning performance shows that it remains rather weak. A recent study by Bang and colleagues6 shows that ChatGPT is only 63.41% accurate on average across 10 different reasoning categories spanning logical, non-textual, and commonsense reasoning. While reinforcement learning techniques may make LLMs better at reasoning, they are not there yet for a large number of reasoning tasks.

Finally, and not least, the issue is not only that AI has yet to prove strong reasoning capabilities. There is also the prevalence of data bias, unethical use, and more. The genius is there, but this is not yet Artificial General Intelligence. And while these models are potentially powerful, we also need to understand the conditions, such as jailbreaking, under which LLMs can be harmful.

About the Author

Jacques Bughin is CEO of Machaon Advisory and a former professor of Management. He retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

References

  1. Bang, Y., Cahyawijaya, S., Lee, N., Dai, W., Su, D., Wilie, B., … & Fung, P. (2023). A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. arXiv preprint arXiv:2302.04023.
  2. Choi, Jonathan H., Hickman, Kristin E., Monahan, Amy, & Schwarcz, Daniel B. (2023). ChatGPT Goes to Law School. Minnesota Legal Studies Research Paper No. 23-03. http://dx.doi.org/10.2139/ssrn.4335905.
  3. Kiela, Douwe, et al. (2021). Dynabench: Rethinking Benchmarking in NLP. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
  4. Li, Belinda (2021). Implicit Representations of Meaning in Neural Language Models. arXiv:2106.00737. https://doi.org/10.48550/arXiv.2106.00737.
  5. Smith, Craig (2023). Hallucinations Could Blunt ChatGPT’s Success. IEEE Spectrum, March 13.
  6. Wang, James (2020). Improving at 50x the Speed of Moore’s Law: Why It’s Still Early Days for AI. Ark Investments.

The post ChatGPT Goes (not yet) to Hollywood appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/chatgpt-goes-not-yet-to-hollywood/feed/ 0
Resilience Is More than Being Able to Rebound: It Should Be Used As a Competitive Advantage https://www.europeanbusinessreview.com/resilience-is-more-than-being-able-to-rebound-it-should-be-used-as-a-competitive-advantage/ https://www.europeanbusinessreview.com/resilience-is-more-than-being-able-to-rebound-it-should-be-used-as-a-competitive-advantage/#respond Thu, 29 Jun 2023 21:58:31 +0000 https://www.europeanbusinessreview.com/?p=182375 By Jacques Bughin While they may not be able to prevent pending economic crises such as those resulting from the COVID-19 pandemic, businesses can use them as launch pads to […]

The post Resilience Is More than Being Able to Rebound: It Should Be Used As a Competitive Advantage appeared first on The European Business Review.

]]>
By Jacques Bughin

While businesses may not be able to prevent looming economic crises such as those resulting from the COVID-19 pandemic, they can use them as launch pads for growth by becoming resilient and doubling down on growth instead of cutting costs.

Worried about a crisis? You might think one is coming if you add up the war in Ukraine, inflation, a lingering COVID-19 pandemic, and burgeoning private debt. But in reality, there is also a brighter picture to contrast with this bleak outlook: the war may soon be over, China has reopened its economy, and inflation is levelling off slightly in some parts of the world.

Rather than spending too much time guessing at the next crisis, its size, nature, and timing, business leaders should instead teach their organisations to become resilient. In analysing the numbers from numerous crises, including research conducted in cooperation with Accenture Research at the time of the COVID-19 pandemic, we have discovered three important elements for leaders to take notice of.

In fact, the idea is that resilience is a strategic complement to do better than before the crisis: what academics call bouncing “forward”.

The first is that resilience, or the ability to bounce back, is both rare and slow. Brands like Hertz, JCPenney, and J.Crew filed for bankruptcy in the first few months of the COVID-19 pandemic. In previous crises, 17% of publicly traded companies disappeared from public markets, whether because they went bankrupt, went private again, or were bought out. And while most firms survived, it took between 1.5 and 3 years for firms and economies as a whole to recover the losses incurred during a major shock.

The second insight is that crises often redefine the status quo, with new winners emerging and old winners becoming new losers. The fallen angels are numerous, about 25% of all companies, but new rising stars are also visible. In fact, the idea is that resilience is a strategic complement for doing better than before the crisis: what academics call bouncing “forward”. Resilient companies are those that use the crisis as an opportunity. Remember Andy Grove, then CEO of Intel, when he said that “crises make great companies better”? At the time, Intel nearly collapsed because of a bug in the Pentium processor. In fixing the bug, Intel also radically reinvented its partner ecosystem while developing its Intel Inside program, which allowed the company to rebound and dominate the semiconductor market for years.

A final point concerns the ingredients of resilience and performance. Many studies, including those by consulting firms, preach the virtue of agility, the ability to innovate, or the need to digitise. But the reality is more subtle. If a leader wants resilience to drive the new trajectory of his or her company, that company will need to invest in the entire portfolio of capabilities (agility, innovation, digitisation, sustainability, and flexible work practices). And the best time to do so is during a crisis – precisely when rivals are scared, retreating, and overly focused on survival instead of preparing for the next competitive battle. As Winston Churchill once said, “a good crisis should not be wasted.”

In practice, however, the typical company has a narrow set of capabilities and tends to retrench during a crisis. Winners are already preparing for the next crisis and are eager to invest in difficult times, when rivals have morphed into victims of turbulence. The best companies are not necessarily good at predicting crises; rather, they focus on excelling at rising, not falling, when crises hit. Are you of that breed?

This article was originally published on May 16, 2023.

About the Author

Jacques Bughin is CEO of Machaon Advisory and a former professor of Management. He retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

References

  • Barnichon, R., Matthes, C., and A. Ziegenbein (2018). The Financial Crisis at 10: Will We Ever Recover? FRBSF Economic Letter, 19.
  • Bughin, J., Berjoan, S., Hintermann, F. and Y. Wong, (2021), Is this Time Different? Corporate Resilience in the Age of COVID-19, In-cite, working paper, Solvay Business School, June.
  • Gulati, R., N. Nohria, and F. Wohlgezogen (2010), Roaring Out of Recession, Harvard Business Review, March.
  • Hirsch, S. (2018). Succeeding in the Long Run: A Meta-regression Analysis of Persistent Corporate Earnings. Journal of Economic Surveys, 32(1), 23-49.
  • Mann, M., & Byun, S. E. (2017). Retrenchment or Investment? Recovery Strategies in a Recession. Journal of Business Research, 80, 24-34.
  • Maury, B. (2018). Sustainable Competitive Advantage and Profitability Persistence: Sources versus Outcomes for Assessing Advantage. Journal of Business Research, 84, 100-113.
  • Ollagnier, J.M, Berjoan, S., Bughin, J. and Xiong, Y. (2021) Why Fixing the Planet is Also About Seizing Business Opportunities, European Business Review, March.
  • Romer, Ch. and D. Romer (2017), New Evidence on the Aftermath of Financial Crises in Advanced Countries. American Economic Review 107(10), pp. 3,072-3,118.
  • Teece, D., Peteraf, M., and S. Leih, (2016). Dynamic Capabilities and Organizational Agility: Risk, Uncertainty, and Strategy in the Innovation Economy. California Management Review, 58(4), 13-35.

The post Resilience Is More than Being Able to Rebound: It Should Be Used As a Competitive Advantage appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/resilience-is-more-than-being-able-to-rebound-it-should-be-used-as-a-competitive-advantage/feed/ 0
Why Fixing the Planet is also about Seizing Business Opportunities https://www.europeanbusinessreview.com/why-fixing-the-planet-is-also-about-seizing-business-opportunities/ https://www.europeanbusinessreview.com/why-fixing-the-planet-is-also-about-seizing-business-opportunities/#respond Fri, 26 May 2023 00:13:14 +0000 https://www.europeanbusinessreview.com/?p=111715 By Jean-Marc Ollagnier, Sybille Berjoan, Dr Jacques Bughin and Yuhui Xiong Companies’ efforts to become more sustainable can often be driven by regulation or government incentives. We propose a different […]

The post Why Fixing the Planet is also about Seizing Business Opportunities appeared first on The European Business Review.

]]>
By Jean-Marc Ollagnier, Sybille Berjoan, Dr Jacques Bughin and Yuhui Xiong

Companies’ efforts to become more sustainable can often be driven by regulation or government incentives. We propose a different approach – one that harnesses digital technologies to drive growth.

The first question CEOs should be asking about their sustainability efforts is not “Are we compliant?” but rather, “What value are we creating?”

Consider: Energy and sustainability policies around the world have long been a mix of regulation and stimulation. Compliance is critically important. And recent pandemic- related increases in funding1 offer an additional push for top executives to strive for increased sustainability. But do companies gain market share, revenue growth, or increased profits because they’ve taken actions in the interests of compliance, tax breaks, or subsidies?  Not nearly to the extent they could. When compliance and/or incentives are the primary drivers for “green work”, it’s easy to miss opportunities to strengthen performance.

We think that executives should consider sustainability through a different lens. Specifically, we propose that they invest in sustainability as a means to expand, move into new markets, or even create new markets. Our recent research, which included a global, multi-dimensional study of more than 4,050 companies (and incorporated random forest machine learning), bears out the potential of this approach. We studied four explicit time periods: 2016–2020; January 2020–June 2020; July 2020–December 2020; and 2021 (expected). For more detail, please see the full report.2 The critical caveat – and the key to succeeding – is blending the adoption of advanced digital technologies with sustainable growth interests. In this way, technology becomes the tool that both enables and augments sustainability.

Many companies – regardless of regulatory requirements and incentives – see the value in becoming more sustainable businesses. At the same time, most companies are investing in digital technologies. It is the linking of those two streams, an approach we call a “twin transformation”,3 that makes the difference. Specifically, we found that companies that pursue sustainability as a digital growth path are more than 2.5 times as likely to be among tomorrow’s leading organizations. (Our criteria for a leading organization rest on the ability to restore and sustain positive operating growth as COVID effects continue; detail can be found here.)4 Twin transformers, regardless of their industry, expect to be among those posting profitable growth by the end of 2021, even as the effects of the pandemic persist (and as some industries thrive and others struggle in its continuing wake).
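For analytically minded readers, a random forest exercise of the kind mentioned can be sketched in a few lines. The features, classification rule, and numbers below are invented stand-ins, not the study’s actual data or model; the sketch only illustrates the class of technique named.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 4050  # matching the study's sample size; everything else is invented

# Invented firm-level features: a digital-adoption score and a sustainability score.
X = rng.uniform(0, 1, size=(n, 2))
# Invented ground truth: firms strong on BOTH dimensions are likelier to lead.
p_lead = 0.2 + 0.5 * (X[:, 0] > 0.6) * (X[:, 1] > 0.6)
y = rng.random(n) < p_lead  # "restored positive operating growth" label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # out-of-sample accuracy
```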

Companies that pursue sustainability as a digital growth path are more than 2.5 times as likely to be among tomorrow’s leading organizations.

Interestingly, twin transformers were not necessarily companies leading on digital adoption or on the sustainability front. In Europe, for example, just 22% of the companies that were leading in either digital or sustainability were also twin transformers. We found similar percentages in North America. In the Asia Pacific region, we found more: 34% of those leading in digital or sustainability were also twin transformers. (Our study focused on large companies; there are more twin transformers among smaller digital natives.)

What does it take to combine a company’s digital and sustainable transformations, with innovation and value creation the objective? Based on our survey, a series of follow-up interviews, and our own client experience, we have identified five crucial activities.

Five activities frame twin transformation

Twin transformers differentiate themselves from the moment they set their direction through to the ways in which they engage with their workforce.

1. Setting direction

Twin transformers do not layer sustainability efforts or digital adoption onto existing business models (though, prior to starting their twin transformation journey, they may have done so). Instead, they integrate scenario thinking into their strategy development process with the explicit purpose of identifying high-potential models that will serve combined growth and sustainability interests.

Twin transformers also explore ecosystem plays as a matter of course, seeking opportunities for faster, further scaling of their business models, as well as deeper sustainability impact. In this way, they conceive of ways to gain traction with new strategies as they shift gears and pull away from the old.

Following through, they learn via incubator programs that convene an ecosystem of partners. These pilots revise and test their business models for viability and impact. A majority of twin transformers in our study (61%) already generate more than 10% of their revenues through ecosystem plays, and nearly 80% expect to do so within three years.

Schneider Electric5 offers an example. This global company, based in France, joined with the New Energy Opportunities (NEO) Network6 to explore growth models in just this way. The NEO Network is a global community and online market platform of more than 300 corporate renewable energy purchasers and providers, supported by leading market analytics that serve to match supply with demand. Schneider teamed up with Walmart to use this network in support of Walmart’s “Project Gigaton”,7 which aims to avoid a gigaton of CO2 emissions that would otherwise have been created through Walmart’s global value chain by 2030.

2. Combining resources

Twin transformers’ resource allocation explicitly reflects the understanding that sustainability and technology are not separate priorities.  Instead, twin transformers direct innovation investment to initiatives that bring together sustainability impact and the power of technology. Some do so by earmarking a specific share of R&D investment to that combination. Others set up dedicated innovation entities whose mandate is to develop, test, and scale business ideas that deliver sustainability impact through technology. All twin transformers convene diverse innovation teams that bring together technology and sustainability expertise.

With such blended priorities, Christian Hansen,8 a bio-engineering company, created Sweety® Y-1, the first patented probiotic culture that can reduce added sugar in yogurt; the invention won the World Innovation Award9 for best new dairy ingredient at the 2019 Global Dairy Congress in Lisbon. In 2020, 82% of the company’s revenues were associated with activities devoted to enabling sustainable agricultural practices, reducing food waste, and improving customer health. And, as part of its 2025 strategy (ending with the close of the 2024/2025 fiscal year), Christian Hansen will invest heavily in R&D to innovate natural and microbial nutrition solutions supported by digital technologies such as AI, machine learning, digital twins, and automation.

3. Combining financial and non-financial KPIs to create organization-wide ownership.

Twin transformers identify and assign key performance indicators (KPIs) that go beyond financial results; often, these are linked to executive compensation. As a result, managers have clear structural support for decisions that support blended digital and sustainability goals. These KPIs might include progress on emissions reduction, share of products with positive societal impact, or share of resources procured from sustainable sources.

To gain a big-picture view of impact beyond financials, twin transformers also measure progress on factors such as environmental impact, employee wellbeing, and consumer experience. Many have developed custom methodologies and tools to do so, complementing traditional ESG metrics with measurements of the business impact of sustainable practices.

Kering, the French luxury goods company, measures and quantifies its environmental impact through an environmental profit and loss10 (EP&L) account. The EP&L has helped the company shift to a sustainable business model by making environmental impacts visible, quantifiable, and comparable. Through a digital platform and tool, the company also provides public access to the open data behind the EP&L. Kering has also convened hackathons with developers, tech experts, and sustainability specialists to create apps and digital solutions aimed at reducing the impact of the fashion industry on the environment.
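To make the EP&L logic concrete, here is a minimal sketch under our own simplifying assumptions: monetise each environmental impact per supply-chain tier, then aggregate into a single comparable figure. The tier names and figures below are invented for illustration; they are not Kering's data or methodology.

```python
# Minimal EP&L-style aggregation (illustrative figures only; not Kering's data).
# Each entry monetises an environmental impact, in millions of euros.
impacts = {
    "raw materials":  {"ghg": 12.0, "water": 5.0, "land use": 8.0},
    "processing":     {"ghg": 6.0,  "water": 3.0, "land use": 1.0},
    "manufacturing":  {"ghg": 4.0,  "water": 1.5, "land use": 0.5},
    "own operations": {"ghg": 2.0,  "water": 0.5, "land use": 0.1},
}

for tier, costs in impacts.items():
    print(f"{tier:>14}: EUR {sum(costs.values()):5.1f}m")
total = sum(sum(costs.values()) for costs in impacts.values())
print(f"{'EP&L total':>14}: EUR {total:5.1f}m")
```

The design point is simply that once impacts are expressed in a common monetary unit, tiers and product lines become directly comparable, which is what makes the account usable for business decisions.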

4. Aligning partners for sustainable product lifecycles and improved traceability

Twin transformers proactively monitor the other companies in their value chains, often screening them against sustainability factors. To raise their value chain's overall sustainability, they also collaborate with suppliers, for example by offering training in sustainable practices. Additionally, many deploy blockchain technology and digital sourcing platforms in collaboration with partners to improve resource and product traceability, as sketched below. Doing so builds the foundation for new, circular business models. These activities also build trust with consumers, by helping them increase their knowledge of what goes into the goods they buy.
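As a rough illustration of why an append-only, chained ledger supports traceability (a minimal sketch of the general idea, not any specific vendor's platform), each custody record below embeds the hash of the previous record, so any tampering with history breaks the chain and is detectable:

```python
# Minimal hash-chain sketch of supply-chain traceability (illustrative only).
import hashlib
import json

def add_record(chain, payload):
    """Append a custody record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def is_consistent(chain):
    """True if every record still matches its recomputed hash and linkage."""
    prev_hash = "0" * 64
    for record in chain:
        body = {"payload": record["payload"], "prev_hash": record["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != digest:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_record(chain, {"lot": "palm-oil-001", "holder": "plantation"})
add_record(chain, {"lot": "palm-oil-001", "holder": "refinery"})
print(is_consistent(chain))            # True
chain[0]["payload"]["holder"] = "???"  # tamper with history...
print(is_consistent(chain))            # False - tampering is detected
```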

L’Oréal, for instance, engages in a traceability program focused on palm oil (a common cosmetics ingredient), in partnership with its suppliers. All of the palm oil the company uses meets the high standards set by the Roundtable on Sustainable Palm Oil.11 L’Oréal has also designed a tool to evaluate sustainability progress, measuring improvements in its packaging, the footprint of its formulas, ingredient sourcing, and the social benefits of its products.

Similarly, Deutsche Post DHL Group developed a "GoGreen Carbon Dashboard"12 that clients can use to view analyses of carbon emissions associated with their shipments. The dashboard enables business customers to map emissions across their supply chains and develop viable reduction strategies.

5. Facing the skills challenge

Finally, twin transformers are more likely to take responsibility for building and nurturing talent than others. Sixty-one percent of twin transformers believe they are responsible for the continued employability of their people, compared with 44% of other companies.

Vodafone’s ReConnect initiative13, launched in 2016 in the UK, helps women who have been out of the workforce to rejoin it in a way that positions them for sustained success. Specifically, it provides training, coaching, flexible working options and an induction program focused on digital talent and skills for the future.

Twin transformers are focused on realizing a better future for their businesses and for the planet. This focus is positioning them to lead in the post-COVID world. The good news is that there’s no great secret to their current and anticipated successes. With commitment and the right processes, your company can also become a twin transformer.

This article was originally published on March 26, 2021.

About the Authors

Jean-Marc Ollagnier

Jean-Marc Ollagnier is the chief executive officer of Accenture in Europe, with management oversight of all industries and services in Europe. He is also a member of Accenture’s Global Management Committee.

Sybille Berjoan

Sybille Berjoan leads the Accenture Research European team and drives the European Thought Leadership agenda.

 

Dr Jacques Bughin is a strategic advisor to multiple companies and boards; he retired from McKinsey after a 28-year career as a senior partner and as a director of the McKinsey Global Institute.

 

Yuhui Xiong, research manager of the Economic Modelling and Data Sciences team of Accenture Research, is responsible for driving and developing data-driven analysis for thought leadership projects.

 

References

  1. https://www.euronews.com/2021/02/10/eu-multi-billion-pandemic-recovery-fund-gets-go-ahead-from-european-parliament
  2. https://www.accenture.com/us-en/insights/strategy/european-double-up
  3. https://www.accenture.com/us-en/insights/strategy/european-double-up
  4. https://www.accenture.com/us-en/insights/strategy/european-double-up
  5. https://www.se.com/ph/en/
  6. https://neonetworkexchange.com/landing_page/main
  7. https://corporate.walmart.com/newsroom/2020/09/10/walmart-and-schneider-electric-announce-groundbreaking-collaboration-to-help-suppliers-access-renewable-energy
  8. https://www.chr-hansen.com/en
  9. https://www.chr-hansen.com/_/media/files/chrhansen/home/sustainability/reporting-and-disclosure/2018-19/chr-hansen-sustainability-report-2018-19.pdf
  10. https://www.kering.com/en/sustainability/environmental-profit-loss/
  11. https://inside-our-products.loreal.com/ingredients/palm-oil#:~:text=Palm%20oil%20is%20a%20vegetable,their%20emollient%20or%20foaming%20properties.
  12. https://www.dhl.com/global-en/home/logistics-solutions/green-logistics.html
  13. https://www.vodafonereconnect.com/

The post Why Fixing the Planet is also about Seizing Business Opportunities appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/why-fixing-the-planet-is-also-about-seizing-business-opportunities/feed/ 0
The Rise of the Intelligent Enterprise https://www.europeanbusinessreview.com/the-rise-of-the-intelligent-enterprise/ https://www.europeanbusinessreview.com/the-rise-of-the-intelligent-enterprise/#respond Fri, 24 Mar 2023 02:37:48 +0000 https://www.europeanbusinessreview.com/?p=177503 By Jacques Bughin, Philippe Roussière, and Praveen Tanguturi As more traditional companies are investing in AI, we explore best practices used by leaders to transform those investments in a new […]

The post The Rise of the Intelligent Enterprise appeared first on The European Business Review.

]]>
By Jacques Bughin, Philippe Roussière, and Praveen Tanguturi

As more traditional companies invest in AI, we explore the best practices used by leaders to transform those investments into a new organizational capability: intelligence.


KEY TAKEAWAYS

  • Intelligent enterprises are those that use data and technology to drive innovation, optimize operations, and create value for customers and stakeholders.
  • To become an intelligent enterprise, companies must embrace digital transformation, adopt a data-driven mindset, and invest in analytics, AI, and automation.
  • Intelligent enterprises prioritize customer experience, agility, and sustainability, and use technology to empower employees and create a culture of innovation and continuous improvement.

Artificial Intelligence (AI) underwent a new revolution a decade ago, fuelled by the rise of big data and the unique performance of neural network-based machine learning (ML) algorithms. AI continues to make major progress, especially with the recent adoption of generative AI such as ChatGPT or DALL-E by OpenAI.

As with any technology paradigm shift, digital-native companies have been the most likely to adopt it. But in recent years, a major tipping point has been appearing:1 many large traditional companies have doubled their investments in AI technologies, now devoting a significant portion (about 25%) of their technology budgets to AI, and planning to spend still more on it in the coming years.

Investments by incumbents imply that companies are foreseeing strong returns from AI. But as we know from the past, those returns are not automatic. In recent research2, we sought to understand the economics of AI for large companies around the world that already have an AI strategy in place, and especially what will make AI investments worth the spending. Here is what we found:

A tech foundation: necessary but not sufficient

One well-established assertion is that returns to technology start with mastering the technology stack. But despite this truism, many companies still struggle with it. In the case of AI, the tech stack must have a sufficient volume and variety of data, a cloud and data-lake infrastructure, and an AI technology platform that enables high-performance machine learning techniques. Only a minority of large, well-known global companies have this stack in place.

From knowledgeable to intelligent enterprise

The tech platform mantra is not new. It was already clear during the computer revolution3, followed by the digital revolution.4 What our results add to this mantra is that the technology platform contributes disproportionately to the transition of AI from experimentation to production.

The first feature of this new intelligence is that firms think ahead of time – they are using AI to better predict business outcomes.

But it does not contribute much to scaling AI within organisations. Scaling requires more. And the driver of major rewards from AI is whether a corporation masters the new dynamic capability of intelligence.5 Generic dynamic capabilities, as defined by their pioneers, David Teece and Gary Pisano6, are the ability to detect, exploit, and transform key business resources. Translated to this case, AI critically offers the opportunity to transform the typical enterprise knowledge resources into a new capability of intelligent insights. Strikingly, we find that over 50% of AI returns come from this intelligent capability.

What are the features of this new intelligence? The first is that firms think ahead of time – they are using AI to better predict business outcomes. The second is that companies are shifting from thinking in terms of a purely efficient division of labour to embedding workers into new innovations. For them, thanks to AI, data experimentation has become a routine and the basis for new innovations; smart machines and people find ways to cooperate intelligently and flexibly, such as AI supporting diagnosis in healthcare, augmenting physical tasks, or creating immersive digital twins for experiential learning. The third feature, and the biggest game changer, is that intelligent companies have radically transformed their human-resource organisations, so that AI is not confined to data-science experts but is the baseline of the whole organisation, which understands AI and perceives it as a constant source of delight and support for more intelligent tasks.

Best practices, anyone?

Intelligent bot for best practice

This transformation to a collective corporate intelligence is only getting started. In practice, by looking at those already migrating to this intelligence platform, we could derive five best practices that enable this new organisational revolution. Here they are:

  1. Institutionalise collaboration between an internal data science talent pool and external collaboration networks.
  2. Foster a permanent balance in the AI talent pool between data scientists and behavioural, ethical, and social scientists.
  3. Mandate AI-specific training for senior executives to effectively support the journey to an AI-centric business model.
  4. Stimulate a variety of skills for the entire workforce (data, coding, ML, translation skills) that are bundled with a data mart and/or data interface to support data-driven decision making as the de facto standard in the organisation.
  5. Maintain an ongoing partnership between AI and ML experts and business domain experts to facilitate information flow across the organisation.

A rare minority of companies currently master these best practices – are you one of them? Our research suggests that the future of corporate AI will reside in new institutionalised intelligence; the journey has just begun.



About the Author

Jacques Bughin is CEO of machaonadvisory and a former professor of Management; he retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

Philippe Roussière leads Innovation and AI at Accenture Research. He is a seasoned leader and innovator with over 25 years serving in a variety of research leadership roles on client-focused and thought leadership projects. He advises on using innovative methods like machine learning/NLP, economic modelling, data visualisation, and hybrid experiential research platforms.

Praveen Tanguturi is the global applied intelligence research lead at Accenture and was the research director for the aforementioned "Art of AI maturity" report.


References

  1. Crisis? What Crisis? Why European Companies Should Double Down on AI Now. 9 January 2023. The European Business Review. https://www.europeanbusinessreview.com/crisis-what-crisis-why-european-companies-should-double-down-on-ai-now-2/
  2. The transformative potential of artificial intelligence. 10 December 2021. Futures. https://www.sciencedirect.com/science/article/pii/S0016328721001932
  3. The Return on Information Technology: Who Benefits Most? 24 November 2020. Information Systems Research. https://pubsonline.informs.org/doi/abs/10.1287/isre.2020.0960
  4. Big data, Big bang? 7 January 2016. Journal of Big Data. https://journalofbigdata.springeropen.com/articles/10.1186/s40537-015-0014-3
  5. Beyond artificial intelligence: why companies need to go the extra step. 2020. Journal of Business Strategy. https://www.emerald.com/insight/content/doi/10.1108/JBS-05-2018-0086/full/html
  6. Dynamic capabilities and strategic management. 4 December 1998. Strategic Management Journal. https://onlinelibrary.wiley.com/doi/10.1002/%28SICI%291097-0266%28199708%2918%3A7%3C509%3A%3AAID-SMJ882%3E3.0.CO%3B2-Z

The post The Rise of the Intelligent Enterprise appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/the-rise-of-the-intelligent-enterprise/feed/ 0
Crisis? What Crisis? Why European Companies Should Double Down on AI Now https://www.europeanbusinessreview.com/crisis-what-crisis-why-european-companies-should-double-down-on-ai-now-2/ https://www.europeanbusinessreview.com/crisis-what-crisis-why-european-companies-should-double-down-on-ai-now-2/#respond Mon, 09 Jan 2023 03:34:51 +0000 https://www.europeanbusinessreview.com/?p=171126 By Jacques Bughin, Francis Hintermann, and Philippe Roussière While in practice, it takes time for a tech revolution to unfold, multiple signs show the AI revolution is now mainstream, including […]

The post Crisis? What Crisis? Why European Companies Should Double Down on AI Now appeared first on The European Business Review.

]]>
By Jacques Bughin, Francis Hintermann, and Philippe Roussière

While, in practice, it takes time for a tech revolution to unfold, multiple signs show the AI revolution is now mainstream, including, most recently, the deployment of generative AI powered by powerful systems such as GPT-3, DALL-E, and others. Combined with a looming economic crisis, an ongoing battle against COVID-19, nagging wars, and major economic headwinds, companies should marry the massive innovative power of AI with its automation capabilities to survive and roar out of these new business realities.

Whether it’s a global pandemic, a major geopolitical conflict nearby, or now a rise in inflation that slowly resembles that caused by the oil shock of the 1970s, this significant turbulence points to a higher risk of economic recession. The emerging consensus among economists is that the odds of a recession are overwhelming in the eurozone by the end of 2022. Although the United States may be able to escape with a serious slowdown, the China growth engine has never been so fragile.

Roaring out of crisis

The risk of recession is a clear warning that companies need to prepare for much greater resilience in the months ahead. When recessions hurt, it is often because companies are ill-prepared. Previous crises actually teach us an important lesson, which is that a crisis is also a unique time to reset the clock; and for the best-prepared firms, it is a strategic opportunity to weather the storm and quickly rebound beyond the previous performance trajectory1.

Systematic studies, such as that conducted by Professor Gulati of Harvard Business School, have analysed no fewer than three global recessions (the 1980 crisis, the 1990 downturn, and the bursting of the dot-com bubble in 2000) and found that recession can make most companies underperform. A small group of companies goes bankrupt, is sold off, or goes back into private ownership. But, at the other extreme, an equal proportion of companies emerge from the crisis generating higher profitability than their peers and stronger than their pre-crisis performance. A recent study2 some of us conducted a few months back to assess how firms came out of the pandemic also confirms that roughly one in five companies has managed to navigate superbly through the pandemic-induced crisis and, barely two years after the start of the pandemic, is now exhibiting a performance record significantly better than pre-crisis.

How AI is creating crisis-proof companies

Emerging from a crisis stronger than before is usually linked to a common set of capabilities such as strong agility and ability to innovate3. But one capability of special relevance, in order to almost literally “roar out of the recession4”, is the right investment in, and exploitation of, (IT and digital) technology.

It is well known that investment in digital technology was a powerful business response during the pandemic. But the rationale may have been specific to the crisis, i.e., a solution to circumvent social distancing. Here, we’re talking about a more fundamental and more general reason, namely that technology allows companies to reduce costs, while fostering new ways to compete and rebuild a new competitive advantage.

Previous crises actually teach us an important lesson, which is that a crisis is also a unique time to reset the clock; and for the best-prepared firms, it is a strategic opportunity to weather the storm and quickly rebound beyond the previous performance trajectory.

In addition to this dual benefit of digital, a new set of technologies, artificial intelligence, has been anticipated to provide material revenue growth and cost reduction. From venture capital firms doubling their funding on AI to a major boost in the number of deep machine learning patents5 in recent years and the production of thousands of AI case studies, this prediction is proving to be solid, as AI expands efficiencies widely (e.g., through work automation), prunes inefficiencies in operations besides typical G&A costs, and creates major new opportunities, whether it is new drug innovation in the pharmaceutical sector, autonomous driving in the automotive sector, or intelligent operations management.

We have recently analysed firms that were using artificial intelligence6, taking into consideration the risk of recession. We find that AI technologies do the jobs that are sorely needed in times of crisis. For large companies vested in AI, the share of companies' revenue, as well as costs, that is "AI-influenced" more than doubled between 2018 and 2021 and is expected to roughly triple between 2018 and 2024. Under a few reasonable assumptions, this AI diffusion translates into a profitability gain of 2-3 per cent per year (see table 1), or a rate at least twice as fast as the annual rate of firm productivity growth in the European Union in the last decade7.

It is not only that this rate is three times the average contribution of traditional IT to total productivity growth witnessed in the developed economies in the last twenty years. It is also, to put it another way, that the most talented AI-using company is showing twice as much profit after 10 years compared to its peers. And finally, the 2 per cent growth momentum is typically the level needed to hedge against the long-term adverse effects of crises8.
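As a back-of-the-envelope illustration of how such annual differentials compound (our sketch, not the study's model), a firm that sustains an annual profitability-growth differential g over its peers ends a decade ahead by a factor of (1 + g)^10:

```python
# Compounding a sustained annual profitability-growth differential g
# over 10 years. Figures are illustrative arithmetic, not survey results.
for g in (0.02, 0.03, 0.045, 0.072):
    print(f"g = {g:.1%} per year -> 10-year multiple = {(1 + g) ** 10:.2f}x")
# g = 2.0% -> 1.22x, g = 3.0% -> 1.34x, g = 4.5% -> 1.55x, g = 7.2% -> 2.00x
# Note: doubling profit within a decade, as the most talented AI users do,
# implies a sustained differential of roughly 7 per cent per year.
```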

The central role of AI in this productivity surge is also quite special. Regarding cost efficiency, for example, AI drives automation as a selective way to substitute more effective technology for repetitive tasks. This process is thus not one of laying off employees across the board; rather, it is an optimisation of tasks that leads to better-quality jobs. Likewise, AI technologies support superior asset performance management, such as preventive maintenance, that reduces the cash costs of new investment. Regarding revenue enhancement, AI is especially powerful as a support to a strategic and innovative/disruptive approach. Most AI-enabled revenue is concerned with new innovations in business models, a new shaping of the ecosystem, and new ways to compete. AI is especially powerful when it comes to supporting customer intelligence, customer servicing, or personalised sales and marketing recommendations. Most of those cases are even more attractive in times of crisis, because a crisis makes customers' commitment more volatile, or makes it critical to extend the useful life of assets rather than spending cash on failing assets.

The European opportunity

The “icing on the cake” is also that a few companies are pushing the boundaries of what AI can do for business performance. Not only are these companies combining innovative strategies based on a robust AI architecture, strong AI culture, and workforce training, but they are also using 30-40 per cent more AI use cases than their peers. According to our calculations, the consequence of better technology utilisation and more use cases allows these companies to achieve twice as much productivity gain as the average company.

While European companies are currently the most exposed to an economic downturn, the above implies that they have a vested interest in leveraging AI for resilience. In the past, Europe has lagged behind the US and China in AI, but Europe is taking off. Startups and entrepreneurs are catching up: to Snowflake in the US, for example, Europe is responding with UiPath; in the battle for AI, Europe is leaping forward with a leading vision on quantum computing9; and Europe is also setting a regulatory framework that pushes the critical relevance of responsible AI10.

Further to this, we can add that large, European-headquartered companies are also as good as peers from Asia and North America at mastering the full potential of AI. In fact, the estimated productivity gains from using AI at scale by top US and European companies are at roughly the same level, at about 4.5 per cent per year.

While European companies are currently the most exposed to an economic downturn, the above implies that they have a vested interest in leveraging AI for resilience.

Those AI leaders are an elite, but not a small group: they represent about 12 per cent of all AI-engaged firms. More importantly, in many sectors they are emerging in the same proportion of AI users as in the US. In particular, Europe shows the way in sectors such as automotive, retail, life science, and energy. Other European companies should take note and copy their AI journey for further performance and for the additional benefit of hedging against the odds of a possible recession.

About the Authors

Jacques Bughin is a professor of Management, Chaire Gillet of Management Practice, at the Solvay Brussels School of Economics and Management at Université libre de Bruxelles (ULB) and, among others, a former Director of McKinsey and of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

Francis Hintermann is Global Managing Director of Accenture Research. Accenture Research is an Accenture entity that identifies and anticipates game-changing business, market, and technology trends through thought leadership and strategic research.

Philippe Roussière leads Innovation and AI at Accenture Research. He is a seasoned leader and innovator with over 25 years serving in a variety of research leadership roles on client-focused and thought leadership projects. He advises on using innovative methods like machine learning/NLP, economic modelling, data visualisation, and hybrid experiential research platforms.

References

  1. “Organizational Resilience: A Valuable Construct for Management Research?”, 16 August 2020, International Journal of Management Reviews doi: 10.1111/ijmr.12239
  2. “Breaking Down Business Resilience, Post-Pandemic: The Data Reveal Surprises – and a Blueprint for Moving Forward”, 29 October 2021, The European Business Review, https://www.europeanbusinessreview.com/breaking-down-business-resilience-post-pandemic-the-data-reveal-surprises-and-a-blueprint-for-moving-forward/
  3. “Better than before: the resilient organisation in crisis mode”, 15 January 2018, Journal of Business Strategy, https://doi.org/10.1108/JBS-10-2016-0124
  4. “How IT Can Help Businesses Roar Out of a Recession”, 9 June 2020, eWeek, https://www.eweek.com/innovation/how-it-can-help-businesses-roar-out-of-a-recession/
  5. “WIPO Technology Trends 2019: Artificial Intelligence”, 2019, Geneva: World Intellectual Property Organisation. https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf
  6. “The art of AI maturity”, Accenture https://www.accenture.com/us-en/insights/artificial-intelligence/ai-maturity-and-transformation
  7. “European Union Productivity, Trading Economics”, https://tradingeconomics.com/european-union/productivity
  8. “Effects of Financial Crises on Productivity, Capital and Employment”, 12 August 2016, Review of Income and Wealth, https://doi.org/10.1111/roiw.12253
  9. “Path to European quantum unicorns”, 23 February 2021, EPJ Quantum Technology, https://doi.org/10.1140/epjqt/s40507-021-00095-x
  10.  “EU Artificial Intelligence Act: The European Approach to AI”, 7 October 2021, European Commission, https://futurium.ec.europa.eu/en/european-ai-alliance/document/eu-artificial-intelligence-act-european-approach-ai?language=en

The post Crisis? What Crisis? Why European Companies Should Double Down on AI Now appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/crisis-what-crisis-why-european-companies-should-double-down-on-ai-now-2/feed/ 0
“Net better Off?” Why companies should scale new, tech-based flexible work practices https://www.europeanbusinessreview.com/net-better-off-why-companies-should-scale-new-tech-based-flexible-work-practices/ https://www.europeanbusinessreview.com/net-better-off-why-companies-should-scale-new-tech-based-flexible-work-practices/#respond Mon, 26 Sep 2022 11:54:19 +0000 https://www.europeanbusinessreview.com/?p=161673 By Jacques Bughin, Sybille Berjoan and Yuhui Xiong By 2022, Shopify, a well-known global next-generation e-commerce platform had secured a perennial flexible work arrangement scheme across its entire workforce, centred […]

The post “Net better Off?” Why companies should scale new, tech-based flexible work practices appeared first on The European Business Review.

]]>
By Jacques Bughin, Sybille Berjoan and Yuhui Xiong

By 2022, Shopify, a well-known global next-generation e-commerce platform, had secured a permanent flexible work arrangement scheme across its entire workforce, centred on the possibility of working from home (WFH). MyRyan is another HR-based programme, built by Ryan, a global tax firm, which allows its employees to work remotely from anywhere. Accenture, home to two of the co-authors, has an effective flexible work policy, with nearly 100 per cent of employees working remotely.

For employees, working from home meets, at least in part, the needs identified in companion research, which found that work welfare is closely linked to meeting six fundamental human needs through work.2

But despite the large number of management books on the value of giving responsibility and freedom to workers3, the harsh reality is that there are only a few Shopifys and Ryans among the many companies stuck in the old, Taylorism-like paradigm of work organisation. There, workers have often been assigned a specific, rigid position in a hierarchy, which in turn provides the orders for a job done full-time and on-premises.

One particular expectation of new work organisations was remote work, but this has failed to scale as fast as anticipated. In the period pre-COVID, WFH was limited, used by barely 15 per cent of the European population.4 The COVID-19 pandemic obviously boosted WFH out of necessity, because of lockdown rules: forty per cent of workers were working from home in both the US and Western Europe in the first months of the pandemic, or more than twice the pre-crisis level. But it remains to be seen whether WFH will stick as the "new normal" or was just a tactic for coping with the specificities of the pandemic.

office work

The failed launch of new, tech-based work organisation

One reason for the poor uptake of new, flexible HR practices may reside in the ambiguity of the true productivity benefits of flexibility, as well as the risk of technology substituting for workers. Regarding productivity growth, it is fair to say, from a variety of academic studies, that the productivity change from the WFH boost during COVID-19 was at best neutral. For example, studies showed that only a minority of UK workers could complete as much work during the first wave of the pandemic as pre-COVID. In Japan, a study by Morikawa (2020) suggested that WFH productivity was only two-thirds of the level achieved at the workplace. Similar analyses for Europe suggest a material productivity gap attached to WFH during the pandemic, whereby workers welcomed the public financing of work without fulfilling their part of the social contract – remaining as productive as before the pandemic.

Technology, on the other hand, may have had a bad press, because its current form, digitisation, has too often materialised in job restructuring, particularly during major economic crises, when workers suffer the most.5 A powerful study conducted by Jaimovich and Siu (2020) has, for example, demonstrated that the secular decline in routine employment, visible for about 50 years, is in fact a series of long stabilisation periods, corrected at each recession by a permanent employment loss, as more firms embrace major automation.

During the first wave of COVID-19, automation boomed: investment in robotics to replace workers' tasks under lockdown increased by more than 20 per cent in the US (Chernoff and Warman, 2020).

New-tech work practices rebooted (the COVID-19 sequel)

The best-performing firms in terms of revenue growth were 30 per cent more likely to have a holistic well-being approach to human resources.

But something may have changed during the pandemic. Instead of being leveraged mostly – or only – as an efficiency response to crises, digitisation has also brought major support to a workforce afraid of being infected on-premises, and has made companies conscious that HR practices must indeed evolve.

A lot of surveys in the first wave of the COVID-19 pandemic referred to the "FOG" syndrome ("fear of going back to work") as a new psychological challenge for workers. This challenge was sufficiently material that active labour participation shrank by more than 10 per cent in 2020 alone. And with the stabilisation of the pandemic, firms have also seen that people were not necessarily coming back to work, calling for a solution to the dilemma of better welfare for workers versus better productivity.

Some of us have already extensively studied, through worker surveys, the importance of flexible forms of work and workers' well-being, which ultimately lead to a virtuous cycle of labour productivity.6 In this mirror research (see sidebar), which focuses directly on the view of large multinational firms, we find that some companies are cracking the code and turning the dilemma into a strategic advantage. Here are the main findings.

work virtual meeting

Turning tech-based practices into a strategic advantage

New, tech-based work practices can be productivity-enhancing.

In general, the difference between using and avoiding fully flexible work has been associated, over the last three years, with 3.1 points of extra revenue growth annually. This is a rather large productivity gain: about one-third of the revenue growth observed among multinationals is linked to the use of flexible work practices.

Of course, a very large proportion of companies used WFH and other such practices during COVID, but the difference between the bottom and top 25 per cent of firms still amounts to 1.3 points of difference in annual growth, or more than 15 per cent of shareholder-value premium.

A lot of surveys in the first wave of the COVID-19 pandemic have referred to the “FOG” syndrome (for “fear of going back to work”) as a new psychological challenge for workers.

We have further analysed whether flexible work practices were only due to the pandemic, in which case they may be only tactical, and organisations may revert to the old work practices once the pandemic has fully disappeared, despite workers expressing their preference for remote working. The evidence is that two-thirds of the effect of WFH on revenue growth was already apparent pre-COVID, and the effect of WFH is highly persistent. What COVID-19 triggered is a broader race to experiment with those practices; the best companies were already mastering them, with productivity advantages, and simply used COVID-19 to scale those advantages further.


Three ingredients help to crack the code for better productivity

The best companies are making flexible work practices a success by augmenting HR practice with adequate complementary capabilities. The first two that we find really drive productivity up are not people-related; they are organisational and technological support.

Among organisational practices, the best companies have developed new leadership behaviours across all types of work practices, and have built new community rituals, supervision, and coaching to best engage workers in a hybrid form of work. Needless to say, these companies are among the most agile and innovative, and they are applying these capabilities in every corner of their operations, including work practices.

Likewise, we find that those companies with more flexible HR practices are not only more digitally mature than their peers, but have more consciously invested in specific, new technologies to support more flexible WFH practices, rather than for efficiency. This includes a large suite of tech-based enterprise collaboration tools.

But another important source of productivity gain is when companies "put humans at the centre". We see that the best companies are making a conscious mindset shift from considering workers as a mere factor of production to seeing them as a source of talent that cooperates with management, especially when workers feel part of a cohesive and meaningful culture.

We have built an index of human care, based on how crucially workers felt that their emotional and relational creativity were taken into account in their daily job.7 We have found that this caring index has as much weight in predicting new tech-based employee usage and revenue uplift as technology does.

A recipe for everyone?

One might argue that the above recipe may have different levels of success, depending on company features, workers' mix of tasks, and many other factors. For example, it cannot be denied that a large proportion of tasks may not be ready to be performed remotely (Dingel and Neiman (2020), Boeri et al. (2020))8, or that productivity is often enhanced by frequent and close team interactions, which may limit the potential of WFH (Battiston et al. (2017) and Etheridge et al. (2020)).

Still, while our analysis shows some differences by industry and country, the positive returns of flexible work practices remain large for every sector; the key differences really lie in how successfully companies engage at scale in this beneficial HR transformation.

This HR transformation is a necessary step in the war for talent, and will need to include all technologies, such as AI, which may also lead to an important skill shift, as highlighted elsewhere by one of the authors.

About the Research

The research is based on an executive survey of more than 4,000 multinationals, stratified so as to be representative of the industry mix in the US, the main European countries, and China. Company performance for 2021 was expected to be 9 per cent revenue growth, with a margin of about 10 per cent. The COVID-19 pandemic has radically changed work practices. One-third of companies report that less than 10 per cent of their workers used flexible work practices before the pandemic; this proportion of companies had shrunk to only 10 per cent by the end of 2021. Still, only 2 per cent of companies by the same date report that three-quarters of their workforce use flexible work practices. The best-performing firms in terms of revenue growth were 30 per cent more likely to have a holistic well-being approach to human resources. Relative to other performance drivers, such as innovation and agility, the well-being of the workforce drives performance especially in sectors such as software, health, and automotive. We use regression techniques to assess how the difference in performance by industry can be linked to the use of tech-based work practices, controlling for business segments, company size, and location. We use an error-correction model to separate structural effects from short-term effects linked to COVID. The performance link between flexible work practices and human-resources well-being practices has been confirmed by machine learning techniques (Random Forests), with a predictive accuracy of more than 80 per cent.
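For readers curious about the machine-learning validation step, here is a minimal, stylised sketch of how a Random Forest can be used to check whether work-practice features predict top-quartile revenue growth. The feature names, synthetic data, and weights are our own illustrative assumptions, not the study's actual variables or results.

```python
# Stylised Random Forest validation on synthetic, survey-like data.
# Features and outcome are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000  # roughly the size of the executive survey
# Hypothetical features: flexible-work share, collaboration-tech maturity,
# leadership-practice score, human-care index (all scaled to 0..1).
X = rng.random((n, 4))
# Synthetic outcome: top-quartile revenue growth, loosely driven by the features.
signal = X @ np.array([0.4, 0.3, 0.2, 0.3]) + rng.normal(0.0, 0.15, n)
y = (signal > np.quantile(signal, 0.75)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print(f"out-of-sample accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
print("feature importances:", model.feature_importances_.round(2))
```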

About the Authors

Jacques Bughin is a professor of Management, Chaire Gillet of Management Practice, at the Solvay Brussels School of Economics and Management at Université Libre de Bruxelles (ULB) and, among others, a former Director of McKinsey and of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies.

Sybille Berjoan leads the Accenture Research European team and drives the European Thought Leadership agenda.

 

Yuhui Xiong is the research manager of the Economic Modelling and Data Sciences team in Accenture Research. She is responsible for driving and developing data-driven analysis for thought leadership projects.


References

  1. https://www.accenture.com/ro-en/insights/future-workforce/employee-potential-talent-management-strategy
  2. https://www.accenture.com/ro-en/insights/consulting/future-work
  3. https://www.bbvaopenmind.com/en/articles/new-ways-of-working-in-the-company-of-the-future/
  4. https://www.eurofound.europa.eu/sites/default/files/ef_publication/field_ef_document/ef20059en.pdf
  5. https://www.linkedin.com/pulse/hr-management-must-fix-its-covid-19-practices-jacques-bughin/
  6. https://www.accenture.com/us-en/insights/consulting/future-work
  7. https://www.accenture.com/_acnmedia/Thought-Leadership-Assets/PDF-3/Accenture-Care-To-Do-Better-Report.pdf#zoom=50
  8. The first authors used the O*NET database to estimate that 37 per cent of US jobs can be performed from home. Boeri et al. (2020) using European data, found that 24-31 per cent of jobs can be performed at home in major European countries.

The post “Net better Off?” Why companies should scale new, tech-based flexible work practices appeared first on The European Business Review.

]]>
https://www.europeanbusinessreview.com/net-better-off-why-companies-should-scale-new-tech-based-flexible-work-practices/feed/ 0
The Imperative of the Vigilant Corporation https://www.europeanbusinessreview.com/the-imperative-of-the-vigilant-corporation/ https://www.europeanbusinessreview.com/the-imperative-of-the-vigilant-corporation/#respond Sun, 10 Jul 2022 15:05:59 +0000 https://www.europeanbusinessreview.com/?p=154843 By Jacques Bughin After COVID-19, now comes the appalling Ukrainian-Russian war, and the threat of stagflation. But if turbulent times rule –doesn’t it pay to be vigilant? Contrary to what […]

The post The Imperative of the Vigilant Corporation appeared first on The European Business Review.

]]>
By Jacques Bughin

After COVID-19, now come the appalling Ukrainian-Russian war and the threat of stagflation. But if turbulent times rule – doesn't it pay to be vigilant? Contrary to what many believe, the recent shocks are disruptive and frequent enough that they cannot be dismissed as black swans; they imply a mandate – and an opportunity – to be vigilant.

If asked whether the recent months have been "turbulent", the odds are high that you would agree. After a global and ongoing COVID-19 pandemic, a major war has broken out, in which Russia has invaded Ukraine, as if in a large-scale remake of the invasion of Crimea about eight years ago. And even if the risk of terrorism peaked in 2021, jihadist ideology is still very much in play in countries such as Iraq, Somalia, and Afghanistan.

Minor turbulences are a fact of life, but major turbulences are clearly a different species. They are, obviously, highly disruptive, but also much more frequent than many believe. In consequence, companies would be well advised to anticipate the onset of turbulent periods as best they can, and to be agile enough to display strong resilience. 

While a lot has been said about resilience – the ability to adjust when large crises hit – much less is known about "sensing" major turbulence. However, our work suggests that vigilance is a critical capability to have, on top of resilience, in order to last and thrive.

Why the vigilance imperative

Turbulence needs to be sensed because a) it is highly disruptive, and b) turbulent events arise more often than pure black swans.

High disruption

Disruption from turbulent events can be assessed on multiple dimensions, such as their impact on lives and livelihood.

a) With regard to lives, the death toll from war, pandemic, and terrorism rarely represents more than a few per cent of the millions of deaths that happen every year. But the crucial fact is that these deaths are neither natural nor planned. In effect, the COVID pandemic has taken the largest toll on lives – in the millions over the last two years – because of its global nature. Last year, terrorist attacks killed a few thousand (1), while the current conflict in Ukraine may build into tens of thousands of casualties in a few weeks.

Nevertheless, if one focuses on the affected country, the picture may be worrisome. The current daily casualty rate from the war is twice as high as the daily COVID casualties. And Ukraine is not alone. When Islamic insurgents invaded the city of Palma, in Mozambique, 150 people were killed in 15 days; in comparison, 75 people lost their lives to COVID in the same two-week span. The airport bombing in Kabul, Afghanistan, killed 183 people on 26 August 2021, compared with about 15 people officially dying every day from COVID-19.

b) The impact on livelihood is also tangible, especially on socio-economic activity. Disruptions include breaks in supply chains (COVID), large migration in the millions (Ukraine), and significant volatility in commodity and energy prices (Ukraine). 

War may have a comparable depressive effect to the COVID-19 pandemic, depending on the duration and intensity of the conflict. A multi-year conflict may shrink GDP by more than 5 points, and recovery may take many years. 

In the most extreme cases, high turbulence can shrink GDP and stock markets by up to 10 percentage points. Remember that COVID-19 led to a shrinkage of more than 20 per cent in economic activity in the first months after the announcement of the pandemic, essentially driven by the imposed lockdowns. Stock markets also crashed by the same amount in the first week, even if they managed to recover quickly, in a matter of weeks.

War may have a comparable depressive effect to the COVID-19 pandemic, depending on the duration and intensity of the conflict. A multi-year conflict may shrink GDP by more than 5 points, and recovery may take many years. In the last century, threats of war with a direct effect on the US decreased the S&P 500 by an average of 3.5 per cent. In 1941, the Pearl Harbor attack on the US by Japanese aircraft led to a dip of the S&P by about 4.4 per cent on the day, while the Cuban Missile Crisis shrank the index by 2.7 per cent (2).

The 9/11 terrorist attack on the US led to a drop of 7 per cent of the S&P on the day in the US, and cost the country about $40 billion. Iraq has suffered the most from terrorist attacks in the last 15 years, with an estimated economic depletion of 10 per cent of GDP every year from 2010 to 2018 (3).

Rare but not a black swan

Black swans are extreme events, possibly 10 standard deviations from the mode of the event distribution (a 3-standard-deviation event still happens in about 0.1 per cent of cases; a 10-standard-deviation event is so infrequent that planning for it is impossible and too costly).
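To see why the distinction matters, here is a minimal numerical sketch, assuming for simplicity that "k standard deviations" refers to the upper tail of a normal distribution (real event distributions are fat-tailed, which makes extremes more likely than this suggests):

```python
# One-sided tail probabilities beyond k standard deviations under normality.
# A simplifying assumption for illustration; real shocks are fat-tailed.
from scipy.stats import norm

for k in (3, 10):
    print(f"P(X > {k} sigma) = {norm.sf(k):.3g}")
# P(X > 3 sigma)  ~ 1.35e-03 (roughly the 0.1 per cent cited above)
# P(X > 10 sigma) ~ 7.62e-24 (effectively never, hence too costly to plan for)
```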

Are turbulent events really so rare? Contrary to popular belief, our computations suggest otherwise. The probability of an event like COVID happening in a given year may be up to 1.7 per cent, according to a recent study published in the Proceedings of the National Academy of Sciences (4). Likewise, the 1.9 per cent of violent conflict events that kill as many people as today's five major causes of death still happen with a frequency of 0.5 per cent a year. Stock market jumps above 3 per cent that are linked to war and major terrorist events have been limited, but still account for about 0.3 per cent a year (5).

Becoming a Vigilant Firm

What are the features of companies that excel in vigilance?

Best practices

Day and Schoemaker, in a recent issue of the MIT Sloan Management Review (2020) (6), remark that top vigilant performers stay ahead of others during highly turbulent times by knowing where to look for warning signs and how to explore their environment. Best practices include:

a) Scoping the environment beyond comfort zones. For instance, the risk of the war in Ukraine just launched by Russia was hinted at repeatedly, from the early invasion of 2014, to Russia's modus operandi in other conflicts, to a large set of prescient statements about Ukraine made by the Russian president in interviews in recent years.

b) Building a playbook of opportunities linked to those turbulent events. The watchword is not to waste a good crisis. The most vigilant companies can build an opportunity playbook by simulating the possible risk scenarios and deriving possible opportunities from disruption. The scenario of a COVID-19 pandemic could signal the importance of digitisation, due to social distancing, and of further scaling up automation as an extra source of innovation and a limit on cost inflation.

The watchword is not to waste a good crisis. The most vigilant companies build an opportunity playbook by simulating the possible risk scenario and deriving possible opportunities from disruption.

c) Critical analysis of weak signals. Russia's interference in the US election, the explosion of cybersecurity threats, and Russia's testing of new modes of warfare in Syria were each weak signals, but in aggregate they added up to a plausible scenario of a coming war.

Going forward

Companies that are hyper-vigilant clearly follow the practices above. But we have found that they also often broaden their scope to new types of forthcoming turbulence.

It may be not only a major pandemic, or political tensions and war. We bet that the most vigilant companies are now also those building vigilance around important topics such as ethics, cybersecurity and, increasingly, sustainability and a green planet.

About the Author

Jacques Bughin is a professor of Management, Chaire Gillet of Management Practice, at the Solvay Brussels School of Economics and Management at Université libre de Bruxelles (ULB), and among others, a former Director of McKinsey and of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the Board of multiple companies.

References
(1) Miller, E. (2016). Patterns of Islamic State-Related Terrorism. START Background Report. See also: Terrorism in the EU: terror attacks, deaths and arrests in 2020, European Parliament News (europa.eu).
(2) Mueller, H. (2013). The economic cost of conflict. Working Paper, International Growth Centre, London. See also: Cutler, D. M., Poterba, J. M., & Summers, L. H. (1988). What moves stock prices? National Bureau of Economic Research.
(3) Bardwell, H., & Iqbal, M. (2021). The economic impact of terrorism from 2000 to 2018. Peace Economics, Peace Science and Public Policy, 27(2), 227–261.
(4) Marani, M., Katul, G. G., Pan, W. K., & Parolari, A. J. (2021). Intensity and frequency of extreme novel epidemics. Proceedings of the National Academy of Sciences, 118(35).
(5) Baker, S. R., Bloom, N., Davis, S. J., & Sammon, M. C. (2021). What triggers stock market jumps? (No. w28687). National Bureau of Economic Research.
(6) Day, G. S., & Schoemaker, P. J. (2020). How vigilant companies gain an edge in turbulent times. MIT Sloan Management Review, 61(2), 57–64.

The post The Imperative of the Vigilant Corporation appeared first on The European Business Review.

Inside the Journey of a Decacorn
https://www.europeanbusinessreview.com/inside-the-journey-of-a-decacorn/ (Thu, 19 May 2022)

By Jacques Bughin

Start-ups – and their unicorn, and now decacorn, darlings – are critical both for value creation and for economic growth and job creation. Regarding value, the US private equity sector has generated 1.8 times the returns from its start-up financing that the S&P 500 has generated over the last 20 years1. Regarding economic growth, new US firms (those surviving their first 5 years of operations) have delivered more than twice the productivity growth of the US economy over the same time frame2.

The start-up star of the last decade: the Unicorn

Some start-ups have been even more stellar than the average. About ten years ago, the buzz was about being a "unicorn"3, i.e. a start-up privately valued by venture capitalists and other private investors at more than USD 1 billion. On top of being a major milestone in valuation, those start-ups were exceptional on multiple counts. While they were only a few (less than 50 by 2013), they were planting the seeds of new household names such as Uber, Zynga or Skype; they had reached valuations of some 6 times the original capital raised4, generating IRRs above 65% for investors, and profits often large enough to return the full portfolio hurdle rate promised by their VC investors.

The new kid on the block: the Decacorn

Since the first list of 50 "unicorns" was spotted ten years ago, a lot has happened. Quite a few went the IPO route (often generating an extra public value premium of 35% in the first month of their debut), or were acquired by major strategic investors (such as Skype and LinkedIn by Microsoft).


The short list of elite start-ups has also expanded significantly. Even if they still represent less than 1% of the 100,000 or so start-ups that have received funding to date, more than 1,000 of them exist today, meaning that the species has been able to expand by 50% a year.5

What's next, then? The new phenomenon is another exclusive premium club, called the "decacorns"6, welcoming companies valued at more than USD 10 billion. This club includes companies such as Databricks, Stripe and ByteDance (home to TikTok), which join the early exceptional decacorns of the past decade, such as Facebook and Palantir (both having since exited the club through IPO).

By late 2021, this cumulative set of decacorns will soon reach one hundred, or just below one tenth of the population of unicorns. This size is noticeable, as the current club of decacorns is now bigger than the original number of unicorns7 when the buzz first emerged about them.

Spotting a decacorn: Five common features

At the speed at which the club of decacorns is expanding, it might become mainstream, with about 800 of them by the end of the decade. One possible hypothesis is that the club of decacorns is simply the club of unicorns growing older. So why should we bother? Because our research demonstrates that a decacorn is not an older unicorn; it has unique features that set it apart from the old star category of unicorns. We uncover five:

1. Decacorns emerge out of a virtuous cycle of value creation. 

Depending on the year analysed, approximately 9% of unicorns have become decacorns in the last decade. Looking billion by billion at the climb to decacorn status, each extra billion has only a 65% chance, on average, of bringing the company one step closer to the decacorn club. What is remarkable about the decacorn is that it has managed to double this success rate between the low end of the valuation ladder and the valuations close to the decacorn threshold.

Quantitatively, a decacorn has had a 35% conversion rate to the next billion when valued between $1 billion and $3 billion, but this increases to 80% once it passes a $5 billion valuation. The higher the valuation, the more likely the next billion comes, bringing the company closer to decacorn status.
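A stylised way to read these figures is as a valuation ladder climbed one billion at a time. The sketch below is our framing, not the authors' model: it derives the average per-step success rate implied by the roughly 9% end-to-end conversion, and contrasts the early- and late-stage rates quoted above.

```python
# Stylised "valuation ladder" reading of the conversion figures above
# (our framing, not the authors' model): nine $1B steps from $1B to $10B.
p_end_to_end = 0.09  # ~9% of unicorns became decacorns
n_steps = 9

# Average per-step success rate implied by the end-to-end conversion.
p_step = p_end_to_end ** (1 / n_steps)  # ~77% per extra billion

# Early- vs late-stage rates quoted in the text: success roughly doubles.
early, late = 0.35, 0.80
print(f"Implied average per-step success: {p_step:.0%}")
print(f"Late- vs early-stage success: {late / early:.1f}x")
```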

2. Decacorns grow business as easily as early start-ups, but with long-term resilience.

The above shows that decacorns are not just unicorns at a 10X valuation, 10 years later. In fact, the average decacorn is 10.5 years old, achieving a 10X growth in valuation in just half the time (3.5 years versus 7 years) of the unicorn.

High growth may arise for a short time, as a burst of energy to take off and succeed. Think of gazelles and cheetahs in their run for life (survival or food).

This rapid valuation boost implies that, on average, a decacorn has been roughly doubling its valuation every year during its climb. Most early start-ups should have this growth ambition in their early days if they hope to secure early funding, and many never achieve it. Furthermore, high growth may arise for a short time, as a burst of energy to take off and succeed. Think of gazelles and cheetahs in their run for life (survival or food).

Yet the challenge is to make this energy last. Decacorns sustain this pace for a decade on average. This is resilient speed on steroids.
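The growth arithmetic behind the "doubling every year" claim can be checked directly; the calculation below is ours and uses only the 10X-in-3.5-years and 10X-in-7-years figures quoted above.

```python
# Annual growth factors implied by a 10X valuation gain (our arithmetic).
decacorn = 10 ** (1 / 3.5)  # ~1.93: valuation roughly doubles every year
unicorn = 10 ** (1 / 7)     # ~1.39: about 39% growth per year

print(f"Decacorn: {decacorn - 1:.0%} per year over 3.5 years")
print(f"Unicorn:  {unicorn - 1:.0%} per year over 7 years")
```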

3. Decacorns are born global. 

Decacorns sustain this massive value growth because of their "the world is flat" vision. The first ever European decacorn, UiPath, born in Romania, designed a global growth strategy from the outset, so that the company could cover as much of the robotic automation market as possible. It built a global distribution presence, from New York to the rest of the world, in no time, and borrowed scale from its ecosystem partners.8 Another example is ByteDance, which quickly realised that it needed to scale globally beyond China (its CEO was reported in 2016 to say that going global was a must, as China was only 1/5 of internet usage and that it would be impossible to compete "without resources allocation on global level and scale effect on the other 4/5" of the digital universe).9 ByteDance kicked off its globalisation push 5 years ago, including acquiring competitors, e.g. in Indonesia and the US, to establish a quick leadership presence across the globe. By now, TikTok has representative offices in close to 200 locations worldwide.

Further, as Stripe for secure payments or BioNTech for Covid vaccines demonstrate, the massive growth of decacorns can be facilitated by the global appeal of their products and applications. Decacorns also rely on early technologies with vast opportunities, e.g. genomics in the case of BioNTech, AI for ByteDance and Databricks, or blockchain crypto for FTX.

4. Decacorns are more capital efficient. 

We have estimated that, for the top 25 most valued decacorns to date, the ratio of value to capital injected is close to 10, versus about 6 for unicorns. This ratio even reaches 12 when we exclude decacorns that rely on infrastructure-based business models requiring major investment, such as SpaceX or J&T Express.

5. Everywhere can be a place to become a decacorn.   

The received wisdom has been that Silicon Valley is the place for success, at least for tech start-ups. The world of unicorns, when first spotted 10 years ago, was concentrated in Silicon Valley; today, still about 50% of unicorns are born in the US, and 20% of those US unicorns are managed by Indian founders,10 who decided to try their luck in what they claim is the world's most successful startup ecosystem.

On current trends, decacorns seem to be emerging more often from all over the world. From Celonis to Klarna or Checkout.com, a promising number of them have even been appearing in Europe. Europe's conversion rate from unicorn to decacorn reaches 13%, a 65% better rate than that of US unicorns. More specifically, the UK, Germany, and Sweden have experienced a real surge in decacorns, with more than 10 new ones in 2021 alone.

About the Author

Jacques Bughin

Jacques Bughin is a professor of Management, Chaire Gillet of Management Practice, at the Solvay Brussels School of Economics and Management at Université libre de Bruxelles (ULB), and among others, a former Director of McKinsey and of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the Board of multiple companies.

References

  1. https://www.cambridgeassociates.com/private-investment-benchmarks/
  2. Decker, R. A., Haltiwanger, J., Jarmin, R. S., & Miranda, J. (2017). Declining Dynamism, Allocative Efficiency, and the Productivity Slowdown. American Economic Review, 107(5).
  3. https://www.researchgate.net/publication/305755434_What_Big_Companies_Can_Learn_from_the_Success_of_the_Unicorns
  4. https://www.researchgate.net/profile/Ken-Wiles-2/publication/343778343_The_Growing_Blessing_of_Unicorns_The_Changing_Nature_of_the_Market_for_Privately_Funded_Companies/links/5f6a0dc1299bf1b53ee9aed9/The-Growing-Blessing-of-Unicorns-The-Changing-Nature-of-the-Market-for-Privately-Funded-Companies.pdf
  5. https://www.fortinocapital.com/blog/how-to-become-unicorn
  6. https://news.crunchbase.com/news/decacorn-startups-2021-global-record-data-charts/
  7. https://news.crunchbase.com/news/decacorn-startups-2021-global-record-data-charts/
  8. https://www.uipath.com/hubfs/idceconomicimpact.pdf
  9. https://news.cgtn.com/news/2020-08-08/ByteDance-an-algorithm-backed-firm-with-globalization-ambitions–SNbGeeF0pq/index.html#:~:text=ByteDance%20started%20its%20global%20development%20strategy%20in%202015%2C,app%20Flipagram%20and%20its%20news%20product%20New%20Republic.
  10. https://www.indiatimes.com/worth/news/ninety-founders-amongst-five-hundred-usa-unicorns-are-india-born-559560.html

The post Inside the Journey of a Decacorn appeared first on The European Business Review.

AI Inside? Five Tipping Points for a New AI-based Business World
https://www.europeanbusinessreview.com/ai-inside-five-tipping-points-for-a-new-ai-based-business-world/ (Wed, 02 Mar 2022)

By Duco Sickinghe and Jacques Bughin

The "softwarisation" of our economies has continued unabated since Marc Andreessen's 2011 Wall Street Journal essay, "Why Software Is Eating the World".1

Along with this softwarisation trend, major underlying changes have been underway. Highly visible changes include the simplification trend in hardware, the shift to cloud, and the rise of data. Another is the significant increase in the use cases of SaaS. Today, a typical company of 100 to 500 employees in advanced economies may have more than 100 applications running on its IT budget, either on-premises or, more and more, in the cloud; in fact, some have calculated that for such a benchmark firm, the cost of SaaS per employee is today higher than the cost of typical PC hardware.

But another increasingly crucial trend is the rise of artificial intelligence2, which is becoming ever more embedded as the key differentiator for firms to compete. Here is what we have observed:

Five AI tipping points

Our view that AI is at a tipping point is based on the following observations:

1. “AI radical economics”

One usual debate is that artificial intelligence is not human intelligence. In fact, the computational power of our brain3 is estimated at between 10^18 and 10^25 FLOPS, and possibly below one petaFLOP (10^15) for simple activities, which suggests that super neural-based computers could soon match the basic functional capacity of the human brain. Assuming that computer costs keep declining as in the past, roughly by a factor of 10 every 4 years, some infer4 (with a high range of uncertainty) that matching basic brain functionality could cost less than 100 dollars an hour in less than a decade.
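For illustration, here is a short sketch of the cost trajectory implied by a factor-10 decline every 4 years. The starting cost of brain-scale compute is our own placeholder assumption, chosen only to show how quickly the sub-$100-an-hour threshold could be crossed.

```python
# Cost trajectory under a factor-10 decline every 4 years (our sketch).
# The $30,000/hour starting cost is a placeholder assumption, not a
# figure from the article or its sources.
decline_every_4y = 10
years = 10
cost_drop = decline_every_4y ** (years / 4)  # ~316x cheaper in a decade

cost_now = 30_000  # assumed cost of brain-scale compute, USD per hour
cost_later = cost_now / cost_drop  # ~$95/hour, under the $100 threshold

print(f"Compute cost falls ~{cost_drop:.0f}x in {years} years")
print(f"Illustrative cost then: ${cost_later:,.0f} per hour")
```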

Thus, the question about AI is not whether it will soon replace us all at work – the economics are still complex – but rather that AI is now augmenting work abilities at increasingly better economics. The evidence here is getting exciting: academic findings are now solid on the material differences in productivity between AI-patenting and AI-virgin firms. This difference, in the range of 5% a year, is about two to three times what has been observed for past technologies5. Even better, recent studies suggest that the performance uplift from AI is especially favourable to small and medium-sized enterprises6.


Second, the cost/performance of AI has improved significantly in recent years. Consider, for instance, image recognition. Using ImageNet as a case study7, image detection accuracy has grown from 80% in 2015 to 99% today – a nearly perfect match, and at least better than the human eye. Meanwhile, machine learning training time has gone down from an average of about 6 minutes in Q4 2018 to less than one minute by Q1 2020, while the training cost for deep learning algorithms has decreased from more than $1,000 five years ago to less than $10 today – no barrier at all for anyone willing to run most models of predictive analytics in the enterprise.
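The annual rates implied by these figures are easy to derive; the arithmetic below is ours, applied to the numbers quoted above.

```python
# Annual rates implied by the cost/performance figures above (our maths).
cost_start, cost_end, years = 1_000, 10, 5
annual_factor = (cost_end / cost_start) ** (1 / years)  # ~0.40
print(f"Training cost falls ~{1 - annual_factor:.0%} per year")

minutes_2018, minutes_2020 = 6.0, 1.0  # Q4 2018 vs Q1 2020
print(f"Training time: {minutes_2018 / minutes_2020:.0f}x faster")
```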

One concern might be that these performance improvements do not carry over to more constrained hardware environments (think of how communication hardware has migrated from large mainframes to laptops and now smartphones, and how new software will be embedded in miniature robots or cameras/glasses). But there as well, performance has continued to progress at a high rate, by more than 20% a year, thanks to the development and integration of more innovative AI-based accelerators.

2. “AI is a GPT” 

A GPT is a general-purpose technology, one that can affect all industries. In the recent past, most successful AI cases were concentrated both in a few sectors, such as high-tech services or finance, and in a few firms, such as the FAANGs.

There have been significant breakthroughs in healthcare and pharmaceuticals, where AI has had a dramatic effect on the economic structure of those sectors.

The last two years have made the GPT power of AI clear. For instance, there have been significant breakthroughs in healthcare and pharmaceuticals, where AI has had a dramatic effect on the economic structure of those sectors. In healthcare, diagnostics and smart automation have the potential to stop the sector's secular inflation in our economies without restricting access to services. In pharmaceuticals, AI has proven to have a major impact on drug innovation8, let alone the time to market for vaccines.

AI is also being absorbed by many companies with success. Manufacturing.net9 reports how Nissan has been running an AI predictive-maintenance platform for remaining-useful-life (RUL) prognosis and managed to cut unplanned downtime by half. Coca-Cola in Asia10 has reported a gain of more than 1 point of market share in a few months by using AI-based assortment-reallocation techniques fed by mobile pictures of store shelves. Finally, AI is disruptive even to the FAANGs. In a sector like social media, ByteDance successfully managed to enter and later dominate the global market for short videos, despite the prevalence of major players such as Facebook or Google's YouTube.

3. “A worldwide AI opportunity”

At present, AI supply is concentrated, with the US and China each accounting for about 30% of worldwide AI enterprises. Still, AI is spreading as an industry opportunity elsewhere. Taking Europe as an example: Europe has more high-value-added B2B and manufacturing assets, and it also has a comparative advantage in robotics and automation (24% of patents and firms) as well as in AI services patents (22%). Finally, the European AI talent pool is not only growing strongly but is already seen as a clear advantage in the eyes of AI tech giants such as Google, Microsoft, or IBM, which have all invested in AI-based laboratories in Europe. Currently, large US AI-based firms have located 20% of their centres in Europe11, versus only about a third in the US.


4. “AI based unicorns ” 

Large tech firms do hold a large concentration of AI resources12, but AI start-up funding has continued to expand aggressively, and we calculate that, by last year, 20–25% of unicorns were AI-based start-ups. This is a remarkable multiple relative to the share of start-ups being created, bearing in mind that AI technologies only saw their major breakthrough 3 to 5 years ago – less than the average time it takes for a company to become a unicorn.

Also, as not all companies will have the capabilities to build and manage their own AI factories, a market is being built around the delivery of automated AI solutions. This market may hinge on large open-source libraries such as the Google-powered TensorFlow, but it is also being served by a broad array of successful start-ups, like Databricks, Snorkel, or H2O.ai13.

5. “Software 2.0” 

At its most extreme, if AI is indeed pervasive and disruptive, can it also "eat" software, making a large part of software start-ups obsolete? One vision is the so-called "Software 2.0", wherein data and neural-network machine learning models would replace human coding as the source code. But even if this trend is already somewhat visible, most Software 2.0 projects have yet to demonstrate consistent success. This is consistent with our earlier claim that AI is not here as substitute economics but as a complement: even in software making, AI will likely act more as a turbocharger of better software, e.g. by automating code generation for some modules, or through automated debugging and intelligent testing.

The new emerging world

Putting those changes together, we foresee a world where AI-based software will become mainstream, with a cost/performance ratio that leads to significant productivity upside. But along with a larger pie for our economies comes a world of new competition and new forms of organisation.

Digitisation has boosted the world of start-ups; we have just presented evidence that AI is likely to scale this trend. Digitisation has led to major disruptions; AI's impact is likely to accelerate this dislocation: think about how AI is driving the entire transformation of automotive, while digitisation had barely affected industrial companies.

But digitisation has also created new work types (who had heard of SEO before Google?), new business models ("platforms beat products") and new darlings (the FAANGs). AI is on the verge of boosting those evolutions too. With AI come the need for greater cybersecurity, the evolution of the enhanced automated workplace, and a new set of AI darling firms (Nvidia and others). We observe this evolution first-hand in the Fortino portfolio. Take, for example, ReaQta, which specialises in cybersecurity and was named a Gartner Cool Vendor14 for its innovative AI/machine learning approach. Or Oqton, recently acquired by 3D Systems15, which specifically built an AI-enhanced approach to manufacturing.

In general, AI is here to stay – and, paraphrasing Julie Sweet, Chair and Chief Executive Officer of Accenture16, we also believe that in this world, "cloud is the enabler; data is the driver; and A.I. is the differentiator" of business value.

About the Authors

Duco Sickinghe

Duco Sickinghe founded Fortino Capital in December 2013 and has overseen Fortino's growth into a recognised technology VC firm. Before Fortino, Duco was CEO of Telenet and INED of CME, and he is currently Chairman at KPN. Other positions he has held include General Manager at Wolters Kluwer, founder of Software Direct, Product & Channel Manager at HP, and VP Marketing & General Management at NeXT Computer, where he was exposed to Steve Jobs. He holds a degree in Civil and Commercial Law and obtained an MBA from Columbia Business School.

Jacques Bughin

Jacques Bughin is a professor of Management, Chaire Gillet of Management Practice, at the Solvay Brussels School of Economics and Management at Université libre de Bruxelles (ULB), and among others, a former Director of McKinsey and of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the Board of multiple companies.

References

The post AI Inside? Five Tipping Points for a New AI-based Business World appeared first on The European Business Review.

How Should a Business Self-Cannibalize During Digital Transformation?
https://www.europeanbusinessreview.com/how-should-a-business-self-cannibalize-during-digital-transformation/ (Fri, 03 Dec 2021)

By Jacques Bughin and Nicolas van Zeebroeck

Engaging in digital transformation is a dilemma. Either the transformation is aggressive enough to scale, but brings the risk of excessive self-cannibalisation, or it is designed to be protective, but runs the risk of being too narrow, with the business aggressively cannibalised by disruptive attackers. While both postures can win at present, intentional cannibalisation becomes a winning strategy when incumbents pick the right bundle among five proven practices.

With time and shocks such as Covid-19, digital transformations have continued unabated, with roughly 90% of large incumbent companies worldwide engaged in some form of digital transformation. The few laggards are smaller companies, in aspiring markets, competing in B2B industries.

But despite digital transformation being mainstream, a large part of incumbents' transformations have proven painful, with a majority not generating a return above their cost of capital.

One main culprit of poor returns may be that engaging in an aggressive digitisation process often means that companies have no choice but to intentionally cannibalise their own businesses. Of course, this implies a major dilemma: without intentional cannibalisation, the digital transformation bet is that defending legacy revenue will be enough to protect against business stealing by digital disruptors. However, this bet is risky, as the evidence suggests digital natives are often successful in stealing business across many sectors. If the path is then to intentionally cannibalise, by transforming aggressively to build new opportunities, the risk is to cannibalise oneself too strongly and too quickly.

The academic consensus is that deliberate cannibalisation is an important strategic practice (see Chandy and Tellis, 1998), especially in the case of rapid technology change, such as the advent of digital technologies (Haynes, Thompson, and Wright, 2014). Apple famously risked and cannibalised its successful iPod lines when introducing the iPhone, and has systematically continued to play an intentional cannibalisation strategy across its product lines.

If the path is to intentionally cannibalise, by transforming aggressively to build new opportunities, the risk is to cannibalise oneself too strongly and too quickly.

Still, the Apple case may be the exception. After all, Apple is a global brand, and its unique products are so coveted that Apple could always time cannibalisation to limit the excess, while creating a major new boost in demand that more than compensates for any product starting to mature.

Surprisingly, to our knowledge at least, the question of the importance of business stealing versus self-cannibalisation has not been systematically assessed in the context of digitisation.

Our recent research has done just that. Leveraging a panel of about 12,000 companies worldwide, we conducted an online survey among top executives regarding their digital performance and its link with cannibalisation. With an answer rate of 10%, we have collected a very informative worldwide sample.

Using machine learning techniques to link incumbents' digital revenue generation with top- and bottom-line growth, we have also been able to estimate the payoff of intentional cannibalisation in the context of digitisation. We found five factors that optimise the payoff of the cannibalisation posture.

The anatomy of cannibalisation in digital transformation 

Our research shows that the baseline of the average digital transformation is not that of Apple: intentional cannibalisation barely compensates for disruption.

One reason for this so-so result is that digitisation has created above-average cannibalisation: the perception is that about 40% of established revenue streams could face the risk of third-party cannibalisation through digitisation. Perception, by the way, may fit reality: one industry heavily attacked by digital is media, where digital has been cannibalising old print media at around 36% of book purchases (http://www.serci.org/congress_documents/2018/Reimers.pdf) and up to 45% of music sales.

This cannibalisation effect is on the high side; it is above the traditional range found for radical innovations, whose effect sits in the 20–25% range. For example, the rate at which new car categories self-cannibalise old models has historically been around 26%; see the Lexus RX300 revenues drawn from Lexus' luxury sedan sales (https://www.semanticscholar.org/paper/Estimating-Cannibalization-Rates-for-Pioneering-Heerde-Srinivasan).

This current level of revenue contestability is not only high; it must also be coupled with the fact that the scale of incumbents' digitisation success remains limited. Among large incumbent firms, digital currently accounts for 23% of revenues, meaning only about 60% of the contested top line is being recovered. Clearly, this leaves a gap of 40%, and the dilemma is on.
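The 40% gap follows from simple arithmetic on the two figures above; here is our reconstruction of the calculation.

```python
# Reconstruction of the 40% gap from the two figures above (our maths).
revenue_at_risk = 0.40  # share of legacy revenue contestable by digital
digital_share = 0.23    # digital share of incumbents' revenue today

recovery = digital_share / revenue_at_risk  # ~58%, i.e. roughly 60%
gap = 1 - recovery                          # ~40% left on the table

print(f"Recovered: {recovery:.0%} of contested revenue; gap: {gap:.0%}")
```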

Five best practices for returns to intentional cannibalisation

Not every company is Apple, but not every company is born to fail either. Our research has identified five practices that close the 40% gap, and can even generate attractive net returns to cannibalisation.

The high-level idea is to blend a strategy that offers a) scale, b) legacy differentiation, and c) speed. Scale boosts digital revenues, while speed lets companies capture the scale rewards ahead of peers; differentiation limits substitution with legacy offerings and preserves a larger part of the incumbent revenue flow. The strategy payoff, we find, is also made more attractive by digitisation itself.

1. Launch radical innovations.

Digitisation has favoured the development of platforms and ecosystems, and the rise of digital versioning of products and services. In return, incumbent companies that invested in new business model innovation – especially platform-based – and/or companies that leverage their digital transformation to generate entirely new digital products and services have made the intentional cannibalisation strategy more compelling than average: a platform play extends the average recovery potential by 22% (92% versus 75%), while new products boost revenue by a further 10%. Cost-cutting strategies are the posture least amenable to complete recovery (see Figure 1).

[Figure 1. Source: top executives survey, 1,070 firms worldwide]
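For readers who want the uplift arithmetic spelled out, here is a small sketch; the 75% baseline and 92% platform figures come from the text, and the framing is our reconstruction of Figure 1, not the study's own code.

```python
# Uplift arithmetic behind Figure 1 (our reconstruction of the text's
# numbers; the survey data themselves are not reproduced here).
baseline = 0.75  # average revenue-recovery potential
platform = 0.92  # recovery potential of a platform play

uplift = platform / baseline - 1  # ~22%, as quoted in the text
print(f"Platform play uplift vs average: {uplift:.0%}")
# Blending archetypes (platform + new services) is reported at +32%,
# more than either lever alone.
```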

Incumbent executives can also be better off blending those archetypes for a better payoff; e.g. a platform play combined with new services increases revenues by 32%. As an example, John Deere developed additional product/service offerings on top of its IoT platform play, and managed to grow at twice the rate of its traditional equipment business in recent years, 12% versus 6% a year. Part of the success of Lego's growth is that, on top of its classic brick toys, Lego expanded to movies, video and mobile, while creating support-service tool platforms such as the Lego Digital Designer, which lets users make construction design guides and share creations with other fans. Digital natives also use these archetypes to grow faster. Amazon has long diversified from e-tailing to a marketplace and cloud-services platform, on top of products such as Twitch. Product/platform extension grew as a revenue stream at 40% a year over the last 5 years, or 60% faster than the original retailing business.

2. Be a first mover; do not play the wait option.

The rationale of an intentional cannibalisation strategy is to avoid being blindsided by competition. This means proactiveness pays better than reactiveness. The key managerial insight from our research is that managers should look at combining multiple forms of first moves. This includes not only launching new products and innovations before the competition, but also adopting frontier technologies at scale earlier than others, as this provides the time advantage to understand how a new technology may support new market opportunities and extra revenues. Domino's Pizza was very keen to develop its 'AnyWare' ordering platform to expand its commerce revenue, using tools such as tweets and emojis as customer interfaces long before others.

3. Use extent of cannibalisation, not as a failure, but as a driver of digital transformation success.

Our research finds that digital transformations meet their goals only half of the time. But success requires persistence: 50% of successful transformations were ones that originally were not succeeding and got relaunched, to finally scale and succeed. Companies such as Best Buy in the US, which took a second chance at digital transformation, accurately shifted their mindset regarding cannibalisation, from a risk to a key leading indicator of scaling and success.

4. Size turbulence.

Companies, such as Best Buy in the US, that took a second chance at digital transformation accurately shifted their mindset regarding cannibalisation, from a risk to a key leading indicator of scaling and success.

Within the same industry, cannibalisation risk perception can vary significantly, but only those that perceive the risk as high usually act faster on intentional cannibalisation, and they have generated a higher payoff. Executives should be obsessed with adequately sizing turbulence. Gilead intentionally launched its hepatitis C blockbuster, Harvoni, to cannibalise one of its star products, Sovaldi, which the firm had introduced only months earlier, so as to avoid being blindsided by competition (https://journals.sagepub.com/doi/full/10.1177/1069031X19866832).

5. Go beyond natural boundaries.

New products/services improve the payoff of intentional cannibalisation (see Figure 1), yet a large portion of companies think of these diversifications as an extension of the current offering, or a broader scope within their natural industry. We found that entirely new products and services outside the industry boundary earn twice the payoff. Executives would be well advised to bundle their intentional cannibalisation posture with a goal of going beyond original market boundaries, where they can tap into an extra pool of revenues and profit. Netflix's footprint went global instead of staying US-regional, Ping An added classifieds to its financial platform, and Vodacom added M-Pesa payments on top of traditional telco services.

How to win an intentional cannibalization strategy?

We have laid out five ways to play the intentional cannibalisation game. The good news is that companies may not have to play all five to close the 40% gap. Successful incumbents manage cannibalisation to their benefit by choosing two or three of the practices above.

This also means that the opportunity set is not as limited as incumbents tend to believe. Cannibalisation may be a risk that can be neutralised in, or even used as an impetus to push, many digital transformations.

About the Authors

Jacques Bughin

Jacques Bughin is a professor of Management, Chaire Gillet of Management Practice, at the Solvay Brussels School of Economics and Management at Université libre de Bruxelles (ULB), and among others, a former Director of McKinsey and of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the Board of multiple companies.

Nicolas van Zeebroeck

Nicolas van Zeebroeck is a professor of digital economics and strategy at the Solvay Brussels School of Economics and Management at Université libre de Bruxelles (ULB). He serves on Belgium’s High Council for Employment and as Advisor to the President and Rector of ULB for IT and Digital.

The post How Should a Business Self-Cannibalize During Digital Transformation? appeared first on The European Business Review.

Breaking Down Business Resilience, Post-Pandemic: The Data Reveal Surprises—and a Blueprint for Moving Forward.
https://www.europeanbusinessreview.com/breaking-down-business-resilience-post-pandemic-the-data-reveal-surprises-and-a-blueprint-for-moving-forward/ (Fri, 29 Oct 2021)

By Jacques Bughin and Francis Hintermann

The global pandemic, in addition to generating massive harm to people’s health and communities, split the business world in two. Many companies soared to new heights; others struggled against forces well out of their control. 

A superficial reading of markets might suggest that it was often a matter of being in the right place at the right time. Of course, such reasoning goes, if you were in health or life sciences, or digital products and services, you were on the right path to higher growth and profitability.  

It's not that simple. True, some companies charged into the opening created by the new environment: providing home delivery of food and other goods; enabling people to work from home and find new entertainment choices; or, yes, developing a vaccine that would take down Covid. But when we examined this with our own research (see "About the research" at the end of this text), those businesses turned out to be a small subset: in our sample of 4,100 of the largest companies around the world, they represented just above 5% of the total.

Big pandemic hit, big consequence 

What does the data tell us about the rest? This is a rather critical question, as great companies are not only those that can boost shareholder value; they are also those that have learned to last and to rebound forward from major turbulence.

When we examined the other 3,755 companies, we saw that for many, the road to recovery from such a big pandemic hit is neither easy nor quick: 78%, as of the end of 2020, said they did not foresee a return to pre-crisis profitability before April of this year (one year after Covid-19 hit), and 70% believed this would still be true in September 2021, even with vaccinations providing more room for recovery.

Looking in the mirror, those predictions may have been overly cautious. After all, managers were surprised by the extent of the shock induced by the pandemic. GDP dropped more than in previous crises; US bankruptcy filings increased by 200% for large corporations; and, above all, the shock affected virtually all firms worldwide, in contrast to other recent pandemics such as H1N1, which were more regional and died out faster than Covid. But when we match those predictions against the evolution of S&P 500 earnings, expectations were actually in the right ballpark. US S&P earnings returned to their pre-Covid level by June 2021, in line with the surveyed US executives who were anticipating recovery just a few months later, by September 2021. Nor is this prediction at odds with other major economic crises – such as the 2000 internet bubble burst or the 2008 subprime financial collapse – which showed that returning to the same profitability takes around 18–24 months.

What those crises have also taught us is that the most complex challenge is recovering the growth opportunities missed during the collapse; indeed, the financial crisis only closed the activity gap after 10 years. In the Covid case, while many firms – almost half – were performing solidly before the pandemic, generating on average close to $1 billion of operating profit a year, the pandemic wiped out the same amount of profit within only six months. And the key issue is how Covid-19 has created a profit bifurcation between the 30% of firms that recovered and the other 70%.

For those performing firms, getting back to the same level of profit (as the S&P 500 earnings have shown) has taken about 1.5 years. Those companies – typically of smaller size, pointing to the way size can reduce agility and resilience – expressed confidence that, absent a third wave of the pandemic, they would surpass pre-Covid levels of profit by end 2021, catching up with the same long-term growth trajectory, with profit growing at about 10% a year.

But contrast this with the other 70%. The typical company stated that it would barely be reaching bottom after 1.5 years. In the most optimistic case, where the rebound has merely been shifted, the company might be back on its pre-Covid trajectory by the last months of 2022; in the more probable case that the full recovery cycle is delayed, the pre-Covid trajectory may only be regained by the end of 2023. In the worst case, it may take until after 2025, if one extrapolates the recovery from the glimpse of expectations for the end of 2021.

Uncovering the ingredients of resilience  

Evidently, the distribution of recovery is likely to span the three scenarios, and the main message is the likely important bifurcation between the haves (those that rebounded) and the have-nots. How, then, can a firm avoid being on the wrong side of the bifurcation?

A common theme of those able to reboot and boost operating profit is that they are not laggards in the "twin transformation", that is, the ability to engage in both digital transformation and sustainability practices (Accenture research). Another commonality is agility, both organisational (the ability to reshuffle resources) and technical (faster speed to action). Otherwise stated: a fast metabolic rate of shifting course is paramount to resilience.

Besides these two common themes, we found two other factors that play a significant role, although with different weights for each company. The first is the perennial theme of innovativeness: firms with the ability to innovate, preferably in a disruptive way, and to activate the capabilities to develop new products and markets, have often been the long-term winners out of crises. The same seems to be playing out in this pandemic. The second, newer, theme is the ability to compete through ecosystem plays. From the Amazon e-commerce marketplace to the Lego Ideas platform, ecosystems are loose networks of multiple companies that complement each other to provide a new form of offering. While not new, ecosystems have come of age, especially with the development of digital technologies that make them easier to scale. A fortiori, an ecosystem offers a more agile way to evolve when major turbulence such as a pandemic occurs.

Four Corporate Paths 

Given the four ingredients (innovation, ecosystem play, twin transformation, and agility), there may also be different ways to combine and manage them for resilience. Our research has found four statistically distinct clusters of resilient companies.

  • First, "right time, right place" does explain just above 13% of the companies on the profitable side of this equation. This includes a large share of companies operating in consumer and health/life-science services that managed to benefit from the right tailwinds out of the pandemic. Another type of company appears to be pushing more than luck: they leverage ecosystem partnerships for their acceleration. A company like BioNTech, building on its investments in technology and mRNA expertise over the previous decade, partnered with Pfizer to create the first mRNA Covid vaccine, and then rapidly retooled its production process and retrained staff at its facilities to produce millions of vaccines.
  • Second, another 36% in our analysis focused on generating disruptive innovations. Remarkably, more than half of this group had performed below peers pre-Covid. Rather than "let the crisis go to waste", they invested significantly while others were retrenching, and plan to continue investing to support their pivot to possibly non-incremental growth areas. Industrial company Honeywell is one example: at the outset, it worked quickly to produce tens of millions of N95 masks for frontline workers. Later, it introduced an ultraviolet treatment system so that airlines could quickly and effectively disinfect aircraft cabins. It has also developed products aimed at creating healthy buildings, using technology to measure and improve indoor air quality and filtration.
  • Third, 21% of companies in our sample, often found in the tech sector and manufacturing, aggressively boosted their ecosystem play as a way to increase profit. Those companies often use their role as orchestrator platforms to shape their momentum. A classic example at the intersection of tech and automotive is the mobility platform. Companies such as GoTo Global or GreenCar have seen major increases in users and trips and have significantly beaten their revenue-growth and profitability goals. Most of these mobility platforms reconfigured their trip offerings for grocery delivery and will now expand this opportunity.
  • A fourth group, 30% of companies, is possibly the most interesting of all. They are fully loaded: rather than picking a path, this group does not choose between ecosystem and innovation; it does both. Such companies are not laggards in twin transformation or agility; rather, they push the frontier to excel at both. And our data also show that those companies have found many synergies in the combination of agility, innovation, ecosystem, and twin transformation. Schneider Electric's ecosystem activities related to twin transformation are telling. The company has set up and is coordinating multiple digital business ecosystems with the joint aim of better business performance and positive environmental impact. The Schneider Electric Exchange, launched less than two years ago (at Hannover Messe 2019), is core to its overall business strategy and aims to deliver benefits to all stakeholders by driving worldwide economies of scale for IoT solutions. Schneider is using published datasets and SaaS from Exchange partner Senseye, a UK technology company in predictive maintenance, in one of its Smart Factory manufacturing plants, Le Vaudreuil. Likewise, Schneider is co-innovating a digital energy-forecasting service offer for retail with the company Predictive Layer. It has also joined forces with Danfoss and Somfy to create the Connectivity Ecosystem, which will boost the adoption of connectivity technologies in the home for better and more sustainable living and working experiences.

What’s Right for Your Company?   

Clearly, the first thing a manager must recognise is that the effect of sudden crises can be such that every company may have to develop enough organisational ability to ensure it can quickly reallocate resources. The game is more and more about agility, or the multiple usage of sunk resources. Likewise, twin transformation has become table stakes, and corporates can't afford to lag behind.

Second, regarding the mix of the other ingredients, it is important to realise that each carries a different premium according to industry and market context. In automotive, for example, an emerging dominant play is the development of a mobility ecosystem around sustainability, for both people and goods; in utilities, the major play is more about twin transformation, as utilities have lagged in digitisation and are now on the hot spot to pivot to much stronger sustainability practices.

Third, the "fully loaded" strategy observed among 30% of resilient companies suggests, and indeed brings, a higher payoff than the others: in the range, by end of 2021, of 5 extra points of profit rate, or a few hundred million added to the bottom line. Yet, one word of caution: this is, by design, a complex play and requires that the capabilities to succeed have long been established as routines. Indeed, this play is adopted 50% more often by companies that had acquired those capabilities before the crisis. Hence, each company must take the step of understanding where it stands on the four capability ingredients. Where are its strengths, and where are the gaps? This diagnosis will help a company prioritise its cluster play.

More than that, execution must be superb if companies are to emerge from the pandemic stronger than before. While not exhaustive, here are a few key questions. If the company decides to lead with twin transformation, ask: are we increasingly investing to scale frontier technology? To what extent do we deploy technology to enable and scale our sustainability agenda? If the focus is innovation, ask: are we increasing investment in innovation to create new disruptive growth opportunities (as opposed to realising incremental improvements)? When ecosystem is the critical area, ask: to what extent can we position ourselves as major players or orchestrators of ecosystems in our key markets?

Finally, the synergies among all those ingredients are the little secret of outsized success: digitisation has facilitated the emergence of platform-based ecosystems; innovative products have made sustainability a profitable path; and agility has been boosted by digital protocols such as DevOps. The play is as much about excellence in capabilities as about excellence in the capability portfolio.

About the research 

The research is based on an online survey of the top management of large global companies (revenue above $5 billion), run at the end of 2020, spanning the two main waves of the Covid-19 pandemic. The sample was stratified to be representative of the industry mix in core countries such as the US, UK, France, Germany and China. The final sample includes 4,100 respondents (one per firm), of which one third are located in Europe, and as many in the US.

Special care was taken to secure a representative, non-biased sample. To this end, tests of common variance did not show any common-answer bias, and survey responses were tested against aggregate key statistics. For example, about 40% of companies in our sample were not returning a profit, in line with the World Bank's estimate that less than half of companies would be profit-making given the impact of Covid-19 in 2020. S&P 500 aggregate earnings just fell short of their pre-Covid level by June 2021, while average profit recovery happens one quarter later for our global sample. By mid-2021, managers from industries such as utilities and natural resources, or automotive and transport, were not expecting to be in full recovery mode, as opposed to respondents from industries with favourable tailwinds, such as food retail, software, and pharmaceuticals. This picture follows the same industry pattern as other industry research. For additional tests of sample representativeness, see Bughin et al. (2021).

The analysis on which this research is based is also unique in scope and uses some of the most sophisticated data techniques available to date. The model of how corporate capabilities affect corporate revenue and profit dynamics, either directly or through firms' amplification responses during the pandemic, was identified through a meta-analysis of the academic and management literature. The resilience drivers, as well as the segmentation of resilient firms in the text, come from applying advanced machine learning techniques such as Random Forest and statistical clustering. Random Forest resilience-prediction accuracy was more than 80%, higher than predictions based on traditional regression techniques. Using parametric techniques, each resilience driver and each factor within a cluster is statistically significant, with more than a 99/100 chance of being accurate. Finally, the baseline model was tested for robustness along multiple dimensions: industry versus full sample, profit-recovery distribution shifted by +/-10%, removal of the top 5% outliers, etc. Results remain qualitatively the same.
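For readers who want a feel for the methodology, here is a minimal sketch of a Random Forest resilience classifier in the spirit described above. It is our illustration only: the feature names are the four capability ingredients from the text, but the data are synthetic, not the study's 4,100-firm dataset or its actual variables.

```python
# Minimal sketch of a Random Forest resilience classifier (illustration
# only: synthetic data and hypothetical capability scores, not the
# study's 4,100-firm dataset or its actual features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_firms = 4_100

# Four capability scores per firm, on a 0-1 scale, mirroring the text:
# innovation, ecosystem play, twin transformation, agility.
X = rng.random((n_firms, 4))
# Synthetic label: 1 if the firm regained its pre-covid profit trajectory.
weights = np.array([0.3, 0.2, 0.3, 0.2])
y = (X @ weights + 0.1 * rng.standard_normal(n_firms) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0)
print(f"CV accuracy: {cross_val_score(model, X, y, cv=5).mean():.0%}")

model.fit(X, y)
features = ["innovation", "ecosystem", "twin transformation", "agility"]
print(dict(zip(features, model.feature_importances_.round(2))))
```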

About the Authors 

Jacques Bughin

Jacques Bughin is Professor, Chaire Gillet of Management Practice, at the Solvay Business School at the Université libre de Bruxelles (ULB). He is also the CEO of MachaonAdvisory, a top-management strategy consultancy. He serves as Senior Advisor to Fortino Capital and Antler, and as a knowledge board member at the Portulans Institute, Accenture Research, and the European Parliament's STOA panel. He retired from McKinsey & Company as senior partner and director of the McKinsey Global Institute.

Francis Hintermann

Francis Hintermann is Global Managing Director of Accenture Research, the Accenture entity that identifies and anticipates game-changing business, market and technology trends through thought leadership and strategic research.

The post Breaking Down Business Resilience, Post-Pandemic: The Data Reveal Surprises—and a Blueprint for Moving Forward. appeared first on The European Business Review.

Will China lead the twin transformation post-covid?
https://www.europeanbusinessreview.com/will-china-lead-the-twin-transformation-post-covid/ (Mon, 02 Aug 2021)

By Jacques Bughin and Sybille Berjoan

The Covid-19 pandemic has led many corporations to retrench in the short term, but the saying goes that no matter how large the turbulence, companies must relaunch their going concern fast, and in particular their innovation pipeline, if they want to pre-empt long-term performance erosion.

This narrative of "agile innovation" proved right in previous crises such as the 2000 internet bubble or the 2008 financial crisis. It also holds in Covid times, but with a twist: it has become second to the benefits of scaling a twin transformation of digitisation and better sustainability practices.

A fringe of resilient corporations is already operating at the frontier of this twin transformation and, as a sign of the times, they are witnessed more often among Chinese than European or US companies.

On top of health issues, the Covid-19 pandemic brought lockdown measures that badly hit economic activity. No company or industry has remained unaffected. Some companies had the luck of good tailwinds, in the healthcare or technology sectors, for instance. Others were less fortunate and faced major headwinds, in sectors like consumer goods or automotive.

Faced with a sudden crisis, corporations typically pick a strategy: exiting the market, retrenching from some activities, persevering through debt financing, or innovating. A set of factors guides the choice. For example, SMBs are notoriously financially fragile, with only a few months of cash to finance their operations, limiting their ability to stick to a persevering strategy for long. Companies built on strong organisational agility are much quicker than others to pivot towards innovation.

Covid-19 is no exception to this picture, as per our recent research. The dominant strategy when the pandemic first hit was retrenchment. A large portion of companies reduced their innovation spending, and R&D spend went down by about 10%. Among the 45% of the best global firms (those growing profit at an annual rate above 5% pre-Covid), 20% shrank and another 20% managed to maintain their pre-Covid revenue trajectory. This latter segment quickly re-accelerated spending on innovation by the end of 2020, and now anticipates expanding on a trajectory higher than pre-Covid for this year.

But Covid-19 has also brought its own peculiarities. In particular, it has triggered a twin transformation. On the one hand, in line with Carlota Perez's finding that major economic crises often give birth to a major technology shift, it has accelerated the adoption of digital technology; one obvious example is the diffusion of technologies that support working from home. On the other hand, corporations have been accelerating their ESG initiatives, with some clear successes. These actions follow the strategic priorities, set by 40% of large corporations, of sustainable development and accelerated digital transformation to rebuild competitive advantage.

One critical element for this twin transformation to succeed is that companies operate it at the frontier. The frontier, in this case, is defined by the extent to which companies have adopted a wide scope of sustainability projects and digital technologies, and have scaled those twin initiatives rather than merely experimenting or piloting them.

Our research estimates that the average twin score remains low, at about 55%, highlighting that companies are only at the start of this type of transformation, and implying in passing that this low maturity may, for now, deliver only minimal business upside.
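
To make the frontier idea concrete, here is one way a twin score of this kind could be computed, written as a short Python sketch. It is purely illustrative: the research does not publish its scoring formula, so the scope-times-scale construction, the equal weighting of the two transformations, and the twin_score function are assumptions, not the actual methodology.

# Hypothetical reconstruction of a "twin score"; the real methodology
# is not disclosed, so the weights and structure here are assumptions.
def twin_score(digital_scope, digital_scale, sustain_scope, sustain_scale):
    # Each argument is a 0-1 ratio:
    #   *_scope: share of relevant initiatives the firm has adopted
    #   *_scale: share of those initiatives deployed at scale,
    #            rather than merely piloted or experimented with
    digital = digital_scope * digital_scale
    sustain = sustain_scope * sustain_scale
    return 100 * (digital + sustain) / 2  # 0-100 maturity score

# A firm that pilots broadly but scales little stays far from the frontier:
print(twin_score(0.9, 0.4, 0.8, 0.5))   # -> 38.0
# A "pioneer" both adopts widely and scales what it adopts:
print(twin_score(0.9, 0.9, 0.9, 0.8))   # -> 76.5

Under a construction like this, scaling matters as much as adoption, which is precisely what separates the pioneers described below from firms stuck in pilot mode.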

Yet we also find that a set of "pioneers" (roughly the top 20% of firms, those with the most mature sustainability and technology scores, above 7.5/10) tells a rather different story. 75% of them have already rebounded above and beyond their long-term pre-Covid profit trajectory. This contrasts with the other companies, only 25% of which have reached such resilience.

Notably, the delivery of twin transformation programs has been the primary cause of the fast rebound of these pioneer corporations, across manufacturing and service industries alike. This strategic focus has proven to more than double the likelihood of resilience compared with following only the traditional recipe of accelerating innovation.

China has emerged from the Covid-19 pandemic faster than other regions, for multiple reasons. One that has been overlooked is the leadership taken by Asian (and mostly Chinese) firms in engaging in this twin transformation. According to Arabesque S-Ray data, large Chinese firms already post a sustainability score between those of Europe and the US, and are already tapping extensively into new digital technologies such as artificial intelligence and RPA. Because Chinese firms foresee large synergies between digitization and sustainability, the share of firms integrating both practices is larger in China than in Europe or the US. Better still, a majority of large Chinese twin-transformer firms (55%, versus barely a third of US and European ones) are pioneers, that is, they exploit these new practices at scale.

What can one thus conclude from this twin evolution? In business terms, the post-Covid future must include the twin transformation; in geopolitical terms, this future is being shaped especially by China, and the rest of the world is taking notice. In our survey, a large share of European companies clearly saw how a crisis can change the world: at the start of the pandemic, only 12% thought Chinese firms would emerge more competitive after Covid-19 than before, whereas four times as many firms (roughly 48%) share this belief one year on.

Let the new world begin, then, with the hope that Europe takes notice and acts on the twin transformation with agility and at scale.

 

About the Authors

Jacques Bughin

Jacques Bughin is Professor, Chaire Gillet of Management Practice, at the Solvay Business School, Free University of Brussels, and among others, a former Director of McKinsey and of the McKinsey Global Institute. 

Sybille Berjoan

Sybille Berjoan leads the Accenture Research European team and drives the European Thought Leadership agenda.
