Nine Logics of AI Deployments and the Artificial Integrity Imperative


By Hamilton Mann

In attempting to visualize the issues implicit in the adoption of AI in business, we commonly picture a two-dimensional relationship, such as AI vs productivity or AI vs employment. However, as Hamilton Mann makes clear, getting anywhere near true understanding requires us to consider a whole new axis.

Much of the early debate on artificial intelligence, and GenAI in particular, has borrowed from familiar strategy playbooks, contrasting efficiency against differentiation, automation against augmentation, disruption against continuous improvement. These frameworks, useful in their time, tend to flatten organizational reality into binary trade-offs. They capture broad patterns but leave out the subtleties of how AI actually reshapes firms, workforces, and societies.

In reality, AI is not a flat choice between cutting costs and expanding markets. It unfolds across nine distinct strategic pathways, defined not just by growth potential (Low, Medium, or High) and by employment impact (whether jobs are Killed, Preserved, or Created), but also by a foundational axis that has long remained implicit: integrity alignment.

Integrity itself spans three states: it can be Damaged, when dignity, autonomy, and resilience are eroded; Compromised, when outcomes remain ambiguous or fragile; or Upheld, when results sustain and elevate human capacity.

By surfacing the integrity axis as a structural dimension, this 3×3 framework, elevated into a cube, reveals the paradoxical effects of AI.
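To make the geometry of the cube concrete, it can be written down as a small data model. The sketch below, in Python, is illustrative rather than canonical: the axis labels come directly from the text, while the placement of each logic on the growth tiers (for instance, Assist as Low and Accelerate as High) is inferred from the descriptions that follow.

    from dataclasses import dataclass
    from enum import Enum

    class Growth(Enum):
        LOW = "Low"
        MEDIUM = "Medium"
        HIGH = "High"

    class Jobs(Enum):
        KILLED = "Killed"
        PRESERVED = "Preserved"
        CREATED = "Created"

    class Integrity(Enum):
        DAMAGED = "Damaged"          # dignity, autonomy, resilience eroded
        COMPROMISED = "Compromised"  # outcomes ambiguous or fragile
        UPHELD = "Upheld"            # human capacity sustained and elevated

    # The nine logics tile the jobs x growth face of the cube; each can then
    # sit at any of the three integrity depths, giving 27 possible positions.
    NINE_LOGICS = {
        (Jobs.KILLED, Growth.LOW): "Rationalize AI",
        (Jobs.KILLED, Growth.MEDIUM): "Optimize AI",
        (Jobs.KILLED, Growth.HIGH): "Displace AI",
        (Jobs.PRESERVED, Growth.LOW): "Assist AI",
        (Jobs.PRESERVED, Growth.MEDIUM): "Enhance AI",
        (Jobs.PRESERVED, Growth.HIGH): "Accelerate AI",
        (Jobs.CREATED, Growth.LOW): "Augment AI",
        (Jobs.CREATED, Growth.MEDIUM): "Restructure AI",
        (Jobs.CREATED, Growth.HIGH): "Empower AI",
    }

    @dataclass
    class Deployment:
        name: str
        jobs: Jobs
        growth: Growth
        integrity: Integrity

        def logic(self) -> str:
            return NINE_LOGICS[(self.jobs, self.growth)]

    # One possible placement, following the article's reading of the Ocado case:
    ocado = Deployment("Ocado 2025 layoffs", Jobs.KILLED, Growth.LOW, Integrity.DAMAGED)
    assert ocado.logic() == "Rationalize AI"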


In some cases, AI rationalizes organizations into leaner forms, optimizes operations for measurable gains, or displaces workers at scale in pursuit of high growth. These pathways deliver efficiency or expansion on paper, but integrity shows their hidden cost: what appears as progress may in fact be systemic fragility, eroding the human capacity and resilience on which long-term prosperity depends.

In others, AI assists professionals by easing their burdens, enhances workflows through targeted support, or accelerates growth by multiplying human capacity without large-scale displacement. These pathways preserve employment and appear more human-centered, yet integrity reveals their ambiguity: jobs remain, but autonomy and judgment risk narrowing, as machine logic increasingly sets the terms of work.

In another set of logics, AI augments organizations with new expertise, restructures talent flows to unlock latent potential, or empowers entirely new markets and industries. These pathways promise genuine expansion, yet integrity poses the decisive question: are new roles and opportunities elevating human capabilities, or merely subordinating workers to algorithmic dependencies that erode judgment, stifle creativity, and atrophy essential human faculties?

Here, empowerment can mark either the highest alignment of prosperity and dignity, or its most sophisticated illusion.

It raises the stakes for leaders navigating not just efficiency and growth but also resilience, sustainability, and the dual test of social acceptability and humane legitimacy.

Rationalize AI

AI strips out labor to cut costs but fails to unlock new sources of demand. Organizations become leaner, but not stronger. Productivity gains look promising, yet financial performance remains fragile. This is efficiency without prosperity: systemic fragility grows as human capacity shrinks.

This is where the two-dimensional grid fails. Measuring jobs and growth alone suggests a leaner organization, yet the human system underneath becomes weaker. Without an integrity axis, leaders cannot see that rationalization is a false victory: it optimizes away people while corroding the very resilience required for long-term survival.

The case of Ocado:

In early 2025, Ocado, the British online grocery and technology company, announced that it would eliminate around 500 roles in its technology and finance divisions. The decision was not driven by collapsing demand or shrinking operations, but by the increased productivity of engineering teams equipped with AI systems. By automating tasks and streamlining processes, AI had made large portions of human labor redundant. For executives, the move was positioned as a rational step: lower costs, leaner operations, and greater efficiency.

Yet the broader picture reveals why this development exemplifies Rationalize AI. Although sales grew by 14 per cent, Ocado still posted a pre-tax loss of £374.5 million, while its technology sales growth slowed to 10 per cent, down from 18 per cent the year before. In other words, the efficiency gains generated by AI did not translate into sustainable growth or profitability. The company succeeded in cutting jobs and reducing costs, but it did not create new markets, unlock demand, or alter its trajectory of persistent financial losses.

This is the defining paradox of Rationalize AI: by using AI to strip out human labor, companies may succeed in lowering their cost base, but they do not necessarily strengthen their growth engine. Local productivity gains are achieved, but systemic performance remains stagnant or even deteriorates. Rationalize AI delivers leaner organizations, yet it does not deliver prosperity.

This case underscores the risks of mistaking efficiency for progress. In the short term, AI-enabled rationalization may reassure shareholders by showing cost discipline. In the long term, however, it leaves organizations more fragile, with fewer human capabilities to draw on and no new sources of demand to sustain growth. The strategic challenge for leaders is to determine whether they are deploying AI to transform their business or merely to shrink it. The deeper question is whether efficiency that weakens resilience can ever be called progress, or whether it signals an erosion of the very integrity needed for sustainable prosperity.

Optimize AI

AI substitutes for human labor in targeted functions while boosting organizational output. Companies enjoy measurable productivity gains and medium-tier growth. Yet profitability and resilience often remain uncertain, as efficiency masks latent vulnerabilities.


As jobs are sacrificed and output improves, integrity shows the void beneath the numbers. Optimization strips human resilience to the bone, producing brittle organizations addicted to quarterly gains. Only the integrity axis shows why optimization, while rational on paper, undermines the human foundations of sustainable growth.

The case of CrowdStrike:

In 2025, cybersecurity firm CrowdStrike announced a restructuring that revealed the double-edged nature of AI-driven efficiency. The company cut roughly 500 jobs, about 5 percent of its workforce, explicitly citing productivity gains from new AI systems as a key driver. The short-term results were striking: quarterly revenue reached $1 billion, a 25 percent year-over-year increase. Yet the bottom line told a more sobering story, with the company still posting a $92 million loss.

This trajectory illustrates the logic of Optimize AI. Jobs are eliminated, and the organization captures a measurable lift in productivity and revenue. The gains, however, remain fragile. AI delivered cost savings and output growth, but it did not guarantee profitability or long-term stability. Instead, the company now faces the challenge of sustaining performance without eroding the human and organizational capacities that underpin resilience.

This case underscores a pivotal strategic dilemma. AI can indeed optimize operations, but optimization is not the same as transformation. Leaders must decide if they are truly building stronger foundations for growth, or simply hollowing out their organizations in pursuit of short-term gains. Optimize AI highlights the risk of confusing efficiency with prosperity, a path where revenue may rise, but the structural capacity to generate enduring value remains uncertain. The real question is whether optimization strengthens the long-term fabric of the organization, or whether it locks firms into a cycle of fragile gains that sacrifice integrity for speed.

Displace AI

AI replaces labor at scale while fueling new industries, markets, and waves of consumption. Growth is rapid and expansive, but it comes at the cost of dismantling traditional employment structures. The result is high economic expansion coupled with deep social disruption.

On the jobs-and-growth grid, this reads as high-growth success despite job losses. The integrity axis uncovers its true cost: systemic fragility and social disruption. It ultimately reveals that displacement is not just a trade-off between jobs and growth, but a governance failure that sacrifices resilience for expansion.

The case of Accenture:

By late September 2025, Accenture had laid off more than 11,000 people worldwide as part of an accelerated restructuring. The company described this not as a standard cost-cutting program but as a deliberate exit of employees who could not be retrained fast enough for AI centered work. According to the Financial Times, Accenture’s CEO Julie Sweet made the message explicit in a call with analysts: “We are exiting on a compressed timeline people where reskilling, based on our experience, is not a viable path for the skills we need”. She added: “Those we cannot reskill will be exited”.

Accenture referred to this approach as “rapid talent rotation,” a phrase the company uses to describe exiting people quickly when reskilling is not seen as viable.

The severance costs were recorded as part of an $865 million business optimization program. The financial picture tells a story of expansion rather than distress. Quarterly revenue reached $17.6 billion, ahead of the company’s expectations, and full-year revenue rose about 7% to roughly $69.7 billion. New bookings climbed above $21.3 billion in the quarter, and Accenture reported $5.9 billion in AI-related bookings over the full fiscal year. The company has nearly doubled its pool of AI and data specialists to 77,000 since 2023, and reported that more than 550,000 employees have already been trained in generative AI.

Here is the paradox: jobs are being destroyed at scale in the name of AI adoption, yet the stated ambition is not downsizing. Accenture has said it expects overall headcount to increase again in the next fiscal year, not shrink, and that savings from layoffs and divestitures will be reinvested into AI capability, new client delivery models, and talent that aligns with the markets the firm aims to lead.

This exposes the structural tension at the core of Displace AI. Accenture is capturing high-value AI demand at global scale. It is winning multibillion-dollar AI contracts, booking growth, and reinforcing its position as a preferred partner for clients who want to reinvent themselves with intelligent systems. At the very same time, it is rewriting the employment contract inside the firm. Roles are declared obsolete not because the company is failing, but because the company is succeeding in pivoting to AI faster than those workers can be reskilled. The public framing is one of reinvention and opportunity. The lived experience for thousands is forced exit.

The integrity axis reveals why this matters: Can growth that aggressively dismantles established employment structures at such speed and scale claim to be progress if the social cost is externalized to workers who are no longer considered adaptable enough to remain inside the system? That is not a neutral efficiency decision. It is a societal decision about who is allowed to belong in the future.

This case exemplifies the Displace AI archetype with AI systems replacing human labor at scale, driving rapid business expansion. AI becomes the engine of new value creation and market expansion, revenues rise, bookings surge, investor narratives strengthen, the firm accelerates and society absorbs the shock. The question that integrity forces leaders to confront is whether this definition of success is aligned with the social contract between the firm and its employees, in particular when simultaneously claiming to be an inclusive, merit-based workplace that is free from bias and that seeks to foster a workplace culture based on respect and a sense of belonging. A “Great Place to Work,” so to speak.


Assist AI

AI supports workers rather than replacing them, reducing friction in workflows while keeping employment stable. Professionals still perform their core functions, but with fewer administrative burdens. Growth, however, remains incremental. The system is safeguarded, but not reinvented.

On the surface, this situation suggests stability: jobs are preserved and growth remains steady. Yet integrity reveals a subtler erosion. Workers may keep their titles, but their scope for judgment shrinks as AI dictates workflows. Only the integrity axis makes this visible: jobs preserved, growth neutral, but dignity and autonomy partially lost.

The case of the NHS:

By 2025, NHS England began piloting AI-enabled ambient scribing tools designed to relieve general practitioners of the administrative burden of note-taking during consultations. Instead of manually recording symptoms, histories, and treatment plans, doctors could rely on an ambient AI system that listened, transcribed, and structured the conversation into a draft medical note. The promise was straightforward: give doctors more time with patients by letting AI handle the paperwork.

The early results confirmed that administrative time could indeed be reduced. GPs reported spending less time typing and more time maintaining eye contact, explaining diagnoses, or answering patient questions. But while the pilots improved quality of care and preserved the core role of the clinician, they did not produce new demand, new markets, or systemic economic growth. The number of jobs was not cut, but neither was the workforce significantly expanded. Doctors remained indispensable, and AI became an assistive tool rather than a transformative engine.

This illustrates the logic of Assist AI. Jobs are preserved, workflows are improved, and productivity gains appear locally meaningful. Yet the economic impact remains incremental, not expansive. The value lies in quality and efficiency at the margin, not in the creation of entirely new growth trajectories. Assist AI avoids the social disruption of mass job losses, but it equally avoids the disruptive potential of new industry formation.

This case highlights the double-edged nature of assistance as a strategy. When AI is deployed to support rather than supplant, it strengthens human roles and protects professional expertise. But by stopping short of reinvention, it also caps its growth potential. The strategic tension lies in whether AI is being used to protect the current system or to reshape it. Assist AI succeeds in safeguarding jobs and improving service delivery, but it risks entrenching existing limitations rather than overcoming them. The underlying but critical question is to what extent preserving stability without expanding human scope and perspectives amounts to genuine support, or instead quietly narrows autonomy under the appearance of protection.

Enhance AI


AI improves human productivity without cutting jobs, enabling smoother workflows and better services. Professionals are freed from repetitive tasks, but growth remains bounded. This path prioritizes human-centered efficiency, producing incremental but meaningful operational benefits.

At first glance, Enhance AI seems safe: jobs protected, workflows improved. Yet without an integrity perspective, leaders miss a crucial question: are humans freed, or are they being deskilled by over-reliance on AI? Integrity allows us to see whether technology extends human capacity or gradually hollows it out.

The case of Delta Air Lines:

From 2023, Delta said it was using AI to surface procedures quickly for reservations agents and to support pricing, presenting both as part of improving the speed and consistency of customer responses. As reported at the time, the airline has been testing AI that queries its internal policy and fare-rule databases in real time to surface the procedure the agent needs for a specific call, while a separate model has been proposing fare adjustments to human revenue managers rather than publishing prices automatically.

Delta has not presented these AI initiatives as a way to replace gate or call-center staff. Instead, it has framed AI as a way to enhance employees’ ability to serve passengers better and consistently, by giving them AI-surfaced information and recommendations.

“I think the initial foray into AI is on the customer service side”, said CEO Ed Bastian, responding to Morgan Stanley analyst Ravi Shankar’s question about the carrier’s use of the technology. “We’re working with our reservations team to try to help our reservations agents parse the historical policies and questions and things that you may call into a real agent”. This directly supports the employee-in-the-loop reading.

According to the 2024 Delta Difference report, as the airline rolls out advanced technologies it “takes a balanced approach to AI, using it to improve operations and enhance the customer experience while prioritizing our customers’ and employees’ safety, security and trust”. Delta’s headcount fell sharply in 2020 because of the pandemic, but by 2023–25 it had rebuilt to about 100,000 employees again, according to the company’s own disclosures.

Financially, over 2024–25, Delta reported revenue growth in the mid-single digits and highlighted improved operational performance. In 2024, at its Investor Day on November 20, the company told investors it was targeting mid-single-digit revenue growth as part of its differentiated-and-durable plan. In January 2025, when it released its December-quarter and full-year 2024 results, Delta reported record revenue and described “industry-leading operational performance”, with year-on-year revenue growth in that same mid-single-digit range.

This trajectory captures the essence of Enhance AI.

The gains came from smoother workflows, more responsive customer service, and incremental productivity increases, not from AI-driven workforce reductions. Yet the fundamental issue beneath the numbers is the tension between employees being truly empowered and their judgment being narrowed by dependence on machine-generated recommendations. The decisive challenge lies in determining whether technology in such cases expands human capability, or instead quietly deskills the workforce by reducing expertise to the execution of AI-suggested choices.

Accelerate AI

AI accelerates organizational growth while retaining and even amplifying human capacity. Firms preserve jobs, invest in their people, and use AI as a multiplier of productivity and innovation. Growth is significant, but it does not require large-scale workforce displacement.

The jobs and growth axes already show this as a favorable case, but the integrity axis makes explicit why: growth is paired with the preservation of autonomy and dignity. With an integrity axis, Accelerate AI is revealed not just as a good strategy but as ethical leadership, proof that inclusion and prosperity can scale together.

The case of Cisco:

In a climate where many of its peers were trimming headcounts amid rising interest in AI, Cisco’s CEO Chuck Robbins offered a striking divergence. In a recent CNBC interview, Robbins emphatically stated, “I don’t want to get rid of a bunch of people right now,” underscoring Cisco’s strategic choice to harness AI as a productivity multiplier, not a vehicle for downsizing.

This decision is deeply resonant: Cisco’s fiscal Q4 results demonstrated significant gains, with revenue rising 8 percent to $14.7 billion, buoyed by soaring demand for AI infrastructure. The company reported over $2 billion in AI-related orders, more than double its initial target.

These numbers encapsulate the essence of Accelerate AI: Cisco preserves its engineering workforce, even expanding AI development roles, while leveraging the technology to amplify innovation and performance. Rather than displacing talent, AI consolidates it, fueling growth without sacrificing employment.

This story is instructive in its counter-narrative to the efficiency-first mindset. By embedding AI as an enabling tool rather than a replacement, Cisco shows that scaling growth does not require shrinking human capacity. Accelerate AI asks leaders: can we harness AI to elevate our people and generate growth without sacrificing our workforce? Cisco suggests the answer is yes.

The enduring tension is whether such examples mark the beginning of a broader shift toward inclusive growth, or remain exceptional cases in a landscape still dominated by an exclusive efficiency-driven economy.


Augment AI

AI sparks the creation of adjacent roles—engineers, analysts, content creators—that sustain AI-driven lines of business. Growth is steady but limited. The organization reconfigures around new forms of expertise, but markets are not fundamentally transformed. The grid celebrates job creation here, but without integrity it cannot distinguish between jobs that empower and jobs that serve as a crutch for a temporary peak in demand. An integrity axis forces us to ask: do these new roles build enduring capabilities and pathways for human expertise, or do they merely anchor workers to keeping the system running in service of short-term financial performance? Growth alone cannot answer that.

The case of Anthropic:

In 2025, Anthropic, maker of the Claude AI model, announced that it would create more than 100 new jobs across Europe, expanding in cities such as Dublin and London. These roles spanned engineering, research, sales, and business operations, and were positioned as additive hires to sustain the company’s rapid growth in AI services. Unlike rivals that leaned on layoffs or hiring freezes, Anthropic emphasized net job creation as it sought to build out the infrastructure and talent required to compete in a global AI market.

Yet this expansion came with a striking paradox. Even as Anthropic added new categories of expertise orbiting its core AI business, its CEO, Dario Amodei, warned publicly that AI could displace vast numbers of jobs, particularly in routine knowledge work. His remarks, widely reported and countered by NVIDIA’s Jensen Huang, underscored the tension between firm-level augmentation and system-wide disruption.

Financially, Anthropic’s growth was steady rather than transformative. Revenue gains reflected increasing adoption of Claude and its enterprise offerings, but they remained bounded within the competitive dynamics of the AI sector. The hiring drive demonstrated the emergence of new roles linked to the deployment dynamic of AI, but it did not reinvent markets or trigger exponential new demand.

This trajectory reflects the essence of Augment AI. Jobs were created in meaningful numbers, sustaining the company’s evolving ecosystem, but the growth remained incremental. Anthropic layered AI-focused roles onto its business model, turning technology into a catalyst for adjacent expertise rather than systemic reinvention.

This case also illustrates the inherent ambiguity of augmentation. The uncertainty lies less in the existence of new roles than in the conjunctural and opportunistic conditions under which they are created. When augmentation is driven primarily by short-term market demand, it remains fragile, vulnerable to fluctuation, and easily discarded if competitive pressures shift. Without anchoring in structural change, these roles risk becoming temporary adaptations rather than durable transformations.

Augment AI forces leaders to ask not only whether they are creating roles to meet current momentum, but also whether those roles are a structural part of a new core system that sustains long-term growth or merely a fix for immediate competitiveness. The former is true augmentation. The latter is conjunctural augmentation and risks feeding into a broader tide of displacement.

Restructure AI

AI restructures organizations by enabling new internal pathways for talent and skills. Employees transition into new roles as firms align human capital with emerging technological needs. Growth emerges from within, not through market disruption, as companies unlock latent potential in their workforce.

While this cell of the grid highlights job creation and moderate growth, its real value only emerges with an integrity perspective: Restructure AI sustains dignity by enabling reskilling and mobility, showing that technology can evolve with workers rather than against them. This distinguishes between cosmetic job churn and genuine empowerment.

The case of Walmart:

In 2025, Walmart launched a large-scale reskilling program that leveraged AI to transform how frontline employees navigated careers within the company. Rather than relying on automation to reduce headcount, Walmart invested in AI-driven career pathways that helped workers transition into emerging roles such as drone technicians, robotics supervisors, and technical support specialists. AI systems were deployed to analyze skills, identify adjacencies, and recommend reskilling journeys tailored to employees’ backgrounds.

The results have been striking. More than 50,000 cashier roles have been restructured into new, future-oriented positions, while thousands of other associates have accessed new learning opportunities and internal career moves. For individuals, the change has been transformative: one cashier, for example, retrained as a robotics supervisor after receiving a personalized AI recommendation and structured learning guidance. For Walmart, the benefits have been equally significant: internal talent redeployment has reduced turnover costs, enhanced operational resilience, and ensured that the company’s workforce evolves in step with its increasingly automated supply chains and stores.

This is a textbook case of Restructure AI. Jobs are not only preserved but actively created through the restructuring of organizational processes, while business growth is tangible, if moderate. Walmart has unlocked the value of human capital already inside the company, redirecting it toward the capabilities most needed in an AI-driven economy, while pairing this approach with a set of AI-enabled initiatives. Building on this restructuring approach, Walmart has delivered measurable results: digital sales rose 25 percent year over year. Yet these gains, while tangible, have not led to radical expansion into new markets.

The core lesson here is that AI can act as a mechanism of organizational redesign rather than mere automation. By turning internal mobility into a dynamic, AI-enabled process, Walmart demonstrates how companies can avoid the false trade-off between efficiency and employment. Growth in such scenarios does not come from cutting costs or conquering new industries, but from reimagining the structure of work itself. The overarching challenge is whether such restructuring consistently elevates human flourishing, fulfillment and autonomy, or whether it risks being reduced to a managed rotation of labor that serves organizational needs more than individual growth.

Empower AI

AI unlocks entirely new markets and business models, creating jobs and scaling growth dramatically. This is AI at its most expansive: empowering individuals and industries, redefining access and opportunity. Yet it also destabilizes incumbents who rely on sustaining strategies, forcing leaders to adapt or be left behind.

While this scenario is the most celebrated, the integrity axis reminds us that not all empowerment is equal. Are new jobs designed to elevate autonomy, or do they risk locking humans into algorithmic dependence? Integrity is what distinguishes empowerment from manipulation.

The case of Duolingo:

By late 2024, Duolingo had become the most downloaded education app in the world, boasting more than 100 million monthly active users and 8 million paid subscribers, underpinned by AI-driven personalization, gamification, and adaptive learning mechanisms. This meteoric rise exemplifies Empower AI: new markets are unlocked, jobs are created, and growth is expansive.

Behind the numbers lies a broader employment story. As Duolingo scaled, its business growth materialized in tangible investments in human capability, hiring educational designers, community outreach specialists, AI content creators, and engineers to extend the platform beyond language instruction into new areas like music and mathematics. These roles were born not from administrative routines, but from the need to develop and expand Duolingo’s AI-driven learning ecosystem.

The strategic impact is profound. Duolingo did not merely displace labor with automation. Instead, it used AI to empower a new generation of workers, elevating the nature of jobs in education technology while creating value that extended beyond traditional markets. Duolingo has turned AI into a lever for inclusion, learning accessibility, and continuous expansion, starkly illustrating the promise of Empower AI.

This case challenges the belief that AI inherently streamlines or replaces work. For Duolingo, AI did neither. Instead, it sparked a wave of job creation and market expansion. Empower AI forces leaders to consider whether technology should be a tool of displacement or a gateway to human-centered growth. The defining question is whether such empowerment truly expands human agency and creativity, or whether it risks entrenching new forms of algorithmic dependence that erode the very integrity it claims to uphold.

Navigating the Nine Logics of AI

Leaders are not simply choosing between short-term productivity gains and long-term growth outcomes, or attempting to balance both, through AI. They are navigating nine distinct logics of AI deployment, each with its own promise and peril, hinging on integrity being damaged, compromised, or upheld.

As with any framework, it would be futile to expect organizations to sit entirely within a single logic. In practice, companies often run several AI deployments in parallel, embodying multiple logics at once. Besides, none of the embraced logics unfold in a vacuum; regulatory regimes, investor pressures, and labor market dynamics strongly influence which logics become viable.

Yet even within such systemic constraints, executives retain decisive agency in steering how AI is deployed.


The nine archetypes of AI deployment remind us that technology is never neutral. Rather than imposing a strict categorization, each pathway, whether rationalizing costs, optimizing operations, displacing labor, assisting activities and workflows, enhancing capacity, accelerating economic growth, augmenting expertise, restructuring organizations, or empowering human capital, offers guidance for strategic choice and brings the paradoxes to light. AI is celebrated as a driver of efficiency and growth, yet its impact is fractured across competing logics. Some strategies strip out human labor while leaving organizational fragility in their wake. Others preserve or create jobs but cap their growth potential. And a select few unleash expansive new markets while forcing difficult questions about resilience, equity, and sustainability. Integrity alignment turns this paradox into a sharper diagnosis: fragility arises where integrity is low, stagnation where integrity is partial, and sustainable empowerment only where integrity is high.

In essence, AI deployment is not just a technological decision. It is a societal decision. Whether organizations end up leaner or stronger, stagnant or expansive, exclusionary or empowering depends less on the capabilities of the technology than on the intentions of those who wield it. The integrity axis makes this explicit, transforming abstract intentions into measurable questions of whether AI sustains or undermines human autonomy.

For executives, the challenge is to resist the seduction of efficiency alone. Rationalization and optimization may satisfy shareholders in the short term, but without a parallel commitment to empowerment and human development, they risk hollowing out the very foundations of long-term growth. Conversely, paths that invest in people and preserve resilience (the assistive, augmentative, restructuring, or empowering logics) require patience and strategic courage, but they promise outcomes that align profitability with legitimacy and social acceptability. Navigating these challenges with integrity alignment helps identify which paths truly strengthen both prosperity and legitimacy, and which merely defer systemic fragility under the illusion of efficiency.

This is why Artificial Integrity, rather than mere mimicry of human cognition, must guide AI development and implementation, so that systems are not driven solely by raw performance, blind to ethical, social, and moral considerations. Exhibiting integrity, not just intelligence, is the next frontier for AI: making it an intrinsic part of how systems function, aligning them with human values, and fostering approaches that reconcile human growth (jobs preserved or created), business growth, and integrity alignment.

Yet even with the prospect of such development, navigating the nine logics of AI deployment ultimately places the responsibility on executives, who must decide whether they are building organizations that grow only by shedding their human core, or cultivating ones resilient enough to expand by empowering. The answer will define not just the next generation of business leadership, but the social contract between organizations and the societies whose prosperity and work they reshape. Integrity will determine whether that contract is written on fragile ground or on a foundation capable of enduring, with AI.

About the Author

Hamilton Mann is an AI researcher and bestselling author of Artificial Integrity (Wiley). He lectures at INSEAD and HEC Paris, and has been inducted into the Thinkers50 Radar.

The Flawed Assumption Behind AI Agents’ Decision-Making


By Hamilton Mann

As rational as we humans like to think we are, at the moment of making a decision, a whole range of overlapping processes come into play in our minds – and in the most complex ways. Will AI ever be able to reproduce this?

Many organizations implementing AI agents tend to focus too narrowly on a single decision-making model, falling into the trap of assuming a one-size-fits-all decision-making framework, one that follows a typical sequence in any circumstance: from input to research and analysis toward decision, then execution, eventual evaluation and, hopefully, lessons learned.

However, this oversimplifies reality.

Human decision-making is far from uniform: it is far more complex, dynamic, and context-dependent than any single framework suggests. It is fluid and shaped by constraints, biases, urgency, situation, interactions, rationality and, most importantly, irrationality, as suggested by a recent MIT study.

If AI agents are to integrate into organizations, a diverse range of decision-making processes needs to be considered to ensure effective implementation without inadvertently lowering the standard of decision-making.

No Decision Path is One-Size-Fits-All or Naturally Monolithic

The notion that all decisions follow a structured path is a misconception. In reality, the decisions we make rely on multiple decision-making models, depending on circumstances:

1. Intuitive decision-making


This approach relies on instinct and experience rather than extensive research or structured analysis. It is particularly useful in high-stakes, fast-moving environments, where speed is crucial, and there is little time for detailed evaluation. The process typically follows a sequence of trigger recognition, immediate response based on experience, action, and post-factum evaluation.

For example, a venture capitalist may choose to invest in a startup based on intuition alone, even when financial data is incomplete or ambiguous. This form of decision-making is often subconscious, leveraging years of accumulated knowledge to make split-second judgments. Ultimately, this mode is rooted in intuitive reasoning, where experience-based instincts guide rapid, subconscious decisions.

2. Rational-analytical decision-making

In contrast, this approach is data-driven, structured, and systematic. It involves a methodical process of problem identification, data gathering, analysis, comparison of alternatives, decision execution, and performance review.

This model is frequently employed in corporate strategy, risk assessment, and forecasting. For instance, a supply chain management team may analyze historical demand data before adjusting production levels to optimize efficiency and reduce waste. This form of decision-making is grounded in deductive, inductive, causal, and Bayesian reasoning, offering a data-informed path to structured choices.

3. Rule-based and policy-driven decision-making

Some decisions do not require analysis or instinct but instead follow predefined frameworks, regulations, or automation rules. These rule-based decision models are essential in fields such as compliance, risk management, and regulatory environments, where consistency and adherence to policies are paramount.

This decision-making sequence begins with a specific situation, followed by the identification of the applicable rule or policy, its automated or manual enforcement, and subsequent compliance monitoring. An example of this is a bank’s fraud detection system flagging transactions when they exceed a certain monetary threshold and originate from a high-risk geographical location, triggering an alert for further investigation. This approach leverages predefined rules to identify suspicious patterns and ensure consistent and predictable outcomes.
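As a minimal sketch of that rule in code, with the threshold and the high-risk list as invented placeholders rather than any bank’s actual policy:

    # Hypothetical, predefined rule: flag a transaction if it exceeds a
    # monetary threshold AND originates from a high-risk location.
    AMOUNT_THRESHOLD = 10_000           # illustrative value
    HIGH_RISK_LOCATIONS = {"XX", "YY"}  # illustrative country codes

    def flag_transaction(amount: float, origin_country: str) -> bool:
        return amount > AMOUNT_THRESHOLD and origin_country in HIGH_RISK_LOCATIONS

    # Flagged transactions trigger an alert for human investigation; because
    # the rule is fixed, the outcome is consistent and predictable by design.
    assert flag_transaction(12_500, "XX") is True
    assert flag_transaction(12_500, "FR") is False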


4. Emotional and social decision-making

Decision-making is not always about instinct, past experiences, logic, or rules; it can also be shaped by emotional intelligence and social dynamics, while being influenced by personal values. This model plays a vital role in leadership, human resources, and ethical dilemmas, where interpersonal relationships, values, and cultural context influence outcomes.

It typically involves assessing the social or ethical context, weighing the emotional and moral dimensions, forming a decision, acting, and receiving feedback from stakeholders. For instance, a CEO might decide to retain an underperforming employee due to their positive impact on company culture, even if conventional performance metrics suggest otherwise. Here, decision-making draws from moral/ethical and commonsense reasoning, where human values and social context shape the outcome.

5. Heuristic decision-making

This model relies on mental shortcuts developed from past experiences, rather than a comprehensive analysis of all available options. While these shortcuts can be useful in fast-paced environments and when facing uncertainty, they also introduce biases that may lead to suboptimal decisions.

The sequence typically follows trigger recognition, pattern matching, applying a mental shortcut, decision-making, immediate action, and occasional feedback. A classic example is a hiring manager preferring candidates from top-tier universities without thoroughly reviewing all applicants, assuming that institutional reputation correlates directly with job performance. At its core, this approach employs heuristic and commonsense reasoning, leveraging past experiences to navigate present challenges.

6. Collaborative and consensus-based decision-making

Certain decisions require group input, negotiation, and alignment among stakeholders. This approach is common in corporate boards, government policy-making, and high-impact organizational strategies, where multiple perspectives need to be considered.

The process involves identifying the problem, engaging in group discussions, evaluating different perspectives, negotiating to reach consensus, executing the collective decision, and reviewing outcomes. For example, a board of directors may spend weeks deliberating over a long-term business strategy, ensuring that all viewpoints are taken into account before making a final decision. This collective method is enriched by reflective, moral/ethical, and analogical reasoning, enabling decisions that balance multiple perspectives.

7. Crisis and high-stakes decision-making

In high-stakes and crisis situations, decision-makers often operate under severe time constraints, uncertainty, and high risk—conditions that do not allow for prolonged analysis or deliberation. Drawing on Gary Klein’s Recognition-Primed Decision (RPD) model, such contexts reveal how experienced professionals make rapid yet effective decisions by relying on pattern recognition, mental simulation, and intuitive reasoning.

Rather than evaluating multiple alternatives, decision-makers recognize familiar cues, match them to prior experiences, and act on the first workable option that comes to mind. For instance, a cybersecurity team may shut down an entire system at the first sign of intrusion to prevent further damage, without waiting for a full diagnostic. This approach exemplifies how decision-making under pressure fuses abduction, causal reasoning, heuristic shortcuts, and intuition into a streamlined, action-oriented process.
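A rough sketch of this loop, under the simplifying assumption that experience can be reduced to cue-pattern/action pairs; the cybersecurity cases below are invented:

    # Experience base: (recognized cue pattern, action that worked before).
    EXPERIENCE = [
        (frozenset({"intrusion_alert", "lateral_movement"}), "isolate_segment"),
        (frozenset({"intrusion_alert"}), "shut_down_system"),
    ]

    def rpd_decide(cues, simulate):
        """Recognition-primed decision: match cues to prior cases (most
        specific first) and act on the FIRST option that survives a quick
        mental simulation, rather than comparing all alternatives."""
        matches = sorted((c for c in EXPERIENCE if c[0] <= cues),
                         key=lambda c: len(c[0]), reverse=True)
        for _, action in matches:
            if simulate(action):  # "would this plausibly work here?"
                return action     # satisfice: take the first workable option
        return None               # nothing recognized: fall back to deliberation

    # At the first sign of intrusion, shut the system down without full diagnosis.
    assert rpd_decide({"intrusion_alert"}, simulate=lambda a: True) == "shut_down_system"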

These seven decision-making paths, while neither exhaustive nor mutually exclusive, rarely operate in isolation.

Instead, they often overlap, interact, and accumulate, reflecting the cognitive flexibility demanded by context.

This interplay can occur at different speeds, either sequentially or simultaneously, dynamically or in a more structured manner. For instance, an executive facing a high-stakes decision may initially rely on intuition, then switch to a rational-analytical approach to validate their instincts with data, before finally engaging in collaborative decision-making with key stakeholders.

Similarly, a crisis might demand an immediate heuristic or rule-based response, followed by an in-depth analytical review after the fact. This reality challenges the rigid, linear view of decision-making and underscores the need for AI agents capable of fluidly transitioning between different models based on context, urgency, and complexity.
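In outline, such switching amounts to a context router over the paths above. The trigger conditions in the following sketch are illustrative, not a validated policy:

    def choose_decision_path(time_pressure: float, stakes: float,
                             regulated: bool, data_quality: float) -> str:
        """Pick a decision model from context; all thresholds are invented."""
        if regulated:
            return "rule-based"                   # strict adherence dominates
        if time_pressure > 0.8:
            return "crisis / recognition-primed"  # act on recognized patterns now
        if stakes > 0.7 and data_quality > 0.6:
            return "rational-analytical"          # time and data allow analysis
        if stakes > 0.7:
            return "collaborative"                # high stakes, thin data: involve people
        return "heuristic"                        # routine case: fast approximation

    # A crisis response can later be revisited analytically, as described above:
    assert choose_decision_path(0.9, 0.9, False, 0.2) == "crisis / recognition-primed"
    assert choose_decision_path(0.1, 0.9, False, 0.9) == "rational-analytical"

A real agent would also need to re-evaluate this choice as the situation evolves, since the models are applied sequentially or simultaneously rather than once.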


Pattern-Following is Not Decision-Making

AI agents can effectively imitate several types of reasoning, especially those that rely on structured logic, data-driven patterns, and statistical inference. For example, they excel at deductive reasoning, where predefined rules or theories are applied to reach specific conclusions, and inductive reasoning, where generalizations are drawn from large datasets that are foundational to machine learning models. AI also performs well with causal reasoning, especially when trained on time-series data or observational patterns, and is highly capable in Bayesian reasoning, updating probabilities based on new evidence.
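The Bayesian case is the easiest to make concrete. Here is a worked example of Bayes’ rule applied twice, with invented numbers and under the assumption that the two pieces of evidence are conditionally independent given the hypothesis:

    def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
        """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
        evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / evidence

    p = 0.01                         # prior: 1% of cases are fraudulent
    p = bayes_update(p, 0.90, 0.05)  # first red flag observed  -> p ~= 0.154
    p = bayes_update(p, 0.80, 0.10)  # second red flag observed -> p ~= 0.593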

Moreover, AI systems can handle analogical reasoning by identifying similarities across datasets and applying known patterns to new contexts, and they routinely leverage heuristic reasoning, using rule-of-thumb logic to deliver fast, approximate solutions in complex environments.

Yet despite these strengths, AI agents exhibit several persistent limitations that expose the fragile boundaries of their reasoning capabilities. One such issue is their reliance on fixed learning paths, a kind of single-path reasoning that depends heavily on predetermined models.

AI agents are built to follow patterns, but decision-making often breaks patterns. A model trained for rational-analytical decision-making may fail in crisis scenarios requiring instant judgment. When unexpected conditions arise, AI often fails to recognize the need for an alternative mental model or decision logic, thus struggling to dynamically transition between or aggregate different decision-making paths.

This rigidity is exacerbated by a lack of deep contextual understanding. AI agents often fail to distinguish when policies or frameworks should be applied with flexibility, such as in strategic decision-making, or with strict adherence, such as in regulatory compliance. Their ability to sense and respond to nuanced shifts in context, while improving, remains limited and typically requires extensive human intervention. Recent studies reinforce this concern, showing that even advanced AI agents exhibit fixed preferences in risk- and time-based decision scenarios.

Additionally, bias reinforcement poses a critical challenge. Without the capacity for self-reflection or independent judgment, AI agents are prone to over-relying on heuristics, amplifying learned biases, or overlooking ethical implications in their outputs. Without the ability to challenge their own assumptions or course-correct with human-like discernment, they risk misaligning their actions with human values and intended societal outcomes.


These constraints become even more pronounced when examining reasoning types where AI continues to struggle. Abductive reasoning, which involves inferring the most plausible explanation from incomplete or ambiguous data, remains elusive due to the contextual awareness it demands. Commonsense reasoning, while partially approximated in large language models, is often brittle or overly literal, failing to capture the tacit knowledge that humans rely on instinctively. Similarly, moral and ethical reasoning is only beginning to emerge in AI design. While some systems attempt to integrate value-based parameters, they do so in a mechanical way, still far from capturing the depth and subtlety of ethical judgment.

At the outer edges of AI’s current capabilities lie reasoning modes that are inherently human. Intuitive reasoning shaped by gut feeling, lived experience, and emotional resonance is not yet replicable by AI. Likewise, reflective reasoning, the capacity to evaluate and refine one’s own thinking processes, remains extremely limited, requiring a form of metacognition and self-awareness that machines do not possess.

While AI has made impressive strides in simulating structured, data-based reasoning, it still falls short in areas requiring flexibility, contextual nuance, ethical sensitivity, and self-reflective awareness.

Towards Achieving Decision-Making Elasticity: Integrity Over Autonomy

Decision-making is one of the most essential capacities for the evolution of our societies, as it represents our ability to translate intention into action, shaping both ourselves and human society.

Given the current maturity of AI agents, executives must first assess the decision-making models embedded within the AI system, ensuring a clear understanding of its decision-making path and validating that this path is sufficiently reliable for the decisions being delegated.

If sufficient reliability cannot be ensured, organizations must establish clear thresholds for when AI can operate autonomously and when human intervention is required. Additionally, they must proactively design structured approaches for handling the remaining cases outside the AI’s scope, ensuring that human oversight and alternative decision-making mechanisms remain in place to uphold accountability and strategic alignment.
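A minimal sketch of such thresholds, assuming the agent can report a calibrated confidence score and that each decision carries an impact estimate; the names and cut-offs below are illustrative:

    def route_decision(confidence: float, impact: float,
                       auto_conf: float = 0.95, max_auto_impact: float = 0.3) -> str:
        """Decide who decides: the AI acts alone only when it is confident
        AND the stakes are low; everything else involves a human."""
        if confidence >= auto_conf and impact <= max_auto_impact:
            return "execute autonomously"
        if confidence >= auto_conf:
            return "recommend to human"  # confident, but the stakes are too high
        return "escalate to human"       # uncertain: the human decides

Routing the residual cases to a structured human path in this way is what keeps accountability and strategic alignment intact, rather than forcing the model to guess.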

Achieving a level of decision-making elasticity requires a paradigm shift, one where intelligence alone cannot ensure adaptability, contextual awareness, or responsible decision-making.

Researchers have recently developed context-aware neural architectures that begin to emulate high-level cognitive flexibility, one of the foundational steps toward integrity-led reasoning in AI.

Moving forward, the key to unlocking decision-making in AI lies in a new frontier—that of mimicking integrity, not just intelligence, enabling AI systems to:

Assess the Right Decision-Making Model(s) for Each Context

Should this be a rational analysis? A fast crisis response? A rule-based compliance check? A combination of the first two, the last two, or something else? For AI, it means being capable of true questioning as an act of autonomous ethical reflection, initiating inquiry driven by internal unease, contradiction, or ethical conflict, and challenging flawed logic and dangerous assumptions.

Maintain Consistency While Allowing Flexibility

Can an AI agent detect when strict rules should be applied versus when nuance is needed regarding human values and social norms? For AI, it means developing the capacity to interpret context, assess ethical dimensions, and exercise judgment beyond binary logic, bridging the gap between rigid instruction and human-centered understanding.

Recognize When to Seek Human Input

Can an AI agent recognize when uncertainty or implications are too high and defer decisions to humans? For AI, it means having the autonomy to take the initiative and engage humans as collaborators.

Altogether, these three characteristics move AI beyond intelligence toward integrity.

Artificial Integrity is the new frontier: making AI agents integrity-led in context-aware decision-making, including social, ethical, and moral reasoning, and therefore able to adapt dynamically across diverse decision-making frameworks.

About the Author

Hamilton Mann is a tech executive, “Digital for Good” pioneer, and originator of artificial integrity. He leads digital transformation at Thales, teaches at INSEAD, HEC, and EDHEC, writes for global outlets, and hosts “The Hamilton Mann Conversation” podcast. His 2024 book is Artificial Integrity (Wiley).

References
  • Berekméri, E., & Zafeiris, A. (2020). “Optimal collective decision making: Consensus, accuracy and the effects of limited access to information”. Scientific Reports, 10, Article 16997. https://doi.org/10.1038/s41598-020-73853-z
  • Fresh Perspectives. (2014, October 14). “Recognition-primed decision model – Gary Klein on Fresh Perspectives” [Video]. YouTube. https://www.youtube.com/watch?v=_BIMU8zPcrM
  • Ganji, G., & Zarifhonarvar, A. (2025). “Understanding AI agents’ decision-making: Evidence from risk and time preference elicitation”. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5154002
  • Gelman, A. (2011). “Induction and deduction in Bayesian data analysis”. Rationality, Markets and Morals, 2, 67–78.
  • Hasson Marques, R., Violant-Holz, V., & Damião da Silva, E. (2024). “Emotions and decision-making in boardrooms—a systematic review from behavioral strategy perspective”. Frontiers in Psychology, 15, Article 1473175. https://doi.org/10.3389/fpsyg.2024.1473175
  • Islam, S., Haque, M. M., & Rezaul Karim, A. N. M. (2024). “A rule-based machine learning model for financial fraud detection”. International Journal of Electrical and Computer Engineering, 14(1), 759–71. https://doi.org/10.11591/ijece.v14i1.pp759-771
  • Jadaun, P., Cui, C., Liu, S., & Incorvia, J. A. C. (2022). “Adaptive cognition implemented with a context-aware and flexible neuron for next-generation artificial intelligence”. PNAS Nexus, 1(5), pgac206. https://doi.org/10.1093/pnasnexus/pgac206
  • Keswani, V., Conitzer, V., Sinnott-Armstrong, W., Nguyen, B. K., Heidari, H., & Schaich Borg, J. (2025). “Can AI model the complexities of human moral decision-making? A qualitative study of kidney allocation decisions”. arXiv preprint. https://arxiv.org/abs/2503.00940
  • Mann, H. (2024). Artificial integrity: The paths to leading AI toward a human-centered future. Wiley. https://www.wiley.com/en-us/
  • Siqi-Liu, A., Egner, T., & Woldorff, M. G. (2022). “Neural dynamics of context-sensitive adjustments in cognitive flexibility”. Journal of Cognitive Neuroscience, 34(3), 480–94. https://doi.org/10.1162/jocn_a_01813
  • Social Science Bites. (2022, August 1). “Gerd Gigerenzer on decision making” [Audio podcast episode]. In Social Science Bites. Social Science Space. https://www.socialsciencespace.com/2022/08/gerd-gigerenzer-on-decision-making/
  • Zander, T., Horr, N. K., Bolte, A., & Volz, K. G. (2015). “Intuitive decision making as a gradual process: Investigating semantic intuition-based and priming-based decisions with fMRI”. Brain and Behavior, 6(1), e00420. https://doi.org/10.1002/brb3.420
  • Zewe, A. (2024, April 19). “To build a better AI helper, start by modeling the irrational behavior of humans”. MIT News.

Artificial Integrity


By Hamilton Mann

Much has been said about how to instill integrity into AI-generated content. So far, the focus has been on policing training data or filtering output, but wouldn’t it be better if the AI system itself had its own integrity at its core? Hamilton Mann puts the case.

Warren Buffett famously said, “In looking for people to hire, look for three qualities: integrity, intelligence, and energy. And if they don’t have the first, the other two will kill you.”

This principle equally applies to AI systems—because AI systems are not just tools. 

So-called “tools” are deterministic; they follow a clear path from creation to usage, through degradation and, ultimately, obsolescence. In contrast, AI, especially machine learning systems, is fundamentally different. It does not follow this trajectory, because it is not static; it learns over time through interaction with data. Systems that use techniques like reinforcement learning or deep learning continuously refine themselves based on new input, making them more akin to dynamic entities that continuously evolve. 

No two AI systems function identically if they are exposed to different data streams or used in varying contexts. This sets AI apart from traditional tools, which have deterministic functions that do not change from within. This indeterministic quality of AI makes it essential to not just be developed, but continuously led effectively and responsibly. 

As AI systems increasingly take on critical roles across healthcare, education, transportation, finance, and public safety, systems capable only of mimicking a form of intelligence, consuming enormous computational energy with no form of integrity embedded in their design, represent a major failure. 

While AI systems can process data quickly, many do not inherently consider whether their actions are aligned with doing the right thing. 

They are like the engine and GPS of a car; the engine provides the power to get you anywhere quickly and efficiently, while the GPS intelligently calculates the fastest or most efficient route to your destination. The car can analyze road conditions, traffic, and distances, making real-time decisions to optimize the journey. However, intelligence and energy alone don’t consider whether the chosen path is safe, legal, ethical, moral, or socially acceptable; it just focuses on getting there efficiently, whatever “getting there” means. 

Some AI systems have taken steps to reduce harmful biases in their responses by training them on diverse datasets and continuously fine-tuning them to avoid producing unethical outputs. 

However, this is still an ongoing challenge. Even among the best image generation applications powered by GenAI, biases still persist, such as when these tools suggest image modifications that reflect stereotypical or sexist cultural clichés, which can offend certain populations and perpetuate discriminatory biases. 


Not to mention the near-perfect imitation of a person’s identity traits and characteristics made possible by certain systems, without any verification, prevention, or restriction, leading to what we call deepfakes, with severe consequences for individuals’ reputations, privacy, and safety, as well as broader societal harms such as misinformation, political manipulation, and fraud. Nor is this a marginal phenomenon: the global market for AI-generated deepfakes is expected to reach US$79.1 million by the end of 2024, and is anticipated to reach US$1,395.9 million by 2033, a CAGR of 37.6 per cent. 

We can assume that an AI system that we use in our daily life has been designed to align with broadly accepted ethical values. However, as its value system is shaped by its training data, it does not necessarily reflect cultural ethical norms. It does not “learn” ethical norms dynamically after deployment in the way a system with integrity might. It is updated periodically by its developers to improve its alignment with ethical values, but it does not adapt autonomously to changing ethical contexts. It lacks the autonomous reinforcement learning system where it could continuously learn and improve its behavior based on ethical intelligence, moral reasoning, and social intelligence without human intervention. 

While some AI systems can explain certain processes or decisions, many AI systems cannot fully explain the decision-generating process (i.e., how they generate specific responses). Those based on machine learning, and even more so those based on more complex models like deep learning, are often opaque to users and operate as “black boxes.” While these systems may produce accurate results, users, those affected by the systems, and even developers often cannot fully explain how specific decisions or predictions are made. This lack of transparency can lead to several critical issues, particularly when they are used in sensitive areas such as healthcare, criminal justice, or finance.  

Many AI systems used in daily life often lack the ability to autonomously adapt to evolving ethical contexts. These systems do not necessarily reflect the cultural or ethical norms of the diverse societies in which they operate. As a result, this creates potential misalignment with local values and societal expectations, leading to consequences such as rendering cultural aspects invisible or making ethically questionable decisions. While these AI systems may be periodically updated by developers to improve their alignment with ethical values, they still lack the capability to dynamically learn and adapt to new ethical standards in real time. This static approach to ethical adaptation leaves AI systems vulnerable to ethical lapses in fast-changing environments, especially when they are used in global and culturally diverse settings.  

Some GenAI systems, such as ChatGPT, are designed to provide useful information, but true Artificial Integrity would involve a higher degree of consistency in ensuring that all information provided is reliable, verifiable with sources, and fully respects copyright of any kind, so as not to infringe on anyone’s intellectual property. 

Responsibility in AI means nothing less than ensuring that AI systems operate with integrity over intelligence: prioritizing fairness, safeguarding human values, and upholding societal imperatives ahead of raw capability. 

Can AI demonstrate Artificial Integrity?  


This goes beyond ethical guidelines. It represents a self-regulating quality embedded within the AI system itself. Artificial Integrity is about incorporating ethical principles into AI design to guide its functioning and outcomes, much like how human integrity guides behavior and impact even without external oversight, to mobilize intelligence for good. 

It fills the critical gap that ethical guidelines alone cannot address by enabling several important shifts: 

Shifting from inputs to outcomes:  

  • AI ethical guidelines are typically rules, codes, or frameworks established by external entities such as governments, organizations, or oversight bodies. They are often imposed on AI systems from the outside as an input, requiring compliance without being an integral part of the system’s core functioning. 
  • Artificial Integrity is an inherent, self-regulating quality embedded within the AI system itself. Rather than merely following externally imposed rules, an AI with integrity “understands” and automatically incorporates ethical principles into its decision-making processes. This internal compass ensures that the AI acts in line with ethical values even when external oversight is minimal or absent, maximizing the delivery of integrity-led outcomes. 

Shifting from compliance to core functioning:  

  • AI ethical guidelines focus on compliance and adherence. AI systems might meet these guidelines by following a checklist or performing certain actions when prompted. However, this compliance is often reactive and surface-level, requiring monitoring and enforcement. 
  • Artificial Integrity represents a built-in core function within the AI. It operates proactively and continuously, guiding decisions based on ethical principles without needing to refer to a rule book. It’s similar to how human integrity guides someone to do the right thing even when no one is watching.

Shifting from fixed stances to contextual sensitivity: 

  • AI ethical guidelines are often rigid and can struggle to account for nuanced or rapidly changing situations. They are typically designed for broad applicability and might not adapt well to every context an AI system encounters. 
  • Artificial Integrity is adaptable and context-sensitive, allowing AI to apply ethical reasoning dynamically in real-time scenarios. An AI with integrity would weigh the ethical implications of different options in context, making decisions that align with core values rather than rigidly applying rules that may not fully address the situation’s complexity.

Shifting from reactive to proactive decision-making:  

  • AI ethical guidelines are often applied reactively, after a potential issue or ethical violation is identified. They are used to correct behavior or prevent repeated errors. However, by the time these guidelines come into play, harm may have already occurred. 
  • Artificial Integrity operates proactively, assessing potential risks and ethical dilemmas before they arise. Instead of merely avoiding punishable actions, an AI with integrity seeks to align every decision with ethical principles from the outset, minimizing the likelihood of harmful outcomes.

Shifting from enforcement to autonomy:  

  • AI ethical guidelines require enforcement mechanisms, like audits, regulations, or penalties, to ensure that AI systems adhere to them. The AI doesn’t inherently prioritize these rules. 
  • Artificial Integrity autonomously enforces its ethical standards. It doesn’t require external policing, because its ethical considerations are intrinsic to its decision-making architecture. This kind of system would, for example, refuse to act on commands that violate fundamental ethical principles, even without explicit human intervention.

This goes beyond AI guardrails.  

If we continue our analogy with the car, the role of integrity systems does not rest solely on rules set by humans that others must comply with, such as the traffic code or the law. 

In the context of a car, internal systems play a role in ensuring safe and responsible operation. Components such as the steering, braking, and stability control systems are designed to maintain the vehicle’s functionality and safety, even when human judgment or conditions falter. These systems don’t operate ethically in a human sense but are built to adhere to predetermined safety principles, ensuring that the car stays within its intended operational boundaries and minimizes risk.

In the context of AI, the mechanisms designed to ensure ethical, safe, and trustworthy AI outputs are commonly referred to as guardrails. These mechanisms, while foundational, exhibit limitations that highlight the need for a transformative shift towards an approach grounded in Artificial Integrity. 

Current guardrails such as content filters, output optimizers, process orchestrators, and governance layers aim to identify, correct, and manage issues in AI outputs while ensuring compliance with ethical standards. 

Content filters function by detecting offensive, biased, or harmful language, but they often rely on static, predefined rules that fail to adapt to complex or evolving contexts. Output optimizers address errors identified by filters, refining AI-generated responses, yet their reactive nature limits their ability to anticipate problems before they arise. Process orchestrators coordinate iterative interactions between filters and optimizers, ensuring that outputs meet thresholds, but these systems are resource-intensive and prone to delivering suboptimal results if corrections are capped. Governance layers provide oversight and logging, enabling accountability, but they depend heavily on initial ethical frameworks, which can be rigid and prone to bias, particularly in unanticipated scenarios.


Despite their contributions, these guardrails expose critical gaps in the broader mission to create ethical AI systems. Their reactive design means they address problems only after they occur, rather than preventing them. They lack the contextual awareness necessary to navigate nuanced or situational ethics, which often leads to outputs that are ethically sound in isolation but problematic in context. They rely heavily on static, human-defined standards, which risks perpetuating systemic biases rather than challenging or correcting them. Furthermore, their iterative processes are computationally intensive, raising concerns about energy inefficiency and scalability in real-world applications.

The limitations of these mechanisms point to the need for a new paradigm that embeds integrity-led reasoning into the core of AI systems. 

Artificial Integrity represents this shift by moving beyond the rule-based constraints of guardrails and the static constraints of ethical guidelines to systems capable of proactive ethical reasoning, contextual awareness, and dynamic adaptation to evolving societal norms.

Unlike existing AI systems, Artificial Integrity allows AI to anticipate ethical dilemmas and adapt its outputs to align with human values, even in complex or unforeseen situations. By focusing on contextual understanding, AI systems with Artificial Integrity can make nuanced decisions that balance ethical considerations with operational goals, avoiding the pitfalls of rigid compliance models. 

Artificial Integrity also addresses the pervasive issue of bias by enabling systems to self-evaluate and refine their ethical frameworks based on continuous learning. This adaptability ensures that AI systems remain aligned with diverse user needs and societal expectations rather than reinforcing pre-existing inequalities.

By embedding these safeguards into the AI’s core logic, Artificial Integrity eliminates the inefficiencies of iterative guardrail processes, delivering outputs that are ethically sound and resource-efficient in real time.

The transition from guardrails and ethical guidelines to Artificial Integrity is not just an operational enhancement but a new AI frontier.

While current guardrails and ethical guidelines approaches provide essential protections, they fall short in addressing the complexities of AI’s societal impact. Artificial Integrity bridges this gap, creating systems that are not only intelligent but also inherently integrity-driven in alignment with human values. 

This evolution is crucial to ensuring that AI systems contribute positively to society, reflecting the principles of fairness, accountability, and long-term ethical, moral, and social responsibility. 

Without integrity embedded at the core, the risks and externalities posed by unchecked machine intelligence make AI systems unsustainable and render society even more vulnerable, whatever positive aspects may coexist alongside them. 

Integrity in AI is like the steering and braking systems of a car, which ensure that the vehicle, no matter its power, stays on the right path and avoids harmful, dangerous, or illegal situations. 

While computational intelligence might suggest taking a shortcut down a one-way street to save time, integrity ensures that the AI follows the rules, just as a car must follow the rules of the road, and prioritizes safety over efficiency. 

Integrity would keep the car from speeding through red lights or driving recklessly, even if it’s the fastest way. Integrity would ensure that the system makes ethical decisions, even if they are less efficient or less profitable, prioritizing fairness, safety, and the well-being of those affected. 

The question is not how intelligent AI can become, whether through calls for artificial superintelligence or artificial general intelligence. No amount of intelligence can replace integrity. 

The question is how we can ensure that AI exhibits Artificial Integrity—a built-in capacity to function with ethical intelligence, moral intelligence, and social intelligence, aligned with human values and guided by principles that prioritize fairness, safety, and societal considerations. In so doing, it exhibits context-sensitive reasoning, both ex-ante (proactively) and ex-post (reflectively), as it learns from real-world interactions, ensuring that its outputs and outcomes are integrity-led first, and intelligent second. 

This means that integrity-led steering mechanisms should be part of the code, the training processes, and the overall architecture of the AI, not just ethical guidelines on paper, websites, or in committee discussions. In this way, they become intrinsic to the functioning of the AI, rather than being applied separately or retroactively. 


Without the capability to exhibit a form of integrity, AI would become a force whose evolution is inversely proportional to its adherence to values and its regard for human agency and well-being. 

Just as it is not sheer engine power that grants autonomy to a car, nor to a plane, so it is not the mere increase of artificial intelligence that will guide the progress of AI that we need in order to foster a better future in society. 

Why should organizations care? 

Companies have long recognized that brand reputation and customer loyalty depend on uncompromising, integrity-driven social proof; it is a do-or-die imperative. 

The entire history of business is filled with examples of integrity lapses that led “Achilles-type” companies to collapse, such as Enron, Lehman Brothers, WorldCom, Arthur Andersen, and, more recently, WeWork, Theranos, and FTX. 

Yet, as businesses integrate AI into their operations, from customer service to marketing and decision-making, all eyes are fixed on the promise of productivity and efficiency gains, and many overlook a critical factor: the integrity of their AI systems’ outcomes. 

What could be more irresponsible? Without this, companies face considerable risks, from regulatory scrutiny to legal repercussions, to brand reputation erosion, to potential collapse. 

The rule in business has always been performance, but performance achieved through amoral behavior is neither profitable nor sustainable. 

The excitement and rush toward AI is no excuse for irresponsibility; quite the opposite. 

Relying on AI only makes responsible sense if the system is built with Artificial Integrity, ensuring it delivers performance while being fundamentally guided by integrity first—especially in outcomes that may, more often than we think, be life-altering. 

To systematically address the challenges of Artificial Integrity, organizations can adopt a framework structured around three pillars: the Society Values Model, the AI Core Model, and the Human and AI Co-Intelligence Model.  

These pillars reinforce one another, each focusing on a different aspect of integrity, from AI conception to real-world application. 

The Society Values Model revolves around the core values and integrity-led standards that an AI system is expected to uphold. This model demands that organizations start to consider doing the following:  

  • Clearly define integrity principles that align with human rights, societal values, and sector-specific regulations to ensure that the AI’s operation is always responsible, fair, and sustainable. 
  • Consider broader societal impacts, such as energy consumption and environmental sustainability, ensuring that AI systems are designed to operate efficiently and with minimal environmental footprint, while still maintaining integrity-led standards. 
  • Embed these values into AI design by incorporating integrity principles into the AI’s objectives and decision-making logic, ensuring that the system reflects and upholds these values in all its operations while optimizing its behavior in prioritizing value alignment over performance. 
  • Integrate autonomous auditing and self-monitoring mechanisms directly into the AI system, enabling real-time evaluation against integrity-led standards and automated generation of transparent reports that stakeholders can access to assess compliance, integrity, and sustainability (a minimal sketch of such a mechanism follows below). 

This is about building the Outer perspective of the AI systems. 

The AI Core Model addresses the design of built-in mechanisms that ensure safety, explicability, and transparency, upholding the accountability of the systems and improving their ability to safeguard against misuse over time. Key components may include:  


  • Implementing robust data governance frameworks that not only ensure data quality but also actively mitigate biases and ensure fairness across all training and operational phases of the AI system. 
  • Designing explainable and interpretable AI models that allow stakeholders, both technical and non-technical, to understand the AI’s decision-making process, increasing trust and transparency. 
  • Establishing built-in safety mechanisms that actively prevent harmful use or misuse, such as the generation of unsafe content, unethical decisions, or bias amplification. These mechanisms should operate autonomously, detecting potential risks and blocking harmful outputs in real time. 
  • Creating adaptive learning frameworks where the AI is regularly retrained and updated to accommodate new data, address emerging integrity concerns, and continuously correct any biases or errors relative to the value model that may occur over time (a toy sketch of such continuous correction follows below). 

This is about building the Inner perspective of the AI systems.  

The Human and AI Co-Intelligence Model emphasizes the symbiotic relationship between humans and AI, highlighting the need for AI systems to function with a balance between “Human Value Added” and “AI Value Added”, where the synergy between human and technology redefines the core design of our society while preserving societal integrity. 

Such systems would be able to function across four distinct operating modes: 

Marginal Mode

In the context of Artificial Integrity, Marginal Mode refers to situations where neither human input nor AI involvement adds meaningful value. These are tasks or processes that have become obsolete, overly routine, or inefficient to the point where they no longer contribute positively to an organization’s or society’s goals. In this mode, the priority is not to use AI to enhance human capabilities, but to identify areas where both human and AI involvement has ceased to add value. 

One of the key roles of Artificial Integrity in Marginal Mode is the proactive detection of signals indicating when a process or task no longer contributes to the organization. For example, if a customer support system’s workload drastically decreases due to automation or improved self-service options, AI could recognize the diminishing need for human involvement in that area, helping the organization to take action to prepare the workforce for more value-driven work.

AI-First Mode 

Here, AI’s strength in processing vast amounts of data with speed and accuracy takes precedence over the human contribution. Artificial Integrity would ensure that, even in these AI-dominated processes, integrity-led standards like fairness and cultural context are embedded.

When Artificial Integrity prevails, an AI system that analyzes patient data to identify health trends would be able to explain how it arrives at its conclusions (e.g., a recommendation for early cancer screening), ensuring transparency. The system would also be designed to avoid bias—for example, by ensuring that the model considers diverse populations, ensuring that conclusions drawn from predominantly one demographic group don’t lead to biased or unreliable medical advice.

Human-First Mode

This mode prioritizes human cognitive and emotional intelligence, with AI serving in a supportive role to assist human decision-making. Artificial Integrity ensures that AI systems here are designed to complement human judgment without overriding it, protecting humans from any form of interference with the healthy functioning of their cognition, such as avoiding influences that exploit vulnerabilities in our brain’s reward system, which can lead to addiction. 

In legal settings, AI can assist judges by analyzing previous case law, but should not replace a judge’s moral and ethical reasoning. The AI system would need to ensure explainability, by showing how it arrived at its conclusions while adhering to cultural context and values that apply differently across regions or legal systems, while ensuring that human agency is not compromised regarding the decisions being made. 

Fusion Mode

This is the mode where Artificial Integrity involves a synergy between human intelligence and AI capabilities, combining the best of both worlds.

In autonomous vehicles operating in Fusion Mode, AI would manage a vehicle’s operations, such as speed, navigation, and obstacle avoidance, while human oversight, potentially through emerging technologies like brain-computer interfaces (BCIs) would offer real-time input on complex ethical dilemmas. For instance, in unavoidable crash situations, a BCI could enable direct communication between the human brain and AI, allowing ethical decision-making to occur in real time, blending AI’s precision with human moral reasoning. These kinds of advanced integrations between human and machine will require Artificial Integrity at its highest level of maturity. Artificial Integrity would ensure not only technical excellence but also ethical, moral, and social soundness, guarding against the potential exploitation or manipulation of neural data and prioritizing the preservation of human safety, autonomy, and agency.  

Finally, Artificial Integrity systems would be able to perform in each mode, while transitioning from one mode to another, depending on the situation, the need, and the context in which they operate. 

Considering the Marginal Mode (where only limited AI and human contribution is required—think of it as “less is more”), AI-First Mode (where AI takes precedence over human intelligence), Human-First Mode (where human intelligence takes precedence over AI), and Fusion Mode (where a synergy between human intelligence and AI is required), the Human and AI Co-Intelligence Model ensures that: 


  • Human oversight remains central in all critical decision-making processes, with AI serving to complement human intelligence rather than replace it, especially in areas where ethical judgment and accountability are paramount. 
  • AI usage promotes responsible and integrity-driven behavior, ensuring that its deployment is aligned with both organizational and societal values, fostering an environment where AI systems contribute positively without causing harm. 
  • AI usage establishes continuous feedback loops between human insights and AI learning, where these inform each other’s development. Human feedback enhances AI’s integrity-driven intelligence, while AI’s data-driven insights help refine human decision-making, leading to mutual improvement in performance and integrity-led outcomes. 
  • AI systems are able to perform in each mode, while transitioning from one mode to another, depending on the situation, the need, and the context in which they operate.

Reinforced by the cohesive functioning of the two previous models, the Human and AI Co-Intelligence Model reflects the interrelations, dependencies, mediation, and connectedness between humans and AI systems. 

This is the aim of Artificial Integrity. 

Systems designed with this purpose will embody Artificial Integrity, emphasizing AI’s alignment with human-centered values. 

This necessitates a holistic approach to AI development and deployment, considering not just AI’s capabilities but its impact on human and societal values. It’s about building AI systems that are not only intelligent but also understand the broader implications of their actions. Such a question is not just a technological one. With the interdisciplinary dimensions it implies, it is one of the most crucial leadership challenges.  

Ultimately, the difference between intelligent-led and integrity-led machines is simple: the former are designed because we could, while the latter are designed because we should. 

Concrete applications include: 

Hiring and recruitment 

  • Case: AI-powered hiring tools risk replicating biases if they are purely data-driven without considering fairness and inclusivity. 
  • Artificial Integrity systems would proactively address potential biases (ex-ante) and evaluate the fairness of its outcomes (ex-post), making fair, inclusive hiring recommendations that respect diversity and equal opportunity values. 

Ethical product recommendations and consumer protection 

Insurance claims processing risk assessment 

  • Case: AI systems in insurance might prioritize cost-saving measures, potentially denying fair claims or overcharging based on demographic assumptions. 
  • Artificial Integrity systems would consider the fairness of its risk assessments and claims decisions, adjusting for ethical standards and treating clients equitably, with ongoing ex-post analysis of claims outcomes to refine future assessments. 

Supply chain ethical sourcing and sustainability 

  • Case: AI systems in supply chain management may optimize costs but overlook ethical concerns around sourcing, labor practices, and environmental impact. 
  • Artificial Integrity systems would prioritize suppliers that meet ethical labor standards and environmental sustainability criteria, even if they are not the lowest-cost option, conducting ex-ante ethical evaluations of sourcing decisions and tracking outcomes ex-post to assess long-term sustainability. 

Content moderation and recommendation algorithms 

  • Case: AI systems on social platforms often prioritize engagement, which can lead to the spread of misinformation or harmful content. 
  • Artificial Integrity systems would prioritize user well-being and community safety over engagement metrics. They would preemptively filter content that could be harmful or misleading (ex-ante) and continually learn from flagged or removed content to improve their ethical filtering (ex-post). 

Self-harm detection and prevention 

  • Case: AI systems may encounter users expressing signs of distress or crisis, where insensitive or poorly chosen responses could exacerbate the situation. Some users may express thoughts or plans of self-harm in interactions with AI, where a standard system might lack the ability to recognize or appropriately escalate these red flags. 
  • Artificial Integrity systems would be equipped to recognize such red-flag reactions, taking proactive steps to alert human supervisors or direct the user to crisis intervention resources, such as helplines or mental health professionals. Ex-post data reviews would be critical to improve the AI’s sensitivity in recognizing distress cues and responding safely. 


Intelligence alone can easily stray off course, risking harm or unintended consequences. 

Artificial Integrity over intelligence has become crucial from the moment AI systems, like Google DeepMind’s AlphaGo, demonstrated capabilities that far exceed human prediction or control. 

As we stand on the verge of robotic intelligence (RI), whether AI systems can exhibit integrity over intelligence is a critical question that will shape the course of human history. 

Artificial Integrity represents the new AI frontier and a critical path to creating a better future for all. 

About the Author

Hamilton Mann is a tech executive, “Digital for Good” pioneer, keynote speaker, and the originator of the concept of artificial integrity. Mann serves as the Group Vice President of Digital Marketing and Digital Transformation at Thales. He is also a senior lecturer at INSEAD, HEC Paris, and EDHEC Business School, and mentors at the MIT Priscilla King Gray (PKG) Center. He writes regularly for Forbes and Les Echos, and has published articles about AI and its societal implications in prominent academic, business, and policy outlets such as Stanford Social Innovation Review (SSIR), Knowledge@Wharton, Dialogue Duke Corporate Education, INSEAD Knowledge, INSEAD TECH TALK X, I by IMD, and the Harvard Business Review France. He hosts The Hamilton Mann Conversation, a “Masterclass” podcast on Digital for Good. Mann was inducted into the Thinkers50 Radar as one of the 30 most prominent rising business thinkers globally for pioneering “Digital for Good”. He is the author of the book Artificial Integrity (Wiley, 2024).

Brain-Machine Synchrony: A New Era of AI-Supported Human Collaboration and Societal Transformation https://www.europeanbusinessreview.com/brain-machine-synchrony-a-new-era-of-ai-supported-human-collaboration-and-societal-transformation-2/ https://www.europeanbusinessreview.com/brain-machine-synchrony-a-new-era-of-ai-supported-human-collaboration-and-societal-transformation-2/#respond Wed, 11 Sep 2024 08:01:31 +0000 https://www.europeanbusinessreview.com/?p=212213 By Hamilton Mann, Cornelia Walther, and Michael Platt This paper introduces the concept of “brain-machine synchrony,” exploring the potential alignment of human brain waves and physiological responses with AI systems to […]


By Hamilton Mann, Cornelia Walther, and Michael Platt

This paper introduces the concept of “brain-machine synchrony,” exploring the potential alignment of human brain waves and physiological responses with AI systems to build trust and enhance collaboration. Heightened physiological synchrony within human teams fosters improved cooperation, communication, and trust. Extending this phenomenon to human-AI interactions invites the hypothesis that aligning neural and machine processes could similarly enhance symbiotic partnerships. To support this thesis, we draw parallels between established examples of “collective cognition” in AI and well-documented neural synchrony facilitating human coordination. The burgeoning field of brain-computer interfaces offers a promising platform for more direct and nuanced communication between the human brain and AI systems.

Introduction

Drawing inspiration from human brain function, we advocate for implementing rhythmic information transmission protocols within AI systems. This means designing AI to process and communicate information in rhythmic patterns that align with natural human brain activity. By harnessing the innate advantages of human physiological rhythms, we posit the utilization of verbal synchrony as a foundational principle in AI training. This approach encourages AI to adopt language patterns congruent with human communication norms, thereby fostering a sense of trust and rapport between AI and humans by aligning AI communication patterns with human cognition. Research suggests that such congruence can significantly enhance user trust and engagement.


Furthermore, insights from team synchrony research emphasize the importance of synchronized behaviors in fostering cooperation and collaboration among individuals. This highlights the potential benefits of integrating synchrony into AI-human interactions. We contend that aligning AI behavior with human expectations and social cues will enable more intuitive and seamless human-machine interactions, thus opening opportunities for an advanced human-centered AI experience, aligned with the Artificial Integrity concept coined by Hamilton Mann.

We further explore how AI systems embodied in robots can emulate human synchrony through external physiological cues such as movement and facial expressions. Systematic synchronization of autonomous systems and their human counterparts through mirroring is offered as a potential mechanism to build trust and cooperation, similar to how it occurs between humans. Despite these tantalizing opportunities, we highlight the imperative for an ethical approach, emphasizing consideration of privacy, security, and human agency. The proposed frontier of AI resonating with human brains holds remarkable potential benefits if approached with prudence and careful consideration, highlighting the pivotal role of human guidance in steering these aspirations responsibly.

Mirroring Human Nature


The remarkable evolution of Artificial Intelligence (AI) systems represents a paradigm shift in the relationship between humans and machines. This transformation is evident in the seamless interactions facilitated by these advanced systems, where adaptability emerges as a defining characteristic. This adaptability resonates with the fundamental human capacity to learn from experience and predict human behavior.

One facet of AI that aligns closely with human cognitive processes is Reinforcement Learning (RL). RL mimics the human learning paradigm by allowing AI systems to learn through interaction with an environment, receiving feedback in the form of rewards or penalties. This process closely mirrors how humans learn from experiences and adjust behavior based on outcomes. Notably, RL contributes to the adaptability of AI systems, enabling them to navigate complex and dynamic environments with a capacity for continuous improvement.

By contrast, Large Language Models (LLMs) play a crucial role in pattern recognition, capturing the intricate nuances of human language and behavior. These models, such as ChatGPT and BERT, excel at understanding contextual information, grasping the subtleties of language, and predicting user intent. Leveraging vast datasets, LLMs acquire a comprehensive understanding of linguistic patterns, enabling them to generate human-like responses and adapt to aspects of user behavior, sometimes with remarkable accuracy.

The synergy between RL and LLMs creates a powerful predictor of human behavior. RL contributes the ability to learn from interactions and adapt, while LLMs enhance prediction capabilities through pattern recognition. This combination reflects the holistic nature of human cognition, where learning from experience and recognizing patterns are intertwined processes.

AI systems based on RL can thus display a form of behavioral synchrony. At its core, RL enables AI systems to learn optimal sequences of actions in interactive environments, converging on a policy. Analogous to a child touching a hot surface and learning to avoid it, these AI systems adapt based on the positive or negative feedback they receive.

Take the game of Go, for instance. Google’s AlphaGo, through millions of iterations and feedback loops, bested world champions by evolving its strategies over time. In a business context, consider a customer service chatbot that, after each interaction, refines its responses to align more closely with user expectations. Over time, this iterative learning could lead the bot to ”synchronize” its behavior with the collective sentiment and preferences of its user base. In this context, it would not mean perfect alignment but rather a closer approximation to user expectations.

Building on the concept of deep reinforcement learning in AI, there’s an interesting parallel with the phenomenon of brain synchrony in human interactions. In AI, agents using deep reinforcement learning, such as Google DeepMind’s AlphaZero, learn and improve by playing millions of games against themselves, thereby refining their strategies over time. This self-improvement process in AI involves an agent iteratively learning from its own actions and outcomes. Similarly, in human interactions, brain synchrony occurs when individuals engage in cooperative tasks, leading to aligned patterns of brain activity that facilitate shared understanding and collaboration. Unlike AI, humans achieve this synchrony through interaction with others rather than themselves.

The parallel extends when considering that AI systems can also learn from human interactions. Just as human brain synchrony enhances cooperation and understanding, AI systems can improve and align their responses through extensive iterative learning from human interactions. While AI systems do not share knowledge as human brains do, they become repositories of data inherited from these interactions, which corresponds to a form of knowledge. This process of learning from vast datasets, including human interactions, can be seen as a form of “collective memory”. This analogy highlights the potential for AI systems to evolve while being influenced by humans, while also influencing humans through their use, indicating a form of “computational synchrony” that could be seen as an analog to human brain synchrony.

In addition, AI systems enabled with social cue recognition are being designed to detect and respond to human emotions. These ”Affective Computing” systems, as coined by Rosalind Picard in 1995, can interpret human facial expressions, voice modulations, and even text to gauge emotions and then respond accordingly. An AI assistant that can detect user frustration in real time and adjust its responses or assistance strategy is a rudimentary form of behavioral synchronization based on immediate feedback.

For instance, affective computing encompasses technologies like emotion recognition software that analyzes facial expressions and voice tone to determine a person’s emotional state. Real-time sentiment analysis in text and voice allows AI to adjust its interactions to be more empathetic and effective. This capability is increasingly used in customer service chatbots and virtual assistants to improve user experience by making interactions more natural and responsive.

Just as humans adjust their behavior in response to social cues, adaptive AI systems modify their actions based on user input, potentially leading to a form of ”synchronization” over time. Assessing the social competence of such an AI system could be done by adapting tools like the Social Responsiveness Scale (SRS)—a well-validated psychiatric instrument that measures how adept an individual is at modifying their behavior to fit the behavior and disposition of a social partner, a proxy for ”theory of mind”, which refers to the ability to attribute mental states—such as beliefs, intents, desires, emotions, and knowledge—to oneself and others.

Learning from Mother Nature

Nature offers numerous examples of synchronization and collective intelligence. Drawing inspiration from nature, where organisms like birds and fish move in synchronized patterns, AI systems can be designed to work in harmony. Such ”swarms” of AI agents can collectively process data, make decisions, and execute tasks in a synchronized manner. Analogously, if human brains can synchronize with each other and potentially – albeit implemented in different hardware and software – with AI, might AI systems develop synchrony with each other?

The concept is less implausible than it might seem at first blush. Just as human brains synchronize under specific conditions—a phenomenon observed in situations like shared gaze or joint musical performances—it is conceivable that as AI becomes more advanced, it might display similar collective behaviors when interacting with other AI systems.

Brain-Computer Interfaces (BCIs) have ushered in a transformative era in which thoughts can be translated into digital commands and human communication. Companies like Neuralink are making strides in developing interfaces that enable paralyzed individuals to control devices directly with their thoughts. Connecting direct recordings of brain activity with AI systems, researchers enabled an individual to speak at normal conversational speed after being mute for more than a decade following a stroke. AI systems can also be used to decode not only what an individual is reading but what they are thinking based on non-invasive measures of brain activity using functional MRI. Based on these advances, it’s not far-fetched to imagine a future scenario in which a professional uses a noninvasive BCI (e.g., wearable brainwave monitors such as Cogwear, Emotiv, or Muse) to communicate with AI design software. The software, recognizing the designer’s neural patterns associated with creativity or dissatisfaction, could instantaneously adjust its design proposals, achieving a level of synchrony previously thought to be the realm of science fiction. This technological frontier holds the promise of a distinctive form of synchrony, where the interplay between the human brain and AI transcends mere command interpretation, opening up a future in which AI resonates with human thoughts and emotions.


For BCIs and AI, the term ”resonate” assumes a multifaceted significance. At a behavioral level, resonance implies a harmonious and synchronized exchange wherein AI not only comprehends the instructions from the human brain but also aligns its responses with the cognitive and emotional states of the individual. This alignment involves AI systems interpreting neural signals and responding in a manner that reflects the user’s mental and emotional context, thereby facilitating a more intuitive and effective interaction.

This alignment extends beyond a transactional interpretation of commands, reaching a profound connection where AI mirrors and adapts to the subtleties of human cognition.

Crucially, the resonance envisioned here transcends the behavioral domain to encompass communication as well. As BCIs evolve, the potential for outward expressions becomes pivotal. Beyond mere command execution, the integration of facial cues, tone of voice, and other non-verbal cues into AI’s responses amplifies the channels for resonance. This expansion into multimodal communication may enrich synchrony by capturing elements from the holistic nature of human expression, creating a more immersive and natural interaction.

However, the concept of resonance also presents the challenge of navigating the uncanny valley, a phenomenon where humanoid entities that closely resemble humans provoke discomfort. Striking the right balance is paramount, ensuring the AI’s responsiveness aligns authentically with human expressions, without entering the discomfiting realm of the uncanny valley.

The potential of BCIs to foster synchrony between the human brain and AI introduces promising yet challenging prospects for human-computer collaboration. As this field progresses, considerations of ethical, psychological, and design principles will be integral to realizing a seamless and resonant integration that enhances human experiences with AI.

The intriguing parallel between the collective diving behavior of fish and neuronal activity in the brain offers insights into the nature of collective systems in both biological and artificial intelligence. This comparison underscores the commonality in the fundamental principles governing systems that rely on discriminating environmental stimuli and ensuring effective communication over extended ranges (Gomez et al., 2023-a). Drawing inspiration from the swarming movements displayed by groups of animals, there arises the concept of a “collective mind” wherein a decentralized swarm of AI agents may give rise to real-time collective intelligence. Current studies illustrate the potential of a bridge between observable behaviors in nature and the potential emergence of collective intelligence in AI systems (Gomez et al., 2023-b).

Exploration of ”swarm intelligence” has gained prominence in current AI research. This approach, influenced by the collective behaviors observed in natural swarms, involves multiple AI agents collaborating to make decisions collectively. Much like ants following simple individual rules collectively exhibit sophisticated behaviors like pathfinding, swarm intelligence in AI leverages the combined efforts of multiple agents. This collaborative approach is exemplified in the coordination of ”swarm drones” which showcase collective decision-making capabilities. In multi-agent AI systems, algorithms can be specifically designed to facilitate the sharing and synchronization of learning among agents, a form of collective cognition.

Some research suggests behavioral patterns in natural systems, such as murmurations of starlings, arise from simple rules without necessarily involving neural synchrony (Couzin, 2009). Yet, the intriguing possibility remains that underlying synchrony may exist, particularly in emotional or social processing, even if shared thoughts are not evident. This nuanced consideration of synchrony adds a layer of complexity to our understanding of collective intelligence, both in natural systems and emerging AI platforms.

Exploring the concept of collective cognition in both AI and biology invites a thought-provoking comparison with evolutionary principles. Millions of years of biological evolution produced individual adaptations that sometimes benefited from collective or coordinated behaviors. The bedrock principle of evolution through natural selection emphasizes individual survival and reproduction, provoking the question of whether AI may also evolve in a way that parallels the processes observed in nature.

Neuroscience not only illuminates the basis of biological intelligence but may also guide the development of artificial intelligence (Achterberg et al., 2023). Considering evolutionary constraints like space and communication efficiency, which have shaped the emergence of efficient systems in nature, prompts exploration of embedding similar constraints in AI systems, envisioning organically evolving artificial environments optimized for efficiency and environmental sustainability, the focus of research in so-called “neuromorphic computing.”


For example, oscillatory neural activity appears to boost communication between distant brain areas. The brain employs a theta-gamma rhythm to package and transmit information, similar to a postal service, thereby enhancing efficient data transmission and retrieval (Lisman & Idiart, 1995). This interplay has been likened to an advanced data transmission system, where low-frequency alpha and beta brain waves suppress neural activity associated with predictable stimuli, allowing neurons in sensory regions to highlight unexpected stimuli via higher-frequency gamma waves. Bastos et al. (2020) found that inhibitory predictions carried by alpha/beta waves typically flow backward through deeper cortical layers, while excitatory gamma waves conveying information about novel stimuli propagate forward through superficial layers. Drawing parallels to artificial intelligence, this rhythmic data packaging and transmission model suggests that AI systems could benefit from emulating this neural dance during data processing and sharing. Advocating for AI systems to communicate with a unified rhythm, akin to brain functions, may lead to swifter and more synchronized data processing, especially in deep learning models requiring seamless inter-neural layer communication. Engineering algorithms to mirror the theta-gamma oscillatory dynamic holds promise for AI models to process and retrieve data more efficiently and methodically.

In the mammalian brain, sharp wave ripples (SPW‐Rs) exert widespread excitatory influence throughout the cortex and multiple subcortical nuclei (Buzsaki, 2015; Girardeau & Zugaro, 2011). Within these SPW‐Rs, neuronal spiking is meticulously orchestrated both temporally and spatially by interneurons, facilitating the condensed reactivation of segments from waking neuronal sequences (O’Neill, Boccara, Stella, Schoenenberger, & Csicsvari, 2008). This orchestrated activity aids in the transmission of compressed hippocampal representations to distributed circuits, thereby reinforcing the process of memory consolidation (Ego-Stengel & Wilson, 2010).

One might envision rhythmic data packaging and transmission protocols for AI too. Just as a fluid dance can captivate and optimize movement, guiding AI systems to mirror the theta-gamma rhythm might enhance collaborative efficiency. This perspective advocates for AI systems to communicate with a unified rhythm, akin to brain functions. In theory, AI neural networks could adopt an oscillatory transmission method inspired by the theta-gamma code.

Recent AI experiments, particularly those involving OpenAI’s GPT-4, unveil intriguing parallels with evolutionary learning. Unlike traditional task-oriented training, GPT-4 learns from extensive datasets, refining its responses based on the accumulated ”experiences”. Furthermore, pattern recognition by GPTs parallels pattern recognition by layers of neurons in the brain. This approach mirrors the adaptability observed in natural evolution, where organisms refine their behaviors over time to better resonate with their environment.

The application of evolutionary algorithms within AI systems, exemplified by NEAT (NeuroEvolution of Augmenting Topologies), introduces a dynamic dimension to the evolution of neural network architectures. NEAT undergoes a process akin to biological evolution, in which neural network structures evolve across generations in response to challenges present in diverse datasets. This evolutionary mechanism reflects a continuous process of adaptation, shaping the architecture and function of the neural networks. It is crucial to recognize that while both biological evolution and RL in AI systems improve through incremental refinement, a process akin to “gradient descent” over policies, the underlying principle of gradient descent is the iterative refinement of parameters toward an optimal solution. In the context of both biological evolution and AI, this implies a continuous pursuit of improved functionality or fitness.

In the case of NEAT and similar AI systems, the adaptive changes in neural network architectures implement a form of optimization driven by the specific objectives and challenges encountered. The iterative nature of these adaptations within a defined optimization framework supports the overarching objective of efficiency and functionality.

From Brain Waves to AI Frequencies


Human and nonhuman brains resonate along a continuum of frequencies, classically defined as spanning delta (slow waves) to gamma (fast waves). Resonant brain waves arise from coordinated activity across widely distributed circuits and have been linked to perception, attention, learning, memory, empathy, and conscious state. Whether brain waves associated with these processes actively influence them remains hotly debated.

By analogy, one might consider whether AI systems might develop analogous ”frequencies”. Could there be distinctive patterns or states within AI systems that, when synchronized, lead to enhanced performance or expanded capabilities? Ultimately, AI will be designed to optimize functionality rather than replicate human brain wave dynamics, so potential parallels between human brain waves and ”frequencies” in AI systems remain metaphorical.

Drawing inspiration from the architecture of the brain, neural networks in AI are constructed with nodes organized in layers that respond to inputs and then generate outputs. Activation patterns within these layers show intriguing similarities to the activation patterns of individual neurons in the brain (see references for specific studies).

Exploring the ”AI neural symphony” offers potential avenues for achieving genuine AI-human synchrony and fostering deeper AI-AI collaborations. This could involve delving into the specific patterns of activations, understanding how different models contribute to the overall symphony, and identifying ways to enhance coordination for improved performance. Achieving synchrony in this context might refer to aligning AI processes with human cognition or ensuring seamless collaboration between multiple AI models, leading to more effective and sophisticated outcomes.

In the realm of human neural synchrony research, investigating the role of oscillations has proven to be a pivotal area of interest. High-frequency oscillatory neural activity stands out as a crucial element, demonstrating its ability to facilitate communication between distant brain areas. A particularly intriguing phenomenon in this context is the theta-gamma neural code, showcasing how our brains employ a distinctive method of ”packaging” and ”transmitting” information, reminiscent of a postal service meticulously wrapping packages for efficient delivery. This neural ”packaging” system orchestrates specific rhythms, akin to a coordinated dance, ensuring the streamlined transmission of information, and it is encapsulated in what is known as the theta-gamma rhythm.

This perspective aligns with the concept of “neuromorphic computing”, where AI architecture is based on neural circuitry. The key advantage of neuromorphic computing lies in its computational efficiency, addressing the significant energy consumption challenges faced by traditional AI models. The training of large AI models, such as those used in natural language processing or image recognition, can consume an exorbitant amount of energy. For instance, researchers at the University of Massachusetts, Amherst, estimated that training a single large AI model can emit as much carbon dioxide as five cars over their entire lifetimes (Strubell et al., 2019). Moreover, the computation required to train state-of-the-art deep learning models has been doubling approximately every 3.4 months, far outpacing improvements in computational efficiency (Schwartz et al., 2019).

Neuromorphic computing offers a promising alternative. By mimicking the architecture of the human brain, neuromorphic systems aim to achieve higher computational efficiency and lower energy consumption compared to conventional AI architectures (Furber et al., 2014). For example, IBM’s TrueNorth neuromorphic chip has demonstrated energy efficiency several orders of magnitude better than that of traditional CPUs and GPUs (Merolla et al., 2014). Additionally, neuromorphic computing architectures are inherently suited for low-power, real-time processing tasks, making them ideal for applications like edge computing and autonomous systems, further contributing to energy savings and environmental sustainability.
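The computational style behind these savings can be illustrated with the basic unit such chips implement in silicon: a spiking neuron. The leaky integrate-and-fire model below (all parameter values are illustrative assumptions, not TrueNorth’s) does work only when a spike occurs, rather than performing dense arithmetic everywhere, which is the essence of the event-driven efficiency argument.

```python
# Minimal leaky integrate-and-fire (LIF) neuron driven by a noisy current.
import numpy as np

rng = np.random.default_rng(7)
dt, tau = 1e-3, 20e-3            # time step and membrane time constant (s)
v_thresh, v_reset = 1.0, 0.0     # spike threshold and post-spike reset

v, spikes = 0.0, []
current = 1.2 + 0.3 * rng.standard_normal(1000)  # 1 s of noisy input

for step, i_in in enumerate(current):
    v += dt / tau * (-v + i_in)  # leaky integration of the input
    if v >= v_thresh:            # threshold crossing emits a spike
        spikes.append(step * dt)
        v = v_reset              # membrane resets after spiking

print(f"{len(spikes)} spikes in 1 s of simulated input: sparse, event-driven")
```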

Implications for Society


Harnessing the potential of brain-to-AI and AI-to-AI forms of “synchrony” offers promising avenues for society. With the seamless integration of human intuition and AI’s computational prowess, complex challenges can be approached with a hybrid analytical-creative-empathic strategy. This synergy could lead to solutions neither could achieve on their own.

As synchrony also implies real-time responsiveness, such a new paradigm of “synchrony” might revolutionize our capacity to adapt in the face of increased uncertainty and societal challenges.

In the realm of training and skill development, synchronized AI has the potential to personalize learning experiences based on an employee’s unique learning curve, facilitating faster and more effective skill acquisition. This approach could significantly enhance onboarding and development processes, especially when considering neurodiversity, by tailoring training modules to meet individual needs and preferences in an unprecedented way.

From a customer engagement standpoint, synchronized AI interfaces might more precisely understand and, in some cases, anticipate user expectations based on advanced behavioral patterns. This capability enables businesses to refine marketing strategies, product recommendations, and customer support, thereby resonating more deeply with consumers in an unprecedentedly inclusive way.


For operational efficiency, especially in sectors like manufacturing or logistics, AI systems working in coordination with each other can optimize processes, reduce waste, and strengthen the supply chain. Although true AI-to-AI synchrony isn’t fully realized yet, current capabilities that mimic or represent a primitive form of intelligent machine-to-machine synchrony allow for significant advancements in supply chain fluidity and resiliency.
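A classic toy model of how independently ticking agents drift into coordination is the Kuramoto model, sketched below. It is offered purely as a minimal metaphor for machine-to-machine synchrony, not as a logistics algorithm: each agent keeps its own natural rhythm, and weak mutual coupling is enough for a shared rhythm to emerge.

```python
# Kuramoto model: coupled oscillators ("agents") converging on a shared rhythm.
import numpy as np

rng = np.random.default_rng(3)
n, coupling, dt = 10, 1.5, 0.01
freqs = rng.normal(1.0, 0.1, n)        # each agent's natural frequency
phase = rng.uniform(0, 2 * np.pi, n)   # random starting phases

def synchrony(phase):
    """Order parameter: 0 = incoherent, 1 = fully synchronized."""
    return abs(np.mean(np.exp(1j * phase)))

for step in range(2001):
    if step % 500 == 0:
        print(f"step {step}: synchrony = {synchrony(phase):.2f}")
    # each agent nudges its phase toward the phases of the others
    pull = np.mean(np.sin(phase[None, :] - phase[:, None]), axis=1)
    phase = (phase + dt * (freqs + coupling * pull)) % (2 * np.pi)
```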

Such coordination would increase profitability while allowing sustainability considerations, such as waste reduction or routes and processes that cut carbon emissions, to be integrated into process design from the outset. In risk management, synchronized AI systems analyzing vast datasets collaboratively might better predict potential risks or market downturns, equipping businesses and other organizations to prepare or pivot before a crisis emerges, limiting the related social and societal impacts. Likewise, synchronized AI systems could provide insights for more efficient urban planning and environmental protection strategies. This could lead to better traffic management, energy conservation, and pollution control, enhancing the quality of life in urban areas.

In various domains beyond business, the deployment of AI with a prosocial orientation holds immense potential for the well-being of humanity and the planet. Particularly in healthcare, synchronization between the human brain and AI systems could usher in a revolutionary era for patient care and medical research. Recent studies highlight the positive impact of clinicians synchronizing their movements with patients, thereby increasing trust and reducing pain (Wager et al., 2004). Extending this concept to AI chatbots or AI-enabled robotic caregivers that are synchronized with those under their “care” holds the promise of enhancing patient experience and improving outcomes, as evidenced by recent research indicating that LLMs outperformed physicians in diagnosing illnesses, and that patients preferred interacting with them (Mollick, 2023).

In the educational domain, the integration of AI systems with a focus on synchrony is equally promising. Research demonstrated that synchronized brain waves in high school classrooms were predictive of higher performance and happiness among students (Dikker et al., 2017). This study underscores the significance of neural synchrony in the learning environment. By leveraging AI tutoring systems capable of detecting and responding to students’ cognitive states in real time, education technology can potentially replicate the positive outcomes observed in synchronized classroom settings. Incorporation of AI systems that resonate with students’ brain states has the potential to create a more conducive and effective learning atmosphere, optimizing engagement and fostering positive learning outcomes.

Perspectives and Potential


The excitement surrounding the prospects of brain-to-machine and machine-to-machine synchrony brings with it a set of paramount concerns that necessitate scrutiny and that are far from merely technical. Data privacy emerges as a critical apprehension, given the intimate nature of neural information being processed by these systems. The ethical dimensions of such synchronization, particularly in the realm of AI decision-making, present complex challenges that require careful consideration (Dignum, 2018; Floridi et al., 2018).

Expanding on these concerns, two overarching issues demand heightened attention. Firstly, the preservation of human autonomy stands as a foundational principle. As we delve into the era of brain-machine synchrony, it becomes imperative to ensure that individuals retain their ability to make informed choices. Avoiding scenarios where individuals feel coerced or manipulated by technology is crucial in upholding ethical standards (Russell, 2018).

Secondly, the question of equity in access to these technologies emerges as a pressing matter. Currently, such advanced technologies are often costly and may not be accessible to all segments of society. This raises concerns about exacerbating existing inequalities (Diakopoulos, 2016). A scenario where only certain privileged groups can harness the benefits of brain-machine synchrony might deepen societal divides. Moreover, the lack of awareness about these technologies further compounds issues of equitable access (Kostkova et al., 2016).

Addressing these concerns is not only an ethical imperative but also crucial for the long-term sustainability and acceptance of brain-machine synchrony technologies. Failing to consider issues of human autonomy and equitable access could lead to unintended consequences, potentially widening societal gaps and fostering discontent. A comprehensive and responsible approach to these challenges is essential to ensure the positive impact of these technologies on society at large.

The integration of AI with human cognition marks the threshold of an unprecedented era, where machines not only replicate human intelligence but also mirror intricate behavioral patterns and emotions. The potential synchronization of AI with human intent and emotion holds the promise of redefining the nature of human-machine collaboration and, perhaps, even the essence of the human condition.

This interconnectedness could span micro (individual), meso (community/organization), macro (country), and meta (planet) arenas, creating a dynamic continuum of mutual influence (Walther, 2021).

The outcome of harmonizing humans and machines will significantly impact humanity and the planet, contingent upon the human aspirations guiding this pursuit. This raises a timeless question, reverberating through the course of human history: What do we value, and why?

A crucial point to emphasize is that the implications of synchronizing humans and machines extend far beyond the realm of AI experts; they encompass every individual. This underscores the necessity to raise awareness and engage the public at every stage of this transformative journey. As the development of AI progresses, it is essential to ensure that the ethical, societal, and existential dimensions are shaped by collective values and reflections, avoiding unilateral decisions by Big Tech that may not align with the broader interests of humanity. What happens next shapes our individual and collective future. Getting it right is our shared responsibility.

About the Authors

Hamilton Mann is a Tech Executive, Digital for Good Pioneer, keynote speaker, and the originator of the concept of artificial integrity. Mann serves as the Group Vice President of Digital Marketing and Digital Transformation at Thales. He is also a Senior Lecturer at INSEAD, HEC Paris, and EDHEC Business School, and mentors at the MIT Priscilla King Gray (PKG) Center. He writes regularly for Forbes and Les Echos, and has published articles about AI and its societal implications in prominent academic, business, and policy outlets such as Stanford Social Innovation Review (SSIR), Knowledge@Wharton, Dialogue Duke Corporate Education, INSEAD Knowledge, INSEAD TECH TALK X, I by IMD and the Harvard Business Review France. He hosts The Hamilton Mann Conversation, a “Masterclass” podcast on Digital for Good. Mann was inducted into the Thinkers50 Radar as one of the 30 most prominent rising business thinkers globally for pioneering “Digital for Good”. He is the author of the book Artificial Integrity (Wiley, 2024).

Cornelia C. Walther, PhD, is Director of POZE@ezop, a global alliance for systemic change that benefits people and the planet. As a humanitarian practitioner, she worked for two decades with the United Nations in emergencies in West Africa, Asia, and Latin America, with a focus on advocacy and social and behavior change. As a lecturer, frontline coach, and researcher, she has been collaborating over the past decade with universities across the Americas and Europe. She is presently a Senior Visiting Fellow at the Wharton Initiative for Neuroscience (WiN)/Wharton AI and Analytics and the Center for Integrated Oral Health (CIGOH), and is affiliated with MindCORE and the Center for Social Norms and Behavioral Dynamics at the University of Pennsylvania. Since 2021 her research focus has been AI4IA (Artificial Intelligence for Inspired Action).

Michael Platt, PhD, is Director of the Wharton Neuroscience Initiative and a Professor of Marketing, Neuroscience, and Psychology at the University of Pennsylvania. He is author of “The Leader’s Brain” (Wharton Press) and over 200 scientific papers, which have been cited over 23,000 times. Michael is former Director of the Duke Institute for Brain Sciences and the Center for Cognitive Neuroscience at Duke, and founding Co-Director of the Duke Center for Neuroeconomic Studies. His work has been featured in the New York Times, Washington Post, Wall Street Journal, Newsweek, the Guardian, and National Geographic, as well as on ABC’s Good Morning America, NPR, CBC, BBC, MTV, and HBO Vice. He co-founded brain health and performance company Cogwear Inc. He currently serves on multiple Advisory Boards including the Yang-Tan Autism Centers at MIT and Harvard and as President of the Society for Neuroeconomics.

References

  • Achterberg, J., Akarca, D., Strouse, D. J., et al. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence, 5, 1369–1381. https://doi.org/10.1038/s42256-023-00748-9

  • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165. 

  • Başar, E. (2013). Brain oscillations in neuropsychiatric disease. Dialogues in clinical neuroscience, 15(3), 291–300. 

  • Bastos, A. M., Lundqvist, M., Waite, A. S., & Miller, E. K. (2020). Layer and rhythm specificity for predictive routing. Proceedings of the National Academy of Sciences, 117(49), 31459–31469. https://doi.org/10.1073/pnas.2014868117

  • Burle, B., Spieser, L., Roger, C., Casini, L., Hasbroucq, T., & Vidal, F. (2015). Spatial and temporal resolutions of EEG: Is it really black and white? A scalp current density view. International Journal of Psychophysiology, 97(3), 210–220.

  • Buzsáki, G. (2015). Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus, 25(10), 1073–1188. https://doi.org/10.1002/hipo.22488

  • Buzsáki, G. (2006). Rhythms of the brain. Oxford University Press. 

  • Couzin, I. D. (2009). Collective cognition in animal groups. Trends in cognitive sciences, 13(1), 36-43. 

  • Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844148  

  • Dignum, V. (2018). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. AI & Society, 33(3), 475–476. https://doi.org/10.1007/s00146-018-0812-0 

  • Dikker, S., Wan, L., Davidesco, I., Kaggen, L., Oostrik, M., McClintock, J., … & Poeppel, D. (2017). Brain-to-brain synchrony tracks real-world dynamic group interactions in the classroom. Current Biology, 27(9), 1375-1380. 

  • Ego-Stengel, V., & Wilson, M. A. (2010). Disruption of ripple-associated hippocampal activity during rest impairs spatial learning in the rat. Hippocampus, 20(1), 1-10. 

  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5 

  • Furber, S. B., Galluppi, F., Temple, S., & Plana, L. A. (2014). The SpiNNaker Project. Proceedings of the IEEE, 102(5), 652–665. https://doi.org/10.1109/JPROC.2014.2304638 

  • Girardeau, G., & Zugaro, M. (2011). Hippocampal ripples and memory consolidation. Current Opinion in Neurobiology, 21(3), 452-459. 

  • Gómez-Nava, L., Lange, R. T., Klamser, P. P., et al. (2023a). Fish shoals resemble a stochastic excitable system driven by environmental perturbations. Nature Physics. https://doi.org/10.1038/s41567-022-01916-1

  • Gomez, N., Eguiluz, V. M., Garrido, L., Hernandez-Garcia, E., & Aldana, M. (2023b). Neural correlations in collective fish behavior: the role of environmental stimuli. arXiv preprint arXiv:2303.00001.

  • Johnson, A., & Smith, B. (2018). Bridging the Gap: Human-AI Interaction through Verbal Synchrony. Journal of Artificial Intelligence Research, 45, 789–804.

  • Kostkova, P., Brewer, H., de Lusignan, S., Fottrell, E., Goldacre, B., Hart, G., Koczan, P., Knight, P., Marsolier, C., McKendry, R. A., Ross, E., Sasse, A., Sullivan, R., Chaytor, S., Stevenson, O., Velho, R., & Tooke, J. (2016). Who Owns the Data? Open Data for Healthcare. Frontiers in Public Health, 4. https://doi.org/10.3389/fpubh.2016.00107

  • Lebedev, M. A., & Nicolelis, M. A. L. (2006). Brain–machine interfaces: past, present and future. Trends in Neurosciences, 29(9), 536-546. 

  • Lisman, J. E., & Idiart, M. A. (1995). Storage of 7 +/- 2 short-term memories in oscillatory subcycles. Science, 267(5203), 1512–1515. https://doi.org/10.1126/science.7878473

  • Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., … & Modha, D. S. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197), 668–673. https://doi.org/10.1126/science.1254642

  • Nicolelis, M. A. L., & Lebedev, M. A. (2009). Principles of neural ensemble physiology underlying the operation of brain–machine interfaces. Nature Reviews Neuroscience, 10(7), 530-540. 

  • O’Neill, J., Boccara, C. N., Stella, F., Schoenenberger, P., & Csicsvari, J. (2008). Superficial layers of the medial entorhinal cortex replay independently of the hippocampus. Science, 320(5879), 129-133. 

  • Mann, H. (2024). Introducing the Concept of Artificial Integrity: The Path for the Future of AI. The European Business Review. 

  • Picard, R. W. (1995). “Affective Computing.” MIT Media Laboratory Perceptual Computing Section.  

  • Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2019). Green AI. arXiv preprint arXiv:1907.10597.

  • Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. https://doi.org/10.18653/v1/P19-1356  

  • Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press. 

  • Walther, C. (2021). Technology, social change and human behavior. Palgrave Macmillan, New York.

Brain – Machine Synchrony: A New Era of AI-Supported Human Collaboration and Societal Transformation https://www.europeanbusinessreview.com/brain-machine-synchrony-a-new-era-of-ai-supported-human-collaboration-and-societal-transformation/ https://www.europeanbusinessreview.com/brain-machine-synchrony-a-new-era-of-ai-supported-human-collaboration-and-societal-transformation/#respond Sun, 14 Jul 2024 15:11:17 +0000 https://www.europeanbusinessreview.com/?p=209317 By Hamilton Mann, Cornelia Walther and Michael Platt This paper introduces the concept of “brain-machine synchrony,” exploring the potential alignment of human brain waves and physiological responses with AI systems to […]

The post Brain – Machine Synchrony: A New Era of AI-Supported Human Collaboration and Societal Transformation appeared first on The European Business Review.

]]>
By Hamilton Mann, Cornelia Walther and Michael Platt

This paper introduces the concept of “brain-machine synchrony,” exploring the potential alignment of human brain waves and physiological responses with AI systems to build trust and enhance collaboration. Heightened physiological synchrony within human teams fosters improved cooperation, communication, and trust. Extending this phenomenon to human-AI interactions invites the hypothesis that aligning neural and machine processes could similarly enhance symbiotic partnerships. To support this thesis, we draw parallels between established examples of “collective cognition” in AI and well-documented neural synchrony facilitating human coordination. The burgeoning field of brain-computer interfaces offers a promising platform for more direct and nuanced communication between the human brain and AI systems.  

Introduction   

Drawing inspiration from human brain function, we advocate for implementing rhythmic information transmission protocols within AI systems. This means designing AI to process and communicate information in rhythmic patterns that align with natural human brain activity. By harnessing the innate advantages of human physiological rhythms, we posit the utilization of verbal synchrony as a foundational principle in AI training. This approach encourages AI to adopt language patterns congruent with human communication norms, thereby fostering a sense of trust and rapport between AI and humans by aligning AI communication patterns with human cognition. Research suggests that such congruence can significantly enhance user trust and engagement. Furthermore, insights from team synchrony research emphasize the importance of synchronized behaviors in fostering cooperation and collaboration among individuals. This highlights the potential benefits of integrating synchrony into AI-human interactions. We contend that aligning AI behavior with human expectations and social cues will enable more intuitive and seamless human-machine interactions, thus opening opportunities for an advanced human-centered AI experience, aligned with the Artificial Integrity concept coined by Hamilton Mann. 

We further explore how AI systems embodied in robots can emulate human synchrony through external physiological cues such as movement and facial expressions. Systematic synchronization of autonomous systems and their human counterparts through mirroring is offered as a potential mechanism to build trust and cooperation, similar to how it occurs between humans. Despite these tantalizing opportunities, we highlight the imperative for an ethical approach, emphasizing consideration of privacy, security, and human agency. The proposed frontier of AI resonating with human brains holds remarkable potential benefits if approached with prudence and careful consideration, highlighting the pivotal role of human guidance in steering these aspirations responsibly. 

Mirroring human nature 

The remarkable evolution of Artificial Intelligence (AI) systems represents a paradigm shift in the relationship between humans and machines. This transformation is evident in the seamless interactions facilitated by these advanced systems, where adaptability emerges as a defining characteristic. This adaptability resonates with the fundamental human capacity to learn from experience and predict human behavior. 

One facet of AI that aligns closely with human cognitive processes is Reinforcement Learning (RL). RL mimics the human learning paradigm by allowing AI systems to learn through interaction with an environment, receiving feedback in the form of rewards or penalties. This process closely mirrors how humans learn from experiences and adjust behavior based on outcomes. Notably, RL contributes to the adaptability of AI systems, enabling them to navigate complex and dynamic environments with a capacity for continuous improvement.  

RL mimics the human learning paradigm by allowing AI systems to learn through interaction with an environment, receiving feedback in the form of rewards or penalties.

By contrast, Large Language Models (LLMs) play a crucial role in pattern recognition, capturing the intricate nuances of human language and behavior. These models, such as ChatGPT and BERT, excel in understanding contextual information, grasping the subtleties of language, and predicting user intent. Leveraging vast datasets, LLMs acquire a comprehensive understanding of linguistic patterns, enabling them to generate human-like responses and adapt to some of the user behavior, sometimes with remarkable accuracy.  

The synergy between RL and LLMs creates a powerful predictor of human behavior. RL contributes the ability to learn from interactions and adapt, while LLMs enhance the prediction capabilities through pattern recognition. This combination reflects the holistic nature of human cognition, where learning from experience and recognizing patterns are intertwined processes.  

AI systems based on RL can thus display a form of behavioral synchrony. At its core, RL enables AI systems to learn optimal sequences of actions in interactive environments to achieve a policy. Analogous to a child touching a hot surface and learning to avoid it, these AI systems adapt based on the positive or negative feedback they receive.  

Take the game of Go, for instance. Google’s AlphaGo, through millions of iterations and feedback loops, bested world champions by evolving its strategies over time. In a business context, consider a customer service chatbot that, after each interaction, refines its responses to align more closely with user expectations. Over time, this iterative learning could lead the bot to ‘synchronize’ its behavior with the collective sentiment and preferences of its user base. In this context, it would not mean perfect alignment but rather a closer approximation to user expectations.  

Building on the concept of deep reinforcement learning in AI, there’s an interesting parallel with the phenomenon of brain synchrony in human interactions. In AI, agents using deep reinforcement learning, such as Google DeepMind’s AlphaZero, learn and improve by playing millions of games against themselves, thereby refining their strategies over time. This self-improvement process in AI involves an agent iteratively learning from its own actions and outcomes. Similarly, in human interactions, brain synchrony occurs when individuals engage in cooperative tasks, leading to aligned patterns of brain activity that facilitate shared understanding and collaboration. Unlike AI, humans achieve this synchrony through interaction with others rather than themselves.  

The parallel extends when considering that AI systems can also learn from interactions with humans. Just as human brain synchrony enhances cooperation and understanding, AI systems can improve and align their responses through extensive iterative learning from human interactions. While AI systems do not literally share knowledge as human brains do, they become repositories of data inherited from these interactions, which corresponds to a form of knowledge. This process of learning from vast datasets, including human interactions, can be seen as a form of ‘collective memory’. This analogy highlights the potential for AI systems to evolve while being influenced by humans, while also influencing humans through their use, indicating a form of ‘computational synchrony’ that could be seen as an analog to human brain synchrony.  

In addition, AI systems enabled with social cue recognition are being designed to detect and respond to human emotions. These ‘Affective Computing’ systems, as coined by Rosalind Picard in 1995, can interpret human facial expressions, voice modulations, and even text to gauge emotions and then respond accordingly. An AI assistant that can detect user frustration in real-time and adjust its responses or assistance strategy is a rudimentary form of behavioral synchronization based on immediate feedback. 

For instance, affective computing encompasses technologies like emotion recognition software that analyzes facial expressions and voice tone to determine a person’s emotional state. Real-time sentiment analysis in text and voice allows AI to adjust its interactions to be more empathetic and effective. This capability is increasingly used in customer service chatbots and virtual assistants to improve user experience by making interactions feel more natural and responsive. 

Just as humans adjust their behavior in response to social cues, adaptive AI systems modify their actions based on user input, potentially leading to a form of ‘synchronization’ over time. Assessing the social competence of such an AI system could be done by adapting tools like the Social Responsiveness Scale (SRS)—a well-validated psychiatric instrument that measures how adept an individual is at modifying their behavior to fit the behavior and disposition of a social partner, a proxy for ‘theory of mind,’ which refers to the ability to attribute mental states—such as beliefs, intents, desires, emotions, and knowledge—to oneself and to others. 

Learning from mother nature 

Just as humans adjust their behavior in response to social cues, adaptive AI systems modify their actions based on user input, potentially leading to a form of ‘synchronization’ over time.

Nature offers numerous examples of synchronization and collective intelligence. Drawing inspiration from nature, where organisms like birds and fish move in synchronized patterns, AI systems can be designed to work in harmony. Such ‘swarms’ of AI agents can collectively process data, make decisions, and execute tasks in a synchronized manner. Analogously, if human brains can synchronize with each other and potentially – albeit implemented in different hardware and software – with AI, might AI systems develop synchrony with each other? 

The concept is less implausible than it might seem at first blush. Just as human brains synchronize under specific conditions—a phenomenon observed in situations like shared gaze or joint musical performances—it is conceivable that as AI becomes more advanced, it might display similar collective behaviors when interacting with other AI systems.  

Brain-Computer Interfaces (BCIs) have ushered in a transformative era in which thoughts can be translated into digital commands and human communication. Companies like Neuralink are making strides developing interfaces that enable paralyzed individuals to control devices directly with their thoughts. Connecting direct recordings of brain activity with AI systems, researchers enabled an individual to speak at normal conversational speed after being mute for more than a decade following a stroke. AI systems can also be used to decode not only what an individual is reading but what they are thinking based on non-invasive measures of brain activity using functional MRI. Based on these advances, it’s not far-fetched to imagine a future scenario in which a professional uses a noninvasive BCI (e.g., wearable brainwave monitors such as Cogwear, Emotiv, or Muse) to communicate with AI design software. The software, recognizing the designer’s neural patterns associated with creativity or dissatisfaction, could instantaneously adjust its design proposals, achieving a level of synchrony previously thought to be the realm of science fiction. This technological frontier holds the promise of a distinctive form of synchrony, where the interplay between the human brain and AI transcends mere command interpretation, opening up a future in which AI resonates with human thoughts and emotions.  

For BCIs and AI, the term ‘resonate’ assumes a multifaceted significance. At a behavioral level, resonance implies a harmonious and synchronized exchange, wherein AI not only comprehends the instructions from the human brain but also aligns its responses with the cognitive and emotional states of the individual. This alignment involves AI systems interpreting neural signals and responding in a manner that reflects the user’s mental and emotional context, thereby facilitating a more intuitive and effective interaction.  

This alignment extends beyond a transactional interpretation of commands, reaching a profound connection where AI mirrors and adapts to the subtleties of human cognition. 

Crucially, the resonance envisioned here transcends the behavioral domain to encompass communication as well. As BCIs evolve, the potential for outward expressions becomes pivotal. Beyond mere command execution, the integration of facial cues, tone of voice, and other non-verbal cues into AI’s responses amplifies the channels for resonance. This expansion into multimodal communication may enrich synchrony by capturing elements from the holistic nature of human expression, creating a more immersive and natural interaction. 

However, the concept of resonance also presents the challenge of navigating the uncanny valley, a phenomenon where humanoid entities that closely resemble humans provoke discomfort. Striking the right balance is paramount, ensuring the AI’s responsiveness aligns authentically with human expressions, without entering the discomfiting realm of the uncanny valley.  

The potential of BCIs to foster synchrony between the human brain and AI introduces promising yet challenging prospects for human-computer collaboration. As this field progresses, considerations of ethical, psychological, and design principles will be integral to realizing a seamless and resonant integration that enhances human experiences with AI. 

The intriguing parallel between the collective diving behavior of fish and neuronal activity in the brain offers insights into the nature of collective systems in both biological and artificial intelligence. This comparison underscores the commonality in the fundamental principles governing systems that rely on discriminating environmental stimuli and ensuring effective communication over extended ranges. (Gomez et al. 2023-a). Drawing inspiration from the swarming movements displayed by groups of animals, there arises the concept of a “collective mind,” wherein a decentralized swarm of AI agents may give rise to real-time collective intelligence. Current studies illustrate the potential of a bridge between observable behaviors in nature and the potential emergence of collective intelligence in AI systems. (Gomez et al. 2023-b)  

Exploration of ‘swarm intelligence’ has gained prominence in current AI research. This approach, influenced by the collective behaviors observed in natural swarms, involves multiple AI agents collaborating to make decisions collectively. Much like ants following simple individual rules collectively exhibit sophisticated behaviors like pathfinding, swarm intelligence in AI leverages the combined efforts of multiple agents. This collaborative approach is exemplified in the coordination of ‘swarm drones,’ which showcase collective decision-making capabilities. In multi-agent AI systems, algorithms can be specifically designed to facilitate sharing and synchronization of learning among agents, a form of collective cognition.  

Some research suggests behavioral patterns in natural systems, such as murmurations of starlings, arise from simple rules without necessarily involving neural synchrony (Couzin, 2009). Yet, the intriguing possibility remains that underlying synchrony may exist, particularly in emotional or social processing, even if shared thoughts are not evident. This nuanced consideration of synchrony adds a layer of complexity to our understanding of collective intelligence, both in natural systems and emerging AI platforms. 

Exploring the concept of collective cognition in both AI and biology invites a thought-provoking comparison with evolutionary principles. Millions of years of biological evolution produced  individual adaptations that sometimes benefited from collective or coordinated behaviors. The bedrock principle of evolution through natural selection emphasizes individual survival and reproduction, provoking the question of whether AI may also  evolve in a way that parallels the processes observed in nature.   

The bedrock principle of evolution through natural selection emphasizes individual survival and reproduction, provoking the question of whether AI may also  evolve in a way that parallels the processes observed in nature.

Neuroscience not only illuminates the basis of biological intelligence but may also guide development of artificial intelligence (Achterberg et al., 2023). Considering evolutionary constraints like space and communication efficiency, which have shaped the emergence of efficient systems in nature, prompts exploration of embedding similar constraints in AI systems, envisioning organically evolving artificial environments optimized for efficiency and environmental sustainability, the focus of research in so-called “neuromorphic computing.”    

For example, oscillatory neural activity appears to boost communication between distant brain areas. The brain employs a theta-gamma rhythm to package and transmit information, similar to a postal service, thereby enhancing efficient data transmission and retrieval (Lisman & Idiart, 1995). This interplay has been likened to an advanced data transmission system, where low-frequency alpha and beta brain waves suppress neural activity associated with predictable stimuli, allowing neurons in sensory regions to highlight unexpected stimuli via higher-frequency gamma waves. Bastos et al. (2020) found that inhibitory predictions carried by alpha/beta waves typically flow backward through deeper cortical layers, while excitatory gamma waves conveying information about novel stimuli propagate forward through superficial layers. Drawing parallels to artificial intelligence, this rhythmic data packaging and transmission model suggests that AI systems could benefit from emulating this neural dance during data processing and sharing. Advocating for AI systems to communicate with a unified rhythm, akin to brain functions, may lead to swifter and more synchronized data processing, especially in deep learning models requiring seamless inter-neural layer communication. Engineering algorithms to mirror the theta-gamma oscillatory dynamic holds promise for AI models to process and retrieve data more efficiently and methodically.  

In the mammalian brain, sharp wave ripples (SPW‐Rs) exert widespread excitatory influence throughout the cortex and multiple subcortical nuclei. (Buzsaki, 2015; Girardeau & Zugaro, 2011). Within these SPW‐Rs, neuronal spiking is meticulously orchestrated both temporally and spatially by interneurons, facilitating the condensed reactivation of segments from waking neuronal sequences (O’Neill, Boccara, Stella, Schoenenberger, & Csicsvari, 2008). This orchestrated activity aids in the transmission of compressed hippocampal representations to distributed circuits, thereby reinforcing the process of memory consolidation (Ego-Stengel & Wilson, 2010). 

One might envision rhythmic data packaging and transmission protocols for AI too. Just as a fluid dance can captivate and optimize movement, guiding AI systems to mirror the theta-gamma rhythm might enhance collaborative efficiency. This perspective advocates for AI systems to communicate with a unified rhythm, akin to brain functions. In theory, AI neural networks could adopt an oscillatory transmission method inspired by the theta-gamma code.  

Recent AI experiments, particularly those involving OpenAI’s GPT-4, unveil intriguing parallels with evolutionary learning. Unlike traditional task-oriented training, GPT-4 learns from extensive datasets, refining its responses based on the accumulated ‘experiences’; furthermore pattern recognition by GPTs parallels pattern recognition by layers of neurons in the brain. This approach mirrors the adaptability observed in natural evolution, where organisms refine their behaviors over time to better resonate with their environment. 

The application of evolutionary algorithms within AI systems, exemplified by NEAT (NeuroEvolution of Augmenting Topologies), introduces a dynamic dimension to the evolution of neural network architectures. NEAT undergoes a process akin to biological evolution, in which neural network structures evolve across generations in response to challenges present in diverse datasets. This evolutionary mechanism reflects a continuous process of adaptation, shaping the architecture and function of the neural networks. It is crucial to recognize that while both biological evolution and RL in AI systems share the process of “gradient descent” to optimize policies through incremental improvements. The underlying principle of gradient descent involves the iterative refinement of parameters toward an optimal solution. In the context of both biological evolution and AI, this implies a continuous pursuit of improved functionality or fitness. 

In the case of NEAT and similar AI systems, the adaptive changes in neural network architectures implement a form of optimization driven by the specific objectives and challenges encountered. The iterative nature of these adaptations within a defined optimization framework, supports the overarching objective of efficiency and functionality.

From Brain Waves to AI Frequencies 

Human and nonhuman brains resonate along a continuum of frequencies, classically defined as spanning delta (slow waves) to gamma (fast waves). Resonant brain waves arise from coordinated activity across widely-distributed circuits, and have been linked to perception, attention,  learning, memory, empathy, and conscious state. Whether  brain waves  associated with these processes actively influence them remains hotly debated.  

By analogy, one might consider whether AI systems might develop analogous ‘frequencies.’ Could there be distinctive patterns or states within AI systems that, when synchronized, lead to enhanced performance or expanded capabilities?. Ultimately, AI will be designed to optimize functionality rather than replicate human brain wave dynamics, so potential parallels between human brain waves and ‘frequencies’ in AI systems remains metaphorical. 

Drawing inspiration from the architecture of the brain, neural networks in AI are constructed with nodes organized in layers that respond to inputs and then generate outputs. Activation patterns within these layers show intriguing similarities to the activation patterns of individual neurons in the brain (see references for specific studies).   

Exploring the ‘AI neural symphony’ offers potential avenues for achieving genuine AI-human synchrony and fostering deeper AI-AI collaborations. This could involve delving into the specific patterns of activations, understanding how different models contribute to the overall symphony, and identifying ways to enhance coordination for improved performance. Achieving synchrony in this context might refer to aligning AI processes with human cognition or ensuring seamless collaboration between multiple AI models, leading to more effective and sophisticated outcomes.  

In the realm of human neural synchrony research, investigating the role of oscillations has proven to be a pivotal area of interest. High-frequency oscillatory neural activity stands out as a crucial element, demonstrating its ability to facilitate communication between distant brain areas. A particularly intriguing phenomenon in this context is the theta-gamma neural code, showcasing how our brains employ a distinctive method of ‘packaging’ and ‘transmitting’ information, reminiscent of a postal service meticulously wrapping packages for efficient delivery. This neural ‘packaging’ system orchestrates specific rhythms, akin to a coordinated dance, ensuring the streamlined transmission of information, and it is encapsulated in what is known as the theta-gamma rhythm. 

This perspective aligns with the concept of “neuromorphic computing,” where AI architecture is based on neural circuitry. The key advantage of neuromorphic computing lies in its computational efficiency, addressing the significant energy consumption challenges faced by traditional AI models. The training of large AI models, such as those used in natural language processing or image recognition, can consume an exorbitant amount of energy. For instance, training a single AI model can emit as much carbon dioxide as five cars over their entire lifespan. (Strubell et al. 2019) Moreover, researchers at the University of Massachusetts, Amherst, found that the carbon footprint of training deep learning models has been doubling approximately every 3.5 months, far outpacing improvements in computational efficiency (Schwartz et al. 2019).   

Neuromorphic computing offers a promising alternative. By mimicking the architecture of the human brain, neuromorphic systems aim to achieve higher computational efficiency and lower energy consumption compared to conventional AI architectures (Furber et al., 2014). For example, IBM’s TrueNorth neuromorphic chip has demonstrated significant orders of magnitude in energy efficiency compared to traditional CPUs and GPUs (Merolla et al., 2014). Additionally, neuromorphic computing architectures are inherently suited for low-power, real-time processing tasks, making them ideal for applications like edge computing and autonomous systems, further contributing to energy savings and environmental sustainability. 

Implications for society  

Harnessing the potential of brain-to-AI and AI-to-AI forms of ‘synchrony’ offers promising avenues for society. With the seamless integration of human intuition and AI’s computational prowess, complex challenges can be approached with a hybrid analytical-creative-empathical strategy. This synergy could lead to solutions neither could achieve on their own. 

As synchrony also implies real-time responsiveness, such a new paradigm of ‘synchrony’ might revolutionize our capacity to adapt in the face of increased uncertainty and societal challenges.  

From a customer engagement standpoint, synchronized AI interfaces might more precisely understand and, in some cases, anticipate user expectations based on advanced behavioral patterns.

In the realm of training and skill development, synchronized AI has the potential to personalize learning experiences based on an employee’s unique learning curve, facilitating faster and more effective skill acquisition. This approach could significantly enhance onboarding, and development processes, especially when considering neurodiversity, by tailoring training modules to meet individual needs and preferences in an unprecedented way.  

From a customer engagement standpoint, synchronized AI interfaces might more precisely understand and, in some cases, anticipate user expectations based on advanced behavioral patterns. This capability enables businesses to refine marketing strategies, product recommendations, and customer support, thereby resonating more deeply with consumers in an unprecedentedly inclusive way. 

For operational efficiency, especially in sectors like manufacturing or logistics, AI systems working in coordination with each other can optimize processes, reduce waste, and strengthen the supply chain. Although true AI-to-AI synchrony isn’t fully realized yet, current capabilities that mimic or represent a primitive form of intelligent machine-to-machine synchrony allow for significant advancements in supply chain fluidity and resiliency. 

This would lead to increased profitability, with an ever-met greater ability for sustainability considerations integrated – with regards to waste reduction or routes and processes that reduce carbon emissions – into process design from the onset. In risk management, synchronized AI systems analyzing vast datasets collaboratively might better predict potential risks or market downturns, equipping businesses and other organizations to prepare or pivot before a crisis emerges to limit all related social and societal impact. Likewise, synchronized AI systems could provide insights for more efficient urban planning and environmental protection strategies. This could lead to better traffic management, energy conservation, and pollution control, enhancing the quality of life in urban areas.  

In various domains beyond business, deployment of AI with a prosocial orientation holds immense potential for the well-being of humanity and the planet. Particularly in healthcare, synchronization between the human brain and AI systems could usher in a revolutionary era for patient care and medical research. Recent studies highlight the positive impact of clinicians synchronizing their movements with patients, thereby increasing trust and reducing pain (Wager et al.,2004). Extending this concept to AI chatbots or AI-enabled robotic caregivers that are synchronized with those under their ‘care’ holds the promise of enhancing patient experience and improving outcomes, as evidenced by recent research indicating that LLMs outperformed physicians in diagnosing illnesses, and patients preferred their interaction (Mollick, 2023).  

In the educational domain, the integration of AI systems with a focus on synchrony is equally promising. Research demonstrated that synchronized brain waves in high school classrooms were predictive of higher performance and happiness among students (Dikker, et al., 2017). This study underscores the significance of neural synchrony in the learning environment. By leveraging AI tutoring systems capable of detecting and responding to students’ cognitive states in real-time, education technology can potentially replicate the positive outcomes observed in synchronized classroom settings.  Incorporation of AI systems that resonate with students’ brain states has the potential to create a more conducive and effective learning atmosphere, optimizing engagement and fostering positive learning outcomes.  

Perspectives and Potential  

The excitement surrounding the prospects of brain-to-machine and machine-to-machine synchrony brings with it a set of paramount concerns that necessitate scrutiny and that are all but technical. Data privacy emerges as a critical apprehension, given the intimate nature of neural information being processed by these systems. The ethical dimensions of such synchronization, particularly in the realm of AI decision-making, present complex challenges that require careful consideration (Dignum, 2018; Floridi et al., 2018). 

Expanding on these concerns, two overarching issues demand heightened attention. Firstly, the preservation of human autonomy stands as a foundational principle. As we delve into the era of brain-machine synchrony, it becomes imperative to ensure that individuals retain their ability to make informed choices. Avoiding scenarios where individuals feel coerced or manipulated by technology is crucial in upholding ethical standards (Russell, 2018). 

Secondly, the question of equity in access to these technologies emerges as a pressing matter. Currently, such advanced technologies are often costly and may not be accessible to all segments of society. This raises concerns about exacerbating existing inequalities (Diakopoulos, 2016). A scenario where only certain privileged groups can harness the benefits of brain-machine synchrony might deepen societal divides. Moreover, the lack of awareness about these technologies further compounds issues of equitable access (Kostkova et al., 2016). 

Addressing these concerns is not only an ethical imperative but also crucial for the long-term sustainability and acceptance of brain-machine synchrony technologies. Failing to consider issues of human autonomy and equitable access could lead to unintended consequences, potentially widening societal gaps and fostering discontent. A comprehensive and responsible approach to these challenges is essential to ensure the positive impact of these technologies on society at large.  

The integration of AI with human cognition marks the threshold of an unprecedented era, where machines not only replicate human intelligence but also mirror intricate behavioral patterns and emotions. The potential synchronization of AI with human intent and emotion holds the promise of redefining the nature of human-machine collaboration and, perhaps, even the essence of the human condition.  

This interconnectedness could span micro (individual), meso (community/organization), macro (country), and meta (planet) arenas, creating a dynamic continuum of mutual influence (Walther, 2021). 

The outcome of harmonizing humans and machines will significantly impact humanity and the planet, contingent upon the guiding human aspirations in this pursuit. This raises a timeless question, reverberating through the course of human history: What do we value, and why?  

A crucial point to emphasize is that the implications of synchronizing humans and machines extend far beyond the realm of AI experts; it encompasses every individual. This underscores the necessity to raise awareness and engage the public at every stage of this transformative journey. As the development of AI progresses, it is essential to ensure that the ethical, societal, and existential dimensions are shaped by collective values and reflections, avoiding unilateral decisions by Big Tech that may not align with the broader interests of humanity. What happens next shapes our individual and collective future. Getting it right is our shared responsibility.

About the Authors

Hamilton MannHamilton Mann is a Tech Executive, Digital for Good Pioneer, keynote speaker, and the originator of the concept of artificial integrity. Mann serves as the Group Vice President of Digital Marketing and Digital Transformation at Thales. He is also a Senior Lecturer at INSEAD, HEC Paris, and EDHEC Business School, and mentors at the MIT Priscilla King Gray (PKG) Center. He writes regularly for Forbes and Les Echos, and has published articles about AI and its societal implications in prominent academic, business, and policy outlets such as Stanford Social Innovation Review (SSIR), Knowledge@Wharton, Dialogue Duke Corporate Education, INSEAD Knowledge, INSEAD TECH TALK X, I by IMD and the Harvard Business Review France. He hosts The Hamilton Mann Conversation, a “Masterclass” podcast on Digital for Good. Mann was inducted into the Thinkers50 Radar as one of the 30 most prominent rising business thinkers globally for pioneering “Digital for Good”. He is the author of the book Artificial Integrity (Wiley, 2024)

Cornelia WaltherCornelia C. Walther, PhD, Director POZE@ezop, a global alliance for systemic change that benefits people and the planet. As a humanitarian practitioner, she worked for two decades with the United Nations in emergencies in West Africa, Asia, and Latin America with a focus on advocacy and social and behavior change. As a lecturer, frontline coach and researcher, she has been collaborating over the past decade with universities across the Americas and Europe. She presently is a Senior Visiting Fellow at the Wharton initiative for Neuroscience (WiN)/Wharton AI and Analytics; and the Center for Integrated Oral Health (CIGOH), and is affiliated with MindCORE and the Center for social norms and behavioral dynamics at the University of Pennsylvania as. Since 2021 her research focus is on AI4IA (Artificial intelligence for Inspired Action).  

Michael PlattMichael Platt, PhD, is Director of the Wharton Neuroscience Initiative and a Professor of Marketing, Neuroscience, and Psychology at the University of Pennsylvania. He is author of “The Leader’s Brain” (Wharton Press) and over 200 scientific papers, which have been cited over 23,000 times. Michael is former Director of the Duke Institute for Brain Sciences and the Center for Cognitive Neuroscience at Duke, and founding Co-Director of the Duke Center for Neuroeconomic Studies. His work has been featured in the New York Times, Washington Post, Wall Street Journal, Newsweek, the Guardian, and National Geographic, as well as on ABC’s Good Morning America, NPR, CBC, BBC, MTV, and HBO Vice. He co-founded brain health and performance company Cogwear Inc. He currently serves on multiple Advisory Boards including the Yang-Tan Autism Centers at MIT and Harvard and as President of the Society for Neuroeconomics. 


The post Brain – Machine Synchrony: A New Era of AI-Supported Human Collaboration and Societal Transformation appeared first on The European Business Review.

There Will Be No Human-Centered AI without Humane Economics. https://www.europeanbusinessreview.com/there-will-be-no-human-centered-ai-without-humane-economics/ Thu, 04 Jul 2024 06:58:03 +0000


By Hamilton Mann

Digital technologies, while enhancing efficiencies, can exacerbate environmental degradation. AI, despite its potential to support sustainable development, is not exempt from these concerns. It is time to create a new model, to agree that the economic value of any business should only be worth the value it creates for society.

Many economic actors around the world are seeking the next generation of business models: focused, passionate, and stubbornly determined to take advantage of digital technologies, particularly AI, to invent the next Amazon, the next Facebook, the next Google, or the next of today’s digital giants.

States, too, have their heads and eyes riveted on this digital gold rush, now an AI gold rush, and they exacerbate the headlong race, dominated by a vision of economic progress in which new technologies are the Holy Grail.

The market capitalization of the top 100 global companies reached $40 trillion by the end of March 2024, a five-year record. Technology giants recorded the highest growth of all, representing nearly one-third of this total market capitalization on their own, largely due to the excitement surrounding AI.

Paradoxically, the more our society engages in this quest for new business models, thinking it through the prism of new digital technologies, including those supposed to artificially augment human intelligence, the more we drift away from inventing what could truly be a renewal of companies’ business models.

Too many companies are so irresponsible that their value creation model destroys precious resources.

So, in this all-digital, and nowadays all-AI, frenzy, even while many are convinced of the contrary, a succession of business creations is inexorably perpetuated, and the model is, in reality, nothing new. Companies’ business models have remained the same, built from the same reference, the same mold, the same template. This model is called “profit and loss.” And the losses include more than we may think.

Growth for growth, profit for profit, gives the illusion of value but does not create it.

Too many companies are so irresponsible that their value creation model destroys precious resources, accentuating the handicaps of our societies, financing this one-sided value with a debt to humanity, a debt that no balance sheet recognizes, that no bank will come to claim, and that they will never repay.

At the heart of the value system of most businesses in the world, inherited from capitalism, this model, often relying on unsustainable consumption of the planet’s resources, has over the centuries become a source of formidable value destruction, rampant at a speed made exponential today by digital technologies and at risk of being accelerated further by AI.

This model, reproducing itself over time with the aim of creating wealth, has on the contrary become the major cause of the blind impoverishment of what is most precious to humans, such as air or water, all the while appearing to innovate on itself.

Developed to its climax, it has not only created inequalities irreconcilable with the right to dignity and a decent life for many human beings, but has also indiscriminately precipitated global pollution and the depletion of the natural resources necessary for all human life on Earth, up to the critical stage of extreme and nearly irreversible situations.

At its peak stand the hegemonic platforms, erected by some as models of entrepreneurial success and often cited as exemplars of the new digital economy that so many seek to imitate.

This trajectory is well-documented in the Intergovernmental Panel on Climate Change (IPCC) Special Report on Global Warming of 1.5°C, which underscores the urgent need for systemic transformation to mitigate the catastrophic impacts of climate change driven by industrial and economic activities.

There is no need for new studies to confirm that digital technologies, while enhancing efficiencies, contribute to increased energy consumption and e-waste, exacerbating environmental degradation.

AI, despite its potential to support sustainable development, is not exempt from these concerns and can both advance and hinder progress toward the United Nations Sustainable Development Goals, particularly those related to environmental sustainability.

Like the generations before us, we continue to move within this model in every possible way and to pass it on like a hereditary disease. Amplified by the multipliers of digital technologies, and now AI, not only globalization but the unthinking technologization of our world allows the counterproductive effects of these unsustainable business models to reach record highs.

Everything happens as if all the entrepreneurial ingenuity of which humanity is capable, all the intelligence that characterizes human genius, had for the most part been confiscated to serve a single cause: more profits for more money, more money for more profits, even more profits for even more money, in a loop, without any other priority, without any other consideration.

More of the same business models, made even more productive and efficient with AI, are not desirable.

We may think that we are on our way forward because of the rise of corporate social responsibility and sustainability trends, but we’re not there yet.

Growth for growth—that which does not participate in making the weakest in our societies grow—profit for profit—that which does not benefit those who need it most—in reality, gives the illusion of value, but does not create it. It creates a loss of reference, a loss of meaning, a loss of who we are, of our humanity.

Long term and short term do not turn into opposing paths as often as we like to believe.

We may think that we are on our way forward because of the rise of corporate social responsibility and sustainability trends, but we’re not there yet. All these trends and concepts change absolutely nothing about the way a company is listed on the stock exchange.

While societal pressure is rising, corporate responsibility and sustainability have emerged as a necessary condition for protecting firms’ reputations, but they are still generally treated as a necessary expense, merely because anything more is seen by many as a misappropriation at the shareholders’ expense.

It changes nothing about the way they are valued by the markets.

It changes nothing about the way a company must be run to produce what currently defines profit.

It remains, in effect, a way of staying stuck in the business model we know, while adding complementary societal prerogatives that are not core to the business and have no life-or-death effect on a company’s valuation, because all of this stays, in the end, simply peripheral.

We also may think that it is a “David versus Goliath” type of issue, a conflict between the short term and the long term, a never-ending tension between the shareholders’ value and the stakeholders’ value, but it’s not.

The difference between long term and short term does not necessarily turn them into opposing paths. We must move beyond trade-offs. Both must become one.

As long as there is profit as defined in business today, with sustainable development kept apart from what defines those profits; as long as there is value for shareholders as defined in business today, with value for stakeholders kept apart from what defines that value, we will not be able to lay the cement of an economy at the service of the progress of our societies and humanity.

We must connect the dots looking forward; not connect them looking backward.

Yet we still lack an overall framework for guiding these efforts. However, not knowing what to do does not excuse, and should never again excuse, continuing to do what we already know we should not do.

Steve Jobs said, “You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.”

He was wrong. There are things, crucial to humanity and the world, for which we must see ahead of time how the dots connect, because we cannot afford to wait for the consequences in order to become aware, afterward, of what we have done.

When it comes to value creation in the greater interest of humanity, I strongly disagree with him.

We know that all companies that do not oblige themselves to offer equal opportunities for career advancement to both men and women destroy a value dear to the development of humanity, because they participate in creating a society where women and men do not have equal freedom to lead their lives, thus laying the foundations of a world where the two are not equal human beings.

We know that all companies that do not oblige themselves to be governed by a representation that reflects the diversity of the society whose demand allows them to exist exclude many human interests, causes, and considerations essential to the development of our societies, and so develop with far greater limits and far greater handicaps, constraining and preventing progress in society.

We know that all companies that do not oblige themselves to think through, organize, and implement the supply chains of their products and services with a responsible and ethical use of all the resources required for their production and distribution, taking into account the social, societal, environmental, and human impact at each stage of the process, including the management of the end of life of the products and services they create, and ensuring an impact that is, if not zero, at least credited to rather than debited from the future of humanity, cause what weakens human capital, degrading our health, our well-being, and our lives, in the present and for the generations to come.

We know that all companies that do not oblige themselves to take deliberate care not only of the physical health but also, and especially, of the mental health of their employees place themselves among the causes of a society that is sick, deviant, and dangerous to itself; they actively contribute to the increase in life’s accidents and inevitably create fertile ground for violence in our societies, whether domestic, against children, moral, or sexual.

We know that all companies that do not oblige themselves to recruit employees by giving everyone an equal chance at a job, and, even more, that do not practice positive discrimination to compensate for the glaring injustices that divide society, fuel a system in which extreme inequality grows, extreme poverty is perpetuated, and the extreme temptation to break the law becomes a survival option.

The economic value of a business should only be worth the value it creates for society.

From the perspective of value creation in the greater interest of humanity, Steve was wrong: we can, and we have to, connect the dots looking forward, because we cannot act by connecting them only in hindsight, simply trusting that in the future the dots will somehow connect by chance on their own.

Inventing a new standard to define economic value is possible.

Changing the rule of what matters in defining the value creation of any company, by valuing its way of serving society and life in society rather than serving the growth of growth and the profit of profits, is possible.

This work requires a new kind of cooperation between business and the State: a cooperation in which the company acts in the service of the general interest, and in which the State acts as an entrepreneur, presiding with the intelligence of its citizens over the destiny of the common good.

For an alliance of companies and the State to produce a value that exceeds the sum of what each could bring separately while acting within its current comfort zone, each must extend its field of action into the sphere of the other. The company must take the initiative to bring business interests closer to those of society. The State must help build the general framework of a new kind of capitalism that will guide these efforts, in search of new complementary stances and measures for what should define profit, along dimensions such as the nine that follow (a brief illustrative sketch appears after the list).

Ethical Value

To what extent should a company implement and enforce ethical guidelines and practices across all operations to maintain trust and integrity, thereby being considered profitable?

Well-being Value

How good must the morale of a company’s employees, its working conditions, and the responsibility of its management system be for the company to be considered profitable?

Health Value

How committed should a company be to promoting physical and mental health, not only within its workforce but also through its products and services, to be considered profitable?

Diversity Value

To what extent should company boards make room for diversity, so as to maximize the positive impact the company can bring to the society in which it grows, for it to be considered profitable?

Generational Value

How involved should a business be in providing employment for young people, seniors, vulnerable and disabled people to be considered profitable?

Economic Value

To what extent should a business contribute to the fight against extreme poverty and inequalities in the country where it is established to be considered profitable?

Innovation Value

How much should a business invest in the social innovation of the society of which it is a part to be considered a profitable business?

Community Value

How actively should a company engage with and contribute to the local communities in which it operates to be regarded as a socially responsible and profitable business?

Environment Value

How ethical and environmentally friendly should a company’s products and services be — from how they are made, to how they are managed at their end of life — for a business to be considered profitable?
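Taken together, these nine questions sketch a scorecard that could sit alongside the profit and loss statement. Purely as an illustration, and assuming equal weights, scores from 0 to 1, and a simple multiplicative blending rule that nothing here prescribes, such a scorecard might look like this in Python:

from dataclasses import dataclass

# The nine value dimensions above; names are abbreviated for illustration.
DIMENSIONS = [
    "ethical", "well_being", "health", "diversity", "generational",
    "economic", "innovation", "community", "environment",
]

@dataclass
class SocietalScorecard:
    scores: dict[str, float]  # each dimension scored in [0, 1]

    def societal_value(self) -> float:
        # Unweighted mean across the nine dimensions (an assumption).
        missing = [d for d in DIMENSIONS if d not in self.scores]
        if missing:
            raise ValueError(f"unscored dimensions: {missing}")
        return sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def recognized_value(financial_profit: float, card: SocietalScorecard) -> float:
    # Toy rule: economic value only counts in proportion to societal value.
    return financial_profit * card.societal_value()

# A firm with strong profit but middling societal scores sees its
# recognized value discounted accordingly.
card = SocietalScorecard(scores={d: 0.5 for d in DIMENSIONS})
print(recognized_value(100.0, card))  # -> 50.0

The exact blending rule matters far less than the principle it encodes: no dimension can be ignored without discounting the whole.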

The value businesses seek to create today determines that of tomorrow’s society.

The two most important things in any company do not appear on its balance sheet: its reputation and its people. Henry Ford got it right in saying that, especially when it comes to the people. It is time.

In this artificial intelligence age, where more intelligence should be made accessible and available, it is of the utmost importance that we use it to build a conception of company value creation defined by companies’ positive impact on the society in which they operate and live.

It is not about what some may call social responsibility, sustainable development, or companies with purpose. It is not about philanthropy either. It is about going beyond. It is about changing our conception of what a “business model” should be.

Market capitalization is crucial from this perspective. It could play a key role in reflecting the value of a company’s societal contribution in its valuation, but it remains dominated by an overwhelming bias toward financial performance.

It’s humanity. This is the most important thing about a company’s balance sheet.

There will be no so-called responsible, trustworthy, or accountable digital technologies—AI included—without responsible, trustworthy and accountable companies with regard to society.

There will be no human-centered AI without humane economics.

The economic value of a business should only be worth the value it creates for society.

The challenge and the urgency oblige us to surpass ourselves and to think big because the definition of the value that businesses seek to create today determines that of the society in which we will live tomorrow.

It calls for a “New Different”, where anything digital or AI must serve the greater good.

About the Author 

Hamilton Mann is a Tech Executive, Digital for Good Pioneer, keynote speaker, and the originator of the concept of artificial integrity. Mann serves as the Group Vice President of Digital Marketing and Digital Transformation at Thales. He is also a Senior Lecturer at INSEAD, HEC Paris, and EDHEC Business School, and mentors at the MIT Priscilla King Gray (PKG) Center. He writes regularly for Forbes and Les Echos, and has published articles about AI and its societal implications in prominent academic, business, and policy outlets such as Stanford Social Innovation Review (SSIR), Knowledge@Wharton, Dialogue Duke Corporate Education, INSEAD Knowledge, INSEAD TECH TALK X, I by IMD, and Harvard Business Review France. He hosts The Hamilton Mann Conversation, a “Masterclass” podcast on Digital for Good. Mann was inducted into the Thinkers50 Radar as one of the 30 most prominent rising business thinkers globally for pioneering “Digital for Good”. He is the author of the book Artificial Integrity (Wiley, 2024).

The post There Will Be No Human-Centered AI without Humane Economics. appeared first on The European Business Review.

Digital for Good: AI fit for progress https://www.europeanbusinessreview.com/digital-for-good-ai-fit-for-progress/ Wed, 24 Apr 2024 07:35:02 +0000

The post Digital for Good: AI fit for progress appeared first on The European Business Review.

Technology, says Hamilton Mann, is not in itself sustainable or positive – it depends on how we choose to develop and apply it. 

In this illuminating Radar 2024 LinkedIn Live session, Hamilton underscores the importance of incorporating different perspectives and the key role of citizen identity when deploying AI systems. He also advocates for an ‘artificial integrity’ approach, where AI technologies are designed not just for intelligence but also to align with human ethics, values, and principles across diverse cultures. This requires cooperation across governments, businesses, and individuals to avoid a ‘digital divide’ and to ensure technological progress benefits all segments of society equitably.

Transcript

Stuart Crainer:

Hello, I’m Stuart Crainer, co-founder of Thinkers50. Welcome to our weekly LinkedIn Live session, celebrating some of the brightest new stars in the world of management thinking. In January, we announced the Thinkers50 Radar Community for 2024. These are the upcoming management thinkers we believe individuals and organizations should be listening to. The 2024 list was our most eclectic and challenging yet. This year’s Radar is brought to you in partnership with Deloitte and features business thinkers from the worlds of fashion, retail, branding, and communications, as well as statisticians, neuroscientists, and platform practitioners from the Nordics to New Zealand and Asia to America. Over the next few weeks, we will be meeting some of these fantastic thinkers in our weekly sessions, so we hope you can join us for some great conversations. As always, please let us know where you are joining us from and send any comments, questions, or observations at any time during the 45-minute session.

Our guest today is Hamilton Mann. Hamilton is the group vice-president of digital marketing and digital transformation at Thales, a global leader in advanced technologies investing in digital and deep tech innovations: connectivity, big data, artificial intelligence, cybersecurity, and quantum technologies. Hamilton spearheads initiatives that drive enhanced customer engagement, excellence in integrated campaigns, and sales and marketing effectiveness. He’s a senior lecturer at INSEAD and elsewhere. He actively participates in driving the advancement of digital technologies for positive societal change as a mentor at the MIT Priscilla King Gray Center, and hosts the Hamilton Mann Conversation, a masterclass podcast about digital for good. Hamilton, welcome. Your work spans a broad spectrum of subjects, including digital transformation, artificial intelligence, sustainability, innovation, business models, and customer-centric strategies. Is there a golden thread which holds it all together?

Hamilton Mann:

Yes. Hi, Stuart, and I’m very happy to have the opportunity to discuss this with you. I think that’s a very good question, because it gives me the opportunity to jump right away to one key overarching topic, which is ‘digital for good’. I think the common thread between all those different elements, pieces, and parts is very much the question of how we can make sure that the use of the technologies we can leverage serves the cause and the mission of delivering positive outcomes in societies.

Stuart Crainer:

And where are we on that, do you think, in digital for good?

Hamilton Mann:

I think we are doing quite well in terms of direction, because more and more we have a consciousness growing in the heads of many leaders around the world, with companies making sure that sustainability and society are very much part of their strategic agenda. But, again, yes, we do have some improvement to make. Let’s not be shy about this aspect. We have some improvements to make. This is also something on which we need to continue our education. As we see technology evolving, we also need to evolve and raise our level of proficiency, understanding how these new technologies bring forward new opportunities to create positive impact in societies. So I will say, let’s continue the work and let’s continue to improve ourselves. This is a journey.

Stuart Crainer:

But I suppose the continuing question must be how can we ensure that technology is a force for good? What do we need to do? Is it the work of governments? Is it the work of corporations? Is it the work of a body? Is it the work of individuals?

Hamilton Mann:

I think the good news in that respect is that this is very much a collective work. This is the work of governments, this is the work of corporations and companies, and this is the work of each and every one of us as users. This is very much work on which each and every citizen, each and every person, has something to say and, let’s say, a kind of room in which to play. So this is very much a collective type of work, and the more we advance with technology and the kind of power that technology can bring to society, the more it is going to be critical for us to be able to work not in a siloed form of organization, but transversely.

So whether it is mixing disciplines, mixing perspectives, mixing cultures, or mixing diversity in the way we think and the way we see the world, I think what is at stake in the era in which we are living is very much how to work transversely and in a diverse manner, so that we take a good grasp of the intelligence that we as humans can put on the table when we are different and when we bring different perspectives.

Stuart Crainer:

I suppose that an issue there is that the power in the technology space still resides in a very small number of companies in Silicon Valley. How do we control that going forward? And do you see that changing?

Hamilton Mann:

It is true that, as you said, the reality is that we have some pockets of power in some parts of the world, and the US, of course, is definitely one of them when it comes to technologies. I think this is also back to the point I was making about seeing the perspective not from a silo approach, but seeing the systemic effect of all that we are doing. Technology is a system, but it is not a system that lives in a vacuum. It is a system that takes place within another one. And the other one is broader, is bigger: it is society, it is the world. And so the point here is to figure out that, even though we have some pockets of power when it comes to technology in some parts of the world, the system in which those different pockets of power live is global: it is the world, it is society, and it touches most of us.

So, in terms of the opportunity to participate in the integration of those different technology systems into the broader system, which is society, in terms of the opportunity to participate in that conversation and in that installation, there are a lot of us who have the power to influence and guide the right direction for how those new technologies should be defined as they integrate into the broader system, which is basically our societies.

Stuart Crainer:

And thank you to everyone who’s joined us so far. I see people from Germany, India, the States, Poland, and elsewhere. Please send over your questions for Hamilton at any time during the session and we will pass them on. Hamilton, what’s really good about your perspective on the world, I think, is that it’s very positive and optimistic, obviously not naively, but you see technology as a force for good, and the challenge for us is to create systems within society and within corporations to make sure we deliver on that.

Hamilton Mann:

Yeah, but I will say this: sometimes when we talk about digital for good, we tend to think that it is just about the positive impact that technology can bring to the world, which is, of course, a key piece. But to really act on it and execute real strategies when it comes to digital for good, it starts by acknowledging the fact that technology in itself is not inherently sustainable or positive, for many reasons. There is a fair contribution of technologies to the environmental impact and debt, and we need to acknowledge that.

We also need to acknowledge the fact that technology by itself does not have exclusivity when it comes to innovation and when it comes to progress. And actually, sometimes technology, even advanced technology, can be the opposite of progress. So it very much starts by looking at the risks, looking at the externalities that can come with the good intentions we can foresee for technologies, and trying to figure out how to cope with those externalities, how to be very objective in terms of how we are going to deal with those different… unforeseen impacts, because they might happen, because they will happen, and we need to have a plan for that.

So this is, to me, a point where we need to look at the glass half full, but not only that; we also need to look at the glass half empty, to make sure that all the externalities that naturally come with the technologies are managed in a good manner. And this, to me, is what makes the difference between a good technology transformation, no matter what kind of ecosystem it touches, and a digital-for-good approach.

Stuart Crainer:

So it is a really interesting statement that technology is not in itself sustainable or positive.

Hamilton Mann:

Yeah, definitely. I think it starts with taking a step back and looking at reality as it is. We all know that technologies come with a form of externality in many respects. If we take the example of AI, we now know that AI comes with great opportunities and great advancements that we can foresee for society in many domains, but we also know that it comes with major water consumption, that it is a key contributor when it comes to electronic waste, and that it is a key contributor when it comes to energy consumption, etc. So the question is very much about not being naive in terms of ‘oh, this is good’ or ‘this is bad’, because real life is a mix of the two.

So we need to figure out what the service and the good aspect of it is that we can deliver to societies, to participate in the progress that we seek. And this is very much where the real approach comes into play: considering all the different impacts and negative externalities that also come with it, and looking at how to manage them in a proper way.

Stuart Crainer:

And so technology is a force for democratization of knowledge and of societies?

Hamilton Mann:

I think the short answer would be yes. But, again, complexity… this is always where we can fine-tune the approach to get the real perspective, the less biased perspective, if I may. When it comes to education, of course there are some great opportunities in leveraging technology, just because we know that some technologies, like simple video conferencing, are of great help when it comes to sharing, to accessing knowledge, to reaching people from all around the world to exchange and share ideas, and to spreading the opportunities of leveraging education in many respects. The reality is that this is absolutely not true in each and every country around the world.

We know that in many countries around the world, access to Wi-Fi networks, electronic devices, and so on is not democratized. So when we think about the leverage we can gain by using technologies to give even broader access to education, it also means that some parts of the world will stay behind, because the technologies and the means to do so are not there or not widespread so far. Again, it means that beyond technologies, looking at the system, and looking at technology as a system that needs to be integrated into societies, which is the broader system, the question is: how do we democratize education when the barriers to having technology working and present in some parts of the world are very high?

So this is always a form of rationale to have in mind: figuring out that everything is not equal so far. When you do something with technologies in one part of the world, it doesn’t mean that you can take the same approach and have the same positive impact in each and every part of the world. You need to consider those different perspectives and make sure that you do not create a form of digital divide, thinking that you are having a positive impact on the world in one respect, while you’re only thinking about the world as it is formed in your head, not the world as it is.

Stuart Crainer:

Frank Calberg from Switzerland develops that point, Hamilton, he says, “What concrete initiatives do we need to take to make sure that technological advancements increase incomes for people with low incomes and avoid even higher differences in incomes between people?” What needs to happen?

Hamilton Mann:

I think so many things can be done… of course, let’s say there is no single silver bullet that can solve that issue. But what comes to my mind is that we know that in many organizations, as far as our societies are concerned, identity is key: the fact that we can give each and every person, each and every citizen in a given country, the opportunity to have an identity. It sounds like something quite obvious in many countries around the world, but the truth is that this is not something that is accessible in many other parts of the world. And when you are able to provide that identity for each and every person who lives in a given country, it starts to create a form of society where you can organize many things.

You can organize social security, you can organize healthcare, you can organize employment, etc. All the different layers that we need as organized societies to structure the way society works somehow start with the fact that we are able to know that you are Stuart Crainer, that you have a social security number, that you have an ID card, et cetera. So one point that comes to my mind, in terms of making sure that the use of technologies serves to further the opportunity of generating incomes for many populations in all the different parts of the world, could be advancing our agenda when it comes to giving citizens around the world the power of having a formal administrative identity.

Stuart Crainer:

Where is best practice, then, in understanding technology and its power? Which countries do you think really understand its power and are harnessing it successfully?

Hamilton Mann:

I think there are many great examples in many countries. You always see some kind of best practice in different sectors and different activities. If you take the example of Asia, China, etc., they have, in a very interesting way, been leveraging technologies when it comes to organizing some kinds of services through social networks, so they leverage social networks to deliver services. I think ‘uberization’, as we call it, with the many applications that live on our smartphones and tablets, has also been, in some respects, a good way of easing the different tasks of our everyday life, which also contributes a form of good to societies.

So I will not put a spotlight on particular countries that are good examples of how to do it, but rather think in terms of how we can leverage the advancement of technologies today to tackle some key areas, in terms of domains or sectors: thinking about transportation, thinking about healthcare, thinking about employment. Those critical pillars are key to any society, there is progress to be made on them from one country to another, and there are some great opportunities emerging with AI and other new technologies coming up.

Stuart Crainer:

Frank comes back with another point, which I think is an interesting one: what are the advantages and disadvantages of surveillance technologies, and how do we democratize the development of ethics in relation to the development and use of surveillance technologies? I think this relates, Hamilton, to your work, and there’s a really good Forbes article by you, on artificial integrity, which is a really fantastic term. Perhaps you could talk about that, because obviously, as Frank says, there are a lot of issues around technology and the ethics of a lot of the technology.

Hamilton Mann:

Yeah, definitely. When it comes to technologies, surveillance-type applications are one use case that is, of course, very much on the minds of many of us. I think it is very much about how to embrace those kinds of use cases and applications of technologies in our day-to-day life in a way that is ethical, in a way that is in sync, in harmony, with the values that we want to push forward. And when you put the point this way, of course, there is no one single answer. And this, to me, is the first point that is very important to have in mind: there is no one single answer to that question, because this is the encounter between technology on one hand and the values of a given part of the world on the other, and the harmony between the two.

So you will have some countries that push the level of how they want to leverage technologies when it comes to surveillance for some good reason, let’s say a security aspect, for example, and they will push that level to a certain degree because they find a form of harmony with the values they push forward in that given part of the world. It doesn’t mean that it is not going to be ethical. It is going to be a form of ethics from their perspective, from their culture, and from their value stance. If you look at another part of the world and try to implement the exact same technologies without taking into account the culture and the values at the heart of that part of the world, it will not work, because, of course, you are dealing with a different perspective.

So this is also where comparison is absolutely not a reasoned way of approaching this question. And this is also why I’m currently developing this concept of artificial integrity, because of course being intelligent is great, it helps to do many things, many tasks, but at the end of the day, we want more. We want more than the task being done. We want the task done the proper way, the right way. And with the proper way, the right way, comes the fact that we want things done in alignment with values, with principles, with ethics, et cetera. It doesn’t make sense to just have things done intelligently in a vacuum, absent any form of ethics and values, and so on.

To me, this is basically the study I’m pursuing currently: how to make sure that the systems and technologies that we develop and implement will not only be developed and implemented with intelligence, will not only deliver intelligence, but will also be designed with the purpose of being in harmony with a form of integrity that we need to preserve. So, back to the question: there are going to be different answers depending on the countries and the people you are talking to, and this is also where we need to recognize that technological advancement is not, as we love to say in marketing, one-size-fits-all. The same algorithm, the same type of system, will mean different things depending on whether you are living in China, Africa, France, et cetera, and so we need to have that respect.

We need to have the respect to look people in the eyes and to respect the ethical and value stance that structurally composes a given culture, because they will look at the technologies in different ways than we might, and they will do the work of integrating whatever form of advancement, power, or progress can come with them, taking into account their values, their ethics, their point of view, et cetera. I also think it is true that there has to exist, or at least has to be considered, a form of overarching… a universal form of principles, ethics, and values: taking into account the different perspectives that exist all around the world, which compose the richness of who we are, but also taking into account some of the universal guiding principles that come into play to define those different cultures and values, and that sometimes create bridges between one and another.

I think this is the challenge that we have today with the new technologies: not only, technologically speaking, developing AI that will be intelligent and will mimic what we can do as humans, but also being able to think about how those technologies will embrace the diversity that exists, considering the values, the different ethics, and the cultural aspects that exist all around the world, because, at the end of the day, the technologies we develop are going to mirror the different stances that we hold from an ethical and value perspective. That’s number one.

And we are living in a connected world, so even though we differ in perspective and in ways of looking at things, we also need some form of overarching principles that allow us to have a common way of seeing life and a common way of seeing the world. This, to me, is what is at stake with these different applications of new technologies, taking surveillance as an example, but there are many others; we could talk about drones and so on.

Stuart Crainer:

It’s a very complicated and interesting area. Obviously lots of comments are coming through. Jonathan, who’s joining us from the Bay Area in the States, says, “Technology can democratize information sharing and knowledge transfer, and we need to decrease the digital divide, increase infrastructure bandwidth, improve health, digital, and financial literacy.” I think we agree with you, Jonathan.

Hamilton Mann:

Yeah, absolutely.

Stuart Crainer:

Jonathan also says it’s about intersectionality, which it may well be. Bogdan says, “Value creation for all stakeholders versus value creation for shareholders, which is stronger when it comes to technology?” That’s a question that puts you on the spot, Hamilton.

Hamilton Mann:

Yeah, I think so. And thank you very much, Bogdan, for that question. If I may, we have to challenge ourselves when it comes to looking for a very simple approach, assuming the answer will be yes or no, good or bad, stakeholder or shareholder, et cetera. This is very much something that will challenge us, I know that, and it is true from an organizational perspective, so of course it is absolutely true from a country perspective, because the more we advance with technologies and the capabilities they can bring to our society, the more we need to think cross-discipline, cross-interest, cross-perspective. So the question becomes more about how to serve the given ecosystem.

And, to me, shareholders and stakeholders are very much part of the ecosystem: finding the common point that reunites the different groups and parts that we sometimes see as separate entities, and looking at a holistic way of composing with those different groups. So we have to find a way, in many respects, to make sure that technology is not serving one group at the expense of the other, and can very much help to serve the ecosystem holistically. And to me this is maybe a point on which we also have work to do collectively. It’s not the technologies in themselves that will bring us the answers. It is very much our thinking, our mindsets, in terms of how we implement those technologies in our societies, and this is back to the digital-for-good approach: how do we implement these technologies in our societies, making sure that we are not serving one group at the expense of another, but holistically serving the ecosystem involved?

Stuart Crainer:

I see we’re joined by Simone Phipps, co-author of African American Management History, who’s championed the idea of cooperative advantage. And really what you are talking about, Hamilton, is a higher level of collaboration and cooperation.

Hamilton Mann:

Yeah, definitely. And I think this is very interesting, because when it comes to cooperation and collaboration, and I’m sure many people on the call know this, if I look at a given organization, this is still a challenge; it has always been a challenge for any leader to have cooperation and collaboration within a team. Let’s put aside technology for a moment. We are all people, humans, citizens in many respects, and we are interacting with each other. We have many situations and occasions where we are part of a team, part of a group, trying to achieve something together. It has always been a challenge to make that equation work: one plus one equals three. So now, when it comes to cooperation and collaboration, let’s make sure that we are inclusive enough to grasp the power that can be brought by the technology, inclusive enough to bring the new form of intelligence that the systems we call artificial intelligence can contribute to the collective intelligence.

This is a new form of cooperation coming up: a form of cooperation where not only do you have to deal with different brains and different perspectives coming from different human beings, but you also need to deal with the contributions that come from those artificial intelligence systems, which will add something to the cooperation and collaboration equation, and to what we call collective intelligence today. So I think what it forces us to do, in a way, is to accelerate our understanding of how to leverage cooperation and collaboration in our ways of interacting with each other, because the technology will not bring the answers. Again, it is very much going to be us as humans figuring out the right way of including a new form of intelligence coming from the technologies in the space of what we call collective intelligence, and looking at the efficient way of having that collaboration and cooperation interplay.

Another way to say that is, if I take the race going on today, with many companies all around the world trying to experiment with GenAI and AI, I think the companies that will be ahead of the pack will not be the ones that leverage the most sophisticated AI or technologies or platforms, et cetera. They will be the ones that have understood how to include this new form of intelligence coming from the systems in the overall system, which is the organization, and how to bring new forms of collaboration and cooperation into play. And this is going to be the challenge, in a way.

Stuart Crainer:

Surabh has got a difficult question for you. She says that Elon Musk said too much cooperation is a bad thing. Which… Elon Musk may well have said that, I don’t know. It’s not, is it? You can’t have too much cooperation?

Hamilton Mann:

Too much cooperation. I don’t know what the definition behind ‘too much cooperation’ is, but I would say that, to me, the point is very much about what we are trying to achieve. If we are very aligned on what we are trying to achieve, on the purpose, on the mission we are trying to execute or deliver, too much cooperation will never be a blocker to achieving that, as long as we make sure that we know how to solve that equation, one plus one equals three. Instead of saying too much cooperation might be a form of trap, I would frame it differently and say that too much bad cooperation is very much the trap.

So I think cooperation working is not just having people in the same place or on the same team, where you say, “Okay, they are on the same team, so they are cooperating, they are going to cooperate.” Maybe someone tells the team, “Let’s make sure that everyone on the team is cooperating.” It doesn’t work this way. And we all know that. It always makes me think of an example, and I’m sure we all have memories of this, from when we were very young, or from watching our own children: you come to a square and you have ten children, girls and boys, trying to play a match, football, for example, and you will observe very interesting things. Very quickly, the ball becomes the only focus, so nobody is looking at the goal. Every child is looking at the ball and trying to figure out how to make the best possible individual performance.

And of course they feel like they are cooperating, but we could say that there is some room for improvement in how effective they can be. So cooperation is not something that you just declare… and then it happens. It is very much something related to how you manage people, of course. And we know there are good managers and bad managers. Hopefully good managers have some form of recipe for approaching good answers and solving that equation, one plus one equals three. And, again, when it comes to the technologies, you need to figure out how to include them in that equation so as not to have something separate, like when you pour oil into a glass of water and you have the water on one side and the oil on the other.

So this is very much, to me, the challenge behind this. Not a simple answer. It may look straightforward to think that too much of something could be at stake and an issue; of course, any form of extreme might be an issue, but to me the point is very much about how you deal with managing, first, the people that you have, because you put people somewhere to do something and to achieve something, and making the cooperation work in order to deliver is going to be the art. And second, making sure that when you integrate and implement technologies to be part of the equation, to be part of the collective intelligence, you do so in a form of harmony, so you preserve a form of integrity between the interactions of the people and the integration of the technologies with those people. This is something to look at case by case, and it is a bit of a science, but also a bit of an art.

Stuart Crainer:

They’re the worst things, aren’t they?

Hamilton Mann:

Yeah, absolutely.

Stuart Crainer:

Tami Madison says, “The difference between relational engagement versus transactional.” And I think you are talking about relational engagement. Someone asked the killer question, the thing we all want to know: what will the world look like in 2030 and 2050? I get, Hamilton, that you’re optimistic; you think that we have the capacity within us and within our organizations to harness the very best of technology to make the world a better place.

Hamilton Mann:

Yeah, definitely, but I will draw something to answer that question. The framework I’m looking at today is that, when it comes to artificial integrity, it is very much about how those new forms of intelligence can play a role in our societies, taking into account that this is not something that should operate in a vacuum. So picture a quadrant: the X-axis runs from low to high human-added value, and the Y-axis from low to high value added by the AI. There are four domains in which we need to answer that question for 2030 and beyond.

First, at the bottom left of the quadrant is what I call the marginal mode, which covers activities where the human is absolutely not well utilized; the intelligence coming from humans is not well utilized, which creates a lot of negative impact, demotivation, and so on. In that quadrant, you also have very limited opportunities to bring value from advanced technologies, AI and so on, because those tasks deliver outcomes that do not justify the related investments, and so there is no real case for AI there. So, first, we have a very critical question: how do we evolve the human workforce, making sure that we employ human intelligence the right way, to move forward? That’s the first point.

And the answer to that question 10 years ago is absolutely not the same as the answer now and, of course, will not be the same in 2030. So that's number one. Moving to the bottom-right quadrant, which is where many organisations sit today, you have what I would title the human-first mode, where the intelligence brought by humanity, by each and every one of us, is critical to the outcome we want to deliver. So even though we want to leverage AI and new forms of technology there, what will make the difference between good and bad outcomes is the involvement and contribution of a human in the loop. In that quadrant we have a very critical question to address: what are the critical domains in which we consider that a human needs to be in the loop?

And that encompasses many sectors and domains, healthcare among them, but many others too. Moving up on the framework, in the upper left, you have what some people call AI-first. AI-first is where we have great opportunities, and let's be optimistic: these new forms of technology will help us figure out, discover, and solve problems that we have not yet had the means to solve, which will very much advance progress in our societies. So the critical question for societies is: what should the priorities be, the big challenges on which we want to invest a fair share of the technologies we have in hand, so that we crack the code, as we say, on the challenges we cannot solve on our own?

And in the upper right of the quadrant you have the best of both worlds: the criticality of what we as humans can bring to the table in terms of intelligence, combined with what AI can bring that we absolutely cannot. Put those together and you have what I call the fusion mode, which is about creating a form of harmony, a new collective intelligence, as we were discussing, where machine systems and humans deliver things together. We don't yet know how this equation will function, but I think we will have some clues in the five to ten years ahead.

Stuart Crainer:

Hamilton, we're out of time. Hopefully the new connected intelligence is just around the corner. I loved your message. I really get the sense of our potential to take control of the technology and to help it shape a better world. It's a really powerful message, and the emphasis on cooperation, that one plus one really can equal three, is fantastically affirming about the future. There are some links on the side to Hamilton's articles in Forbes, which are always worth reading; he produces them regularly, so keep up to date with those. Hamilton is promising a book in the future, so look out for that. And I think artificial integrity is going to be one of the big issues, and Hamilton is leading the way in discussing it. So Hamilton Mann, thank you very much, and thank you everyone for joining us from throughout the world. Thank you.

Hamilton Mann:

Thank you. Thank you very much. Thank you.

This article was originally published in Thinkers50. It can be accessed here: https://thinkers50.com/blog/digital-for-good-ai-fit-for-progress/

Hamilton Mann is the group vice-president of digital marketing and digital transformation at Thales. He is also a senior lecturer at INSEAD, a mentor at the MIT Priscilla King Gray Center, and host of the Hamilton Mann Conversation, a masterclass podcast about digital for good.

The post Digital for Good: AI fit for progress appeared first on The European Business Review.

Introducing the Concept of Artificial Integrity: The Path for the Future of AI

By Hamilton Mann

The concept of “artificial integrity” proposes a critical framework for the future of AI. It emphasises the need to architect AI systems that not only align with but also enhance and sustain human values and societal norms. 

Artificial integrity goes beyond traditional AI ethics. Where AI ethics is the input, artificial integrity is the outcome: it advocates a context-specific application of ethical principles, ensuring AI's alignment with local norms and values.

Underscoring the importance of AI systems being made socially responsible, ethically accountable, and inclusive, especially of neurodiverse perspectives, the concept represents a deliberate design approach in which AI systems are embedded with ethical safeguards, ensuring that they support and enhance human dignity, safety, and rights. At its heart, this paradigm shift aims to foster a symbiotic relationship between AI and humanity, where technology supports human well-being and societal progress, redefining the interaction between human wit and AI's capabilities.

In this complex edifice of artificial intelligence progress, the critical challenge for leaders is to architect a future where the interplay between human insight and artificial intelligence doesn’t merely add value but exponentiates it. 

The question is not as simple as whether humans or AI will prevail, but how their combined forces can create a multiplicative value-added effect, without compromising or altering core human values but, on the contrary, reinforcing them with integrity. 

AI systems intentionally designed and maintained for that purpose would be those that perform with this characteristic of integrity built in.

Artificial integrity is about shaping and sustaining a safe AI societal framework 

First, external to AI systems themselves, the concept of artificial integrity embodies a human commitment to establishing guardrails to build and sustain a sense of integrity in the deployment of AI technology, ensuring that as AI becomes more embedded in our lives and work, it supports the human condition rather than undermines it. 

More specifically, it refers to the governance of AI systems so that they adhere to the set of principles established for their functioning and are intrinsically capable of prioritising and safeguarding human life and well-being in all aspects of their operation.

This is not just about setting ethical standards, but about cultivating an environment where AI systems are designed to guide humans in using, deploying, and developing AI in the most appropriate ways, for the greater interest of us all, including the planet.

1. Thus, while AI ethics often focuses on universal ethical stances, artificial integrity emphasises adapting them to specific contexts and cultural settings, recognising that their application can vary significantly depending on the context.

This context-specific adaptation of ethical principles is crucial because it allows for the creation of AI technologies that are not only led by universal ethics but also culturally competent and respectful of important local nuances, thereby sensitive and responsive to local norms, values, and needs, enhancing their relevance, effectiveness, and acceptance in diverse cultural landscapes. 

2. Differing from AI ethics, which provide the external system of moral standards that AI technologies are expected to follow, concerned with questions about right or wrong decisions, human rights, equitable benefit distribution, and harm prevention, artificial integrity is the operational manifestation of those principles. It ensures that AI behaves in a way that is consistently aligned with those ethical standards.

This approach not only embeds ethical considerations at every level of AI development and deployment but also fosters trust and reliability among users and stakeholders, ensuring that AI systems are not only technologically advanced but also driven in a manner that is socially responsible and ethically accountable. 

3. Unlike AI ethics, which advocates for external stakeholder inputs and considerations in addressing the societal stakes related to AI deployment, artificial integrity encompasses a broader spectrum. It involves integrating stakeholders as active participants in a formal and comprehensive operating ecosystem model.

This model positions stakeholders at the heart of decision-making processes, operational efficiency, employee engagement, and customer interactions. It ensures that organisations can operate sustainably and with integrity while being powered by AI.

Such integration is designed not just for compliance or ethical considerations, but for elevating the organisation’s overall capacity to adapt and thrive in deploying AI in harmony with societal stakes. 

4. Moreover, while interdisciplinary approaches are valued in AI ethics, artificial integrity places a greater emphasis on deep integration across disciplines, moving beyond a siloed functional approach to a hybrid functional blueprint.

This blueprint is characterised by the seamless melding of various fields – technology, social sciences, law, business, and more – to create a cohesive and holistic AI framework. It seeks to create a unified operational framework where these diverse perspectives coalesce. 

This integrative approach not only enhances the innovation potential by leveraging diverse expertise but also ensures more robust, ethically sound, and socially responsible AI solutions that are better aligned with complex real-world challenges and stakeholder needs. 

5. Furthermore, while AI ethics recognises the importance of education on ethics, artificial integrity focuses on learning how to de-bias viewpoints to embrace 360-degree societal implications, fostering the inclusion of diverse perspectives, especially from neurodiverse groups.

This approach ensures that the development and deployment of AI technologies tap into the large spectrum of human neurodiversity and build neuro-resilience against the distortion of reality. 

It empowers AI systems to be more inclusive and reflective of the full range of human experiences and cognitive styles, leading to more innovative, equitable, and socially attuned AI solutions. 

Artificial integrity is a deliberate act of AI design to respect human safety and dignity 

Core to AI systems themselves, the concept of artificial integrity implies that AI systems are developed and operate in a manner that is not only ethically sound according to external standards but do so consistently over time and across various situations, without deviation from their programmed ethical guidelines. 

It is a deliberate act of design. It suggests a level of self-regulation and intrinsic adherence to ethical codes, similar to how a person with integrity would act morally, regardless of external pressures or temptations, maintaining a vigilant stance towards risk and harm, ready to override programmed objectives if they conflict with the primacy of human safety. 

It involves a proactive and preemptive approach, where the AI system is not only reactive to ethical dilemmas as they arise but is equipped with the foresight to prevent them. 

As thought-provoking as it may sound, it is about embedding artificial artefacts into AI that will govern all of its decisions and processes, mimicking a form of conscious deliberation, while ensuring they are always aligned with human values.

This is akin to an “ethical fail-safe” that operates under the overarching imperative that no action or decision by the AI system should compromise human health, security, or rights. 

It goes beyond adhering to ethical guidelines by embedding intelligent safeguards into its core functionality, ensuring that any potential harms in the interaction between AI and humans are anticipated and mitigated. 

This approach embeds a deep respect for human dignity, autonomy, and rights within the AI system’s core functionality. 
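
To make the idea of an "ethical fail-safe" more tangible, here is a minimal, illustrative Python sketch. Everything in it, the rule, the field names, and the decision format, is an assumption invented for this example rather than part of the article's framework; a real safeguard layer would be vastly richer:

```python
# Illustrative sketch only: the rule, field names, and decision format are
# invented for this example, not part of the article's framework.

def propose_discount(user: dict) -> dict:
    """Stand-in for any objective-maximising AI decision."""
    return {"action": "targeted_offer",
            "exploits_vulnerability": user.get("in_distress", False)}

def ethical_failsafe(decision: dict) -> dict:
    """Veto any decision that trips a human-dignity or safety rule,
    overriding the system's original objective."""
    if decision.get("exploits_vulnerability"):
        return {"action": "defer_to_human", "reason": "integrity_override"}
    return decision

print(ethical_failsafe(propose_discount({"in_distress": True})))
# -> {'action': 'defer_to_human', 'reason': 'integrity_override'}
```

The point of the pattern is simply that the veto sits outside the objective-maximising component, so the system's programmed goal can be overridden whenever it conflicts with the primacy of human safety or dignity.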

6. More specifically, while traditional AI ethics often see ethical assessment as a peripheral exercise that may influence AI design, artificial integrity embeds ethical assessment throughout the functioning of the AI’s operating system.

This continuous learning and adjustment in interaction with humans allows for the development and enrichment of an artificial moral compass. This approach ensures that AI systems are not only compliant with ethical standards at their inception but remain dynamically aligned with evolving human values and societal norms over time. 

It represents a significant advancement in creating AI systems that are truly responsive and adaptive to the ethical complexities of real-world interactions, fostering trust and reliability in AI-human partnerships. 

7. As opposed to AI ethics, which tend to focus on establishing guidelines for responsible AI design and usage, artificial integrity, on the other hand, stresses the importance of integrating continuous and autonomous feedback mechanisms, allowing AI systems to evolve and improve in response to real-world experiences, user feedback, and changing societal norms.

This proactive approach ensures that AI systems remain relevant and effective in diverse and dynamic environments, fostering adaptability and resilience in AI technologies. 

It transcends static compliance, enabling AI to be more attuned to the complexities of human behaviour and societal changes, thus creating more robust, empathetic, and contextually aware AI solutions. 

8. While AI ethics focuses on identifying and addressing risks that correspond to a given present term, artificial integrity emphasises a more proactive approach in anticipating potential risks in forward-looking scenario perspectives, including long-term and systemic risks, before they even materialise.

This forward-thinking strategy allows organisations and societies to not only mitigate immediate concerns but also prepare for and adapt to future challenges, ensuring sustainable and responsible AI development that aligns with broader societal goals and ethical frameworks over time. 

9. Although AI ethics heavily emphasises data privacy, artificial integrity also stresses the importance of data integrity, ensuring that data used by AI systems is accurate, reliable, and representative in order to combat misinformation and manipulation.

This comprehensive approach not only protects user information but also enhances the overall trustworthiness and effectiveness of AI systems, providing a more solid foundation for decision-making and reducing the risk of errors and biases that can arise from poor-quality data. 

10. As AI ethics discusses accountability and explainability, artificial integrity broadens the focal point to include the trade-offs between explainability and unexplainability challenges, as well as guidelines to fulfil not just explainability but interpretability.

This expanded focus ensures a deeper understanding of AI decisions and actions, enabling users and stakeholders to not only comprehend AI outputs but also grasp the underlying rationale, thus fostering greater transparency, trust, and informed decision-making in AI systems. 

As we transition to a society where AI's role becomes more pronounced, the multidisciplinary approach behind artificial integrity becomes crucial in guiding our future. 

This approach would ensure that, as AI systems become more autonomous, their operational essence remains fundamentally aligned with the protection and enhancement of human life, enshrining a harmonious and collaborative future between AI and humanity. 

Artificial integrity is a stance for AI to serve the empowerment of humanity 

Central and, thus, both internal and external to AI systems, the concept of artificial integrity embodies an approach where the relationship between human and AI supports the human condition rather than undermines it. 

The aim is to anchor the role of AI in acting as a partner to humans, facilitating their work and life in a way that is ethically aligned and empowering. 

It refers to AI integration in society that is designed and deployed with the intent to augment, rather than replace, human abilities and decision-making. These AI systems are crafted to work in tandem with humans, providing support and enhancement in tasks while ensuring that critical decisions remain under human control. 

This is not just about the ethical user-friendliness but about the fundamental alignment of AI systems with human ethical principles and societal values. It involves a deep understanding of the human context in which AI operates, ensuring that these systems are not only accessible and intuitive but also respectful of human agency and societal norms. 

In essence, while AI ethics establishes the guidelines and principles that ensure AI technologies are developed and used in ways that are ethically sound and beneficial to humanity, artificial integrity is about creating a harmonious relationship between humans and AI. 

Here, technology is not just a tool for efficiency but a partner that enhances human life and society in a manner that is ethically responsible, socially beneficial, and deeply respectful of human values and dignity. 

It’s about foreseeing and sustaining a society model assisted or augmented by AI systems that not only adhere to ethical norms but also actively contribute to human well-being, integrating seamlessly with human activities and societal structures. 

This approach is focused on ensuring that AI advancements are aligned with human interests and societal progress, fostering a future where AI and humans coexist and collaborate, each playing their unique and complementary roles in advancing society and improving the quality of life for all. 

This paradigm shift from mere compliance to proactive contribution represents a more holistic, integrated approach to AI, where technology and humanity work together towards shared goals of progress, well-being, and integrity. 

As we seek to chart a course where the AI of tomorrow not only excels in its tasks but does so with an underpinning ethos that champions and elevates human labour, creativity, and well-being, maintaining the equilibrium at the right level, it dares us to question not only the essence of value but also the vast potential of the human-AI partnership. 

This conscientious perspective is especially pertinent when considering the impact of AI on society where the balance between “human value added” and “AI value added” is one of the most delicate and consequential. 

In navigating this complexity, we must not only delineate the current landscape where human wit intersects with the prowess of AI, but also build a compass to guide us towards future terrains where the symbiosis of man and machine will redefine worth, work, and wisdom. 

This balance could be drawn through the perspective of four different modes: 

[Figure: the four modes matrix, mapping human value added against AI value added]

Each part of this matrix illustrates a distinct narrative about the future of a human AI-assisted society, presenting us with a strategic imperative: to harmonise the advancement of technology with the enrichment of human capability and will. 

This is a non-negotiable condition in achieving the sense of integrity rooted in the functioning of AI operating systems for an AI that does not diminish human dignity or potential but rather enriches it. 
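
For readers who think in code, the matrix can be reduced to a few lines. The sketch below is purely illustrative: the 0-10 scoring scale, the threshold, and the function name are assumptions of this example, not part of the framework itself:

```python
# Illustrative sketch only: the 0-10 scores, the 5.0 threshold, and all
# names below are assumptions, not part of the original framework.

def classify_mode(human_value: float, ai_value: float,
                  threshold: float = 5.0) -> str:
    """Map rough 0-10 estimates of human and AI value added
    to one of the four modes."""
    human_high = human_value >= threshold
    ai_high = ai_value >= threshold
    if human_high and ai_high:
        return "Fusion Mode"          # best of both worlds
    if human_high:
        return "Human-First Mode"     # human judgement leads, AI assists
    if ai_high:
        return "AI-First Mode"        # AI drives core operations
    return "Marginal Mode"            # neither adds significant value

# Example: routine document scanning scores low on both axes.
print(classify_mode(human_value=2, ai_value=3))  # -> Marginal Mode
print(classify_mode(human_value=8, ai_value=9))  # -> Fusion Mode
```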

1. Marginal Mode:

This part of the matrix reflects scenarios where both human intelligence and artificial intelligence have a subdued, modest, or understated impact on value creation. 

In such a context, we encounter tasks and roles where neither humans nor AI provide a significant value add. It encapsulates a unique category of tasks where the marginal gains from both human and artificial intelligence inputs are minimal, suggesting that the task may be either too trivial to require significant intelligence or too complex for current AI capabilities and certainly not economically worth the human effort. 

This mode might typically also involve foundational activities where both human and AI roles are still being defined or are operating at a basic level. 

It represents areas where tasks are often routine and repetitive and do not substantially benefit from advanced cognitive engagement or AI contributions and may not even require much intervention or improvement, often remaining straightforward with little need for evolution or sophistication. 

Changes within this area are often small-scale, incremental, or may represent a state of equilibrium where neither human nor AI contributions dominate or are significantly enhanced. 

An example is the routine scanning of documents for archiving. While humans perform these tasks adequately, the work is monotonous, often leading to disengagement and errors. 

On the AI front, although technologies like optical character recognition (OCR) can digitise documents, they may struggle with handwritten or poorly scanned materials, providing little advantage over humans in terms of quality. These tasks don’t offer substantial gains in efficiency or effectiveness when automated, due to their simplicity, and the return on investment for deploying sophisticated AI systems may not be justifiable. 
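
A minimal sketch makes the economics of this quadrant concrete. Assuming the open-source pytesseract and Pillow libraries (and a hypothetical input file name), digitising a clean scan takes only a few lines, yet output quality on handwritten or degraded pages often still forces human review, which is exactly why the return on automating such a task can stay marginal:

```python
# Minimal OCR sketch (assumes `pip install pytesseract pillow` plus a local
# Tesseract install); "scan_0001.png" is a hypothetical input file.
from PIL import Image
import pytesseract

image = Image.open("scan_0001.png")
text = pytesseract.image_to_string(image)  # reliable on clean print,
print(text)                                # degrades on handwriting
```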

This concept aligns with the “task routineness hypothesis”, which posits that routine tasks are less likely to benefit from human creativity or AI’s advanced problem-solving skills (Acemoglu & Autor, 2011). 

A study from the McKinsey Global Institute (Manyika et al., 2017) further elaborates on this by suggesting that activities involving data collection and processing are often the most automatable. However, when these tasks are too simplistic, they might not even justify the investment in AI, given the diminishing returns relative to the technology’s implementation cost. 

Moreover, the progression of AI technology seems to follow a U-shaped pattern of job transformation. Early on, automation addresses tasks that are simple for humans (low-hanging fruit), yet as AI develops, it starts to tackle more complex tasks, potentially leaving behind a trough where tasks are too trivial for AI to improve upon but also of such low value that they do not warrant a significant human contribution (Brynjolfsson & McAfee, 2014). 

The risk in this quadrant is threefold: 

Firstly, complacency and obsolescence are the primary risks here. 

If neither humans nor AI are adding significant value, it may indicate that the task is outdated or at risk of being superseded by more innovative approaches or technologies. The task, or the entire role, might become redundant with the advent of more sophisticated approaches and processing technologies. 

Secondly, for the workforce, these roles are at high risk of automation despite the low value added by AI, because they can often be performed more cost-effectively by machines in the long run. 

A real-world example of this risk materialising is in the manufacturing sector, where automation has been progressively adopted for tasks such as assembly line sorting, leading to job displacement. 

Research has highlighted this trend and the potential socioeconomic impact, as indicated by Acemoglu and Restrepo’s paper, “Robots and Jobs: Evidence from US Labor Markets” (Journal of Political Economy, 2020), which examines the negative effects of industrial robots on employment and wages in the US. 

Thirdly, from an organisational perspective, persisting with human labour in such tasks can lead to a misallocation of human talent, where employees could instead be upskilled and moved to roles that offer higher value addition. 

The implications of this quadrant for the labour market are significant, as it often points to jobs that may be at high risk of obsolescence or transformation. 

There is a growing need for reskilling and upskilling initiatives to transition workers from roles that fall into this low-value quadrant to more engaging and productive ones that either AI or humans – or a combination of both – can significantly enhance. 

Therefore, strategic planning is essential to ensure that the workforce is prepared for transitions and that the benefits of AI are harnessed without exacerbating socioeconomic disparities. 

2. Human-First Mode:

This quadrant places significant emphasis on the critical roles of human cognition, ethical judgement, and intuitive expertise, with AI taking on a secondary or assistive role. 

Here, human skills and decision-making are at the forefront, especially in situations requiring emotional intelligence, complex problem-solving, and moral discernment. 

It underscores scenarios where the depth of human perception, creativity, and interpersonal skills are vital, where the complexity and subtlety of human cognition are paramount, and where AI, while useful, currently falls short and cannot yet replicate the full spectrum of human intellectual and emotional capacity. 

In this sphere, the value derived from human involvement is irreplaceable, with AI tools providing auxiliary support rather than core functionality. 

This is particularly evident in professions such as healthcare, education, social work, and the arts, where human empathy, moral judgement, and creative insight are irreplaceable and are critical to the value delivered by professionals. 

High-stakes decision-making roles, creative industries, and any job requiring deep empathy are areas where human value addition remains unrivalled. 

For example, in the field of psychiatry, a practitioner’s ability to interpret non-verbal cues, offer emotional support, and exercise judgement based on years of training and experience is paramount. While AI can offer supplementary data analysis, it cannot approach the empathetic and ethical complexities that humans navigate intuitively. 

Empirical research supports this perspective, highlighting domains where the human element is crucial. 

For instance, studies on patient care indicate that, while AI can assist with diagnostics and information management, the empathetic presence and decision-making capabilities of a healthcare provider are central to patient outcomes and satisfaction (Jha & Topol, 2016). 

The essential nature of human input in these areas is also supported by studies on job automation potential, which show that tasks requiring high levels of social intelligence, creativity, and perception and manipulation skills are least susceptible to automation (Arntz et al. 2016). 

This is echoed in the arts, where creativity and originality are subjective and deeply personal, reflecting the human experience in a way that cannot be authentically duplicated by AI (Boden, 2009). 

Furthermore, in the context of service industries, the SERVQUAL model (Parasuraman et al., 1988) demonstrates that the dimensions of tangibles, reliability, responsiveness, assurance, and empathy heavily rely on the human factor for service quality, hence substantiating the need for human expertise where AI cannot yet suffice. 

While AI may offer supplementary functions, the nuances of human expertise, interaction, and empathy are deeply entrenched in these high-value areas. 

As such, these sectors are less likely to experience displacement by AI, instead possibly seeing AI as a tool that supports human roles. 

The continual advancement of AI presents a moving frontier, yet the innate human attributes that define these roles maintain their relevance and importance in the face of technological progress. 

The risk in this quadrant comes from misunderstanding the role AI should play in these domains. 

There is a temptation to overestimate AI’s current capabilities and attempt to replace human judgement in areas where it is critical. 

An example is the justice system, where AI tools are used to assess the risk of recidivism. As pointed out by Angwin et al. (2016) in “Machine Bias”, their ProPublica analysis of the COMPAS recidivism algorithm, AI can perpetuate biases present in historical data, leading to serious ethical implications. 

AI systems lack the moral and contextual reasoning to weigh outcomes beyond their data parameters, which could lead to injustices if relied upon excessively. 

Therefore, while AI can process and offer insights based on vast data sets, human beings are paramount in applying those insights within the complex fabric of social, moral, and psychological contexts. 

Understanding the boundary of AI’s utility and the irreplaceable value of human intuition, empathy, and ethical judgement is essential in maintaining the integrity of decision-making in these critical sectors. 

3. AI-First Mode:

This perspective indicates a technological lean, with AI driving the core operations. 

Such an approach is prevalent where the unique strengths of AI, such as processing vast amounts of data with unmatched speed and providing scalable solutions, take precedence. It often aligns with tasks where the precision and rapidity of AI offer a clear advantage over human capability. 

In this domain, AI stands at the forefront of operational execution, bringing transformative efficiency and enhanced capabilities to activities that benefit from its advanced analytical and autonomous functionalities. 

Here, the capabilities of AI are leveraged to also perform tasks that generally do not benefit substantially from human intervention. 

This AI-first advantage has been extensively documented in the literature, with AI systems outperforming humans in data-intensive tasks across various domains. 

The acceleration of big-data analytics is one area where AI demonstrates substantial value, as it can uncover insights from data sets too large for human analysts to process in a timely manner, as evidenced by research from Hashem et al. (2015). 

An exemplar of this dynamic is the financial sector, especially high-frequency trading, where algorithmic trading systems analyse vast amounts of market data, recognise patterns, and execute trades at a speed and volume unattainable for human traders. 

These systems can also be employed in regulatory compliance, where they continuously monitor transactions for irregularities much more efficiently than human counterparts (Arner et al., 2016). 

The main inherent risks in this quadrant are also multifaceted. 

First, there is the risk of over-reliance on AI systems, which can lead to complacency in oversight. For instance, in the case of the Flash Crash of 2010, rapid trades by algorithmic systems contributed to a severe and sudden dip in stock prices. 

Secondly, while AI systems can perform these tasks with remarkable efficiency, they operate within the confines of their programming and can miss the “bigger picture”, which can only be understood in a broader economic, social, and geopolitical context. 

Moreover, AI’s dominance in such areas could lead to significant job displacement, raising concerns about the future of employment for those whose jobs are susceptible to automation. This shift necessitates a societal and economic adjustment to manage the transition for displaced workers (Acemoglu & Restrepo, 2020). 

Lastly, and especially in this quadrant, ethical considerations are paramount. 

While human input does not significantly enhance these tasks, the tasks themselves are not devoid of ethical considerations, despite minimal emotional involvement. 

AI systems can perpetuate biases present in their training data, a concern that has been raised in numerous studies, including by Barocas and Selbst (2016). 

There is the ethical consideration of ensuring that these algorithms operate fairly and transparently, as their decisions can have wide-reaching impacts on the market and individual livelihoods. The growing field of explainable AI (XAI) aims to address this, ensuring that AI's decision-making processes can be understood by humans, thereby maintaining a necessary level of trust and accountability in these high-stakes, influential systems. 

While AI’s prowess in data processing and routine task automation underscores its high value addition in certain tasks, the importance of human oversight for ethical considerations is a critical aspect that highlights the need for a collaborative approach between humans and AI systems to ensure that ethical standards are maintained. 

The interplay of AI's technical efficiency with human ethical judgement forms the crux of responsible AI deployment in this quadrant. It ensures that technological advancement involves careful consideration of the potential impact of AI-assisted decisions on individuals and society, including the overarching moral implications of delegating decisions to machines, so that progress does not come at the cost of ethical integrity. 

4. Fusion Mode:

This area exemplifies a harmonious blend of human intellect and AI prowess. 

Here, the focus is on crafting roles and processes to capitalise on their respective advantages. Human creativity and moral reasoning complement AI’s analytical efficiency and pattern recognition. 

This setting is characteristic of forward-thinking workplaces that aim for a cohesive strategy, maximising the collective benefits derived from both human and technological assets. 

In this environment, the fusion of human insight and AI’s precision culminates in an optimal alliance, propelling tasks to new heights of effectiveness. 

Such a paradigm fosters an atmosphere where AI serves as an enhancer of human skills, ensuring that both elements are essential to superior performance and more nuanced decision-making processes. 

This collaboration represents an ideal in task execution and strategic planning, offering comprehensive benefits that neither humans nor AI could achieve in isolation. 

Scientific evidence that supports this synergy comes from various fields. 

A study by Rajkomar et al. (2018) highlights how AI can assist physicians by providing rapid and accurate diagnostic suggestions based on machine learning algorithms that process electronic health records, thus improving patient outcomes. 

Such collaboration is particularly evident in the realm of medical surgeries. For example, in image-guided surgery, AI enhances a surgeon’s ability to differentiate between tissues, allowing for more precise incisions and reduced operative time. 

However, despite the clear advantages of AI, the surgeon’s experience and judgement remain irreplaceable, particularly for making nuanced decisions when unexpected variables arise during surgery. 

In the realm of complex problem-solving and innovation, human creativity is irreplaceable, even though AI can significantly enhance these processes. 

Research has demonstrated how AI can support engineers and designers by offering a vast array of algorithmically generated design options, which humans can then refine and iterate upon based on their expertise and creative insight (Yüksel et al., 2023). 

Lastly, in educational settings, research by Holstein et al. (2017) provides evidence that AI can personalise learning experiences in ways that are responsive to individual student needs, thus helping educators tailor their teaching strategies effectively. 

This area underscores a future of work in which AI augments human expertise, rather than replaces it, fostering a collaborative paradigm where the complex, creative, and empathetic capacities of humans are complemented by the efficient, consistent, and high-volume processing capabilities of AI. 

As noted previously, one of the risks associated with this integration is over-reliance on AI, which might lead to complacency. 

In AI-assisted surgery, a malfunction or misinterpretation of data by the AI system could lead to serious surgical errors if the human operator over-trusts the AI’s capabilities. 

This integration also carries the risk of a decline in surgeons' manual skills. Yet in the event of AI failure, or in unforeseen situations beyond AI's current capabilities, the surgeon's skill becomes paramount. 

Another risk is the potential for ethical dilemmas, such as the decision to rely on AI’s recommendations or strategy when they conflict with the surgeon’s clinical judgement. 

Additionally, there are concerns about liability in cases of malpractice when AI is involved. Who is responsible if an AI-augmented procedure goes wrong – the AI developer, the hospital, the surgeon? 

Taken together, these four modes underscore a future of work in which AI augments human expertise, fostering a collaborative paradigm where the complex, creative, and empathetic capacities of humans are complemented by the efficient, consistent, and high-volume processing capabilities of AI. 

5. Navigating transitions:

As we migrate from one quadrant to another, we should aim to bolster, not erode, the distinctive strengths brought forth by humans and AI alike. 

While traditional AI ethics frameworks might not fully address the need for dynamic and adaptable governance frameworks that can keep pace with the transitions in balancing human intelligence and AI evolution, artificial integrity suggests a more flexible approach to govern such journeys. 

This approach is tailored to responding to the wide diversity of developments and challenges brought by the symbiotic trade-offs between human and AI, offering a more agile and responsive governance structure that can quickly adapt to new technological advancements and societal needs, ensuring that AI evolution is both ethically grounded and harmoniously integrated with human values and capabilities. 

When a job evolves from a quadrant of minimal human and AI value to one where both are instrumental, such a shift should be marked by a thorough contemplation of its repercussions, a quest for equilibrium, and an adherence to universal human values. 

For instance, a move away from a quadrant characterised by AI dominance with minimal human contribution should not spell a retreat from technology but a recalibration of the symbiosis between humans and AI. 

Here, artificial integrity calls for an evaluation of AI’s role beyond operational efficiency and considers its capacity to complement, rather than replace, the complex expertise that embodies professional distinction. 

Conversely, when we consider a transition toward less engagement from both humans and AI, artificial integrity challenges us to consider the strategic implications carefully. It urges us to contemplate the importance of human oversight in mitigating ethical blind spots that AI alone may overlook. It advocates ensuring that this shift does not signify a regression but a strategic realignment toward greater value and ethical integrity. 

Different types of transitions or shifts occur as organisations and processes adapt and evolve in response to the changing capabilities and roles of humans and AI. 

These transitions are grouped into three main types: algorithmic boost, humanistic reinforcement, and algorithmic recalibration. 

Algorithmic boost represents scenarios where AI’s role is significantly elevated to augment processes, irrespective of the starting or ending point of the human contribution. This transition focuses on harnessing AI either to take the lead in processes where human input is low or to amplify outcomes in scenarios where the human value is already high. 

Humanistic reinforcement counters the first by emphasising transitions that increase the human value added in the equation. This set of transitions may involve reducing AI’s role to elevate human interaction, creativity, and decision-making, thereby reinforcing the human element in the technological synergy.  

Lastly, algorithmic recalibration consists of transitions that involve a reassessment and subsequent adjustment of the balance between human and AI contributions. This might mean a reduction in AI’s role to correct over-automation or a decrease in human input to optimise efficiency and capitalise on advanced AI capabilities. 
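
To illustrate how these three transition types might be told apart in practice, consider the minimal sketch below. It is one possible reading only: the 0-10 contribution scores and the precedence of the rules are assumptions of this example, since the three types can overlap in reality:

```python
# Illustrative sketch: contribution levels are hypothetical 0-10 scores,
# and the rule ordering is one possible reading of the three types.

def classify_transition(human_before: float, ai_before: float,
                        human_after: float, ai_after: float) -> str:
    """Label a shift in the human/AI balance using the article's
    three transition types."""
    if ai_after > ai_before:
        return "Algorithmic boost"          # AI's role is elevated
    if human_after > human_before:
        return "Humanistic reinforcement"   # human value is elevated
    return "Algorithmic recalibration"      # the balance is readjusted

print(classify_transition(3, 2, 3, 8))  # -> Algorithmic boost
print(classify_transition(4, 7, 8, 4))  # -> Humanistic reinforcement
print(classify_transition(6, 8, 4, 6))  # -> Algorithmic recalibration
```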

Together, these sets of transitions provide a comprehensive framework for understanding and strategising the future of work, the role of AI, and the optimal collaboration between human intelligence and artificial counterparts. 

They reflect an ongoing dialogue that focuses not only on enhancing human skills and leveraging advanced technology but also on maintaining artificial integrity. 

This ensures that, as we find the right balance between the two, we do so with a commitment to integrity’s standards, ensuring that AI systems are transparent, fair, and accountable. 

Upholding artificial integrity is paramount, as it governs the trustworthiness of AI and secures its role as a beneficial augmentation to human capacity rather than a disruptive force. Thus, the journey towards technological advancement and automation is navigated with a conscientious effort to sustain both innovation and human values. 

Artificial integrity becomes a compass by which we can steer through this evolving landscape. 

It beckons us to maintain a careful balance, where the integration of AI into our tasks is constantly evaluated against the imperative to nurture and promote human dignity, creativity, and moral frameworks. 

In this age of swift technological advancement, the philosophy of artificial integrity provides a guiding light, ensuring that our navigation through the AI-powered matrix of the world not only celebrates the synergy of human and machine but also protects the human ethos at the heart of true innovation. 

In introducing artificial integrity to the discourse, we set out to explore the potential transformation of tasks, jobs, and the collective workforce across industries and, importantly, how the confluence of AI and human destiny can be guided with vision, accountability, and a deep-seated dedication to the values that are quintessentially human. 

About the Author

Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also Senior Lecturer at INSEAD, HEC and EDHEC Business School, a Mentor at the MIT Priscilla King Gray (PKG) Center and the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF). He was named as one of the 30 global thought leaders to watch as part of the Thinkers50 Radar (2024).

References: 

  • Acemoglu, D., & Autor, D. (2011). “Skills, tasks and technologies: Implications for employment and earnings”, Handbook of Labor Economics.
  • Acemoglu, D., & Restrepo, P. (2020). “Robots and jobs: Evidence from US labor markets”, Journal of Political Economy.
  • Angwin, J., et al. (2016). “How We Analyzed the COMPAS Recidivism Algorithm”, from the “Machine Bias” series, ProPublica.
  • Arner, D.W., Barberis, J.N., & Buckley, R.P. (2016). “The evolution of fintech: A new post-crisis paradigm?”, SSRN Electronic Journal.
  • Arntz, M., Gregory, T., & Zierahn, U. (2016). “The Risk of Automation for Jobs in OECD Countries”, OECD Social, Employment and Migration Working Papers.
  • Barocas, S., & Selbst, A.D. (2016). “Big Data’s Disparate Impact”, California Law Review.
  • Boden, M.A. (2009). “Computer models of creativity”, AI Magazine.
  • Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, W.W. Norton & Company.
  • Hashem, I.A.T., Yaqoob, I., Anuar, N.B., Mokhtar, S., Gani, A., & Khan, S.U. (2015). “The rise of ‘big data’ on cloud computing: Review and open research issues”, Information Systems.
  • Holstein, K., McLaren, B.M., & Aleven, V. (2017). “Intelligent tutors as teachers’ aides: Exploring teacher needs for real-time analytics in blended classrooms”, Proceedings of the Seventh International Learning Analytics & Knowledge Conference.
  • Jha, S., & Topol, E.J. (2016). “Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists”, JAMA.
  • Manyika, J., et al. (2017). “A future that works: Automation, employment, and productivity”, McKinsey Global Institute.
  • Parasuraman, A., Zeithaml, V.A., & Berry, L.L. (1988). “SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality”, Journal of Retailing.
  • Rajkomar, A., Dean, J., & Kohane, I. (2018). “Machine Learning in Medicine”, The New England Journal of Medicine.
  • Yüksel, N., Börklü, H.R., Sezer, H.K., & Canyurt, O.E. (2023). “Review of artificial intelligence applications in engineering design perspective”, Engineering Applications of Artificial Intelligence.

The post Introducing the Concept of Artificial Integrity: The Path for the Future of AI appeared first on The European Business Review.

10 Ways Marketing Is Being Transformed by The Advance Of AI

By Hamilton Mann and Joerg Niessing

If you are one of those who has reservations about online systems that track your browsing behaviour, you might want to take a few moments to prepare yourself. Now, they’ll be focused not only on what you do and say – but what you don’t.

Firms have so far used a mix of human intuition and traditional analytics to engage with their customers. But the advent of AI and more sophisticated data interpretation is heralding a new era of business–customer interaction.

Instead of merely responding to expressed needs and wants, firms will proactively anticipate them, reaching a level of foresight never experienced before. This will revolutionise the nature of customer interactions and reshape industries.

Here are ten predictions as to how the multifaceted applications of AI – including Generative AI – will transform marketing.

1. Predictive analytics will anticipate clients’ desires

Forget mere lead scoring. AI-powered predictive analytics will anticipate desires before they are articulated. AI might craft multifaceted client profiles, predicting not just purchasing behaviours, but also emergent needs. It will capture biometric signals, such as eye movement on a webpage or scroll speed, to decode a user’s unsaid preferences and emotional reactions.

AI will not just learn from what users do, but also from what they don’t do. For instance, if a user constantly skips over certain types of content, AI will attempt to discern the unsaid reasons behind such behaviour.
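
A toy model shows what folding "what users don't do" into a score could look like. The sketch below is illustrative only, assuming scikit-learn; the features (including the skipped-content ratio) and the data are fabricated for the example:

```python
# Illustrative lead-scoring sketch (assumes `pip install scikit-learn numpy`);
# the features and labels below are fabricated toy data, not real user data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per user: [pages_viewed, avg_scroll_speed, skipped_content_ratio]
X = np.array([[12, 0.4, 0.1],
              [ 3, 1.8, 0.9],
              [ 8, 0.6, 0.2],
              [ 2, 2.1, 0.8]])
y = np.array([1, 0, 1, 0])   # 1 = converted, 0 = did not

model = LogisticRegression().fit(X, y)

# Probability that a new user converts: the model can weigh what the user
# skipped (third feature), not only what they actively did.
print(model.predict_proba([[6, 0.9, 0.5]])[0, 1])
```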

2. AI-driven content will give birth to next-gen storytelling

AI’s growing role in content generation will focus on resonance. AI might tailor content narratives in real time, adapting to live user interactions. Such content will be based not just on overt actions, but also on underlying subconscious preferences and unspoken sentiments of users.

Further, emotional tone mapping will enable marketers to weave narratives that subtly resonate with the registered emotions. This will bridge the gap between what’s said and what’s left unsaid, resulting in a stronger emotional bond with the audience.

3. Programmatic advertising will turn into laser-guided intelligent outreach

The future of programmatic advertising promises not just precision, but also contextual relevance. Ads will adjust to live situational contexts, beyond an analysis of explicit clicks and page views. Instead, AI will interpret implicit behaviours, like the time spent hovering over an ad or the subtle patterns of navigation, to understand and align with the deeper, unspoken interests of the audience.

The next frontier will be anticipatory advertising. AI will predict what a user might be inclined to explore or need soon, creating a bridge between their current digital context and their unvoiced desires. Human oversight will then be critical to safeguard against intrusive or inappropriate placements.

4. Hyper-personalisation will apply to the entire digital journey

With AI, personalisation will mean crafting entirely unique digital experiences that grasp the complexity and ambivalence of every individual. Beyond analysing click-through rates and purchase histories, AI will seek to understand the emotional landscapes of users. To this end, it will draw from nuanced data points like interaction speeds, mouse movements, or even biometric feedback, where available.

Whether users are undergoing a major life event, facing day-to-day challenges or celebrating moments of joy, AI will discern where users are in their life journey. This will ensure deep and wide content relevance. Aside from offering entirely individualised user journeys, from web interfaces to product suggestions, firms will be able to envision unexplored and unexpected go-to markets.

5. Seeing through the user’s lens will move from talk to reality

Advanced visual recognition holds promise for deeply intuitive product recommendations. Beyond recognising products and brands, AI will interpret implicit visual cues from user-generated content. For instance, the background of photos, the clothes colours often worn, or even subtle moods conveyed in images can reveal unsaid preferences or sentiments. All this will inform more nuanced marketing strategies.

Firms will be better equipped to predict emerging visual trends. Resonance will be achieved through alignment with customers’ evolving, often unexpressed, aesthetic tastes.

6. Email marketing will set conversations, not campaigns

AI might transform email marketing from broadcast-like campaigns to conversations that feel like a dialogue. Beyond open rates, AI will employ sentiment analysis on user responses.

Even when users don’t actively engage with an email – no clicks, no direct responses – their passive interactions, like the duration of time an email is open or the frequency of revisits, can provide AI with insights. These subtle cues reveal unsaid levels of interest or contemplation and will allow for nuanced follow-ups.
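
One way to picture this is a simple score built from passive signals alone. The sketch below is illustrative: the weights, caps, and signal names are assumptions of the example, not a recommended formula:

```python
# Illustrative sketch: thresholds and weights are arbitrary assumptions.

def passive_interest_score(open_seconds: float, revisits: int,
                           clicked: bool) -> float:
    """Estimate unspoken interest in an email from passive signals.
    Returns a 0-1 score that could steer the tone of a follow-up."""
    score = min(open_seconds / 60.0, 1.0) * 0.5   # dwell time, capped
    score += min(revisits / 3.0, 1.0) * 0.3       # repeated opens
    score += 0.2 if clicked else 0.0              # explicit action
    return round(score, 2)

# No click, but a long read and two revisits still signal contemplation.
print(passive_interest_score(open_seconds=90, revisits=2, clicked=False))  # 0.7
```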

7. Chatbots will pave the way for more ubiquitous meaningful engagement

The allure of chatbots will transcend 24/7 availability. AI-driven chatbots will increasingly capture and understand the emotional undertones of user queries. They’ll discern the unsaid feelings, such as frustration, excitement or confusion. By responding with empathetic undertones, chatbots will foster a deeper, more human-like connection without ever saying they understand emotions.

Such chatbots may also be able to subtly refer to past interactions. While they won’t remember in the human sense, they’ll access contextual data from previous sessions to provide continuity. To users, this will feel like an ongoing conversation, much like speaking to a familiar acquaintance who remembers prior discussions.

Instead of waiting for users to highlight an issue, advanced chatbots will detect potential needs by reading between the lines. For instance, if a user often asks about a particular feature or if their digital behaviours indicate confusion, the chatbot might proactively offer a tutorial or further information. Chatbots may evolve into individualised digital brand ambassadors, carrying the essence of the company purpose and mission in every interaction.
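
That continuity idea can be sketched in a few lines: the bot does not "remember" in any human sense, it simply keys contextual data to the user across sessions. The store, names, and replies below are invented for illustration:

```python
# Illustrative continuity sketch: a real system would persist this store
# and retrieve it semantically; all names here are invented.
session_store: dict[str, list[str]] = {}

def reply(user_id: str, message: str) -> str:
    """Answer a message, prefixing continuity when prior context exists."""
    history = session_store.setdefault(user_id, [])
    prefix = "Picking up where we left off: " if history else ""
    history.append(message)
    return f"{prefix}I can help with '{message}'."

print(reply("u42", "resetting my router"))
print(reply("u42", "it still blinks red"))  # second turn recalls context
```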

8. Voice will be the dawn of a new mode of digital interaction

As AI becomes more adept at voice recognition, it will also tap into the unsaid emotional undertones of voice inputs. Firms will then be able to perfectly tailor responses or offers that resonate with a user’s emotional state at that moment, such as hesitation, excitement or doubt.

For example, instead of a one-size-fits-all response, AI might offer a quicker transaction process for someone in a rush, or a more detailed product description for a relaxed user. This unsaid emotional depth of understanding will become a crucial aspect of user engagement.

9. Dynamic pricing and market dynamics will tend to become one

Dynamic pricing may evolve into a living system, mirroring real-time market nuances. AI won’t just track sales and stock. Using a confluence of news, social media sentiment, and other subtle indicators, it will grasp the “mood” of the market and be able to shape pricing accordingly.

At the individual customer level, AI will pick up on the unsaid, like hesitation in voice commands when prices seem high, or the tonal excitement at a discount. Such nuances will allow businesses to adjust pricing intuitively and anticipate when a change will be most effective. While this will ensure optimal profitability, it will also raise ethical questions.
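
A stylised sketch of such intuitive adjustment is shown below; every signal name, weight, and bound in it is an assumption of the example rather than a recommended policy, and the per-customer hesitation input is precisely where the ethical questions begin:

```python
# Illustrative dynamic-pricing sketch: the signal names, weights, and
# bounds are assumptions for illustration, not a production policy.

def adjust_price(base_price: float, demand_index: float,
                 market_sentiment: float, hesitation: float) -> float:
    """Nudge a price using a demand signal (0-2, 1 = normal), a market
    'mood' score (-1 to 1), and a per-customer hesitation signal (0-1)."""
    price = base_price * (0.9 + 0.1 * demand_index)   # demand pressure
    price *= 1.0 + 0.05 * market_sentiment            # market mood
    price *= 1.0 - 0.08 * hesitation                  # soften if hesitant
    return round(max(price, base_price * 0.8), 2)     # floor at -20%

print(adjust_price(100.0, demand_index=1.4,
                   market_sentiment=0.5, hesitation=0.6))  # ~101.48
```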

10. Sales forecasting will meld instinct with insights

While AI will predict market trends, the human touch will infuse these forecasts with grounded realities. The harmonious confluence of AI insights and human intuition promises a future of balanced, robust sales strategies.

In essence, sales forecasting will be sensitive not just to numbers, but to feelings, preferences, and nuances. AI will uncover common threads of buyer hesitation or concerns. Addressing these unvoiced concerns will redefine the sales approach, turning potential losses into wins.

From seeing beyond the overt to hearing the unsaid, the next frontier of marketing is less about speaking and more about listening – listening to the silent whispers of the market, the muted desires of clients, and their hushed hesitations. As we stand poised to leap into this new world, we must be guided not just by profit, but by transparency and ethical stewardship. Firms will need to tread with caution in light of data privacy concerns. Transparent data handling practices, regulatory compliance, and trust will be critical. Those who master this will not just navigate but define the future, setting the gold standard in the AI-augmented world of sales and marketing.

About the Authors

Hamilton Mann is the Group VP for Digital Marketing and Digital Transformation at Thales. He is also a Guest Lecturer at INSEAD and a Senior Lecturer at HEC Paris and EDHEC Business School. Additionally, he serves as a mentor at the MIT Priscilla King Gray (PKG) Center and hosts The Hamilton Mann Conversation (www.hamiltonmannconversation.com).

Joerg Niessing is a Senior Affiliate Professor of Marketing at INSEAD and is passionate about bridging the academic and the business world on topics related to digital transformation, customer centricity, and data analytics. At INSEAD, Joerg teaches executives and MBA students, and he is the co-director of INSEAD’s programmes Leading Digital Marketing Strategy and B2B Marketing Strategies and Driving Digital Marketing Strategy (OOPS).

Enhancing Operating Models’ Artificial Intelligence Quotient (AIQ)

By Hamilton Mann 

As organisations tap into the power of Generative Artificial Intelligence to improve business outcomes, it is imperative to examine how it affects their operating models. Only by doing this will companies leverage the power of AI while avoiding its pitfalls.   

With the weekly drumbeat of Generative AI advancements and corporate leaders signalling the need for their organisations to make progress in harnessing the power of AI, larger questions are emerging for these same executives to address.  

In addition to the ethical challenges that AI presents for their customers, employees, and society, companies must grapple with how AI will fundamentally shift their operating model, including the workforce they employ today.
 
Ignoring the seismic shifts brought about by AI, and in particular, Large Language Models (LLMs) is no longer a viable option for organisations.  

The torrential rise of AI, championed by industry titans such as OpenAI, Google, Meta, Microsoft, and Nvidia, is rapidly reshaping how work gets done and how companies operate and deliver value to their customers and shareholders.

Let’s explore the key underlying components of organisations’ operating models—namely, organisational structure, people, processes, technology, and culture—alongside the informal and often unwritten mechanisms that capture their essence and are most profoundly transformed by AI’s influence.

Organisation: Blueprint over Structure

In an era dominated by rapid AI advancements, it’s crucial to assess the impact on organisations from the holistic perspective of an organisational blueprint, rather than merely an organisational structure.

The forward-looking and comprehensive nature of a blueprint, designed for adaptability, offers a more inclusive approach that anticipates future changes and seamlessly integrates AI’s transformative potential into the very fabric of an organisation’s operations and strategy.

Preparing an organisation for AI is less a matter of prescriptive modification and more a journey towards a fluid organisation. A rigid adaptation approach tends to breed complacency and a clinging to the status quo in a defensive posture, often arising from an innate human need for stability.

Conversely, a fluid organisation involves crafting a target model that provides a sense of reliability in handling 80% of predictable events, while remaining flexible enough to navigate the 20% of unforeseen challenges.

AI can lend unprecedented proficiency in managing these unpredictable elements, simultaneously elevating the performance within that 80% of predictable events.

The primary concern lies in establishing a dynamic organisational structure that optimises efficiency in addressing these regular tasks, allowing AI to enhance this productivity while maximising agility in responding to the organisation’s blind spots.

Let’s not merely ask how to introduce AI into organisations. Instead, let’s question how to transform organisations and reshape an understanding of what’s possible with AI.

Culture: Mindset over Skillset

The digital realm may seem daunting, requiring a profound understanding of data, technology, algorithms, and AI, but this is a misperception.  

Amid the surge of digital technology evolution, leaders need to take steps towards dispelling the myth of digital omniscience, emphasising a more critical and discerning approach to digital understanding.

Rather than cultivating an army of data scientists and programmers, the focus should be on fostering a mindset that embraces the potential of these systems.

The transition from traditional processes to digital ones is a journey of exploration—embracing change, questioning the status quo, and learning to consider the implications of AI—all while understanding that the goal is not mastery, but fluency in concepts such as system architecture, AI agents, cybersecurity, and data-driven experimentation.

Moreover, it’s imperative to acknowledge that AI, no matter how advanced, is more than just a tool.

Unlike other technological tools, AI has the capacity to learn, adapt, and even make decisions based on the data it receives. When referring to AI merely as a “tool”, it’s essential to ensure its profound implications are not underestimated, given the unique form of intelligence it embodies.

Such a mindset could lead to significant oversights, thereby running the risk of undesirable consequences by overlooking the necessary precautions required for its deployment.

Blind reliance on these systems, or reliance without a clear sense of purpose or ethical considerations, is the pitfall to avoid.

True leadership in the digital age isn’t about tech prowess but the ability to integrate technology meaningfully into broader objectives, ensuring it aligns with human values and societal positive impact.

Leaders must ensure that the organisation pushes itself and is constantly challenged intrinsically by its mode of operation. AI must be approached not as an infallible oracle but as a powerful ally that, when used with discernment, can amplify human capacities.

Lastly, it’s essential to understand that the very essence of any digital technology is its evolutionary nature. What may be a groundbreaking innovation today could become obsolete tomorrow. Relying solely on the technical know-how of the present might lead to the trap of short-sightedness.

Leaders should, therefore, instill a culture of continuous learning, flexibility, and adaptability.

Embracing digital technology also means acknowledging its impermanence and the need to be proactive rather than reactive. Hence, this is not just about understanding current AI capabilities but about anticipating those yet to come, ensuring that organisations not only keep pace with what is available but also stay steps ahead, shaping AI’s very trajectory.

Process: Data over Procedure

In the realm of AI, data stands as the centerpiece, reshaping platforms, tools, and systems and facilitating greater efficiency and improved service delivery.

As AI begins to take on an increasingly dominant role in decision-making, a critical challenge has emerged: understanding the labyrinthine data-driven process of AI’s reasoning for the sake of trustworthiness.

Let’s move beyond the perfectionism of causality, which leads to linear, procedural thinking, and embrace the pragmatism of effectuality.

Embracing practices like highlighting relevant data sections contributing to AI outputs or building models that are more interpretable could enhance AI transparency. But is transparency the only antidote to the trust issues with AI? Or could there be a different approach that not only explains AI’s decisions but also anticipates its consequences?

Leaders must come to terms with the uncomfortable truth that AI’s decision-making capabilities often far exceed human comprehension. However, AI can also help leaders understand in detail the effects of its own decisions.

For instance, AI could simulate various scenarios to illustrate the potential outcomes of its recommendations. This way, AI can be used to understand the breadth and depth of its own impact. In a medical setting, AI might recommend a certain treatment plan. Anticipating the consequences means understanding how this treatment could affect the patient’s health outcomes, taking into consideration the individual’s unique medical history and circumstances.
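
One way to picture this kind of consequence anticipation is a Monte Carlo sketch like the one below, which reports a distribution of outcomes rather than a single answer; the Gaussian outcome model is a toy assumption, not a clinical model.

```python
import random

def simulate_outcomes(effect_mean, effect_sd, n_scenarios=10_000, seed=42):
    """Simulate many plausible outcomes of a recommendation and summarise them."""
    rng = random.Random(seed)
    outcomes = sorted(rng.gauss(effect_mean, effect_sd) for _ in range(n_scenarios))
    return {
        "median": outcomes[n_scenarios // 2],
        "p5": outcomes[int(0.05 * n_scenarios)],    # plausible downside
        "p95": outcomes[int(0.95 * n_scenarios)],   # plausible upside
        "risk_of_harm": sum(o < 0 for o in outcomes) / n_scenarios,
    }

# e.g. an expected +8-point improvement, with wide patient-to-patient variance
print(simulate_outcomes(effect_mean=8.0, effect_sd=6.0))
```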

It is also worth mentioning that AI’s effectiveness is heavily influenced by the data it processes; forget the unbiased dataset as a magic bullet for addressing biases. There will always be biases in datasets, as data are originally produced by humans and the process of refining them involves humans again.

Embracing the intrinsic nature of bias in datasets is a challenge that can lead to more accurate and adaptable AI models. This is achieved by recognising that total neutrality is a myth and integrating a diverse range of data to ensure AI models can respond to various contexts.

It’s not only about who curates the data.

While it’s beneficial to involve diverse teams in data collection and processing, overemphasis on representation might lead to enforced uniformity, suppressing the rich and natural variations in human expression and experience. Instead, a more balanced approach would allow AI models to learn and adapt from the organic nature of data, including biases, to respond more genuinely to different perspectives.

Finally, let’s rethink the trade-offs of large datasets vs small datasets.

The pursuit of larger AI systems by tech companies is not merely a race towards volume. Larger datasets encompass broader knowledge, mirroring the vast spectrum of human perspectives.

Reducing the size of a model for the sake of better understanding might, in fact, diminish the depth and richness of insights it can provide.

No matter how meticulously AI is developed and documented, it can never fully grasp the depth of human experiences and biases, and this can lead to inadvertent harm. Hence, the strategy shouldn’t be to eradicate bias but to acknowledge and manage it, reducing the associated risks while enabling us to navigate complex human biases and patterns effectively.

AI will only be truly powerful when it can navigate the complex, bias-ridden real world. That will only be achieved by appreciating the multifaceted nature of data and developing AI models that can recognise and adapt to complexities that are, in essence, not always “procedurable”.

People: Human Capital Value Transitioning over Reskilling

Leaders need to face the new or exacerbated Human Capital challenge AI poses. 

The emergence of AI necessitates new skill sets and competencies, redefining what expertise is essential for delivering value in this new “AIconomic” era. 
But it goes beyond that.

The prospect of AI triggering mass unemployment is often overshadowed by optimistic predictions based on historical technological revolutions. It is imperative, however, to examine AI’s impact not through the lens of the past, but in the context of its unique capabilities.

For instance, the transition from horse-and-buggy to automobiles indeed reshaped job markets, but it did not render human skills redundant. AI, on the other hand, has the potential to do just that.

Contrary to the belief that AI should not create meaningful work products without human oversight, the use of AI in tasks like document generation can result in increased efficiency. Of course, human oversight is important to ensure quality, but relegating AI to merely auxiliary roles might prevent us from fully realising its potential.

Take Collective[i]’s AI system for instance. Yes, it may free up salespeople to focus on relationship building and actual selling, but it could also lead to a reduced need for human personnel, as AI handles an increasingly larger share of sales tasks. The efficiencies of AI could easily shift from job enhancement to job replacement, creating a precarious future for many roles.

Similarly, while OpenAI’s Codex may make programming more efficient, it could, in the long run, undermine the value of human programmers. As AI progresses, the line between “basic purposes” and more complex tasks will blur.

Certainly, investments in education and upskilling form a key part of any strategy to cope with job displacement due to the rise of AI. This includes fostering new-age skills that enable workers to adapt to the changing employment landscape and thrive in AI-dominated sectors.

However, this approach alone may not be sufficient.

It is imperative to also craft comprehensive social and economic policies that provide immediate relief and long-term support to those displaced by AI’s advancement. 
Unemployment benefits, for instance, could be reevaluated and expanded to cater to AI-induced job losses.

Moreover, addressing AI displacement should not solely focus on financial security. The social and psychological impacts of job loss — including the loss of identity, self-esteem, and social networks — are equally significant and need to be factored into policy planning.

Social support services and career counselling could be made widely accessible to help individuals navigate the transition period.

A Human Capital Value Transitioning analysis can effectively cushion the impact of AI-induced displacement, helping build an organisation that is resilient and inclusive in the face of AI advancements while safeguarding its human capital.

Technology: Ethical Stands over Value Proposition

AI introduces the need for novel policies and standards, necessitating a reevaluation of decision-making protocols and organisational conduct.

But let’s not think that AI regulation will be enough to regulate AI.

The agile nature of AI evolution has outpaced the regulation meant to keep it in check. The burden of ensuring that AI tools are used ethically and safely thus rests heavily on the shoulders of the companies employing them.

The role of AI ethics watchdogs and regulation is crucial, but their effectiveness can be limited by the rapidly changing landscape of AI. Overly relying on the arrival of external checks and balances, or acting as if waiting for them to first take a stance before taking action, could lead to complacency within organisations.

It is thus essential for leaders to foster a culture of ethical AI development and usage, and not just depend on external watchdogs or regulations.

While government regulations are evolving to address AI, organisations should proactively ensure their AI applications are responsible, fair, and ethical.

It’s not just about reaping the benefits of AI but also about responsibly integrating these technologies without causing harm to stakeholders. This necessitates not only technological sophistication but also ethical mindfulness and societal understanding.

The example of Zoom, the popular video conferencing software, illustrates this: the company recently made headlines, raising concerns about an update to its terms of service that allows it to use customer data to train its artificial intelligence.

The path to responsible AI deployment is less about waiting for appropriate regulations and more about fostering a deep understanding and ethical use of the technology.

Pioneering AI-Driven Operating Models

By moving to an AI-ready Operating Model, organisations will need to chart their own AI-transformation journey by prioritising an adaptive blueprint over structure, emphasising mindset more than just skillset, valuing data above procedure, placing emphasis on the transition of human capital value instead of just reskilling, and elevating ethical stances above traditional value propositions.  

To navigate this multi-dimensional transformation effectively, organisations would benefit from a structured approach to assess their readiness and progress in developing their Operating Model’s “AI Quotient”.
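
As a purely illustrative sketch of what such a structured approach could look like, the rubric below scores the five dimensions discussed in this article on a simple maturity scale; the equal weights and the 0-5 scale are assumptions, not a published standard.

```python
# Hypothetical "AI Quotient" rubric over the five dimensions discussed above.
AIQ_WEIGHTS = {
    "blueprint_over_structure": 0.2,
    "mindset_over_skillset": 0.2,
    "data_over_procedure": 0.2,
    "human_capital_transitioning": 0.2,
    "ethics_over_value_proposition": 0.2,
}

def aiq_score(maturity):
    """maturity: dimension -> 0..5 self-assessment; returns a 0..100 score."""
    weighted = sum(AIQ_WEIGHTS[dim] * maturity[dim] for dim in AIQ_WEIGHTS)
    return round(100 * weighted / 5, 1)

print(aiq_score({
    "blueprint_over_structure": 3,
    "mindset_over_skillset": 2,
    "data_over_procedure": 4,
    "human_capital_transitioning": 2,
    "ethics_over_value_proposition": 3,
}))  # 56.0
```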

Companies that can quickly evolve towards these dimensions will begin to separate themselves from the pack in their respective industries in terms of the speed and impact of AI-driven innovations.  

AI is like no other tech wave in history, with the potential to empower employees, reimagine work, and shift how companies deliver value in leaps rather than incremental steps. Similarly, it requires a radical approach to transforming the operating model to unlock its full potential.

As organisations shift towards an AI-ready Operating Model, they must design their unique AI-transformation path. This means prioritising flexibility over fixed structures, focusing on mindset beyond just skills, valuing data over traditional procedures, emphasising the evolution of human capital value rather than mere reskilling, and prioritising ethical considerations over conventional value propositions. 

About the Author

Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), a mentor at the MIT Priscilla King Gray (PKG) Center, and Senior Lecturer at INSEAD, HEC and EDHEC Business School.

The Path towards Trustworthy AI is no Tech but a Human Intelligence Test

By Hamilton Mann

In the frenzy to champion the potential of trustworthy AI, the recent moves from tech giants offer a reflective pause about one of the most important, if not the most important, aspects of AI, which, paradoxically, is seldom discussed: the challenge it poses to human intelligence.

The quest for AI’s sensory perception   

With OpenAI’s ChatGPT flaunting sensory capabilities and Meta introducing AI chatbot personalities, the crescendo of AI’s role in our lives is unmistakable.

While these advancements showcase the leaps AI has made, there’s a subtext here: these AI systems are mirroring complex human communication capabilities.

It’s easy to get entangled in the glitz of AI’s capabilities and miss the fundamental question: should AI aim to mirror human faculties, or should it chart a different course?

As we analyse OpenAI and Meta’s innovations, the growing capability of AI to emulate human-like behaviour cannot be ignored.

However, a closer look, underpinned by scientific evidence, unveils the intricate layers involved and prompts important inquiries about the direction AI should take.

To begin with, the architecture of many AI models is inspired by human neural networks. For instance, deep learning models use layers of interconnected nodes, reminiscent of how neurons are connected in the human brain. A research paper from Angela D. Friederici titled “The Brain Basis of Language Processing: From Structure to Function”, published in Physiological Reviews in 2011, indicates that when humans engage in complex communication, multiple regions of the brain, including Broca’s and Wernicke’s areas, work synchronously.

Similarly, AI models, such as OpenAI’s GPT series, employ multiple layers to generate and interpret text, mimicking this orchestrated brain activity.

When it comes to grasping semantics, while AI has made strides in producing human-like text, there’s a distinction between generating syntactically correct sentences and truly understanding semantics. The Neurocognition of Language, a book published in 2000 by Oxford University Press, highlighted that human brains process words and context in tandem, allowing for a deeper understanding of language nuances. AI, in contrast, relies heavily on patterns in data without truly grasping the underlying meaning. This distinction underscores the difference between superficial emulation and genuine comprehension.

Diving into emotional intelligence, Meta’s advancements in AI highlight its ability to interpret and simulate human emotions through facial recognition and text analysis. However, scientific studies, such as those by Antonio R. Damasio in his book Descartes’ Error published in 1994, emphasise the intrinsic link between emotions and human consciousness. While AI can recognise emotional cues, it doesn’t experience emotions in the human sense, indicating a fundamental disparity between recognition and experience.

On the artistic spectrum, AI models, such as DALL·E by OpenAI, can generate creative images, but their “creativity” is constrained by patterns in their training data. The research paper “DALL·E: Creating Images from Text” published in the OpenAI Blog in 2021 highlighted that while AI can mimic certain creative processes, it lacks the intrinsic spontaneity and serendipity inherent in human creativity. Its creativity, unlike that of humans, isn’t influenced by a lifetime of diverse experiences, emotions, or moments of serendipity. Instead, it relies on vast quantities of data and learned patterns.

Lastly, through the prism of ethical and philosophical lenses, the quest to replicate human faculties in AI brings forth ethical dilemmas. The Human Brain Project (HBP) funded by the European Union seeks to understand the intricacies of the human brain, potentially offering insights into creating more human-like AIs. But this brings up a philosophical question: Just because we can replicate certain aspects of human cognition in machines, does it mean we should?

While evaluating AI’s character may seem akin to understanding human nature, it’s crucial to realise that AI doesn’t have personal experiences, emotions, or consciousness. Instead of anthropomorphising AI, we should aim to understand its unique nature.

As we push for greater intelligence in machines, it becomes equally crucial to instill boundaries that guide this intelligence in a responsible manner.

This won’t be, and shouldn’t be, done by the machine for itself.

The complex AI guardrails equilibrium   

Leading voices emphasise the importance of guardrails to avoid AI’s pitfalls. Yet, historically, revolutionary technology faced similar trepidations. Cars, when first introduced, faced skepticism, with critics demanding speed-limiting devices to ensure safety. Imagine limiting vehicles to a pedestrian’s pace! In a bid to contain AI, are we stifling its potential?

The introduction of electricity transformed homes and industries but also brought risks such as electrical fires and electrocution. As infrastructure and regulations evolved, safety improved without curbing the transformative power of electricity. The core principle here is adaptability. As society understands the potential dangers of a particular technology, guidelines can be adjusted to ensure safety without inhibiting innovation.

A look back at technological milestones can offer instructive parallels.

Historically, the aviation industry underwent multiple safety iterations before reaching today’s standards. Early planes faced numerous accidents, leading to skepticism about commercial flight. However, over time, rigorous testing, improved design, and advanced regulations have made flying one of the safest modes of transportation. Iterative improvement based on accumulated data and real-world experiences can refine both technology and its safety protocols. Rather than stifling potential, these refinements can bolster public trust and facilitate broader adoption.

Similarly, the development of nuclear energy saw significant hesitancy, given the catastrophic potential of mishaps. However, meticulous regulations, safety protocols, and international pacts have allowed nations to harness nuclear power without widespread disasters. Properly calibrated regulations can serve dual purposes: ensuring public safety and providing a structured framework within which innovations can flourish. Overly strict regulations might stifle potential, but a complete lack can result in distrust and potential misuse.

Conversely, the Internet’s rise was swift, catching many regulators unprepared. While it has democratised information, the lack of initial guardrails has led to issues such as cyberbullying, misinformation, and data privacy concerns, and those are still a primary concern today. The challenge has been retroactively implementing guidelines without curtailing the web’s intrinsic freedom.

Rapidly evolving technologies can benefit from early, flexible guardrails that evolve in tandem with the technology. It ensures that as technology advances, its safety and ethical implications are addressed in real time, striking a balance between potential and precaution.

While it’s valid to raise concerns about stifling the potential of AI with excessive guardrails, appropriately calibrated precautions can, in fact, bolster innovation by building trust and ensuring broad societal acceptance.

Finding the right equilibrium is as essential as understanding the moral principles that shape these boundaries, giving AI its ethical foundation.

Again, the machine won’t and shouldn’t autonomously generate this for itself.

The AI moral compass 

Anthropic and Google DeepMind’s attempts to create AI constitutions—core principles guiding AI behaviour—are commendable. However, once the authority of certain final principles is established, other avenues of understanding are often dismissed. By framing AI’s potential within our current ethical constructs, we might inadvertently limit its vast potential. The creation of an AI constitution should be evolutionary, rather than prescriptive.

From a historical perspective, Thomas S. Kuhn, in his influential book “The Structure of Scientific Revolutions” published by the University of Chicago Press in 1962, posited that science progresses through paradigms—widely accepted frameworks of understanding. However, once a paradigm takes hold, it often constrains alternative viewpoints and approaches. This can be applied to AI ethics: a too-rigid AI constitution might become the de facto paradigm, constraining alternative ethical approaches and potentially stifling innovation.

Turning to economics, behavioural economists like Herbert A. Simon have argued that humans often make decisions based on “bounded rationality”, limited by the information they have, cognitive limitations, and the finite amount of time to make a decision. If AI is constrained strictly by our current bounded understanding of ethics, it may not explore potentially better solutions outside of these bounds.

Delving into psychology, research from the field of moral psychology, such as Jonathan D. Haidt’s work on moral foundations theory suggests that human morality is complex, multidimensional, and varies across cultures. If we overly standardise an AI constitution, we may overlook or undermine this richness, leading to AI systems that don’t account for the vast tapestry of human values.

Drawing from natural processes in evolutionary biology, nature’s diversification strategy ensures survival and adaptation. Species that were too specialised and inflexible often went extinct when conditions changed. Similarly, an AI that is too narrowly confined by a rigid set of principles may not adapt well to unforeseen challenges.

Exploring genetic frontiers in the realm of bioethics, the introduction of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) – a technology that research scientists use to selectively modify DNA, adapted for laboratory use from a naturally occurring defense mechanism in bacteria that allows them to recognise and destroy foreign DNA from viruses – has sparked debates about the limits of genetic modification. Some argue for restraint based on current ethical principles, while others believe there’s a need for evolving ethical guidelines as we learn more about the technology. This can serve as an analogy for AI: as we discover more about its capabilities and implications, the evolution of our guiding principles will be questioned in tandem.

That said, with a clear moral foundation set for AI, we must then ensure that it truly represents everyone, emphasising the importance of inclusiveness.

Yet again, the machine won’t and shouldn’t create this on its own.

The path towards AI inclusiveness 

Reinforcement Learning from Human Feedback (RLHF), a method used to refine responses generated by AI, has faced criticism for being primitive.

But let’s examine this.

If AI learns from human feedback, doesn’t it reflect our collective psyche? Instead of overhauling this method, diversifying the pool of evaluators might offer richer feedback, reflecting a tapestry of human perspectives.

Critically, multiple studies have shown that AI models can inherit and amplify human biases, especially if they are trained on biased data. For example, “Semantics derived automatically from language corpora contain human-like biases”, a 2017 Princeton research study published in the journal Science, demonstrated that widely used word embeddings exhibited gender biases by associating male words with careers and female words with family. This suggests that if the human feedback in RLHF comes from a homogenous group, the resultant AI behaviour might also be skewed.

From a global standpoint, cross-cultural psychology has uncovered significant differences in how moral values are prioritised in different cultures. For instance, a study titled “Is It Good to Cooperate? Testing the Theory of Morality-as-Cooperation in 60 Societies” by Oliver S. Curry et al., published in the University of Chicago Press Journals in 2019, found that while certain moral values were universally recognised, their interpretation varied across cultures. Thus, a diverse pool of evaluators in RLHF can offer a more holistic view of what’s “right” or “acceptable”.

On the neural front, neuroscientific research indicates that people from different backgrounds or with different neurological makeups process information differently. For example, studies have shown that bilingual individuals can process certain language tasks differently from monolinguals. One of the renowned experts in this field is Dr. Ellen Bialystok, who has conducted numerous studies on bilingualism and its effects on cognitive processes. For instance, Bialystok’s research study titled “Bilingualism: Consequences for Mind and Brain” published in Trends in Cognitive Sciences in 2012 has shown that bilinguals often outperform monolinguals in tasks that require attention, inhibition, and short-term memory, collectively termed “executive control”. Incorporating neurodiverse evaluators in RLHF can provide varied cognitive feedback, leading to a more robust AI model.

From a slightly different angle but reaching a similar conclusion, James Surowiecki’s book, The Wisdom of Crowds, presents evidence that collective decisions made by a diverse group often lead to better outcomes than even the best individual decision. When applied to RLHF, this suggests that a diverse group of evaluators can provide more accurate and balanced feedback than a select few experts.

Reflecting on past shortcomings, there have been instances where a lack of diversity in evaluators has led to unintended AI behaviour. For example, the racial and gender bias in certain facial recognition systems can be traced back to a lack of diversity in training data.

Failures in using the RLHF method to improve AI responses stem not from the method itself, but from a lack of diversity. Ensuring a diverse pool for RLHF can help mitigate such pitfalls.
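
One concrete tactic, sketched below, is to pool evaluator preferences so that each group contributes a normalised vote, preventing a single over-represented group from dominating the feedback signal; the group names and vote counts are invented for illustration.

```python
from collections import defaultdict

def pooled_preference(votes):
    """votes: list of (evaluator_group, preferred_response) pairs."""
    by_group = defaultdict(lambda: defaultdict(int))
    for group, choice in votes:
        by_group[group][choice] += 1
    # Each group contributes one normalised vote, regardless of its size.
    tally = defaultdict(float)
    for counts in by_group.values():
        total = sum(counts.values())
        for choice, n in counts.items():
            tally[choice] += n / total
    return max(tally, key=tally.get)

votes = ([("group_a", "response_1")] * 8
         + [("group_b", "response_2")] * 3
         + [("group_c", "response_2")] * 2)
print(pooled_preference(votes))  # response_2: two smaller groups outweigh one large one
```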

Moving towards diversity is a key condition for developing AI’s inclusiveness. It’s an essential precursor for recognising and actively countering biases, ensuring AI’s consistent utility and fairness.

Similarly here, this won’t and shouldn’t be self-managed by the machine.

The battle against AI’s inherent biases   

Red-teaming, a process of “breaking” AI to understand its vulnerabilities, while robust, resembles older software testing methods. By focusing on adversarial testing, we might be swayed by collective consensus rather than individual merit.

While red-teaming aims to find vulnerabilities in AI systems by simulating adversarial attacks, the nature of these attacks often reflects known vulnerabilities.

“Towards Evaluating the Robustness of Neural Networks”, a research study by Nicholas Carlini and David A. Wagner from UC Berkeley published in arXiv in 2017, highlighted that adversarial examples (perturbed inputs designed to fool machine learning models) in one domain can be starkly different from another. By concentrating only on known issues, we might neglect emergent risks specific to the evolving nature of AI.
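
To make the notion of a perturbed input tangible, here is a minimal numeric sketch in the spirit of the fast gradient sign method (Goodfellow et al., 2015), applied to a toy logistic model rather than a deep network; the weights, input, and attack budget are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)   # toy model weights
b = 0.0
x = rng.normal(size=20)   # a "clean" input
y = 1.0                   # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p_clean = sigmoid(w @ x + b)

# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (p_clean - y) * w
epsilon = 0.25                         # attack budget (an assumption)
x_adv = x + epsilon * np.sign(grad_x)  # small step that maximally raises the loss

print(f"clean score: {p_clean:.3f}, adversarial score: {sigmoid(w @ x_adv + b):.3f}")
```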

In addition, traditional software red-teaming often focuses on a limited set of potential threats or vulnerabilities. However, the complexity of modern AI models, like deep neural networks, demands an extensive landscape of possible threats. A 2018 paper titled “Adversarial Risk and the Dangers of Evaluating Against Weak Attacks” published in arXiv by Jonathan Uesato et al., demonstrated that larger neural networks, while more accurate, are often more susceptible to adversarial attacks, implying a vast attack surface.

Moreover, human biases can infiltrate the red-teaming process. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, the study co-authored by Joy Buolamwini and Timnit Gebru and published in Proceedings of Machine Learning Research in 2018, highlighted that AI systems trained in one cultural context might exhibit vulnerabilities that are entirely overlooked by red-teamers from that same context, simply because their own biases blind them to potential risks. This underscores the need for a globally diverse team for comprehensive red-teaming.

While it’s crucial to understand AI’s response in worst-case scenarios, an overemphasis can lead to neglect of more mundane but equally critical issues. A case in point is Microsoft’s Tay, an AI chatbot that began tweeting inappropriate content not due to an adversarial attack but because of the data it was exposed to. A strict red-teaming approach might miss such vulnerabilities that arise from the model’s regular interactions.

AI models, especially those incorporating some form of online learning, evolve over time. A one-time red-teaming might not be enough. There is a need for continuous and dynamic testing methodologies tailored for AI, as models can drift from their initial behaviour due to continuous updates.

Ultimately, addressing biases is an ongoing process, pushing the boundaries and goals of AI to continually adapt and evolve.

As before, it is not, and should not be, for the machine to craft this on its own.

The AI’s evolving finish line 

We often perceive AI as a problem awaiting a solution, yet we must not forget the rich tapestry of human experiences. The goal for AI’s future should not solely be to forge an infallible model but to consider how we might embrace its inherent imperfections, just as we do with humanity.

As we stand poised at the intersection of AI’s rapid advancement and its profound implications for humanity, our endeavour should be to co-create an AI ecosystem that mirrors the finest of human ideals, convictions, and hopes. In doing so, we must always remember that no machine can, or should, ever supplant human critical thought.

About the Author

Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), a mentor at the MIT Priscilla King Gray (PKG) Center, and Senior Lecturer at INSEAD, HEC and EDHEC Business School.

(Un)explainable AI: Should All AI Systems Be?

By Hamilton Mann

In the pursuit of harnessing the capabilities of Artificial Intelligence (AI), businesses and researchers grapple with paradoxes that emerge when aiming to achieve AI explainability. This article delves into six primary AI explainability paradoxes: Complexity vs. Simplicity, Generalization vs. Particularization, Overfitting vs. Adaptability, Engineering vs. Understandability, Computational Efficiency vs. Effectiveness, and Oriented-Learning vs. Self-Learning. These paradoxes highlight the inherent challenges of achieving both model accuracy and transparency, shedding light on the trade-offs between creating an accurate depiction of reality and providing a tool that is effective, understandable, and actionable. Using examples from healthcare, credit scoring, stock market predictions, natural language processing in customer service, autonomous vehicles, and e-commerce, the article elucidates the practical implications of these paradoxes from a value creation perspective. 

The article concludes by offering actionable recommendations for business leaders to navigate the complexities of AI transformation, emphasizing the significance of context, risk assessment, stakeholder education, ethical considerations, and the continuous evolution of AI techniques.

The capabilities of artificial intelligence are evolving at an unprecedented pace, simultaneously pushing the boundaries of our understanding. As AI systems become more sophisticated, the line between transparent, explainable processes and those concealed within a ‘black box’ becomes increasingly blurred. The call for “Explainable AI” (XAI) has grown louder, echoing through boardrooms, tech conferences, and research labs across the globe. 

Yet, as AI permeates various sectors, we must grapple with a complex, and perhaps even controversial, query: Should all AI systems be made explainable? This issue, though seemingly straightforward, is layered with nuance. As we navigate the intricate landscape of AI, it becomes evident that certain systems, due to their very distinct purpose, must be designed with a certain standard of explainability, while others might not necessitate such transparency.

Explainable AI refers to methods and tools that make the decision-making process of AI systems clear and interpretable to human users. The idea is simple: if an AI system “makes” a decision, humans should be able to understand how and why that decision was made.

In healthcare, some AI models used to detect skin cancers or lesions provide visual heatmaps alongside their diagnoses. These heatmaps highlight the specific areas of the skin image that the model found indicative of malignancy, allowing dermatologists to understand the AI’s focus and reasoning.

By providing a visual representation of areas of concern, the AI system allows healthcare professionals to “see” what the model is detecting. This not only adds a layer of trust but also enables a doctor to cross-reference the AI’s findings with their own expertise.
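
A common, model-agnostic way to produce such heatmaps is occlusion sensitivity: mask one region at a time and measure how much the prediction drops. The sketch below uses a toy scoring function as a stand-in for a trained dermatology classifier.

```python
import numpy as np

def toy_malignancy_score(img):
    """Stand-in for a trained model: keys on bright pixels near the centre."""
    return float(img[24:40, 24:40].mean())

def occlusion_heatmap(img, score_fn, patch=8):
    base = score_fn(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask out one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat  # high values mark regions the score depends on

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0  # a bright "lesion"
heat = occlusion_heatmap(img, toy_malignancy_score)
print(np.unravel_index(heat.argmax(), heat.shape))  # points at the lesion's patch
```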

In homeland security, some security agencies use AI to scan surveillance footage and identify potentially suspicious activities. Explainable systems in this domain will provide reasoning by tagging specific actions (like an unattended bag) or behaviors (a person frequently looking over their shoulder) as indicators, rather than just flagging an individual without context.

By tagging and detailing specific actions or behaviors that are considered suspicious, the AI system offers insights into its decision-making process. This not only aids security personnel in quick decision-making but also helps in refining and training the AI system further.

In the legal domain, AI systems have been developed to analyze and review legal contracts and documents. One such tool, ThoughtRiver, scans and interprets information from written contracts used in commercial risk assessments.

As it reviews documents, ThoughtRiver provides users with an explanation for its analyses. For example, if it flags a particular clause as potentially problematic, it will explain why, referencing the specific legal standards or precedents that are pertinent. This not only accelerates the document review process but also provides lawyers with a clear understanding of the potential risks identified by the AI. 

The fact that an AI system can be explainable allows society to have confidence in the decisions that the system helps make. It’s a guarantee of control over the influence that AI can have in our societies.

Conversely, when an AI system’s decision-making process is opaque or not easily interpretable by humans, it is often classified as “black-box” AI. Such systems, despite their efficacy, might not readily offer insights into their internal workings or the rationale behind their conclusions. 

In healthcare, deep learning models have been used in hospitals to predict sudden deteriorations in patient health, such as sepsis or heart failure. These models can analyze vast amounts of patient data—from vital signs to lab results—and alert doctors to potential problems.

These technological advancements truly have the potential to save lives. However, this magic has its secrets that might elude human understanding.

While these models have proven to be efficient, the exact pathways and combinations of data points they use to arrive at their conclusions are often complex and not immediately clear to clinicians. This “black-box” nature can make it challenging for doctors to fully trust the model’s predictions without understanding its reasoning, especially in life-or-death situations.

Advanced AI systems are deployed in surveillance cameras in airports, stadiums, and other large public venues to detect potential security threats based on facial recognition, behavioral patterns, and more.

While such systems offer real benefits for the safety of individuals and for the critical infrastructure essential to a country, it must also be recognized that the decisions the system issues can be complex for humans to justify.

These systems process vast amounts of data at incredible speeds to identify potential threats. While they can flag an individual or situation as suspicious, the intricate web of reasoning behind such a decision—combining facial recognition, movement patterns, and possibly even biometric data—can be difficult to fully articulate or understand.

Some jurisdictions, in the US and China in particular, have started using AI systems to aid in determining the risk associated with granting bail or parole to individuals. These models analyze numerous factors, including past behavior, family history, and more, to generate a risk score.

While the goal of protecting populations could make such systems a real asset, they remain dangerous because the reasoning leading to the decision cannot be reconstructed by humans.

The decision-making process of these systems is multifaceted, taking into account a wide variety of variables. While they provide a risk score, detailing the exact weightage or significance attributed to each factor, or how they interplay, can be elusive. This lack of clarity can be problematic, especially when dealing with individuals’ liberties and rights. 

So, the question arises: why not simply make sure that all AI systems are explainable?

The question of regulating artificial intelligence, particularly in terms of explainability, is gaining attention from policymakers worldwide. China’s Cyberspace Administration (CAC) has released its “Interim Measures for the Management of Generative Artificial Intelligence Services,” addressing issues like transparency and discrimination. In contrast, the United States currently has a less prescriptive approach. The country’s regulatory framework is largely based on voluntary guidelines like the NIST AI Risk Management Framework and on industry self-regulation. For instance, federal agencies like the Federal Trade Commission (FTC) are already regulating AI within their scope, enforcing statutes like the Fair Credit Reporting Act and the Equal Credit Opportunity Act. In Europe, the General Data Protection Regulation (GDPR) mandates a “right to explanation” for automated decisions, a principle further reinforced by the European Union’s recently proposed Artificial Intelligence Act (AIA), which aims to provide a comprehensive framework for the ethical and responsible use of AI. As it stands, although many regulations are still works in progress or newly implemented, a complex, patchwork regulatory landscape is emerging, with different countries focusing on elements like accountability, transparency, and fairness. 

The implications are twofold: on the one hand, organizations have and will have to navigate an increasingly complex set of rules, and on the other, these regulations might actually foster innovation in the field of explainable AI, as this is a ground of multifaceted constraints.

As a matter of fact, we are faced with a series of paradoxes: that of performance, exemplified here by predictive applications that challenge our predictive frameworks, set against that of perfection, illustrated here by our need to understand and control how AI formulates and arrives at certain predictions or decisions.

This trade-off between model explainability and performance arises from the intrinsic characteristics of different machine learning models and the complexities inherent in data representation and decision-making.

In addressing the challenge of explainable AI, we can identify six core paradoxes:

First, there is the Complexity vs Simplicity paradox. 

More complex models, like deep neural networks, can capture intricate relationships and nuances in data that simpler models might miss. 

As a result, complex models can often achieve higher accuracy. 

However, their intricate nature makes them harder to interpret. On the other hand, simpler models like linear regression or decision trees are easier to understand but might not capture all the subtleties in the data. 
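
The trade-off can be seen in a few lines of scikit-learn on synthetic data, as sketched below; the exact accuracy gap will vary with the dataset, and the point is only that the linear model’s coefficients can be read directly while the ensemble’s reasoning cannot.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
complex_model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("interpretable model accuracy:", round(simple.score(X_te, y_te), 3))
print("complex model accuracy:     ", round(complex_model.score(X_te, y_te), 3))
# The simple model at least yields readable rules, e.g. one weight per feature:
print("feature 0 weight in the linear model:", round(simple.coef_[0][0], 3))
```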

In the realm of medical diagnostics, the Complexity vs Simplicity Paradox manifests in a notable way. While complex deep learning models can predict diseases like cancer with high accuracy by identifying intricate patterns in MRI or X-ray images, traditional algorithms rely on simpler features such as tumor size or location. Though these complex models offer superior diagnostic capabilities, their “black box” nature poses a challenge. Healthcare providers find it difficult to understand the model’s decisions, a critical factor in medical treatments that often require clear human understanding and explanation.

Within this framework, value is created and destroyed at multiple junctures. Innovators and data scientists are at the forefront of creating value by developing sophisticated algorithms that harness the power of vast datasets, yielding potentially life-saving diagnostic capabilities. This innovation benefits patients by providing them with more accurate diagnoses, which can lead to more effective treatments. However, this value creation is balanced by the potential destruction or stifling of trust in the medical realm. When healthcare providers cannot comprehend or explain the decision-making process of a diagnostic tool, they might be hesitant to rely on it fully, depriving patients of the full benefits of technological advancements. Additionally, this lack of transparency can lead to skepticism from patients, who might find it difficult to trust a diagnosis derived from an enigmatic process. Thus, while data scientists create value through advanced model development, that value is simultaneously at risk of being diminished if these tools cannot be understood or explained by the medical community serving the patients. 

Second, there is the Generalization vs. Particularization paradox.

Models that are highly interpretable, such as linear regression or shallow decision trees, make decisions based on clear and general rules. But these general rules might not always capture specific or intricate patterns in data, leading to potentially lower performance. Complex models, on the other hand, can identify and use these intricate patterns but do so in ways that are harder to interpret. 

The Generalization vs. Particularization Paradox is vividly evident in the field of credit scoring. General models typically employ simple, overarching criteria such as income, age, and employment status to determine creditworthiness. On the other hand, particular models delve into more nuanced data, including spending habits and social connections. Although particular models may yield more accurate predictions, they introduce challenges for consumers who struggle to understand the rationale behind their credit scores. This opacity can raise serious concerns about fairness and transparency in credit assessments. 

In this dynamic, value is both generated and potentially compromised by the tug-of-war between general and particular modeling approaches. Financial institutions and lenders stand to gain immensely from particular models; these models’ refined accuracy enables them to better assess the risk associated with lending, potentially reducing financial losses and optimizing profits. For consumers, an accurate credit assessment based on intricate patterns could mean more tailored financial products and potentially lower interest rates for those who are deemed low risk. However, the value creation comes at a cost. The very nuance that grants these models their accuracy also shrouds them in a veil of mystery for the average consumer. When individuals can’t ascertain why their credit scores are affected in a certain way, it can erode their trust in the lending system. This mistrust can further alienate potential borrowers and diminish their engagement with financial institutions. Thus, while financial technologists and institutions might create value through precision, this can simultaneously be undercut if the end consumers feel disenfranchised or unfairly judged by incomprehensible algorithms.

Third, there is the Overfitting vs Adaptability paradox.

Highly complex models can sometimes “memorize” the training data (overfit), capturing noise rather than the underlying data distribution. While this can lead to high accuracy on training data, it often results in poor generalization to new, unseen data. Even though simpler, more interpretable models might not achieve as high accuracy on the training set, they can be more robust and generalizable.
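
A classic way to watch this happen is polynomial curve-fitting on synthetic data, as in the sketch below: the high-degree model nearly memorises the training points, then degrades on fresh data drawn from the same process.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

def sample(n):  # noisy sine wave, our stand-in "market"
    X = np.sort(rng.uniform(0, 1, n)).reshape(-1, 1)
    y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, n)
    return X, y

X, y = sample(15)          # small training window
X_new, y_new = sample(15)  # unseen data from the same process

for degree in (3, 12):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X, y)
    print(f"degree {degree:2d}: train R2 = {r2_score(y, model.predict(X)):.3f}, "
          f"unseen R2 = {r2_score(y_new, model.predict(X_new)):.3f}")
```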

The Overfitting vs Adaptability Paradox is particularly noticeable within the scope of stock market prediction. Complex models may excel at “memorizing” past market trends, but often falter when applied to new, unseen data. In contrast, simpler models are less prone to overfitting and tend to be more adaptable to market changes, although they might not capture more complex relationships in the data. However, overfit models can lead investors astray, causing them to make poor financial decisions based on predictions that don’t hold up over time. 

In the intricate world of stock market prediction, the creation and possible erosion of value intertwine at the nexus of this Overfitting vs Adaptability paradox. On the creation side, financial analysts and quantitative researchers work tirelessly to devise algorithms aiming to unearth market trends and anomalies, aspiring to provide investors an edge in their investment strategies. When these algorithms are aptly balanced, investors stand to gain significantly, reaping the benefits of well-informed decisions that translate to lucrative returns. However, the precarious terrain of overfitting, where models are seduced by the idiosyncrasies of past data, puts this value at risk. Overreliance on these overfit models can mislead even the most seasoned investors into making suboptimal investment choices, leading to substantial financial losses. In such scenarios, not only is monetary value destroyed for the investor, but the credibility of quantitative models and the researchers behind them risks being undermined. It’s a stark reminder that in the realm of financial predictions, the allure of complexity must be weighed carefully against the timeless virtues of simplicity and adaptability. 

Fourth, there is the Engineering vs Understandability paradox.

For simpler models to achieve high performance, substantial feature engineering might be necessary. This involves manually creating new features from the data based on domain knowledge. The engineered features can make the model perform better but can also make the model’s decisions harder to interpret if the transformations are not intuitive. 

In customer service applications using natural language processing, the Engineering vs Understandability Paradox comes into play. Feature engineering techniques can be applied to process text into numerous features, such as sentiment and context, which improves model performance. However, it can also make the decision-making process more opaque, posing challenges for managers trying to understand how the model is categorizing customer complaints or inquiries.
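
A sketch of what such hand-engineering looks like for support messages appears below; the keyword lists are invented stand-ins for the signals a real team would curate from its own ticket history.

```python
# Domain knowledge turned into explicit, inspectable features.
NEGATIVE_WORDS = {"broken", "refund", "angry", "unacceptable", "cancel"}
URGENT_MARKERS = {"asap", "immediately", "urgent", "now"}

def engineer_features(message: str) -> dict:
    tokens = [t.strip(".,!?") for t in message.lower().split()]
    return {
        "length": len(tokens),
        "negative_count": sum(t in NEGATIVE_WORDS for t in tokens),
        "urgency": any(t in URGENT_MARKERS for t in tokens),
        "exclamations": message.count("!"),
    }

print(engineer_features("This is unacceptable! I want a refund immediately!"))
# {'length': 8, 'negative_count': 2, 'urgency': True, 'exclamations': 2}
```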

In the nuanced arena of customer service applications powered by natural language processing, the balance between crafting high-performing models and maintaining their transparency becomes a delicate dance of value creation and potential erosion. Here, data scientists and NLP experts create immense value by leveraging their domain knowledge to engineer features, aiming to refine a model’s ability to discern customer sentiment, context, and intent. This refined discernment can lead to more tailored and effective responses, resulting in enhanced customer satisfaction and trust. But therein lies the double-edged sword: while businesses and their customers stand to benefit from more accurate and responsive AI-powered systems, the increasingly intricate engineering can obscure a model’s rationale. For team leaders and managers overseeing customer service, this murkiness complicates their ability to intervene, train, or even explain a model’s decisions. Such lack of clarity can lead to misalignments in strategy and potential missteps in customer interactions. Thus, while the technical prowess of data scientists lays the groundwork for enhanced customer experiences, the resulting complexity threatens to diminish the trust and actionable insights that teams require to function effectively.

Fifth, there is the Computational Efficiency vs Effectiveness paradox.

Simpler, interpretable models often require less computational power and memory, making them more efficient for deployment. In contrast, highly complex models might perform better but could be computationally expensive to train and deploy.

Complex models in autonomous vehicles enable better real-time decision-making but come at the cost of requiring significant computational power. On the other hand, simple models are easier to deploy but might struggle with handling road anomalies effectively. A balance must be struck between computational efficiency and the safety of the vehicle and its passengers. 

In the rapidly evolving world of autonomous vehicles, the interplay between computational demands and real-world effectiveness carves out a pathway for both profound value creation and potential risks. Passengers and road users stand to benefit from vehicles that can respond adeptly to a myriad of driving conditions, promising safer and more efficient journeys. Yet, this promise carries a price. The more intricate the model, the more it leans on computational resources, leading to challenges in real-time responsiveness and potentially higher vehicle costs. Moreover, the reliance on overly simplistic models to save on computational power can lead to oversights when the vehicle encounters unexpected road scenarios, risking the safety of passengers and other road users. As such, while the technological advancements in autonomous vehicles present a horizon filled with potential, the equilibrium between efficiency and effectiveness becomes pivotal, ensuring that value is neither compromised nor squandered in the quest for progress.
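The same trade-off can be felt even at toy scale. The sketch below assumes generic scikit-learn models on synthetic data, not an actual driving stack; it simply times inference for a small, interpretable model against a large ensemble, and the absolute numbers are machine-dependent.

import time
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)  # synthetic sensor-style task

candidates = [
    ("small tree (depth 3)", DecisionTreeClassifier(max_depth=3)),
    ("500-tree forest", RandomForestClassifier(n_estimators=500)),
]
for name, model in candidates:
    model.fit(X, y)
    start = time.perf_counter()
    model.predict(X)
    print(f"{name}: {time.perf_counter() - start:.3f}s for 5000 predictions")

# For a real-time system, the per-decision latency budget may rule out the
# heavier model even if it is a few accuracy points better.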

Sixth, there is the Oriented-Learning vs Self-Learning paradox.

Some techniques that make models more interpretable involve adding constraints or regularization to the learning process. For instance, “sparsity” constraints can make only a subset of features influential, making the model’s decision process clearer. However, this constraint can sometimes reduce the model’s capacity to learn from all available information, thus potentially reducing its performance.
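As a minimal illustration, a sparsity constraint can be imposed with L1 (Lasso) regularization. The data below is synthetic, with only two features that truly matter, and the alpha value is an assumption chosen for the example.

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 30))
y = 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(0, 0.5, 200)  # only features 0 and 5 matter

dense = LinearRegression().fit(X, y)   # unconstrained
sparse = Lasso(alpha=0.1).fit(X, y)    # L1 sparsity constraint

print("non-zero coefficients, unconstrained:", int(np.sum(dense.coef_ != 0)))    # ~30
print("non-zero coefficients, L1-constrained:", int(np.sum(sparse.coef_ != 0)))  # a handful

# The sparse model's decision process is easy to narrate ("it looks mainly at
# features 0 and 5"), but the same constraint caps how much of the available
# signal it can exploit.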

Oriented-learning models in recommender systems often focus on specific rules or criteria such as user history, making them easier to understand but potentially less effective. Self-learning models, in contrast, adapt over time and consider a wider variety of data points, sometimes surprising users with how well the system seems to “know” them. In eCommerce, the real-world implication is that while understanding why a recommendation was made may be less critical than in healthcare, concerns around privacy and effectiveness remain.

In the intricate tapestry of eCommerce, the duality between oriented-learning and self-learning mechanisms delineates a realm where value and potential pitfalls intersect. eCommerce giants and data scientists invest heavily in developing sophisticated recommender systems, with the aim of tailoring user experiences and fostering customer loyalty. For the consumer, this can mean a more seamless shopping experience, where product recommendations align closely with their preferences and past behaviors. The immediate value here is twofold: businesses see higher sales and consumers enjoy more relevant content. However, the balance is delicate. Oriented-learning models, while easier to explain and understand, might at times feel too restrictive or predictable, possibly missing out on suggesting a wider variety of products that users might find appealing. On the flip side, the allure of self-learning models, with their uncanny knack for personalization, raises eyebrows on privacy concerns. If a system knows too much, it risks alienating users who feel their data is being overly exploited. 

Herein lies the paradox’s crux: in the endeavor to create a perfect shopping experience, the very tools designed to enhance user engagement could inadvertently erode trust and comfort, starving the relationship between consumer and platform of its inherent value. 

All these paradoxes, which amount to trade-offs to be made, exist because the characteristics that make models interpretable (simplicity, clear decision boundaries, reliance on fewer features) can also limit their capacity to fully capture and utilize all available information in the data. On the other hand, models that utilize all data intricacies for decision-making do so in ways that are harder to articulate and understand.

The balance or tension between achieving a precise, accurate depiction of reality and having a practical, effective tool for understanding, prediction, and intervention is a recurring theme. 

Philosophers like Nancy Cartwright have discussed how scientific models work. Models are often idealized, simplified representations of reality, sacrificing precision for tractability and explanatory power. These models might not be fully “true” or precise, but they can be extremely effective in understanding and predicting phenomena.  

How should business leaders manage these paradoxes in their AI transformation?

Here are some recommendations for tackling the challenges posed by the six specific paradoxes outlined. 

  • Recognize the Importance of Context while acknowledging the audience (Generalization vs. Particularization): Understand that not all AI applications require the same degree of explainability, and not all explanations are equally interpretable depending on the audience. For example, AI used in healthcare diagnoses may demand higher transparency than AI used for movie recommendations. 
  • Weigh Risk against Reward (Complexity vs Simplicity): Analyze the potential risk associated with AI decision-making. If an incorrect decision could lead to significant harm or costs (e.g., in healthcare or legal decisions), prioritize explainability even if it sacrifices some performance. 
  • Embrace Appropriate Complexity (Complexity vs Simplicity): When developing or purchasing AI systems, make deliberate choices about complexity based on goals. If the goal is to capture intricate data patterns, a more complex model might be suitable. But always ensure that the decision-makers who use the AI outputs understand the model’s inherent limitations in terms of interpretability. 
  • Ensure Robustness over High Training Accuracy (Overfitting vs Adaptability): Always assess and monitor the AI model’s performance on unseen or new data. While complex models might achieve impressive results on training data, their adaptability to fresh data is paramount, guarding against overfitting. 
  • Engineer Features with Interpretability in Mind, Not as an Afterthought (Engineering vs Understandability): If your AI application requires feature engineering, ensure that the engineered features are interpretable and meaningful in the domain context and do not add unnecessary opacity. They can enhance performance, but they shouldn’t compromise understandability.
  • Efficient Deployment (Computational Efficiency vs Effectiveness): When deploying AI models, especially in real-time scenarios, weigh the benefits of model simplicity and computational ease against the potential performance gains of a more complex, computationally intensive model. Often, a simpler model might suffice, especially if computational resources are a constraint.
  • Steer Model Learning for Clarity (oriented-learning vs self-learning): For AI applications where transparency and interpretability are crucial, consider guiding the model’s learning through constraints or regularization. This may reduce performance slightly, but it will make the model’s decision-making process clearer.
  • Educate Stakeholders on Model Nuances (Generalization vs. Particularization): Regularly train stakeholders who will interact with or rely on the AI system on its general rules and specific intricacies, ensuring they’re well-versed in its capabilities, limitations, and potential biases. Incorporating expertise from disciplines such as psychology, sociology, and philosophy can provide novel perspectives on interpretability and ethical considerations. Human-centered design thinking can guide the development of AI systems that are both more interpretable and more acceptable.
  • Embrace a Hybrid Approach (Engineering vs Understandability): Merge machine and human decision-making. While AI can offer rapid data processing and nuanced insights due to feature engineering, human expertise can provide the necessary context and interpretability, ensuring clarity where the AI might be less transparent.
  • Prioritize Feedback Loops (Overfitting vs Adaptability): Especially in critical domains, ensure that there are feedback mechanisms in place. If an AI system makes a recommendation or prediction, human experts should have the final say, and their decisions should be looped back to refine the AI model. 
  • Uphold Transparency and Documentation (Complexity vs Simplicity): Maintain clear documentation about the design choices, data sources, and potential biases of the AI system. This documentation will be crucial for both internal audits and external scrutiny. This practice aids in navigating the complexity of AI systems by providing a simpler, more transparent layer for review.
  • Protect Individual Rights (oriented-learning vs self-learning): Especially in sectors like law enforcement or any domain dealing with individual rights, ensure that the lack of full explainability does not infringe upon individuals’ rights, for instance due to the AI system leaning heavily towards certain data features or constraints, overlooking the bigger picture. Decisions should never be solely based on “black-box” AI outputs. 
  • Define an Ethical Framework (oriented-learning vs self-learning): Leaders should establish an ethical framework and governance model that set the parameters and ethical standards for the development and operation of AI systems. This should cover aspects like data privacy, fairness, accountability, and transparency. Data ethics committees can be useful in this regard. Businesses have to be cognizant of the evolving landscape of AI-related regulations. Being proactive in this aspect not only mitigates risk but also could serve as a competitive advantage. 
  • Stay Updated and Iterative (Computational Efficiency vs Effectiveness): The field of AI, especially XAI (Explainable AI), is rapidly evolving. Stay updated with the latest techniques, tools, and best practices. Regularly revisit and refine AI deployments to ensure they meet the evolving standards and needs while ensuring models remain computationally efficient. This includes re-evaluating and adjusting models as new data becomes available or as societal norms and regulations evolve.

In conclusion, the goal is not to swing entirely towards complete explainability at the expense of performance, or complete performance at the expense of explainability. It is about finding a balanced approach tailored to each AI application’s unique risks and rewards, taking into account the human and environmental implications that are inextricably intertwined with the purpose of building trust.

This article was originally published in The World Financial Review on 25 September 2023. It can be accessed here: https://worldfinancialreview.com/unexplainable-ai-should-all-ai-systems-be/

About the Author

Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), a mentor at the MIT Priscilla King Gray (PKG) Center, and Senior Lecturer at INSEAD, HEC and EDHEC Business School.

The post (Un)explainable AI: Should All AI Systems Be? appeared first on The European Business Review.

The Complex Equation of AI in the Work World https://www.europeanbusinessreview.com/the-complex-equation-of-ai-in-the-work-world/ Sun, 15 Oct 2023 07:35:49 +0000

By Hamilton Mann

At the imminent dawn of our artificially intelligent societies, a profound question arises: To what extent will artificial intelligence reshape our professional landscape?

It’s becoming clear that the role of humans in this dance of progress is increasingly being questioned. Whether embraced with enthusiasm or apprehension, the undeniable reality is that many stand on the edge of change, with AI poised to redefine professional structures.

In the automotive industry, autonomous vehicles are one of the major innovations. AI powers these vehicles, allowing for real-time driving decisions. Companies like Tesla, Waymo, and others are actively working on advancing this technology.

In healthcare, AI is used for early disease detection through the analysis of medical images. It can help spot tumors or other abnormalities in X-rays or MRIs long before a human eye can discern them.

In the financial sector, robo-advisors are automated platforms that provide financial advice and manage clients’ investment portfolios based on algorithms. Companies like Betterment and Wealthfront utilize AI to optimize investment strategies.

In agriculture, AI is employed for crop management and yield forecasting. Through drones and sensors, farmers can monitor their fields in real-time, detect diseases or pests, and predict the water or nutrient needs of their crops.

In retail, many businesses use AI for product recommendations. Giants like Amazon and Netflix suggest products or movies based on users’ preferences and purchase or viewing histories, using machine learning algorithms to continuously refine their suggestions.

Each of these examples showcases how AI can not only enhance efficiency and accuracy across various sectors but also create new opportunities and challenges in the job market.

As we move towards this inflection point, the core question goes beyond the mere substitution of roles. The challenge lies in understanding a new balance that is difficult to predict: Will the jobs erased by AI be outnumbered by those it creates? Will it be a 2-to-1 ratio, 3-to-1, or perhaps an even more contrasting equation?

First and foremost, AI will not replace anyone – it will replace specific jobs currently held by individuals. It’s a subtle distinction.

We shouldn’t reduce human life solely to a job. To reduce the essence of an individual only to their profession is a gross oversimplification.

At the heart of this issue is an equation: the Ratio R=D/C where D represents the jobs Deleted due to AI and C denotes the new jobs Created by AI.

The perspective, often driven by fear, which paints a future where AI would erase more jobs than it creates, assumes R > 1.

However, other variables are often overlooked or underestimated, such as:

Jobs Maintained (M) without significant change, because they require a human touch that’s hard to automate since error and imperfection are part of the creation process and what adds charm (e.g., hairstylists, artisans) or because understanding human nuances is crucial (e.g., psychoanalysts, sociologists, career counselors).

Jobs that Shift (S) to become more tech-oriented or require a different skill set, like professions in the fields of medicine, graphic design, or journalism. Here, AI-based tools allow for data analyses or provide insights into emerging trends on subjects like corporate financial outcomes or sports statistics. These roles are adapting to incorporate more data analysis skills.

Jobs Enhanced (E) by AI but still requiring human intervention, like Sales Assistants, call center operators, or truck drivers, where AI could handle driving over long distances or on highways, but human intervention remains vital for more complex situations like city driving.

The Total (T) of jobs in an economy is better represented by the sum of D+C+M+S+E.

The ratio ‘R=D/C’ does not fully capture the potential footprint of AI on the job market by itself.

For the claim that a significant portion of jobs would be replaced by AI to be true, the relevant ratio becomes R=D/(T-D), and it would require R > 1: the jobs deleted would have to outnumber all the jobs that remain, whether created, maintained, shifted, or enhanced.
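A purely hypothetical worked example, with every figure invented for illustration, shows how differently the two ratios can read on the same toy economy.

D = 200  # jobs Deleted due to AI
C = 150  # jobs Created by AI
M = 300  # jobs Maintained
S = 250  # jobs that Shift
E = 100  # jobs Enhanced

T = D + C + M + S + E                  # total jobs in this toy economy: 1000
print("R = D/C       =", D / C)        # 1.33 -> alarming in isolation
print("R = D/(T - D) =", D / (T - D))  # 0.25 -> far below the R > 1 threshold

# The naive D/C ratio exceeds 1, yet once Maintained, Shifted, and Enhanced
# jobs enter the picture, deleted jobs are nowhere near a majority.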

But it’s not that simple.

The impact of new technologies on the job market has always been a topic of debate among economists, historians, and labor market experts. Historically, the introduction of new technologies has often led to a phase of disruption, followed by a period of adjustment, and ultimately a net job creation or new opportunities.

Based on these historical trends, several AI impact models can be considered:

Will it be a logarithmic growth? 

In this case, the impact of AI on job losses could be swift, eventually slowing down over time. This could be a reasonable assumption given that the first jobs to go would be those that are easily automatable, and over time, it would become increasingly challenging to replace jobs requiring unique human skills.

Or will it be an exponential growth? 

In this scenario, the impact would accelerate over time, which might be the case if AI technology progresses rapidly. However, this could be overly pessimistic, as even with technological advances, regulatory, ethical, or practical barriers might slow down a full AI adoption scenario.

Or will it, in the end, be a more nuanced growth, a polynomial growth? 

Here, one would need to consider phenomena of peaks and valleys in AI adoption and therefore its impact on employment. For instance, a rapid introduction of AI might lead to many job losses, but as the technology matures and society adapts, new jobs might be created or transformed.

Given the history of technological innovations, from the industrial revolution to the digital revolution, a scenario combining elements from both the logarithmic and polynomial models might be among the most realistic. 

This would mean that the initial impact of AI on job losses would be quick and disruptive, but it would slow down over time (logarithmic). Then, as new uses for AI emerge and the technology matures, there could be fluctuations in how AI impacts work (polynomial).

To potentially grasp these dynamics, revisiting our equation where we left off, we might have R = D / (a * log(b * (T-D)) + c * (T-D)^2), where a, b, and c are constants to be determined empirically.
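As a sketch only, assuming placeholder values for a, b, and c (which, as stated, would have to be estimated empirically), the combined form can be computed as follows.

import math

def ratio_R(D: float, T: float, a: float, b: float, c: float) -> float:
    """R = D / (a * log(b * (T - D)) + c * (T - D)^2)."""
    remaining = T - D
    return D / (a * math.log(b * remaining) + c * remaining ** 2)

# Purely illustrative values: 1,000 total jobs and varying deletions.
for D in (100, 300, 500):
    print(D, "jobs deleted ->", round(ratio_R(D, T=1000, a=50.0, b=1.0, c=0.001), 3))

# The logarithmic term encodes the fast-then-flattening impact, while the
# polynomial term lets the curve bend again as adoption matures; different
# a, b, c values trace very different employment trajectories.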

In addition to the trend of this non-linear impact, for a future where AI would eliminate more jobs than it would create (thus R > 1) to materialize, several undesirable contextual conditions would have to hold, some constant, others intermittent, and some occurring in combination:

Firstly, the assumption that there is a fixed Quantity (Q) of work to be done, so that if AI does more, there will be less for humans, should this time prove correct, even though historically, employment volume has never been a zero-sum game (what economists call the ‘lump of labor fallacy’).

Next, AI systems should not only automate certain tasks but also Largely (L) replace, in a significantly impactful manner, the full sets of tasks that make up entire professional roles representing a substantial portion of the job market. Within that share, these systems would perform the jobs without requiring human roles, supervision, monitoring, or additional human tasks.

Furthermore, the Development (D) and adoption of AI technologies would have to progress at such a fast pace—without plateauing after rapid growth due to physical, economic, or other constraints—that retraining or transition opportunities for the affected workforce would be unattainable.

Also, it should be significantly more cost-effective for companies to implement and rely on AI solutions than to hire human workers. This economic inclination should be so pronounced that companies would prefer AI even if it’s not perfect, and there should be no regulatory restrictions curbing this Preference (P). For instance, if AI could autonomously write original movie scripts at a lower cost than humans, with quality equal to or even surpassing the most original human-written scripts, film production companies might favor AI over hiring scriptwriters. And all this would occur in a context where no regulation would hinder this adoption for ethical or cultural reasons.

Moreover, the implications of AI for Security (S), private life, and individual rights should not be deemed major societal concerns. This would imply that there would be no ethical constraints limiting the areas where AI operates autonomously, especially where decisions have a significant direct impact on human lives. One example is the question surrounding the use of facial recognition.

Additionally, the environmental footprint resulting from the Widespread (W) replacement of numerous human jobs by AI systems—due to the increased computations inherent in these systems and the infrastructure needed for their operation—would not be viewed as alarming or a potential threat to life on Earth.

Lastly, all the Negative Impacts (I) of AI on mental health, whether due to job losses, societal changes, or the heightened dependency on technology, which would ultimately impoverish social interactions essential for the physical and psychological well-being of humans, would have been widely overlooked or outright ignored by society.

The exact form of the function f(Q, L, D, P, S, W, I) describing how these variables interact would depend on precisely how these variables influence the ratio R, which would require a deep understanding of the interactions between these factors and their impact on employment.

Equally important, future technological trends other than AI, some of which amplify AI’s areas of application, as well as societal evolutions, can always introduce unexpected variables. These aren’t accounted for in models based on historical trends and may require regular adjustments or calibrations as new data or perspectives become available.

Including all these variables makes the equation more complex than it seems, and certainly more complex than oversimplified views suggest when they reduce it to a ratio R=D/C, where D represents jobs Deleted due to AI and C represents new jobs Created by AI.

To what extent will artificial intelligence reshape our professional landscape? Will the roles erased by AI be surpassed by those it creates?

The answer to these questions is hard to predict. 

However, some certainties emerge:

Many parameters ultimately depend on what we decide to do in the coming years.

While some jobs will disappear, the ability to acquire new skills, to adapt and reorient, will be crucial. This makes education and training one of the major challenges of our century.

It’s up to us now to make the necessary choices to build the future we want.

About the Author

Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), a mentor at the MIT Priscilla King Gray (PKG) Center, and Senior Lecturer at INSEAD, HEC and EDHEC Business School.

The post The Complex Equation of AI in the Work World appeared first on The European Business Review.

Banking on Data: the World’s First-Ever Common Currency https://www.europeanbusinessreview.com/banking-on-data-the-worlds-first-ever-common-currency/ Sun, 17 Sep 2023 13:13:49 +0000

By Hamilton Mann

There is no shortage of descriptors when it comes to unveiling the considerable importance of data in our societies. While some refer to it as the new black gold, this comparison is somewhat appropriate but not entirely accurate. Just as oil is vital for energy, data has become indispensable and inherent to the functioning of our digital and artificially intelligent economy. But unlike oil, which diminishes as it is used, data can be utilized and shared infinitely.

As odd as it may seem, at the dawn of the 21st century, the entire world is undergoing one of its greatest societal transformations since the invention of currency, yet it is not truly regarded with the same level of significance.
Data is the world’s first-ever common currency. And like money, it plays and will play a fundamental role in the economy and society.

Data is a unit of measurement

As money serves as a standard of value, data serves as a unit of measurement for insights and business performance. As soon as companies began using databases to track their operations in the 60s and 70s, data became a unit of measurement. With the development of analytical tools in the 80s and 90s, companies began measuring their performances through data in much more sophisticated ways. This is particularly visible in sectors where energy data is used to monitor efficiency, forecast demand and optimise operational performance.

The ‘total quality management’ movement of the 80s required intensive use of data. Simultaneously, the development of systems such as integrated management software (ERP) enabled companies to track and measure aspects of their operations in unprecedented ways. Data now allows for unprecedented opportunities in capital funding, underscoring its transformative role as a pivotal asset in modern finance.
The most striking modern example is probably the rise of Silicon Valley big-tech companies.

These companies built empires by measuring and analyzing user behaviors on a scale never seen before, making data not only a unit of measurement but also the very foundation of their business model.

Data is a medium of exchange

As currency facilitates transactions, data allows businesses to better understand their customers and tailor their offerings. It is exchanged between entities for various services, such as personalizing advertising. The concept of data as a medium of exchange dates back to the advent of the first computer systems, but its widespread adoption and recognition truly took off with the emergence of the Internet and, more recently, the rise of e-commerce and online services in the 90s and 2000s. As more and more businesses began offering online services, they realized that the data generated by users was valuable for improving their services, creating new products, or selling it to third parties.

A prime example of this transformation is the ascent of the online search economy. Each online search performed by a user provides information about that user’s interests, behaviors, and desires, and search providers derive massive revenue from targeted advertising built on this data. Data has thus become a form of currency with which users “pay” for services.

Data is a store of value

As money retains its value, relevant and well-preserved data can offer long-term strategic benefits to a company, even years after its collection. Companies quickly understood that the information they collected about their users was valuable in and of itself, not only for improving their services but also as a source of revenue.

Customer data aids in understanding buying patterns, preferences, and habits to recommend products, leading to increased sales. Besides, just as money acts as a reserve of value, safeguarding wealth for future investments, data too holds intrinsic worth, anchoring the potential for innovation.

Without this reservoir of data, pioneering breakthroughs in AI technologies—enabling the development of systems from autonomous vehicles to smart healthcare diagnostics and real-time language translation—would remain beyond our grasp.
The recognition that data can be used as a store of value was a turning point, leading to the era of the so-called “Big Data” where companies of all sizes and from all sectors seek to capture, store, and analyze data in hopes of deriving future value from it.

Data is a representation of sovereignty

Owning and controlling one’s own data has become a vital component of digital sovereignty, just as having one’s own currency is a symbol of national sovereignty.
As nations have become aware of the strategic implications of data concerning its storage, cross-border transfer, and access by foreign governments, it has become integral to national sovereignty.

China is perhaps the most emblematic example of data as a representation of sovereignty. With the adoption of its cybersecurity law in 2017, China implemented strict data localization rules, demanding that “personal information and critical data” collected by core information infrastructure operators be stored within its borders.
Many other countries, from Russia to India, have since adopted similar rules, underscoring how possession, control, and access to data have become central in contemporary notions of national sovereignty.

Data is an economic policy instrument

As currency is regulated to influence the economy, data is used by governments and businesses to inform their decisions and strategies. Particularly with the rise of tech giants, governments quickly grasped the strategic importance of data for economic development, competition, and regulation.

With the introduction of the General Data Protection Regulation (GDPR) in 2018, the EU established strict rules on data collection, storage, and sharing, thereby recognizing not only its economic value but also its importance in terms of human rights and individual freedoms.

Discussions about competition, data monopolization, and the impact of tech giants on the digital economy are now at the heart of political and economic debates.
The use of data as an economic policy tool is also evident in the regulation of artificial intelligence, digital privacy standards, and antitrust measures against data monopolies.

Data is an element of credit facilitation

Currency allows for the granting of credits. Similarly, quality data can open opportunities for partnerships and funding for businesses. Data became a credit facilitation tool with the rise of financial technologies, or “fintech”, in the 2010s. The surge of peer-to-peer lending platforms and fintech companies that use advanced algorithms to assess creditworthiness based on a variety of data – from financial histories to online shopping habits – was the harbinger of this transformation.

China’s Ant Financial, the owner of Alipay, stands as an iconic example of this shift. With its “Zhima Credit” product (also known as “Sesame Credit”), Ant Financial offers a credit scoring system based on data analysis sourced from user activities on Alibaba Group’s platforms and other sources. This score can then be used to secure loans, rent apartments, and even for certain government services.

The use of data in this manner has revolutionized access to credit, particularly for individuals and small businesses who previously struggled to obtain loans due to a lack of traditional credit history.

Data is a foundation of the tax system

While currency is essential for tax collection, data is increasingly used to monitor tax compliance and prevent fraud. Data became foundational to the tax system as governments began using digital technology to collect, process, and analyze tax information. This shift also gained momentum in the early 2000s, with the increasing digitalization of public services.

The adoption of online platforms by tax administrations for tax declaration and payment was a turning point. The Internal Revenue Service in the United States serves as an example. Another example is India’s introduction of the Goods and Services Tax in 2017. In France, the implementation of tax-at-source in 2019 also stands as a symbolic representation of the use of data in the French tax system.
These developments signify how data has become crucial to modernize and streamline tax systems globally.

Data is foundational to trust and stability

Proper data management strengthens the trust of customers, partners, and investors, just as a stable currency bolsters confidence in the economy. Data became a key element of trust and stability with the advent of the digital revolution, especially with the development of blockchain technologies in the 2010s.

Bitcoin, created in 2009, is arguably the most prominent example as a decentralized currency where trust is established not by a central financial institution, but by network consensus. The value and stability of Bitcoin rest on the transparency and immutability of transaction data recorded in the blockchain. Thus, data, when processed and stored in a transparent and secure manner, can serve as the foundation for trust and stability in a decentralized system. More broadly, data holds the potential to create trust in various fields, from smart contracts to online voting systems and many other applications.

Data is a facilitator of international trade

Much like currency facilitates international trade, data plays a growing role in global commerce, with the transfer of data between countries becoming a key element of trade agreements. Integrated supply chain management systems, e-commerce platforms, and online payment solutions are among the major innovations that have helped facilitate international trade.

The rise of the dominant global e-commerce marketplaces is another prime example of how data has propelled international trade. Spanning multiple continents, these marketplaces leverage user data to recommend products, predict demand, set pricing strategies, and optimize logistics. Sellers from different corners of the globe use these data-driven insights to forecast product demand, manage inventory, and target customers. Through comprehensive logistics and fulfillment services, these companies use data analytics to streamline international shipping, customs, and storage processes, making it easier than ever for sellers to reach global audiences.

It underscores the indispensable role of data in simplifying cross-border transactions, predicting global market trends, and democratizing access to international markets for businesses of all scales.

Data is a vector for regulating liquidity

As monetary policy regulates the amount of currency, regulations on data determine how they can be stored, shared, and utilized. The rapid expansion of digital financial markets has enabled the use of real-time data to analyze and predict market movements, as well as to automatically regulate liquidity.

Investment banks and hedge funds were among the first to adopt high-frequency trading, using algorithms to execute orders at a speed and frequency beyond any human trader. The market plunge of May 6, 2010, often referred to as the “Flash Crash”, is a notable example of the consequences of intensive data use in regulating liquidity.
While this event highlighted the risks associated with an excessive reliance on algorithms and data for liquidity regulation, it also underscored the critical importance of data in the modern functioning of financial markets.

Overall, data has emerged as a pivotal factor driving global economic structures, paralleling the influence once held exclusively by traditional currency.

It underscores its central role in a multitude of sectors, from economic policymaking to international trade. Drawing on its historical trajectory and expansive influence, it becomes evident that our current understanding of data’s value is only scratching the surface.

As we acknowledge the transformative power of data, it’s crucial to offer recommendations to harness its potential responsibly, ensuring a sustainable and equitable global data economy.

Let’s delve into strategic insights to bank on this newfound currency of the digital age:

Building central data backbones for a modern data economy

Central banks, such as the European Central Bank or the US Federal Reserve, play a major role in regulating and stabilizing currency. There is no equivalent entity to regulate data on such a scale. Today, just as there are Central Banks for currency, Central Data Banks are necessary.

Currently, vast amounts of data are held by a few tech giants. A central data bank could help decentralize the ownership of these data, thus reducing the power and control concentrated in a few hands. A central data bank could ensure equitable access to information, preventing certain businesses or entities from monopolizing data for profit.

The central data bank would be responsible for overseeing institutions that hold, process, and exchange data, just as central banks supervise financial institutions. It would establish standards for data protection, their ethical use, and would ensure compliance with these standards through audits and inspections.

Determining the rate at which data should be universally accessible

Inspired by the interest rate benchmarks used by central banks in the financial world, the benchmark access rate to data (BARD) would serve as a regulatory mechanism to control access to data stored in a central data bank. This rate would represent a measure of the ease (or difficulty) with which external entities can access this data. The lower the BARD, the more affordable it would be for entities to view or use the data stored in the central bank. Conversely, a high BARD would mean that access to the data is more restricted and costly.

It would be a strategic tool for promoting Research and Innovation: when the bank wishes to stimulate research, innovation, or competitiveness, it could lower the BARD. This would allow researchers, startups, and companies to take advantage of the available data, thereby fostering technological and economic development.
The establishment of the BARD would be the responsibility of a regulatory authority, likely a governmental entity or an independent body mandated for this function.
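By way of illustration only, a BARD could translate into an access fee along the following lines. The formula, the research discount, and every number here are hypothetical: the proposal concerns the mechanism, not a specific implementation.

def access_fee(base_value: float, bard: float, public_interest: bool = False) -> float:
    """Fee to access a dataset: its assessed base value scaled by the current BARD.

    The regulator lowers `bard` to stimulate research and innovation,
    and raises it to make access more restricted and costly.
    """
    rate = bard * (0.5 if public_interest else 1.0)  # assumed research discount
    return base_value * rate

current_bard = 0.08  # illustrative rate set by the central data bank
print(access_fee(10_000, current_bard))                        # 800.0
print(access_fee(10_000, current_bard, public_interest=True))  # 400.0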

Balancing concerns about data privacy with contingency planning for data security

Drawing inspiration from the mandatory reserves imposed on banks by monetary authorities, Mandatory Data Reserves (MDR) would refer to a minimum portion of data that businesses and institutions would be required to store within a central data bank. This mechanism would aim to ensure the security, transparency, and regulation of data flow.

Just as banks are required to hold a fraction of their deposits in reserve, entities that collect, process, and store data would be obliged to deposit a certain proportion of these data in the central data bank.

The amount of data to be kept could be defined in terms of a percentage of the entity’s total storage capacity or the total volume of data processed.
These deposited data would remain the property of the originating entity but would be stored securely and centrally for various reasons, including regulation, oversight, and security. Storing data in a central reserve would promote greater transparency and enhanced accountability for entities.

Navigating the fine line between data accessibility and data exploitation

Similar to the open market operations used by central banks to regulate the money supply, Open Data Market Operations (ODMO) would refer to the transactions initiated by the central data bank on an open data market. The goal would be to regulate the quantity, quality, and availability of data in the digital economy.

ODMO would allow the central data bank to actively intervene in a data market, where datasets are exchanged. This intervention could take the form of purchases to inject data into the market or sales to withdraw data from the market or generate revenue. The price of these datasets would be determined by demand and supply in the market, just like securities in financial markets.

By purchasing high-value or rare datasets, the central data bank could make them available to researchers, innovators, and decision-makers, thereby promoting innovation and informed decision-making.

Ensuring individuals are fairly valued and compensated for their data

Every citizen could have a personal data account with the central data bank where they can voluntarily deposit some of their data. These accounts would be protected and secure, offering citizens complete control over who can access their data and under what conditions. Access to certain data could be subject to a remuneration system for the data owners. Companies, researchers, or other entities wishing to access specific data might pay fees. A portion of these fees could be redistributed to the citizens whose data are used. This remuneration would be proportional to the use and value of the data in question.

The central data bank could establish a mechanism to assess the value of different types of data based on their rarity, utility, etc. Citizens could then have an idea of the monetary value of their data, encouraging them to knowingly share more valuable or rare information. At the end of each period (month, quarter, year), the central data bank could redistribute a portion of its profits to citizens in the form of a “data dividend”. This dividend would be a recognition of the collective value of the data provided by the citizens and would be distributed based on each individual’s contribution.

Lending data responsibly

The concept of “Data Lending Facilities”, inspired by the lending facilities that central banks provide to financial institutions, would enable the provision of data for specific uses over a defined period, grounded in the idea that data can be treated as an asset, akin to money.

In the modern data-driven economy, not all institutions necessarily have the resources to collect, process, and store vast data sets. However, they might need this data for specific projects, studies, or innovations. Rather than forcing them to purchase or access these data on a permanent basis, a lending facility would allow them to borrow this data for a limited duration.

This access would often be limited to a specific platform to ensure security and monitor usage. This could be useful for institutions that need specific data for a temporary research project but don’t necessarily require permanent access to such data.

Standardizing the relative value of different data sets

Just as currencies have relative values to each other in the foreign exchange market, data could also be valued and traded based on certain criteria. This would introduce a form of standardization and regulation in data trading. Several factors could determine the value of data, such as its relevance, timeliness, rarity, specificity, quality, etc.

Specialized institutions or departments within the central data bank might be responsible for the regular evaluation of data sets. A centralized platform could be established where entities can offer their data sets for exchange, similar to a stock exchange.

Just like with currencies, the value of data would fluctuate based on supply and demand. Rare but highly demanded data sets could have a high exchange rate.
Such a system could introduce a form of standardization in how data is valued and traded.

Covering the intangible risks of Data breaches

In many countries, citizens’ bank deposits are insured up to a certain amount. There is no equivalent to “insure” personal data in the event of a breach or loss, which makes a model of data deposit insurance crucial. Data Deposit Guarantee Funds (DDGF) could be considered. Just as banks contribute to a deposit guarantee fund to protect customers’ money in case of a bank failure, companies that store and process data could be required to contribute to a similar fund for data. In case of data breach or loss, this fund could be used to compensate the affected individuals, whether through financial compensation or services.

Moreover, similar to bank deposit insurance that covers up to a certain amount per depositor, data deposit insurance could guarantee the security of the data up to a certain “quantity” or “value”. If someone loses data due to a breach, a predefined set of this data (for example, the most sensitive data) would be guaranteed or compensated.

Guaranteeing human rights take precedence over the surge in data collection

For many, the current rules and related sanction mechanisms for violations of individuals’ data protection don’t seem to fully reflect the significance and sensitivity of personal data. In the financial sector, sanctions are designed to be as preventive as they are punitive. They are calculated to have a major financial impact on offenders, while also deterring them from repeating their wrongdoings. Financial institutions can lose their ability to operate, which is a grave consequence.
A similar measure in the tech world could involve the suspension of certain activities, or even the shutdown of parts of a service.

Furthermore, citizens should be better informed about their data rights and how their data are used. Strengthening individuals’ rights to request the deletion of their data could limit companies’ abilities to indefinitely store information without valid reason. This should involve providing clear information to every data owner about all users of their data.

And just as with international financial standards, there could be a benefit to having global standards for data protection and sanctions, thus avoiding “data havens” where companies might try to relocate to escape regulation. Close collaboration between countries would be essential to ensure the effectiveness of sanctions and prevent companies from merely shifting their operations.

Curbing the negative impact of data speculation in the market

Speculation is a well-known concept in the financial world, where players buy and sell assets hoping to realize future profits. While “data speculation” isn’t a commonly used term, the idea captures the essence of a growing phenomenon where data is collected, stored, and traded with the aim of profiting from its future use.

Companies might collect data without an immediate or specific use in mind, hoping that it might be useful or profitable in the future. This is particularly true for tech companies that have the capabilities to store vast amounts of data. Furthermore, just as excessive speculation can create financial bubbles, a “data bubble” might emerge, where the perceived value of the data far surpasses its actual utility.

In the same way that certain financial mechanisms impose limits on speculation, caps could be implemented to restrict the amount of data a company can collect without justification. Just as financial transactions can be taxed to discourage speculation, a tax on the collection, storage, or trade of data could be considered. Companies might be required to disclose the nature, quantity, and usage of the data they collect, thus allowing regulators and the public to monitor speculation.

Ensuring transparent reporting without hindering data-driven industries

The reporting obligation for financial institutions regarding suspicious activities aims to combat money laundering, terrorist financing, and other illicit activities. In the world of data, the notion of “suspicious data” is different, but the underlying principle – accountability and transparency – remains. This might include unauthorized access to databases, accidental exposures or data theft, unusual data access patterns, unexpected requests for large amounts of data, or data transfers to unknown destinations that might be deemed suspicious.

Regulations concerning reporting obligations vary considerably from one country to another. This can create confusion for international companies and allow some to avoid reporting by exploiting these inconsistencies. Moreover, in some places, fines or penalties for non-reporting or late reporting are minimal, offering little incentive for compliance. Promoting international guidelines or treaties on data breach reporting could help establish a minimum compliance baseline.

The emergence of data as a form of currency redefines traditional paradigms of value and exchange. This transformation unfolds with unmatched opportunities and risks, intertwined with pressing ethical concerns.
While financial regulatory mechanisms have been refined over centuries in response to crises and innovations, data, in its newfound monetary stature, is in its infancy.

Concepts such as transparency, fairness, security, and accountability, fundamental in the financial sector, can serve as cornerstones in designing regulatory frameworks for data. In essence, while acknowledging data’s uniqueness as a currency, the financial regulatory system provides an opportunity to learn from its effectiveness and its limits. 

By marrying these lessons with a nuanced understanding of data’s specifics, we can hope to establish a balance that maximizes the benefits of this new currency while minimizing its potential risks to individuals and society at large.

About the Author

Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), a mentor at the MIT Priscilla King Gray (PKG) Center, and Senior Lecturer at INSEAD, HEC and EDHEC Business School.

The post Banking on Data: the World’s First-Ever Common Currency appeared first on The European Business Review.

The Rise of Dual-Sided Artificial Intelligence (DSAI) https://www.europeanbusinessreview.com/the-rise-of-dual-sided-artificial-intelligence-dsai/ Thu, 10 Aug 2023 15:43:35 +0000

By Hamilton Mann

In the rapidly evolving landscape of artificial intelligence, a “Dual-Sided Artificial Intelligence” (DSAI) is taking centre stage, highlighting both unprecedented advancements and profound challenges.

At the heart of the DSAI concept lies a remarkable phenomenon: a symbiotic relationship between two intelligent entities, each striving to outperform the other. As AI technology reaches unprecedented levels of sophistication and efficiency, complementary AI counterparts emerge and advance by leaps and bounds, giving birth to a new generation of machine-to-machine ecosystem interaction and competition.

This interplay between AI systems, positioned and acting as alter egos, is redefining the very fabric of AI advancement. 

With the rise of DSAI, the AI ecosystem is experiencing a transformative shift, as AI systems not only collaborate with humans but also increasingly engage in interaction and competition with their own kind.

Here are a few examples that illustrate the DSAI concept:

  • AI Voice Assistants and Voice Authentication: As voice assistants like Amazon’s Alexa or Apple’s Siri become prevalent, the need for voice authentication systems arises to ensure secure and personalised interactions. Voice authentication AI acts as a counterpart to voice assistants by verifying the user’s identity and enhancing security.
  • Cybersecurity AI and Malware AI: With the advancement of AI in cybersecurity, there is a simultaneous rise in the sophistication of malware and cyber threats. Cybersecurity AI systems are developed to detect and counteract these evolving threats, acting as a counterpart to the malicious AI, striving to maintain equilibrium and protect systems.
  • Recommendation Systems and Adversarial Recommendation Systems: Recommendation algorithms power various platforms, suggesting products, content, or services based on user preferences. Adversarial recommendation systems leverage AI to counteract biased or manipulative recommendations, ensuring fair and unbiased suggestions, thereby acting as a counterpart AI to recommendation systems.
  • Fraud Detection AI and Fraudulent AI: Financial institutions employ AI systems for fraud detection, monitoring transactions for suspicious activities. On the other side, criminals and fraudsters develop AI tools to evade detection and perpetrate fraud. Fraud detection AI acts as a counterpart to fraudulent AI, constantly evolving to identify and prevent new fraudulent techniques.
  • Automated Trading Algorithms and Market Surveillance AI: High-frequency trading relies on automated algorithms to execute trades swiftly. Market surveillance AI systems monitor trading activities to detect anomalies, market manipulation, or insider trading. The surveillance AI acts as a counterpart to automated trading algorithms, ensuring fair and transparent markets.
  • Chatbots AI and Anti-Chatbot AI: Chatbots are designed to engage in automated conversations with users, providing customer support or information. Anti-chatbot AI systems, on the other hand, are being developed, and will increasingly be needed, to identify and counteract malicious chatbots used for spamming, phishing, or spreading misinformation. 
  • Content Generation AI and Content Verification AI: AI-driven content generation tools, such as text generators or deepfake algorithms, can create realistic text or media content. Content verification AI systems work as counterparts, aiming to detect and identify fake, manipulated content and content generated by AI versus humans to ensure content integrity and combat plagiarism.
  • Autonomous Vehicles and Traffic Management AI: As autonomous vehicles become more prevalent, Traffic Management AI systems emerge to optimise traffic flow, reduce congestion, and ensure efficient transportation. These systems act as counterparts to autonomous vehicles, coordinating their movements and maintaining overall traffic equilibrium.
  • Personalised Medicine AI and Adverse Event Detection AI: AI-powered personalised medicine algorithms analyse individual patient data to optimise treatment plans. Adverse event or drug detection AI systems work as counterparts, constantly monitoring and identifying potential adverse effects or complications to ensure patient safety and treatment efficacy.
  • Defense Drones and Counter-Drone AI: In the domain of defense, the deployment of defense drones for surveillance or combat purposes has led to the development of Counter-Drone AI systems. These systems aim to detect, track, and neutralise unauthorised or hostile drones, ensuring airspace security and maintaining the balance of power.

In each of these examples, the introduction of one AI technology leads to the advent of another aimed at maintaining equilibrium.

The ramifications of DSAI principles are profound, eliciting both enthusiasm and apprehension:

On one hand, this new paradigm presents an exciting frontier of machine intelligence, opening doors to unprecedented efficiency, unparalleled problem-solving capabilities, and streamlining decision-making processes.

On the other hand, it raises a host of profound concerns that demand strategic foresight and proactive measures.

The implications of DSAI are far-reaching and should capture the attention of industry leaders and policymakers to understand its potential benefits and drawbacks. 

One crucial consideration that stakeholders should be acutely aware of is the ethical need to keep human agency central within this landscape.

Striking the right balance between the power of AI and human judgment is vital to harnessing the potential of DSAI without compromising core values and ethical principles.

As the world races to embrace the transformative capabilities of AI, it becomes imperative for leaders to tread cautiously and foster collaborative efforts, harnessing the principles of DSAI to drive responsible AI development and use for the betterment of society.

From navigating escalating arms races and ethical dilemmas to ensuring system stability, preventing over-reliance on AI, and safeguarding privacy in data-driven landscapes, a multifaceted approach is needed to navigate this new frontier.

The benefits of DSAI are supported by five principal arguments.

  • Human-Centric Approach: DSAI enhances human capabilities by leveraging AI technologies as tools and collaborators, allowing humans to achieve outcomes that surpass their individual capacities while keeping pace with others empowered by AI.
  • Balancing Biases: Counterpart AIs can be designed to address biases present in AI systems. By detecting and mitigating biased algorithms, DSAI promotes fairness and inclusivity and reduces the potential for discriminatory outcomes in decision-making processes (a minimal audit sketch follows this list).
  • Robust Decision-Making: The presence of counterpart AIs enables multiple perspectives and viewpoints to be considered in decision-making. This leads to more comprehensive and robust outcomes, minimising the risk of undue influence from a single AI system.
  • Improved Security: DSAI enables the development of AI systems that actively counteract malicious AI counterparts. This enhances cybersecurity measures, protects against evolving threats, and ensures the integrity and safety of digital systems and networks.
  • Enhanced Efficiency: DSAI fosters competition and innovation, leading to continuous advancements in AI technologies. The emergence of counterpart AIs drives efficiency improvements, optimising processes and enhancing overall performance.
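
As a hedged illustration of the bias-balancing argument above, the sketch below shows the simplest kind of check a counterpart auditor AI might run against another system's decisions: a demographic parity gap. The groups, outcomes, and tolerance are invented for the example; real audits use richer fairness metrics.

```python
def demographic_parity_gap(decisions):
    """Counterpart-audit sketch: compare approval rates across groups.

    `decisions` maps a group label to the 0/1 outcomes produced
    by the primary AI system under audit.
    """
    rates = {group: sum(d) / len(d) for group, d in decisions.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical outcomes from the system under audit.
audited = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

gap, rates = demographic_parity_gap(audited)
print(f"approval rates: {rates}, parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, not a regulatory standard
    print("counterpart auditor: flag these decisions for human review")
```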

As we shift our focus towards the potential drawbacks of DSAI, it is crucial to examine the five primary pitfalls that cast a shadow over this promising concept. These pitfalls underscore the need for a cautious and measured approach, as the risks associated with DSAI demand corporate executives' and policymakers' utmost attention.

  • Complexity and Interdependence: DSAI increases the complexity of AI systems, as they interact and compete with each other. This interdependence raises challenges in terms of system stability, interoperability, and potential cascading effects if one counterpart AI fails or malfunctions.
  • Over-Reliance on AI: DSAI may lead to an over-reliance on AI systems, where humans become overly dependent on AI for critical decision-making. This can reduce human skills and judgment, limiting our ability to address complex issues without relying on AI.
  • Ethical Dilemmas: DSAI introduces complex ethical dilemmas, as AI systems autonomously compete and make decisions. It raises questions about accountability, transparency, and the potential for unintended consequences or conflicts between counterpart AIs.
  • Privacy Concerns: The presence of counterpart AIs may raise privacy concerns, as AI systems collect and analyse vast amounts of data. This raises questions about data ownership, surveillance, and the potential for misuse or unauthorised access to personal information.
  • Escalating Arms Race: The emergence of counterpart AIs can lead to an escalating arms race, with each side continuously developing more advanced and sophisticated technologies. This may create an unsustainable cycle of competition, diverting resources and attention from other societal needs.

Comprehensively exploring these concerns is paramount if we are to ensure the responsible and ethical development of AI systems. To prevent the risks associated with DSAI, leaders should implement measures at both the regulatory and economic levels:

At the Regulatory Level:

Establish International Agreements and Regulations:

  • Advocate for international agreements and regulations that prevent an escalating arms race in AI development and curb DSAI's negative effects.
  • Collaborate with other countries and global organisations to set limits and guidelines to regulate DSAI and sustain responsible AI development and deployment.

Human-in-the-Loop Approach:

  • Implement a human-in-the-loop approach where humans are actively involved in critical decision-making processes alongside AI systems, so as to master the emerging DSAI ecosystems.
  • Encourage continuous human supervision, verification, and intervention to mitigate DSAI's risks of over-reliance on AI and to address complex ethical dilemmas (see the routing sketch after these points).
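
One common way to operationalise a human-in-the-loop approach is confidence-based routing: the AI acts autonomously only when its confidence is high, and defers to a human reviewer otherwise. The sketch below is a minimal illustration under assumed names; the threshold in particular would need to be set per use case, not taken as given.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float

def route(decision: Decision, auto_threshold: float = 0.9) -> str:
    """Send low-confidence AI decisions to a human reviewer."""
    if decision.confidence >= auto_threshold:
        return f"auto-approve: {decision.label}"
    return (f"escalate to human review: {decision.label} "
            f"({decision.confidence:.0%} confident)")

print(route(Decision("transaction ok", 0.97)))
print(route(Decision("possible fraud", 0.62)))
```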

Privacy by Design and Data Protection:

  • Comply with relevant data protection regulations and ensure robust data security measures to address privacy concerns around the collection and use of personal information, anticipating DSAI's implications.
  • Prioritise privacy-by-design principles when developing AI systems; these could benefit other AI systems as a systemic "auto-compliance" and harmonisation effect arising from DSAI dynamics (a minimal sketch follows).
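
In practice, privacy by design often starts with data minimisation and pseudonymisation at the point of collection. The sketch below shows one hypothetical, minimal form of this: replacing a direct identifier with a salted hash before a record is stored. Field names are invented, and note that salted hashing is pseudonymisation, not full anonymisation.

```python
import hashlib
import os

SALT = os.urandom(16)  # per-deployment secret; illustrative only

def pseudonymise(record, id_field="email"):
    """Privacy-by-design sketch: strip direct identifiers before storage."""
    out = dict(record)
    digest = hashlib.sha256(SALT + out.pop(id_field).encode()).hexdigest()
    out["user_key"] = digest[:16]  # stable pseudonym; still personal data under GDPR
    return out

print(pseudonymise({"email": "ada@example.com", "purchase": 42.0}))
```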

Interoperability and Standardisation:

  • Promote interoperability standards and protocols that enable seamless and secure communication and cooperation between different AI systems, shaping a DSAI-for-good landscape.
  • Collaborate with industry stakeholders and standardisation bodies to develop guidelines for ensuring system stability and minimising DSAI's negative effects.

At the Economic Level:

Ethical Frameworks and Governance:

  • Develop comprehensive ethical frameworks and guidelines for AI systems, covering accountability, transparency, and fairness, in order to anticipate and manage both DSAI's positive and negative effects.
  • Establish governance mechanisms that ensure ethical decision-making and oversight throughout DSAI development.

Responsible Resource Allocation:

  • Encourage responsible resource allocation by diversifying investments in AI research and development, so that societal needs beyond DSAI itself continue to be addressed.
  • Foster collaboration between industry, government, and non-profit organisations to identify and prioritise areas where DSAI can have a positive social impact.

Cross-Sector Collaboration:

  • Encourage collaboration between government institutions, academia, industry leaders, and civil society organisations to collectively address the challenges posed by DSAI in our modern economy.
  • Facilitate knowledge sharing, interdisciplinary research, and collaborative projects to develop DSAI solutions and best practices. 

Responsible AI Leadership:

  • Build responsible AI leadership within organisations, with a focus on ethical decision-making, transparency, and accountability concerning how DSAI impacts society.
  • Invest in AI talent development, fostering a culture of responsibility and continuous learning to address the risks associated with DSAI.

While promoting responsible AI development, these measures ensure human oversight and involvement, safeguard privacy, and foster collaboration among stakeholders, enabling a more balanced and beneficial deployment of AI technologies for the greater good of society.

The various dimensions of the DSAI concept, with its profound implications, risks, and opportunities for society, represent a critical juncture in the evolution of AI, one that demands a thoughtful and nuanced approach to steer the course of technology for the benefit of all.

Leaders need to navigate this uncharted territory with strategic acumen and a human-centric focus. 

The quest for equilibrium between man and machine has never been more pivotal to fostering a sustainable future.

About the Author

Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), a mentor at the MIT Priscilla King Gray (PKG) Center, and a Senior Lecturer at INSEAD, HEC and EDHEC Business School.

The post The Rise of Dual-Sided Artificial Intelligence (DSAI) appeared first on The European Business Review.

Artificial Intelligence is Exclusive-by-Design https://www.europeanbusinessreview.com/artificial-intelligence-is-exclusive-by-design/ https://www.europeanbusinessreview.com/artificial-intelligence-is-exclusive-by-design/#respond Sun, 25 Jun 2023 18:11:29 +0000 https://www.europeanbusinessreview.com/?p=186141 By Hamilton Mann As humans, it is our responsibility to shape the trajectory of artificial intelligence and use the knowledge of machine learning to enhance human intelligence such that it […]

The post Artificial Intelligence is Exclusive-by-Design appeared first on The European Business Review.

By Hamilton Mann

As humans, it is our responsibility to shape the trajectory of artificial intelligence and use the knowledge of machine learning to enhance human intelligence such that it allows for diversity and inclusivity.

Throughout human history, we have excelled at creating products that meet the specific needs of certain individuals while excluding others. We have continuously honed this skill, striving to differentiate ourselves and design products that cater to targeted markets and specific audiences.

This mindset, shaped by our mental, moral, and ethical models, influences how we perceive and interact with the world for most of our lives. Undoubtedly, this approach conflicts with inclusivity and diversity. The better we become at designing and delivering products and services that perfectly suit a specific targeted audience, the more adept we become at discriminating against other non-targeted audiences, purposefully leaving them behind.

Artificial intelligence (AI), built upon our mental, moral, and ethical models, follows this same pattern—it is exclusive by design, not inclusive. And paradoxically, it is already omnipresent. 

The global artificial intelligence market was valued at $87 billion in 2021 and is projected to reach $1,597.1 billion by 2030. Its continuous and widespread adoption places it at the core of numerous organisations worldwide:

  • In an increasing number of hardware and software components.
  • In various industries such as automotive, healthcare, retail, finance, banking, insurance, telecommunications, manufacturing, agriculture, aviation, education, media, and security, to name a few.
  • In expanding roles and professions, including human resources, marketing, sales, advertising, legal, supply chain, and many more.

We are just scratching the surface. 

A key question arising from the development of artificial intelligence is how to ensure that biases or segmentation models in the data powering AI do not lead to discriminatory behaviour based on characteristics such as gender, race, religion, disability, sexual orientation, or political views. It is among the most significant challenges AI development poses.

Artificial intelligence is not so… artificial

With the exponential and rapid development of artificial intelligence, the temptation to use it for unprecedented differentiation and unparalleled targeting in pursuit of economic growth and competitiveness is strong, and will continue to grow. A tension exists between the need for organisations and individuals to embrace diversity and inclusivity, fostering greater equality in society, and a global economic system that encourages and exacerbates behaviours in which differentiation, and therefore discrimination, becomes a rule of the game leading to success. This tension is on the verge of intensifying, because AI can systemically codify these competition-oriented behaviours in our digital society, presenting one of the greatest challenges of our time.

Artificial intelligence is already permeating every facet of society:

  • Personal assistants have now become virtual, enabling the execution of tasks with a human-like level of conversational ability.
  • Market analyses are conducted by machines that produce studies such as competitor comparisons and generate detailed reports.
  • Customer behaviour, purchasing processes, and preferences are scrutinised by increasingly intelligent customer relationship management (CRM) systems capable of predicting customer needs.
  • Customer service is also provided by chatbots that can answer frequently asked questions on a website.

And this is just the beginning compared to the potential applications that are already emerging and rapidly approaching in the near future, including:

  • Autonomous vehicles (bicycles, cars, trains, planes, boats, etc.)
  • Robots assisting surgeons in operating rooms.
  • Content creation (videos, music, articles, etc.) entirely produced by machines.
  • Public policies whose measures would be prescribed and performance predicted through the analysis of large volumes of data.
  • And much more.

Considering the societal implications for the future of humanity, artificial intelligence is far from being as artificial as it may seem.

We must decide whether we plan to use AI to eliminate visible and invisible inequalities to an unprecedented extent or if we unconsciously or consciously intend to amplify them on the same scale. As we enter the era of artificial intelligence, there will be fewer and fewer grey areas.

Artificial Intelligence opens a new era for human learning

The responsibility for shaping the trajectory of artificial intelligence rests squarely on our shoulders as humans. At the heart of the challenges faced in 21st-century learning lies the way we teach machines what they need to learn and how they learn it. It necessitates not only an ongoing pursuit of developing our own intelligence but also a deep understanding of how machines acquire their own.

Both human and machine learning face similar challenges (the first of these contrasts is illustrated in a brief sketch after this list):

  • Supervised learning versus unsupervised learning
  • Structured learning versus unstructured learning
  • “Few-shot” learning versus “Blink” learning (as Malcolm Gladwell puts it)
  • Long-term versus short-term learning with a trade-off between forgetting and retention
  • “Zero-shot” learning versus learning through “dreaming”
  • Visuomotor learning versus multisensory learning (AVK: auditory, visual, kinaesthetic)
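
To ground the first of these contrasts, the minimal sketch below shows the same toy data learned two ways: supervised, with labels a teacher provides, and unsupervised, with structure the machine must discover alone. It assumes scikit-learn is installed, and the data points are invented for the example.

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy one-dimensional points: two loose groups around 1.0 and 5.0.
X = [[0.8], [1.1], [1.3], [4.7], [5.0], [5.4]]
y = [0, 0, 0, 1, 1, 1]  # labels a "teacher" supplies in supervised learning

supervised = LogisticRegression().fit(X, y)
print("supervised predictions:", list(supervised.predict([[1.0], [5.2]])))

# Unsupervised: no labels; the algorithm must find the grouping itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("unsupervised clusters:", list(clusters))
```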

By unravelling the mysteries of how machines learn, we not only discover new avenues for learning that were previously unexplored and unimaginable but also revolutionise the standards by which we understand our learning process, ultimately enhancing human intelligence. However, let us not be mistaken. Intelligence and knowledge are not synonymous, and increasing our knowledge is a necessary yet insufficient condition for augmenting our intelligence. 

Enhancing human intelligence primarily involves expanding our capacity for questioning, challenging the status quo, nurturing curiosity, and fostering the emergence of new questions in our minds, leading to the discovery and rediscovery of what we think we know and who we are.

Artificial intelligence is far less intelligent than commonly imagined

Even without contemplating an artificial intelligence capable of replicating human emotions, there is an inherent distinction that sets AI apart—the comprehension and grasp of context. 

Context comprises numerous parameters, some evident to the naked eye, while others are more discreet, nuanced, and constituted by subtle signals and details that play a pivotal role in characterising a context. Considering the ever-evolving nature of any given context, it will take time before artificial intelligence can truly appreciate its emotional complexity.

Building the kind of AI that benefits society necessitates a visionary approach. It involves comprehending which tasks are and will be best executed by machine intelligence in contrast to those that are and will be better handled by human intelligence. It also requires recognising tasks that must and will continue to be carried out by humans, regardless of technological advancements.

The responses our societies develop to establish a framework in which artificial intelligence aligns with human intelligence will shape the future of humanity as a whole. This goes beyond numerous innovations and new forms of competitive advantages that will redefine market dynamics as we know them today. More importantly, it holds sociological implications and affects the world we leave for future generations.

Most often, when we contemplate “Machine Learning”, our mental model leads us to think of a strictly one-way approach in which we teach the machine and provide it with the means to learn autonomously in various domains.

Artificial intelligence heralds a profound transformation in the relationship between humans and machines. This evolving dynamic, which is already becoming increasingly critical and fascinating, is more bidirectional than ever before. Consequently, the question arises: what can machine intelligence teach us to enhance our human capabilities?

We must embrace new ways of thinking to enable machines to perform tasks that would be challenging, if not impossible, for us to accomplish in the same manner. Simultaneously, we have the opportunity to seize new avenues for learning and self-improvement in numerous domains that currently demand extensive effort and years of expertise, with true mastery often only attainable through human execution.

Artificial intelligence is seeping into the decision-making process

While artificial intelligence and the recommendations it produces offer unsuspected opportunities to enhance not only our own intelligence but also the nature of relationships and emotional attachments we may develop with machines in the future, it also raises delicate questions of Environmental and Social Responsibility.

At what point does the decision support provided by AI exert such a degree of influence that it silently decides on behalf of humans? This complex question is already upon us. The answer can be as nuanced as individuals themselves, particularly given the vulnerability that society may recognise in each of us at a given moment or in particular circumstances of existence. That is why applications, devices, and any equipment embedding any form of artificial intelligence need to be explicitly transparent about the limits of the parameters their algorithms consider or disregard, wherever the implications may pose a danger to oneself or others. Such transparency will foster the responsible use of AI and prevent the risks of inappropriate or even prohibited use.

Artificial intelligence challenges us to make it explicitly explainable to all, in terms of the causality behind its results, in order to guide decisions that increasingly impact our lives and society as a whole. Paradoxically, as humans, we cannot explain everything about the reasons behind many of our decisions in a manner that the majority would understand and deem fair.
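
Explainability techniques aim to surface exactly this causality. One minimal, model-agnostic approach, sketched below with an invented stand-in model and data, is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy degrades; the features whose shuffling hurts most are the ones driving its decisions.

```python
import random

random.seed(0)

def model(row):
    """Stand-in model whose decisions depend almost entirely on feature 0."""
    return 1 if row[0] > 0.5 else 0

rows = [[random.random(), random.random()] for _ in range(500)]
labels = [model(r) for r in rows]

def accuracy(data):
    return sum(model(r) == y for r, y in zip(data, labels)) / len(data)

baseline = accuracy(rows)
for feature in (0, 1):
    shuffled = [r[:] for r in rows]
    values = [r[feature] for r in shuffled]
    random.shuffle(values)
    for r, v in zip(shuffled, values):
        r[feature] = v
    # A large accuracy drop means the model leans heavily on this feature.
    print(f"feature {feature}: importance = {baseline - accuracy(shuffled):.2f}")
```

Here the shuffle test would reveal that only feature 0 matters, which is the kind of causal account the paragraph above asks AI systems to give.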

Artificial intelligence will profoundly change the value of work

Some fear that artificial intelligence will replace humans. While the science-fiction notion of AI surpassing humanity, Terminator-style, remains fictional, there is a paradigm the digital society must internalise: AI can be better than humans at certain tasks, yet it will not be better than humans at all tasks.

With the development of AI, we are experiencing and will continue to experience a transformation from the knowledge economy to the trust economy. This shift is motivated by the need for increased predictability, precision, and efficiency on one hand, and the need for more fairness, transparency, and sustainability on the other.

For the future of “knowledge workers”, digital technology, particularly AI, will bring about five types of changes that will disrupt society to varying degrees, depending on the predominant nature of work and work value in each continent:

First, some jobs will disappear. This is not new; similar phenomena have occurred during previous industrial revolutions.

Then, some jobs will be enhanced by AI. Again, this is not new; analogous situations have existed during previous industrial revolutions.

Next, some jobs will evolve to become tech jobs.

There are also jobs that are currently difficult to imagine because their utility is intrinsic to societal needs about which we know little or nothing.

However, we must not be naive: the development of AI already creates, and will continue to generate, precarious jobs, stopgap roles that compensate for the lack of intelligence in AI. For example, shadow workers label vast quantities of data in a frenzy of repetitive tasks to help AI learn and to ensure that abhorrent, intolerable, and unlawful content is kept off the platforms we use. The long-term impact of viewing such content on the mental health of these "workers" must be considered.

Which of these types of changes brought about by AI will have the greatest impact on the evolution of work in our societies? It is difficult to predict. Nonetheless, while it is not the sole force driving the kinetic transformations that characterise our century, it will ultimately be our responsibility to decide.

Regardless, artificial intelligence has no ethics of its own.

Artificial intelligence lacks ethics of its own; the only ethics in play are ours and ours alone. Our ethical principles ultimately form part of the functional requirements and are thereby digitally encoded, along with the biases we carry. In a way, artificial intelligence inherits the ethical genes of its creator.

Making the invisible codes of our societies visible is perhaps one of the most transformative advancements that artificial intelligence will enable humanity to achieve. Such a level of transparency regarding the unspoken and unwritten will contribute to greater equality and profoundly redefine citizens' demands for justice in our societies. It is also an opportunity to ensure that the artificial intelligences that interact and coexist with ours are, as much as possible, the product of collective intelligence or, at best, receptacles of the wealth produced by the synergies of human diversity in all its forms.

The augmentation of our intelligence through that of machines will always, and even more so in the future, be confronted with the existential question of which human cause we assign this intelligence to serve.

Therefore, we should strive to make "artificial intelligence" an intelligence inspired by the quintessence of the best in our humanity, excluding the dark aspects of human nature. How to do so is arguably the most dizzying yet most crucial question for the future of humanity.

It is an ethical question to which only our humanity has the power and responsibility to provide an ever-renewed answer, in order to build the future in which we wish to live.

About the Author

Hamilton Mann is the Group VP of Digital Marketing and Digital Transformation at Thales. He is also the President of the Digital Transformation Club of INSEAD Alumni Association France (IAAF), a mentor at the MIT Priscilla King Gray (PKG) Center, and a Senior Lecturer at INSEAD, HEC and EDHEC Business School.

The post Artificial Intelligence is Exclusive-by-Design appeared first on The European Business Review.
