The AI Rush: Organisations Holding the Key and Searching for the Door

A student in my (BPGP) class recently described how companies are investing heavily in AI tools and pushing employees to adopt them. These companies, he said, seem convinced that they are holding an important key, but are not yet sure which lock it opens.

The metaphor was funny, but also remarkably accurate.

Across industries, organisations sense that AI matters. There is urgency, investment, experimentation, and, increasingly, pressure to adopt quickly. Yet beneath this acceleration lies a quieter uncertainty. Many organisations still struggle to articulate where the real value of AI lies, what specific problems it is meant to solve, and how it meaningfully fits into existing systems of work and decision-making.

This is why metaphors matter. Good metaphors often reveal what people already intuitively feel but have not yet fully articulated. They translate ambiguity into something immediately recognisable. In this case, the metaphor captured something central about the current AI moment: confidence in the importance of the technology coexisting with uncertainty about its actual purpose.

In Science and Technology Studies (STS), scholars such as Bruno Latour, Langdon Winner, and Madeleine Akrich have long argued that technologies are never neutral tools. Technologies shape participation, distribute power, and organise social life. They do not simply enable action; they structure it.

One useful way to think about technologies is as doors. Every technological “door” quietly raises political and social questions: Who is allowed in? Who is excluded? Who gets to decide? What behaviours become easier, rewarded, or even mandatory? And whose participation becomes impossible in the process?

Seen through this lens, the student’s metaphor becomes even more revealing. If organisations believe they possess an important key but do not yet know which lock it opens, then the uncertainty runs deeper than implementation strategy or return on investment. The uncertainty concerns purpose itself.

What kinds of problems are organisations actually trying to solve with AI? What forms of work are they attempting to optimise or replace? What assumptions about efficiency, productivity, creativity, or expertise are being built into these systems? And what kinds of institutional “doors” are being constructed as AI becomes embedded into everyday organisational life?

These questions matter because infrastructures tend to become invisible once normalised. By the time their consequences become obvious, patterns of access, exclusion, and authority are often already deeply embedded.

The current AI race frequently frames adoption as inevitable. Organisations worry about being left behind, employees feel pressure to adapt quickly, and markets reward visible experimentation. But speed can become a substitute for clarity. Adopting AI without deeper reflection risks creating systems whose social consequences are recognised only retroactively.

None of these concerns are entirely new. Social scientists, historians of technology, and STS scholars have long examined how technologies reorganise labour, identity, expertise, and power. From industrial machinery to algorithmic management systems, technological shifts have always carried questions about participation, authority, and institutional control.

What feels different today is the scale and speed of adoption combined with the cultural narrative surrounding AI itself. AI is often discussed simultaneously as inevitability, opportunity, disruption, and existential necessity. Under such conditions, critical reflection can easily appear secondary to implementation.

Yet this is precisely the moment when reflection becomes most necessary.

One of the privileges of teaching working professionals is encountering moments when participants, drawing directly from organisational life, articulate insights that resonate far beyond the classroom. Sometimes a single metaphor captures an entire historical moment more effectively than a long theoretical explanation.

The image of organisations holding an important key without yet knowing which lock it opens may be one of those metaphors.

#IIMA #BPGP #VUCA #AI #workplace #AItools