Redefining AI: Large Models as Cultural and Social Technologies

This article explores the concept of large AI models as cultural and social technologies rather than autonomous agents, emphasizing their societal implications.

On March 13, 2025, the journal Science published an article titled “Large AI models are cultural and social technologies,” with the standfirst “Implications draw on the history of transformative information systems from the past.” The article argues that large language models (LLMs) should be understood as cultural and social technologies rather than as autonomous agents. Their technical essence is closer to that of historical information-processing systems such as writing, printing, and bureaucracy, which reorganize humanity’s accumulated cultural data to support social coordination. Misreading large language models as intelligent agents leads public discussion astray; assessing their social impact and governance pathways accurately requires shifting to a sociotechnical framework of analysis.

Introduction

Debates about artificial intelligence often revolve around whether large models possess intelligence and autonomy. Discussions of the cultural and social consequences of large models focus on two things: their immediate impacts, and a hypothetical future in which these systems evolve into artificial general intelligence (or even superintelligence).

However, viewing large models as agents is fundamentally misguided. By integrating perspectives from social sciences and computer science, we can more accurately understand AI systems: large models should not be seen as agents but as a new type of cultural and social technology that enables humans to utilize the accumulated information of others.

Social and Cultural Institutions

Since the dawn of our species, humans have relied on culture. Beginning with language itself, humans possess a distinctive ability to learn from the experiences of others, an ability that has been a key to our evolutionary success.

Major transformations in cultural technologies have driven profound social change: the progression from spoken language to images, writing, printing, film, and video. As information has spread ever more widely across time and space, new methods of acquiring and organizing it (such as libraries, newspapers, and internet search) have continued to evolve. These developments have profoundly shaped human thought and society, for better and for worse.

We have also depended on social institutions to coordinate individual information gathering and decision-making. These institutions can be viewed as a form of technology. In modern society, markets, democracy, and bureaucratic systems are particularly important:

  1. Economist Friedrich Hayek noted that market price mechanisms generate simplified representations by dynamically aggregating extremely complex economic relationships. Producers and buyers need not understand production complexities, only the price, which compresses vast details into simplified yet usable representations.
  2. The electoral mechanisms of democratic systems similarly focus dispersed public opinion into collective laws and leadership decisions.
  3. Political scientist and anthropologist James C. Scott argued that all states (democratic or not) manage complex societies through bureaucratic systems that create classifications and organize information.

Long before the advent of computers, markets, democracies, and bureaucracies relied on generating “lossy” (incomplete, selective, and irreversible) yet useful representations. These representations both depend on and transcend individual knowledge and decision-making.

Humans are highly reliant on these cultural and social technologies, but their feasibility stems from our unique capacity as agents. Humans and other animals can perceive and act upon a changing external world, construct new models of the world, update these models based on evidence, and design new goals. Humans can create and transmit new beliefs and values through language or print. Cultural and social technologies powerfully convey and organize these beliefs and values, but without individual capabilities, these technologies would be ineffective. Without innovation, imitation is meaningless.

Some AI systems (like those in robotics) indeed attempt to instantiate similar truth-discovery capabilities. While it is theoretically possible for artificial systems to achieve this in the future, all such systems currently fall far short of human capabilities. We can discuss the extent of our concerns about these potential future AI systems or how to address their emergence. However, this is distinctly separate from the current and near-term impacts of large models.

Large Models

Unlike more agent-like systems, large models have made remarkable and unexpected progress in recent years, placing them at the center of current debates in the AI field. This progress has even fueled claims that continued scaling alone will carry these systems all the way to agency. However, there is an essential difference between large models and agents, and scaling cannot change this fact.

Large models are not agents; they are a new way of combining the characteristics of cultural and social technologies. They generate summaries of the vast amount of human-generated information, but these systems do more than summarize information the way library catalogs, internet search, and Wikipedia do; like markets, states, and bureaucracies, they can also reorganize and reconstruct (or “simulate”) representations of information at scale. Just as market prices are lossy representations of resource allocation and use, and government statistics and bureaucratic classifications imperfectly represent demographic characteristics, large models are a “lossy JPEG” of their training data.

Behind the agentic interface and anthropomorphic veneer, large language models and large multimodal models are statistical models: they break vast corpora of human-generated text into tokens from a fixed vocabulary and estimate the probability distributions of long sequences of those tokens. This is an imperfect representation of language, but it captures substantial information about its statistical patterns. That is what allows large language models to predict the next word in a sequence and thereby generate human-like text.
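The statistical idea here can be illustrated with a deliberately tiny sketch: a bigram (word-pair) frequency model over a made-up corpus. This is not the neural-network architecture real large models use (they learn far richer patterns over subword tokens); it only shows the underlying principle of estimating the probability of what comes next from counts over human-generated text.

```python
from collections import Counter, defaultdict

# A made-up toy corpus standing in for "vast human-generated text".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word_distribution(word):
    """Estimated probability of each word following `word` in the corpus."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# A lossy representation of the corpus: frequent continuations of "the"
# ("cat", "dog") get more probability mass than rare ones ("mat", "rug").
print(next_word_distribution("the"))
```

Even this crude model can already “generate” plausible next words by sampling from the estimated distribution, which is the same prediction task, vastly scaled up and enriched, that large language models perform.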

Large models not only abstract vast amounts of human culture but also allow for diverse new operations: simple arguments can be expressed as elaborate metaphors, and complex prose can be compressed into plain language, among other transformations. Cultural information that was previously too complex, vast, and ambiguous to operate on at scale has been tamed.

In practice, the latest versions of these systems not only rely on vast corpora of human-generated and curated text and images but also depend on other forms of human judgment and knowledge. In particular, these systems rely on reinforcement learning from human feedback (RLHF) and prompt engineering. Even the latest “chain of thought” models typically begin with dialogues with human users.

Challenges and Opportunities

1. Challenges

Debates about artificial intelligence should focus on the challenges and opportunities presented by these new cultural and social technologies. The technology we have now impacts written and visual culture comparably to how large-scale markets affect the economy, large bureaucracies affect society, and even how printing transformed language. What will happen next? Like past general-purpose technologies in economics, organization, and information, these systems will affect productivity, supplement human work, automate tasks previously only humans could perform, and influence distribution, potentially altering resource acquisition patterns.

They may also produce broader cultural impacts. We do not yet know whether these impacts will be as profound as those of printing, markets, or bureaucracies, but viewing them as cultural technologies highlights their potential significance.

At the same time, these technologies create new possibilities for reorganizing information and coordinating the actions of millions globally. Ongoing debates about the economic, social, and political consequences of large language models echo historical concerns and expectations regarding new cultural and social technologies. Guiding these debates requires recognizing the commonalities of old and new arguments while carefully mapping the specificities of new technologies.

Such mapping is a core task of social science. Research into the past consequences of technologies can help us think about the latent social impacts of artificial intelligence and explore pathways to enhance positive impacts and mitigate negative ones through the redesign of AI systems.

However, two obvious current concerns are that large models and related technologies may displace “knowledge workers,” and that large models may homogenize or fracture culture and society. Thinking about these questions in historical context is highly instructive.

The design goal of large models is to faithfully reproduce the actual probabilities of text, image, and video sequences. They are inherently most accurate about the scenarios most common in their training data and least accurate about rare or entirely novel ones, a tendency that may exacerbate such homogenization.
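This pull toward the typical can be made concrete with a toy sketch. The continuation frequencies below are hypothetical stand-ins for a model’s learned distribution over ways of completing a prompt; the comparison shows how mode-seeking (greedy) selection collapses onto the single most common phrasing, while probability-proportional sampling preserves at least some of the rare tail.

```python
from collections import Counter
import random

# Hypothetical frequencies for continuations of one prompt, standing in
# for a learned distribution: common phrasings dominate, rare ones get
# little probability mass.
continuations = Counter({
    "a common phrasing": 80,
    "a less common phrasing": 15,
    "a rare, novel phrasing": 5,
})

# Greedy (mode-seeking) selection: every user receives the single most
# frequent continuation, erasing diversity entirely.
greedy = continuations.most_common(1)[0][0]

# Probability-proportional sampling keeps some diversity, though rare
# phrasings still appear only about as often as they did in training.
rng = random.Random(0)
sampled = rng.choices(
    list(continuations), weights=list(continuations.values()), k=1000
)
rare_share = sampled.count("a rare, novel phrasing") / 1000

print(greedy)      # always the most common phrasing
print(rare_share)  # roughly 0.05, mirroring the training frequency
```

The design choice between these decoding strategies is one concrete lever on how strongly a deployed system amplifies what is already most common.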

2. Opportunities

On the other hand, large models may allow us to design new methods of accessing the diversity of cultural perspectives they summarize. Integrating and balancing these perspectives could provide more nuanced means of solving complex problems. One way to achieve this is to construct a “society-like” ecology: different large models encoding different perspectives that debate and cross-fertilize, generating hybrid perspectives or identifying gaps in human expertise. We may need new systems that diversify the responses and roles of large models, producing distributions and diversity akin to those of human society.

Such diversified systems are particularly important for scientific advancement. By connecting numerous perspectives in text, audio, and images, large models may help us discover unprecedented connections among them, benefiting science and society.

The impact of new cultural and social technologies on economic relationships also presents subtler yet intriguing pathways. The development of cultural technologies has sparked fundamental economic tensions between information producers and distribution systems:

  1. The tension between producers and distributors: distributors want to acquire information cheaply, while producers want to distribute information cheaply.
  2. Digitalization sharpens this tension: the ease of distributing digital information has intensified the conflict, and the speed, efficiency, and scope with which large models process available information make it more pronounced still. Concentrated power may make system owners more likely to capture the gains of this efficiency at the expense of others’ rights.

3. Technical and Political Issues Amid Challenges and Opportunities

Key technical questions include: to what extent can the systemic flaws of large models be corrected? How do they compare to the flaws of systems based on human knowledge workers?

These questions should not obscure critical political questions: which actors can mobilize their interests? How do they shape the mixed outcomes of technology and organizational capabilities?

Tech commentators often reduce these issues to a binary confrontation between machines and humans: either the forces of progress triumph over Luddite resistance, or humans successfully fend off encroachment by non-human artificial systems. This framing not only misunderstands the complexity of distributional struggles that long predate computers but also overlooks the varied paths future progress may take.

In the earlier cases of social and cultural technologies, norms and regulatory frameworks gradually formed to temper their impacts. These checks and balances did not emerge spontaneously, however; they were the result of concerted efforts by actors both inside and outside the technology sector.

Looking to the Future

The narrative of artificial general intelligence (which casts large models as superintelligent agents) is promoted by both optimists and skeptics, inside and outside the tech community. This narrative misunderstands both the nature of these models and their relationship to past technological transformations. More importantly, it diverts attention from the real problems and opportunities these technologies present, and ignores what historical lessons can teach us about weighing their pros and cons.

There may exist hypothetical future AI systems closer to agents, but large models are clearly not such systems. Like library card catalogs or the internet, large models belong to the continuum of cultural and social technology development.

Social science has explored this history in detail, forming a unique understanding of past technological upheavals. Close collaboration between computer science and engineering with social science will help us understand this history and apply its lessons: Will large models lead to cultural homogenization or fragmentation? Will they reinforce or weaken the social institutions of human discovery? Who will benefit and who will be harmed in this process?

These urgent questions are hard to keep in focus in debates that analogize large models to human agents; reframing the discussion of artificial intelligence will help advance research on them.

If both computer scientists and social scientists understand that large models are “merely” (but also genuinely) new cultural and social technologies, the two fields will find it easier to collaborate by combining their expertise. Computer scientists can integrate their deep understanding of how these systems work with social scientists’ knowledge of how past large-scale systems reshaped societies, extending existing research agendas and opening new directions.

Moreover, moving the debate away from existential fears of “machines taking over” and utopian promises that “everyone will have a perfect artificial assistant” will make clear that the actual policy consequences of large models are likely to differ from both scenarios.

In this light, engineers and computer scientists have become aware of bias in large models and are reflecting on its relationship to ethics and justice. They need to go further and ask: how will these systems affect the distribution of resources? What are their actual polarizing or integrating effects on society? Can we develop large models that enhance rather than suppress human creativity?

Answering these questions requires an understanding that encompasses both social science and engineering. Shifting the debate on artificial intelligence from agency to cultural and social technology is a crucial first step in building this interdisciplinary understanding.
