Between late 2025 and the first half of this year, GitHub—one of the critical platforms where much of the software that sustains the internet is organized, reviewed, and published—began showing clear signs of structural stress: availability degradations, pull request incidents, search problems, delays in internal processes, and the inevitable need to reinforce critical parts of its infrastructure. The company itself acknowledged in October 2025 that it had initiated a plan to multiply its capacity tenfold, but by February of this year that original estimate had already fallen short: it now needed to scale its infrastructure to support thirty times its current load. The stated reason was an abrupt change in how software is built: since the second half of December 2025, repository creation, pull request activity, API usage, and traffic across the rest of its public services grew exponentially.
The importance of the GitHub case lies in its role as an early test case for a new reality in which AI agents are beginning to take a predominant role. What first appears as an availability problem can extend to any system built on a premise that until now seemed obvious: that the key user on the other side was human. Public APIs, forms, free tiers, administrative panels, support flows, stores, repositories, authentication systems, wikis, corporate chats, and collaborative tools were designed for a population of users who act at human cadence. But agents introduce another operational regime. The internet is beginning to receive, at scale, users who don’t tire of trying, aren’t intimidated by a form, don’t forget an open task, and don’t stop when human attention runs out.
Much of the web and its systems became anthropomorphized, which is both expected and reasonable: systems adapt to their users. Humans write slowly, alternate between tasks, take time to interpret screens, rest, make mistakes, get bored, and abandon. That fragility functioned for years as an invisible regulator of demand. Agents come to pulverize that regulator: they can open tickets, query APIs, compare results, generate variations, try alternative paths, and retry operations with a persistence foreign to the economy of human attention. That’s why the crisis manifests first at the infrastructure level, but it would be naive to think this change will stop there. Eventually it will reach any service whose design depends on a premise now called into question: that the key end user is human. And it will do so across all layers of the system: infrastructure, interfaces, business models, governance. The GitHub case is just the first tremor.
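The idea that human fragility once served as an invisible regulator of demand can be made concrete with a small simulation. The sketch below is illustrative, not any real platform’s implementation: it models a token-bucket rate limiter whose capacity and refill rate are tuned to human cadence (a small burst, then roughly one request every ten seconds), and then sends it the kind of relentless, zero-pause request stream an agent produces. The burst is consumed almost instantly, and everything after it is rejected.

```python
import time

class TokenBucket:
    """A rate limiter sized for human-paced traffic: a small burst,
    then a slow trickle of new permissions."""
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Tuned for a human: 5 burst requests, refilling one every 10 seconds.
bucket = TokenBucket(capacity=5, refill_per_sec=0.1)

# An agent retrying without pause drains the bucket almost instantly.
results = [bucket.allow() for _ in range(20)]
granted = sum(results)
print(granted)  # prints 5: the burst capacity, then near-total rejection
```

A human user would never notice this limit, because reading, typing, and deciding spread those 20 requests over minutes; the agent hits the ceiling in microseconds. The regulator was never the bucket alone; it was the bucket plus human slowness.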
At this point, the so-called dead internet theory acquires a more transcendental reading if separated from its conspiratorial version. According to that theory, much of the network would no longer be populated by people, but by bots, synthetic content, and automated interactions that simulate human activity. It’s important to highlight, however, that the internet was conceived as a place where humans and machines could communicate: from the beginning it was made of servers, protocols, crawlers, caches, search engines, filters, recommendation systems, and platforms. For philosopher Bruno Latour, the social is not composed only of humans, but of networks of actants: human and non-human entities—people, machines, rules, institutions, devices—capable of producing effects within an assemblage. From that perspective, AI agents are not an invasive anomaly, but the intensification of the hybrid, technical, and distributed character of the internet. What is being reconfigured now is the specific weight of the non-human actant. Prominence within the assemblage is beginning to redistribute.
But the current difference lies in the degree of delegated agency. Classic bots crawled pages, indexed content, published spam, or executed limited routines. Recent agents interpret objectives, divide tasks, consult tools, observe results, and synthesize plans of increasing complexity. They act on human interfaces with such intensity that they alter the infrastructure that receives them. They search for us, program for us, compare prices for us, test vulnerabilities for us, claim support for us. And increasingly, each of these agents will find that its counterpart (whoever answers a product search, reviews the code it generated, extends an offer, or responds to its support claim) is another agent.
Philosopher Paul Virilio helps us enrich the perspective on this phenomenon. His thesis that every technology invents its own accident allows us to read these failures without minimizing them as contingencies or engineering flaws. The ship brings with it the shipwreck; the airplane, the plane crash; the agentic web, the automatic saturation of anthropocentric systems. The accident doesn’t appear only when a platform goes down, but when it reveals under pressure the type of fragility that its own promise contained. If the promise of agents consists of executing tasks at scale, their characteristic accident consists of also executing at scale the demand, the friction, the retry, the exploitation, and the abuse. The outage then ceases to be a marginal exception and begins to function as a symptom: infrastructure starts experiencing the specific accident of an internet traversed by entities that act without pause and at an unprecedented scale.
The logical consequence is that humans will cease to be the central operative user of the internet. They will continue defining desires, objectives, and value criteria, but many concrete actions will be executed through agents. This displacement will transform all constitutive layers of distributed systems. First it will fall on infrastructure: more traffic, more events, more compute consumption. Then it will reach interfaces: how much sense does it make to invest effort in visual experiences optimized for humans if the decisive user will be an agent that prefers an API, a protocol, a semantic contract, or a structured channel of actions? Finally, it will reach business models: free tiers, usage limits, and engagement metrics will have to distinguish between the increasingly irrelevant human user and the much more relevant agentic user.
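One way business models could draw that distinction is by classifying clients into tiers with different quotas and different treatment at the quota boundary: a human tier that simply caps out, and an agentic tier with a much larger allowance whose overage is metered and billed. The sketch below is a hypothetical illustration; the tier names, the `X-Client-Kind` header, and the numbers are assumptions for the example, not any platform’s actual policy.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    requests_per_hour: int
    metered: bool  # over-quota usage is billed rather than blocked

# Illustrative tiers: humans get a small free allowance; agents get a
# large allowance and pay for overage instead of being cut off.
TIERS = {
    "human": Tier("human", requests_per_hour=500, metered=False),
    "agent": Tier("agent", requests_per_hour=50_000, metered=True),
}

def classify(headers: dict) -> Tier:
    """Naive classification by a self-declared header; a real system would
    combine credentials, behavioral signals, and attestation."""
    return TIERS.get(headers.get("X-Client-Kind", "human"), TIERS["human"])

def admit(headers: dict, used_this_hour: int) -> str:
    tier = classify(headers)
    if used_this_hour < tier.requests_per_hour:
        return "allow"
    return "bill" if tier.metered else "reject"

print(admit({"X-Client-Kind": "human"}, used_this_hour=501))  # reject
print(admit({"X-Client-Kind": "agent"}, used_this_hour=501))  # allow
print(admit({"X-Client-Kind": "agent"}, used_this_hour=60_000))  # bill
```

The interesting design question hides in `classify`: self-declaration is trivially gamed, so any real economic split between human and agentic users would depend on authenticating who is actually on the other end.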
The GitHub drama thus constitutes a resounding warning that makes explicit the transition between two incompatible eras of the internet: one organized around human visitors and another that will have to be organized around these invaders. Bruno Latour helps us understand that these entities are part of the sociotechnical assemblage of the internet. And so the key question seems to be, as Paul Virilio warns us, how we are preparing for the inevitable accident we are approaching.