Google declares the end of the World Wide Web.

No company is quite so inseparable from the World Wide Web as Google, which made searching the internet an eponymous verb. The web made Google rich, too, but this week Google relegated it to a submenu. In a design of its next-generation home page that the company showed at its annual I/O developer conference, web results are no longer the default.

Searching now returns a blancmange of content in special pull-out boxes, apps and features, some of it artificially generated. The days of lists of links are over — if you want to see web pages, the tech giant now offers a “new ‘Web’ filter” that narrows your results to web pages alone. This may startle Generation Xers whose first taste of the online world was via a web browser, but the web has become a legacy format like the DVD.

The high hopes of the web really began three decades ago. In the spring of 1994, online communities consisted of services such as CompuServe, or bulletin boards, which were either expensive or difficult to use, as were the first internet search tools like Gopher. But in the mid-Nineties, users launching the new Mosaic browser soon discovered how simple it was to produce these early mixed-media web pages: now, anyone could publish anything.

This caused particular excitement among progressives. Here was a mechanism to bypass “big media” and the false consciousness it purportedly created for the masses. Radio had also started as a two-way communication system, but now the web offered publishing at a radically lower cost. It gave a pamphleteer the same profile as Condé Nast. The promise of the web explains the subsequent determination to keep the internet “free” or “open”.

The media’s fetishisation of the web was not always shared by the public. For example, in 2002 the BBC conducted a public poll to nominate the greatest living Britons, and Corporation staff insisted that the protocol’s co-inventor Sir Tim Berners-Lee should be one of the 100 nominees. To their dismay, he came 99th.

Today’s teenagers — and I’ve polled a random sample — neither know nor care what “the web” is. They were born into mobiles and social media, and see no interest in reviving it as a semi-ironic cottagecore medium, like the cassette tape. Web utopianism is strictly a Gen-X media phenomenon.

But, in reality, Google’s interest in the web has been diminishing for a very long time. Articles lamenting its demise have been appearing since Wired’s tastemaker in chief Chris Anderson proclaimed the web “dead” in 2010. Berners-Lee regularly issues manifestos to “save” the web, and nobody pays any attention. Today, over 80% of Facebook’s two billion daily users access the social network only via a phone. Businesses no longer feel obliged to create websites. Much of what’s left is tawdry and dying.

Google is currently erecting a wall between the searcher and the information they seek, using generative AI, which the company believes produces more useful results, such as summaries. This barrier, consisting of what Google’s former research director Meredith Whittaker calls “derivative content paste”, causes problems: what’s generated may or may not resemble the original, thanks to additional errors and “hallucinations”. The new barrier also removes the creators of original material from the value chain. The world was never as exciting as the web utopians promised; now, it will be blander than ever.

I have a web server that has been running since 2008. It started out as a vanity website, written back in the days when a large number of websites belonged to individuals, before the monetization of the web. As that process developed, privacy issues began to rear up and my site went through a series of contractions, to the point where I almost shut it down. But it remained useful to me for several reasons. First, I had written a number of web applications that were very useful to me personally. Writing a native application for a particular platform takes a lot of work and must be constantly monitored for compatibility with continually evolving operating systems. Web applications shift that burden onto browsers, which provide an application programming interface that works (pretty much) the same across platforms. My applications work on my native Linux machines, my wife’s Mac, our phones and tablets, etc. They enable me to interact with my own records, resources and references, stream my own music, and transfer large files to and from wherever I am in the world – without Google, Amazon, Facebook, or any other corporate or government entity looking over my shoulder.
As security became an issue, I changed the site to require authorization for access to most of it; without credentials, most of it is simply invisible. But I left a small number of publicly accessible pages on the site. I had a pretty decent weather station I’d built, online since the site began. And I published some code I’d written that a few people found interesting, which led to some email discussions (and one exchange in which a Chinese student tried to get me to solve his take-home Lisp programming final exam for him).
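Locking down most of a personal site while leaving a few pages public can be done with the web server’s own access controls. As a minimal sketch, assuming an Apache server (the commenter doesn’t say what he runs, and the file paths here are illustrative), a configuration in the site root demands a login for everything:

```apache
# Site root: require a valid login for everything under this directory.
# The password file is created separately, e.g. with: htpasswd -c /etc/apache2/.htpasswd someuser
AuthType Basic
AuthName "Private area"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
```

while a second configuration in a public subdirectory (say, the weather-station pages) overrides it with `Require all granted`, so unauthenticated visitors see only that subtree. Basic auth should of course only be used over HTTPS, since the credentials are merely base64-encoded in transit.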
I regularly monitored my server logs – the record of the request traffic the server receives. As the monetization of search took off, my traffic exploded. A large amount of it came from the big search engines, both foreign and domestic. It got to be ridiculous. In any given period, only a very small fraction of the traffic was from “real people” (me, my family and friends, and the occasional stranger steered there mysteriously by search); all the rest was search engines scraping a site that rarely changed. There are mechanisms that are supposed to rein crawlers in somewhat, and the big domestic commercial engines seem to honor them. Most of the foreign ones just ignore them.
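The log triage described above can be sketched in a few lines. Assuming a standard Apache/nginx “combined”-format access log (the crawler list and sample lines here are illustrative, not the commenter’s actual setup), classifying requests by their User-Agent string separates crawler traffic from everything else:

```python
import re
from collections import Counter

# Substrings identifying the big crawlers in the User-Agent field.
# (Illustrative list; a real one would be longer and kept up to date.)
CRAWLER_MARKS = ("Googlebot", "bingbot", "Baiduspider", "YandexBot", "DuckDuckBot")

# Combined log format ends with a quoted referer then a quoted user-agent;
# the anchored lazy match captures the final quoted field as the user-agent.
LINE_RE = re.compile(r'^(?P<ip>\S+) .*?"(?P<ua>[^"]*)"\s*$')

def classify(line: str) -> str:
    """Return 'crawler', 'human', or 'unparsed' for one combined-format log line."""
    m = LINE_RE.match(line)
    if not m:
        return "unparsed"
    ua = m.group("ua")
    return "crawler" if any(mark in ua for mark in CRAWLER_MARKS) else "human"

def tally(lines):
    """Count request classes across an iterable of log lines."""
    return Counter(classify(line) for line in lines)

# Two synthetic log lines, one crawler and one browser:
sample = [
    '66.249.66.1 - - [01/May/2024:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '192.0.2.7 - - [01/May/2024:10:00:05 +0000] "GET /weather HTTP/1.1" 200 2048 "-" '
    '"Mozilla/5.0 (X11; Linux x86_64)"',
]
print(tally(sample))  # Counter({'crawler': 1, 'human': 1})
```

The “mechanisms” alluded to are presumably robots.txt directives, which are purely advisory: compliant crawlers honor them, and the rest simply ignore them, which matches the behavior reported here.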
But the biggest growth in traffic from about the mid-2010s came from hackers and from commercial operations looking for ways to exploit my site or scrape my data and sell it. I spent a lot of time learning how to track and classify these operations and the hackers. The emergence of geolocation techniques (which use multiple worldwide servers to triangulate the actual latitude and longitude of an IP source from transit delays) helped tremendously in this endeavor. China, Russia, the UK and, curiously, locations around Washington, D.C., turn out to be the largest sources of attacks on my U.S.-located site. But there are waves of attack origins that temporarily roll through (lately Ukraine, Hanoi, Tehran, Sweden and Hamburg, Germany have been prominent).
How my comment ties in with this article is this: a while ago I relaxed my constraints on the big search engines. And what I discovered was that, while they came back around and sniffed at the site, they moved on without bothering to index it. They simply don’t care about individual sites like mine anymore. They would if I were posting ads. Or cross-linking to sites that posted ads. Or selling something. Or buying something. But just putting up information about various topics without any of that? Sorry, not interesting anymore. I am a non-entity to Google (in more ways than one). They are just not interested in the content anymore – not if it will not generate clicks for their advertisers.
And I realized that this is what I’ve been noticing with search for quite some time. It is very difficult to find non-monetized websites with search. There was a time when you could if you dove deep – meaning kept going through page after page of links. Eventually you’d get past the heavy advertising and find a few real, interesting topical pages. Now, after several pages, the search engines simply say you’ve reached the end of their results. The end of the Internet! It used to be a joke. Now it’s a reality. At a time when there have never been more websites online, search has never been shallower.


Mau Mau
