

Beware the cloud of hype


I’m not necessarily going to weigh in on the AI thing. History, to me, is much more interesting in the long view. So I’ll be doing a bit of fence sitting on that one until some more of it plays out.

I think that the source of moral qualms, even in the face of some utility, has been well documented by others. But Paul Ford, in a recent Wired article, comes at AI from a slightly different angle. Everything that AI does, it does “badly and confidently.” Which leaves Ford in awe of AI for just how shameless it is.

And as he points out, behind the wheel of this whole AI experiment are people largely unfit for the job.

But the current crop of AI leadership is absolutely unsuited to this work. They are themselves shameless, grasping at venture capital and talking about how their products will run the world, asking for billions or even trillions in investment. They insist we remake civilization around them and promise it will work out. But how are they going to teach a computer to behave if they can’t?

It is a shamelessness driven almost entirely by a fortress of hype erected around the entire industry. The “leaders” in this space can do no wrong. They will drive things forward at any cost, and they are protected by vague platitudes and imagined potential that everyone promises is just around the corner.

So even though I’m not yet ready to comment on the eventual historical arc of AI, my experience studying the web’s history has made me somewhat a student of hype, and I can comment on that. The web has, in fact, been through its own cycles of hype-driven shamelessness before. A few times actually.

In a recent open letter, Tim Berners-Lee drew a similar connection between AI and previous iterations of web commercialization.

5 years ago, when the web turned 30, I called out some of the dysfunction caused by the web being dominated by the self-interest of several corporations that have eroded the web’s values and led to breakdown and harm. Now, 5 years on as we arrive at the Web’s 35th Birthday, the rapid advancement of AI has exacerbated these concerns, proving that issues on the web are not isolated but rather deeply intertwined with emerging technologies.

Berners-Lee is recognizing a pattern that has become all too familiar for those who have worked with the web these past few decades.

The push and pull in market share between Microsoft and Netscape was positioned as a genuine browser “war”—complete with an explosive announcement timed to the anniversary of Pearl Harbor. Browser makers told us that the new guard was here to replace the old. Operating systems were obsolete. Browsers were all that mattered. Within a few years, they said, we’d never hear from Microsoft or Apple ever again. They put Marc Andreessen, one of the creators of Netscape, on the cover of Time, where he made outlandish claims about the early web’s potential. On one side of the coin, shamelessness. On the other side, hype.

A few years later, during the dot-com boom, operating systems were no longer the only target. The entire business world was. Money, it was said, was no longer a factor. If you could get enough eyeballs, the money would follow. An anchor on ABC went as far as to say that “the Internet is so revolutionary that the usual rules for valuing a stock, such as revenues and earnings, no longer apply.”

That time has been called schizophrenic, “irrational exuberance.” It’s a period of time we would do well to learn from now. There was a frenetic energy that pulled everyone in and made every upstart company feel like they had to be a part of it. Hordes of people would come up with some random idea, bolt “.com” to the end of their name, and start calling investors. One CEO recalls setting up shop on his first day and thinking “should I buy furniture, or should I talk to an investment banker and go public?” Today, how many companies are adding AI to their product to boost their value, without giving much thought as to why?

In the dot-com days, upstart entrepreneurs just out of college were given glossy profiles in magazines as leaders of a new revolution. A bit of showmanship played to the audience, talking up the convergence of new media and commercialization in terms people didn’t fully understand. It’s the same showmanship Ford identified in our current crop of AI leadership.

At each level, there were brazen proclamations and little to back them up. This hype was driven by an excited class of early adopters and pioneers who truly did feel like they were right on the precipice of something great, driven by financial incentives that, if you ever took some time to really think about, didn’t quite add up.

And some in the media bought in. Financial columnist Allan Sloan had something to say about that in the early 2000s, when everything was going bust.

Why were so many journalists caught up in the mania? “Because it was fun,” Sloan said in a 2002 interview. “It’s fun to have access. It’s fun to be part of the new thing. It attracted readers. It attracted ads. It created buzz. It made you hot and trendy.”

That could be written today, and it could be written about AI. That’s not to discount the many journalists casting a critical eye on AI and giving it a well-researched look. But many more in the larger media world have simply bought in, very few questions asked.

The rise of dot-com companies was pitched as a no-consequences gold rush. We were on the precipice of a fictional future where everyone would be cashing in on the web. The reality was quite a bit slower, and more boring. Business on the web consolidated, as we now know, and left most people holding the bag. There’s no knowing exactly what will happen with AI technologies, but it wouldn’t be unreasonable to expect something far more boring and centralized than what’s being promised.

When you see such outlandish shamelessness, look for the cloud of hype that lies behind it. Remember the lessons from the web.
