Technocratic Panic at the Millennium and the Real Threat Beneath the Code

It’s been over seventeen years since the world ended. Or, rather, since it was supposed to end: The End of the World as We Know It. If that sounds like a big deal, that’s because, at the time, it truly was.

This is how things were supposed to go down.

On January 1, 2000, at the stroke of midnight, networked computers around the world would begin to fail. The first failures would come from small, isolated systems, maybe a tiny electrical grid in the middle of nowhere, or a failsafe at a nuclear plant. It wouldn’t be long before these small incidents cascaded into much larger ones. Within days, maybe even hours, the electric grid would be completely wiped out and, with no access to electricity, there would be no way to get the systems back online. The world would go black and, almost in an instant, we’d be blasted back to the Stone Age.

This was a real fear for a good portion of the technology community. For some, it even bordered on certainty. All of this because of a nasty bug in computer software that was labelled Y2K.

Many of the earliest computer languages were built in academic and research environments, when computers took up the better half of a full wall and accepted only dense mathematical notation as valid code. That all changed when Grace Hopper created FLOW-MATIC in the mid-1950s, a language with a much more approachable, English-like syntax.

An example of FLOW-MATIC code

FLOW-MATIC eventually gave rise to a whole generation of new programming languages inspired by its ease of use and approachability. In 1959, a group of computer scientists drawn from some of the largest computer manufacturers in the world used FLOW-MATIC as the basis for a new, common programming language that was interoperable and machine-independent. The result was COBOL, which for many years was something of a standard.

In the 1960s, computers were limited. English-like syntax made things easier, but the language also had to abstract away complexity and conserve vital bytes of memory wherever possible. The designers of COBOL made a deliberate, and potentially short-sighted, decision. Instead of representing dates in four-digit notation (i.e. 1960), they would lop off the “19” and store all years as two digits (i.e. 60). That might seem like an unnecessary micro-optimization, but at the time literally every byte counted, and dates were used constantly in code.

Forty years later, the millennium was ending. Computers running languages like COBOL faced a pretty serious issue. At the turn of the millennium the year would register as a double zero, 00, instead of 2000. In general, computers don’t react well when you feed them a zero where they expect a date. For some systems, it was enough to trigger a failure.
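To make the failure mode concrete, here is a minimal sketch, in Python rather than COBOL, of how date arithmetic on two-digit years goes wrong at the rollover. The function and values are illustrative, not taken from any real legacy system.

```python
# Illustrative sketch (not actual COBOL): simple date arithmetic
# on two-digit years, the way legacy code often did it.

def years_elapsed(start_yy: int, current_yy: int) -> int:
    """Naive elapsed-years calculation on two-digit years."""
    return current_yy - start_yy

# An account opened in 1960, checked in 1999: works as expected.
print(years_elapsed(60, 99))   # 39

# The same check on January 1, 2000: the year is now stored as 00.
print(years_elapsed(60, 0))    # -60, a nonsense negative duration
```

Any downstream logic expecting a non-negative duration (interest calculations, expiry checks, scheduling) would choke on that -60, which is the kind of failure the Y2K worriers had in mind.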

And that was the problem of Y2K.

Programmers, smart ones at least, had known about the issue all along. Eventually, the problem would catch up with us all. But as the millennium entered its final decade, most systems were still saddled with the old code. Meanwhile, the world had become much more dependent on technology, on networks and, of course, on the World Wide Web. Banking, power plants, electrical grids, the stock market: it all depended on networked computers, many of them running legacy COBOL code. The theory was that it would take just one. If just one of these essential cogs were to fail because of Y2K, it could pull everything else down with it. And that theory didn’t even account for an entirely different problem with “embedded systems,” small bits of software running old code in everything from household microwaves to medical equipment like pacemakers.

The solution, it turned out, was fairly simple. Every date-handling line in all that COBOL code had to be updated, one by one, to use full four-digit years. But simple doesn’t mean easy. Systems are, by design, interconnected. So every bit of potentially dangerous code, the whole world round, would have to be updated before the year 2000. Yet a concerted effort to fix Y2K didn’t truly get started until the mid-to-late ’90s.

As a reaction to this monumental task, a small group of people in the technology field decided to completely lose their minds. They panicked, claiming that there was not enough time to get the work done, and spread the news that the end of the world was indeed nigh. A newsgroup created in the fall of 1996 to discuss potential solutions to the Y2K problem quickly transformed into a survivalist outreach board. Users posted their plans to move out of the country, stock up on supplies, and hunker down to wait things out. Experienced programmers, people who had worked in the field for decades, were suddenly reaching out to any press outlet that would run a story, to tell them all about their new plans to build a shelter, forage for food, and tan the hides of bears.

This group, and they weren’t even a majority of programmers, acted as a lightning rod for worldwide hysteria. If the computer scientists were worried, the reasoning went, shouldn’t we be too? Is anybody going to be able to fix this? Soon, others began to embrace the survivalist trend. In some circles, the end of the world was treated as a foregone conclusion.

Researching Y2K, I found myself drawn to one particular fact. No one really talks about Y2K these days. Chances are, there are a few readers who haven’t even heard of it. Which is odd, because at the time its impact was seismic. In the years leading up to the new millennium, it was basically all anyone could talk about. Time ran a cover story calling Y2K the end of all things. Then-U.S. President Bill Clinton mentioned it in his State of the Union address. People around the world were talking about it at their dinner tables.

The Time cover was… not subtle

Technologists like to imagine doomsday scenarios. Not that they’ll admit it. You will sometimes hear from some very smart people that we are on the precipice of a major crisis, or inches away from an instantaneous turn into darkness. What’s more, they will claim that the worst possible outcomes of technological decisions are a logical and inevitable end. That they are, in effect, unstoppable. What they will likely fail to mention is that the problem is of their own design. Technology is not in itself an inevitable, unstoppable force. It was built by people. The problems were created by people. The solution is not more technology. You have to bring it all back to people.

It was people who rallied together to stop Y2K. All in all, the world spent an estimated $300 billion to fix the issue. Governments and large corporations turned to their programmers for possible implementations. Some of them took on the massively important role of project manager to help chunk up the work and keep things moving. And developers went through their old code, line by line, and fixed each and every issue. “Y2K compliant” was the phrase of the day, and as the millennium approached the world was quite possibly ready.

When the clock finally struck midnight on January 1, 2000, most people weren’t even paying attention to Y2K. Those taking shelter in their bunkers were probably the first to learn that, in fact, just about nothing went wrong. An alarm was triggered here, a bus ticketing machine stopped working there. But aside from a few minuscule and isolated incidents, everything was just fine.

To this day, we have no idea whether the fears were mostly unfounded to begin with, or whether the collaborative effort spearheaded by programmers around the world averted the damage. We may never know. We do know the worst never happened. Governments and corporations and network admins had avoided disaster. And sometimes, when we’re so focused on the problems we think are inevitable, we miss an entirely different kind of trouble.

While the world was still patting itself on the back for avoiding the largest technological disaster in history, something much simpler but somehow more sinister was in the works. This new issue would use technology to spread, but at its core, it was a social exploit. That can be much harder to stop.

On May 4, 2000, people around the world began receiving an email with the subject line ILOVEYOU. Inside was a short message that read, “Kindly check the attached LOVELETTER coming from me,” along with an attachment.

The ILOVEYOU Virus email

If you happened to open that attachment, instead of a love letter, a script would fire off and immediately begin overwriting the media files on your computer with copies of itself (so it would run again if you ever opened them). Beneath the surface, it launched a password-stealing application and began trying to harvest your computer’s passwords, emailing them to an undisclosed address whenever they were found. And when it was all done, the virus would hijack your email contact list and send copies of itself to all your friends with the same subject and message.

It got its name from that subject line: the ILOVEYOU virus.

The ILOVEYOU virus was devastating. In just a day’s time, it infected tens of millions of computers and caused an estimated $10 billion in damages. Government agencies had to shut down their mail servers altogether just to stop it from spreading. Before anyone realized what had happened, the ILOVEYOU virus was everywhere.

There are a lot of reasons why the virus worked. It was written in Visual Basic Script, common on Windows computers, and Windows hid the known .vbs extension by default, so the attachment looked like a harmless text file when the email was opened in Outlook. It also spread itself to everybody on your contact list incredibly fast. It was undoubtedly widespread. But the real reason it worked is that it didn’t attack our computers directly. It attacked our innate social and human desires. No matter who the email was coming from, there’s a bit of curiosity that follows an email with the subject line “ILOVEYOU”.

In the days before spam and viruses were everywhere, who wouldn’t click on an email like that? Wouldn’t you?

Eventually, the damage was contained. And although the perpetrators of the virus were never formally charged, they were caught. No major data was breached. All in all, it could have been a lot worse. But after all the worries about computers shutting down when they reached a double zero, it was a simple social exploit that actually brought things crashing down. The ILOVEYOU virus was meant to pique your interest just enough to be dangerous. If we’re not careful, it’s the people that will get us every single time.

All of this may feel familiar.

Right now, there are plenty of people in the tech world proselytizing about the end of the world. They are pointing their fingers in every direction. Maybe it will be sentient AI that dooms us, or some technologically laced super-virus. They act like these outcomes are inevitable. That it’s impossible to slow down a bit, take stock of where we’re at, and figure out a way to build technology ethically, without bringing about the end of the world.

These one-in-a-million doomsday scenarios have not come to pass. Instead, it was once again a social exploit that reaped far greater disaster. In the recent past, technology has been used to tamper with elections, compromise privacy, and weaponize misinformation. Our human desires to be part of a group and connect with others have been turned against us to divide us. The loudest and most bigoted voices have been amplified a hundredfold, while reason has been pushed to the fringe. Technology has been the tool. People have been the target.

If Y2K has taught us anything, it’s that the solution to problems of technology is never more technology. When Y2K was coming to bring everything down, a whole lot of people banded together to stop it. They didn’t try to patch things while technological progress raced full speed ahead. They slowed everything down, found a solution, and took the time to fix things. It won’t be easy, and it will take time, but problems in technology can always be repaired. The machines won’t do it for us. We have to make it happen for ourselves.
