#Ideas

November 7, 2006. Adobe gifts 135,000 lines of code to Mozilla in the form of Project Tamarin, a JavaScript virtual machine compliant with the still-not-fully-implemented ECMAScript 4. Mozilla would try a few times to incorporate Tamarin into Firefox, but would eventually abandon that idea for performance reasons.

Still, it injected some enthusiasm into the adoption of more modern JavaScript standards, pushing the web forward at a time when it really needed the push.

#Bookmarks

I quoted from Interface Culture by Steven Johnson recently.

He also once said this in an interview (emphasis mine):

I’ve been incredibly energized by all the grassroots Web 2.0 applications that have exploded over the past few years, most of them descendants of Firefly in one way or another. *(Someone — and come to think of it, it’s probably me — should go back and track all the core ingredients of today’s Web that were visible at Firefly circa 1996.)*

That parenthetical is my current project. It’s taking me in all sorts of directions.

In Zen and the Art of Motorcycle Maintenance, Robert Pirsig muses—among other things—about the intersection of art and technology, and what is lost when the two diverge and we seek out only the cold refuge of science.

We have artists with no scientific knowledge and scientists with no artistic knowledge and both with no spiritual sense of gravity at all, and the result is not just bad, it is ghastly. The time for real reunification of art and technology is really long overdue.

In our modern web, the small, personal web is where I see art and technology merging again. There is a lot of possibility in that.

#Bookmarks

Jamie Zawinski makes the case that enabling DRM in the browser was the open web’s original sin. When the W3C, and subsequently Mozilla, caved to pressure from commercial interests, it opened a crack in the founding ideology and spirit of the web and allowed a near-endless commercial erosion of the open web.

Fortunately, Zawinski has a solution.

In my humble but correct opinion, Mozilla should be doing two things and two things only:

  1. Building THE reference implementation web browser, and
  2. Being a jugular-snapping attack dog on standards committees.
  3. There is no 3.

#Bookmarks

Some eerie echoes of today’s AI trends in the discourse around “agents” from the mid-’90s. Always with the same old promises. Swap out “agent” for “personal AI assistant” in this brief aside from a New York Times feature on a personal agent website created all the way back in 1996, and it’d be right at home in any major publication today:

As soon as you give your agent your personal tastes about restaurants, plays or clothing, it begins fetching things for you — even things you didn’t know you wanted. Your agent knows that you like Latin jazz, for example. A record company with access to the data sends you an ad: a Tito Puente CD is about to come out. Knowing that music and food tastes sometimes overlap, your software agent even recommends that you try a new Caribbean restaurant in your neighborhood.

Next month, you are traveling to Boston for the first time (in arranging the airplane tickets, your agent has already found the cheapest fare, reserved your aisle seat and ordered a vegetarian meal). Knowing you like small, intimate Italian places, your agent searches a pool of people whose taste is similar to yours, and then recommends three places in the North End: Massimino’s, D’Parma and Terramia.

Jamie Zawinski posted about Wikipedia and its representation as a source of truth, even when it isn’t. I followed a link to an interview with the author Emily St. John Mandel, who requested the interview simply to go on record that she was divorced.

Elan Ullendorff flips the premise on search engines entirely.

We don’t need a better large search engine. Instead, we need to cultivate what I would call “folk search algorithms,” a set of tools and practices that, whether by chance or design, are not influential enough to move markets.

As a search engine scales, Ullendorff argues, it falls victim to manipulation and an endless cycle of bad actors gaming the system while algorithms struggle to keep up. A better search, then, is one built on the principles of the small web.
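One way to make “folk search” concrete: a search tool scoped to a hand-curated corpus, too small and too personal for anyone to bother gaming. Here is a toy sketch of that reading, with illustrative data and names of my own (not Ullendorff’s):

```typescript
// Toy sketch of a "folk search": plain matching over a hand-curated
// corpus (here, personal bookmarks) rather than the whole web.
// The data and names are illustrative, not Ullendorff's.
interface Bookmark {
  title: string;
  url: string;
  notes: string; // why *you* saved it, in your own words
}

const bookmarks: Bookmark[] = [
  {
    title: "Interface Culture",
    url: "https://example.org/interface-culture",
    notes: "Steven Johnson on how interfaces shape thought",
  },
  {
    title: "The Rotten Library",
    url: "https://example.org/rotten-library",
    notes: "playful encyclopedic entries from the web's fringes",
  },
];

// Nothing clever: a substring match over a corpus too small to game.
function folkSearch(query: string, corpus: Bookmark[]): Bookmark[] {
  const q = query.toLowerCase();
  return corpus.filter(
    (b) =>
      b.title.toLowerCase().includes(q) || b.notes.toLowerCase().includes(q),
  );
}

console.log(folkSearch("encyclopedic", bookmarks));
```

The point of the sketch is the scale, not the algorithm: when the corpus is your own, relevance comes from curation rather than ranking.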

Reading the original text of ‘Information Management: A Proposal’, the initial proposal from the web’s inventor, Tim Berners-Lee, doesn’t really put you in the moment. Until now, that is, thanks to an insane quest by John Graham-Cumming to take the original file created by Berners-Lee and properly open it amid today’s modern software terrain. When he discovered that even Berners-Lee could no longer access his original Word file, Graham-Cumming embarked on a mission to emulate the Word software of the 1990s, allowing the document to be viewed in its original context and providing a captivating glimpse into history.

In 2011, early Facebooker Jeff Hammerbacher was quoted as saying:

The best minds of my generation are thinking about how to make people click ads. That sucks.

Given Facebook’s (sorry, I mean Meta’s) latest statement about artificial general intelligence, and all the enthusiasm poured into AI by Microsoft, Google, and others, I feel as if that statement can be slightly amended now.

The best minds of my generation are thinking about how to make computers generate things that make people click ads. That sucks.

Today I’m looking at the work of Faruk Ateş, who created the first version of Modernizr back in 2009. With the help of several other developers, and in a remarkably short period of time, Ateş’s initial prototype transformed into a fully-featured library that empowered developers worldwide to utilize HTML5 and CSS3, accelerating the adoption of these new standards.

The work on Modernizr was a collaborative effort, with developers from different browser makers and practitioners coming together through mailing lists and blogs. They worked at a rapid pace, transforming the tool in a spirit of cooperation. It’s a testament to the strength of open source and the open web that such fluid communication can lead to such rapid development.
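The core trick behind Modernizr-style feature detection is simple enough to sketch. What follows is a minimal illustration of the idea, not Modernizr’s actual source: probe the DOM for a capability and record the result as a class on the root element so CSS can branch on it.

```typescript
// Minimal sketch of Modernizr-style feature detection (not Modernizr's
// actual code): probe the DOM for a capability instead of sniffing
// user-agent strings.
function supportsCssProperty(property: string): boolean {
  // If the camelCased property exists on a fresh element's style
  // object, the browser understands it.
  const probe = document.createElement("div");
  return property in probe.style;
}

// Modernizr records results as classes on <html> so stylesheets can
// branch with selectors like `.borderradius` and `.no-borderradius`.
function flagFeature(name: string, supported: boolean): void {
  document.documentElement.classList.add(supported ? name : `no-${name}`);
}

flagFeature("borderradius", supportsCssProperty("borderRadius"));
```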


I am a bit distressed about the web. Sometimes, I panic about it. That’s why I look back so often, to try and capture the long view. But when I peek up to look around, a lot of what I see—or rather, what is surfaced to me by broken-down algorithms that hide beneath the surface a much longer tail that sadly most people never see—is all buttoned up and plain and unadorned and professional and (frankly) boring.

Maybe that’s just the web splitting in two. The web is over thirty years old, basically an elder millennial if we want to call it that. And at some point, it was going to need to grow up, develop some consistency, and figure out a way to make money. But I didn’t think we’d have to ditch our punk rock digs, unique interests and unconventional spaces for a suit and tie and a job selling ads.

I started thinking about this more this week when I pulled from some archives a site called the Rotten Library. It was an offshoot of rotten.com, an early web purveyor of morbid curiosities and vulgar fare. The Rotten Library was a unique take on Wikipedia, offering detailed and lengthy encyclopedic entries on a variety of topics in rotten.com’s vein. These entries were often written in a playful and casual tone, and they inspired many.

I mention the Rotten Library not as a delightful nostalgic throwback, but because it effectively illustrates a simple point. On the fringes of the internet, where things are small and specialized (even when they’re grim or shocking), there’s something far more captivating than the sanitized, controlled environments we’ve established on the modern web. That fringe is still very much out there, and I believe it is growing.

I hope to turn my research attention there for the near future. It is utterly fascinating.

It occurs to me that, much like the web, what’s absent from the next wave of AI tools is any concept of transclusion. Transclusion would have the sources of data traveling along the same pipes as the data itself, making attribution actually possible. Can you imagine if LLMs were actually accountable for providing the root of each source?
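To make that concrete, here is a hypothetical sketch (the shape and names are mine, not Ted Nelson’s design or any real system’s API) of what transclusion-flavored attribution might look like: every span of output keeps a link back to where it came from, so attribution travels with the text.

```typescript
// Hypothetical sketch of transclusion-style attribution (names and
// shape are mine, not Ted Nelson's or any real system's): each span
// of text keeps a pointer back to its origin instead of being
// flattened into plain, unattributed prose.
interface SourcedSpan {
  text: string;        // the content itself
  sourceUrl: string;   // where this span was transcluded from
  retrievedAt: string; // ISO timestamp of when the source was read
}

type SourcedDocument = SourcedSpan[];

// Because sources travel with the data, the document can always
// answer "where did this come from?"
function attributions(doc: SourcedDocument): string[] {
  return [...new Set(doc.map((span) => span.sourceUrl))];
}

const answer: SourcedDocument = [
  {
    text: "The web began as a 1989 proposal at CERN.",
    sourceUrl: "https://example.org/history-of-the-web",
    retrievedAt: "2024-01-01T00:00:00Z",
  },
];

console.log(attributions(answer)); // ["https://example.org/history-of-the-web"]
```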

Ted Nelson never quite cracked that technological nut, and we are now so far past it that nobody even thinks about it anymore.