Computer programmers like to squabble. I suppose this is true in any profession, but it is most certainly true for programmers. Don’t believe me? Just ask a programmer if you should set up your web services using SOAP or REST. Then grab a cup of coffee, because it’s going to be a while.
It would probably help to back up and explain what web services are. Web services let machines communicate with one another over the World Wide Web, exchanging information and updating data without a human in the loop. Basically, it's a way for computers to talk to other computers. And on the web, that all started with XML.
XML was developed by eleven contributors at the W3C in 1997, inspired by the dream of a truly semantic web. The web, as it stood in 1997, allowed for publishing documents that humans could understand. This, by design, was extremely easy and forgiving. But a simple HTML web document could not be understood the same way by computers. The goal of the semantic web was to change this by adding a new layer in which computers could talk to one another, and the web itself could become a programmatically linked network of collective knowledge.
Or at least, that was the dream.
And XML was the solution. The markup language could be used to create web documents that were both machine-readable and human-readable. To make this possible, it provided an easy way to structure data with custom tags and tight, enforceable rules. In order to make the computer-readable part work, the language itself had to be far less forgiving than HTML. Errors, for instance, meant that a page didn't load at all. But many saw this as a strength. In any case, XML was officially published as a W3C recommendation in February of 1998.
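You can see that unforgiving strictness in about ten lines of code. Here's a small sketch using Python's standard library XML parser (the document and tag names are invented for illustration): a well-formed document parses cleanly, while a single unclosed tag is a hard error rather than something the parser quietly tolerates the way HTML parsers do.

```python
import xml.etree.ElementTree as ET

# A well-formed XML document: structured enough for a machine,
# still legible to a human. Tags here are made up for the example.
good = "<post><title>Hello</title><author>dave</author></post>"
root = ET.fromstring(good)
print(root.find("title").text)  # Hello

# XML's strictness at work: an unclosed tag stops parsing entirely.
bad = "<post><title>Hello</post>"
try:
    ET.fromstring(bad)
except ET.ParseError as e:
    print("parse error:", e)
```

That hard failure is exactly the trade the XML designers made: HTML's forgiveness made pages easy to write, but XML's rigidity is what made documents trustworthy for machines.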
But for a lot of developers, XML didn't go far enough towards allowing computers to talk to one another. So a few employees at Microsoft got together and started working on what would become the Simple Object Access Protocol (SOAP). SOAP standardized the communication between servers, finally closing the loop between clients (what the user sees) and servers (where the data comes from). However, thanks to politics and in-fighting, things got a bit held up within Microsoft.
Dave Winer, one of the employees working on SOAP, didn’t much care for that. He ended up releasing his own lightweight version of SOAP called XML-RPC. It didn’t include everything, but it was enough to get servers talking to clients using HTTP, the standard protocol of the web, and the XML markup language.
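Winer's protocol lives on in Python's standard library, so a sketch of what an XML-RPC call actually looks like on the wire takes only a few lines. The method name "blog.getRecentPosts" is invented for the example; the point is that the serialized payload is just XML, ready to be POSTed over plain HTTP.

```python
import xmlrpc.client

# Serialize a call to a hypothetical "blog.getRecentPosts" method
# with one argument. This XML payload is what an XML-RPC client
# would POST over HTTP to a server endpoint.
payload = xmlrpc.client.dumps((5,), methodname="blog.getRecentPosts")
print(payload)

# On the receiving end, decoding the XML recovers the call.
params, method = xmlrpc.client.loads(payload)
print(method, params)  # blog.getRecentPosts (5,)
```

Everything rides on technologies the web already had, HTTP and XML, which is precisely why XML-RPC could ship while the full SOAP effort was still stuck in committee.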
Web services had arrived.
After some time, SOAP finally found a powerful ally in IBM. Together, Microsoft and IBM submitted a SOAP spec to the W3C, which published it as an official Note on May 8, 2000. At that point, SOAP was already being used in some places, but its formal publication gave larger organizations the confidence to adopt the new technology.
SOAP had a lot of rules. Most people actually saw that as a good thing. Standardizing web services meant that it was easier to hop from project to project and build independent modules on top. SOAP developers knew that the hard problems had already been worked out for them, so they were free to put their focus on the details. In the end, SOAP helped developers create APIs (application programming interfaces), the middle link that lets users retrieve and update data from web servers. Pretty soon, SOAP was running APIs for some of the largest organizations out there, such as Oracle, HP and Sun.
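To get a feel for those rules, here's a minimal, hand-built SOAP 1.1 envelope. The service ("GetPrice"), ticker symbol, and the "urn:example:stock" namespace are invented for illustration; only the envelope namespace is real. Notice that even just reading a message means handling XML namespaces correctly, part of the rule-heavy flavor that fans called rigor and critics called weight.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# A hypothetical request asking a stock service for a price.
envelope = f"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Body>
    <GetPrice xmlns="urn:example:stock">
      <Symbol>SUNW</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

# Unpacking the request requires namespace-qualified lookups at every step.
root = ET.fromstring(envelope)
body = root.find(f"{{{SOAP_NS}}}Body")
symbol = body.find("{urn:example:stock}GetPrice/{urn:example:stock}Symbol")
print(symbol.text)  # SUNW
```

In a real deployment this envelope would travel over HTTP too, but wrapped in layers of WSDL contracts and tooling, which is exactly the machinery large enterprises found reassuring.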
Roy Fielding, however, had some problems with the SOAP methodology. When SOAP was released, Fielding was working with Tim Berners-Lee on the newest version of HTTP, the protocol I mentioned earlier: version 1.1. At the same time, he developed his own set of principles for web services called Representational State Transfer, or REST. He first described REST in his PhD dissertation at UC Irvine in 2000.
Fielding did not advocate for rigidity. In fact, REST was not a full set of technologies, but rather a collection of design principles that took advantage of HTTP's built-in methods (methods you may have heard of, like GET, POST and DELETE). REST's main idea was that each piece of data lived at a single URL, and the operation changed depending on which method was used. For instance, a GET request to "http://yoursite.com/posts" might return a simple list of posts, while a POST request to that very same URL would instead create a new post.
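That one-URL-many-operations idea fits in a few lines. Here's a minimal sketch, with no web framework involved: an in-memory routing function where the path stays the same and only the HTTP method changes the behavior. The handler logic and post format are invented for illustration, not any real API.

```python
# A toy REST-style dispatcher: "/posts" is one resource,
# and the HTTP method selects the operation on it.
posts = []

def handle(method, path):
    if path == "/posts":
        if method == "GET":
            return list(posts)                 # read: list existing posts
        if method == "POST":
            posts.append(f"post #{len(posts) + 1}")
            return posts[-1]                   # write: create a new post
    return "404 Not Found"

print(handle("GET", "/posts"))    # []
print(handle("POST", "/posts"))   # post #1
print(handle("GET", "/posts"))    # ['post #1']
```

Compare that with the SOAP envelope above in spirit: no custom XML vocabulary, no extra protocol layer, just the verbs HTTP already had. That simplicity was the whole argument.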
But back to SOAP. As the 2000s continued, it picked up a lot of steam. At the same time, the web itself was growing. And it started opening up. Emerging companies like Salesforce, eBay and Yahoo created what were known as open APIs. These allowed any developer on the planet (not just internal ones) to connect to these sites programmatically and access their data. And all of these APIs used SOAP.
But Fielding, now at Sun, gathered support from a community of REST evangelists and started pushing back.
Fielding and his supporters argued that REST was simpler and more elegant, built specifically for the web. They emphasized the system's flexibility in the face of SOAP's rigid standards.
Of course, SOAP advocates had something to say as well. They called out REST supporters for what they believed was a massive oversimplification.
The REST folks fired back. These newly formed open APIs were their target, and they were able to gain some ground. Soon, companies like Amazon, Yahoo and Google switched to REST. And lots of other companies followed.
Still, both sides declared themselves the victor, emphasizing their points of adoption – SOAP in the enterprise and REST in open APIs – and deemphasizing where they had lost out.
Eventually, things evened out. Developers went back to preaching practicality, pointing out the times when SOAP may be a better choice than REST, and vice versa. Critics backed down a bit, and people went back to just doing what was best. But when it comes to technology, nothing proves more volatile than dogma.