I’ve seen lots of links today to Pete Lacey’s post “S is for Simple”. It more or less makes fun of just how complex SOAP turned out to be, with all its layers of XML schema, options, and other such mess. The general point is dead on, and I’ve been a big fan of simpler REST-style mechanisms for wiring things up. The SOAP stuff works great when you stick with a single vendor’s toolkit, especially just cranking it all out in Visual Studio, but wiring up dissimilar platforms is still a mess.
Pete’s post points out that SOAP doesn’t really use HTTP and mostly just tunnels through it. It doesn’t put anything meaningful in the URL or use HTTP response codes in a meaningful way. He then points out that the SOAPAction HTTP header is mysterious and no one knows what it is for.
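To make the tunneling point concrete, here’s a sketch of what a typical SOAP request over HTTP looks like (the endpoint, operation name, and namespace here are made up for illustration). Everything interesting lives in the XML body, the URL is just one generic endpoint, and even a SOAP fault usually comes back as a 200 OK:

```
POST /services/endpoint HTTP/1.1
Host: example.com
Content-Type: text/xml; charset=utf-8
SOAPAction: "urn:example:GetStockQuote"

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetStockQuote xmlns="urn:example">
      <symbol>MSFT</symbol>
    </GetStockQuote>
  </soap:Body>
</soap:Envelope>
```

Without peeking at the SOAPAction header or parsing the XML, there’s nothing here to distinguish this from any other POST.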
If I remember things correctly, SOAPAction is at least partly my fault. During the era when SOAP was being developed, there were several different factions inside Microsoft involved with Internet protocol work. The faction I was more associated with was directly involved in the development of HTTP and HTTP extensions like WebDAV, while another set of people had come from an RPC background and were developing SOAP. To be fair, this was a classic case of a couple of groups of people by and large trying to work with each other, but not taking the time to really understand the other group’s viewpoints, perspectives, and expertise, and this was probably worse on the HTTP-fan side.
In any case, we were working with the SOAP guys to try to make SOAP more integrated with HTTP rather than just tunneling through it. HTTP has mechanisms for namespaces, feature negotiation, authentication, error reporting, and more, none of which SOAP used. On the other hand, the SOAP guys were just trying to build their SOAP features, and figuring out how to interact with all this HTTP machinery seemed like it would just delay them; plus, it would make it harder to apply SOAP over other infrastructures (not that I’ve heard of anyone doing SOAP over SMTP or anything in real life).
So we were left trying to come up with practical arguments for why SOAP needed to follow more of the HTTP rules to be successful in the marketplace. For better or worse, the only argument we really came up with was that HTTP traffic often has to go through HTTP proxy servers to get in and out of firewalls. We pointed out that if SOAP simply tunneled everything, the administrators of those firewalls would not be able to differentiate between SOAP traffic, web-browser form submissions, and so on, and might just lock the traffic down. We didn’t want the proxy to have to parse all the XML in the request to tell what was happening.
Initially we were asking for SOAP to be carried over a different HTTP method than POST. Our argument was that methods were the extensibility mechanism for HTTP protocol semantics, and since POST had another function, it was not appropriate to reuse it for a very different type of thing. The counter-argument was that various HTTP stacks and proxies didn’t handle methods other than the built-in ones, and that using a different method would therefore limit the reach of the protocol. The compromise was the SOAPAction header, which a proxy could use to tell the difference between a normal web-browser form submission and SOAP traffic, and to differentiate between different types of SOAP requests. In theory this would give administrators some needed control over their firewalls.
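The intended filtering can be sketched as a trivial rule a proxy might apply. This is just an illustration, not anything that shipped; the header values are hypothetical, and a real proxy would expose this as configuration rather than code:

```python
def classify_post(headers):
    """Classify a POST request the way a proxy-level rule might:
    the presence of a SOAPAction header marks SOAP traffic, and its
    value identifies the operation; anything else is treated as an
    ordinary form submission. (Sketch only: ignores header-name
    case-insensitivity and other real-world details.)"""
    action = headers.get("SOAPAction")
    if action is None:
        return "form-submission"  # ordinary browser POST
    # SOAP 1.1 says the header value is a quoted URI; strip the quotes.
    return "soap:" + action.strip('"')

print(classify_post({"Content-Type": "application/x-www-form-urlencoded"}))
print(classify_post({"SOAPAction": '"urn:example:GetStockQuote"'}))
```

A firewall administrator could then allow or deny individual SOAP operations without ever parsing the XML payload, which was the whole point of the compromise.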
Fast-forward a few years, and it was probably a mistake. I haven’t heard of anyone using it for anything useful, and it just creates extra complexity and another thing to get wrong when trying to interoperate between different implementations.
One last note: the Internet community has a long history of slapping the “Simple” label on things more as wishful thinking than reality.