Last week Microsoft announced that they would be abandoning the ACE and dynamic entity (“property bag”) model for the SQL Server Data Services cloud data storage system. They would also switch from their REST data API (used in ADO.NET Data Services) to the old-school “Tabular Data Stream” wire protocol.
While Microsoft’s promise of more relational support was always a distinguishing feature of their cloud DB service, and while they tried to spin the news in that direction, it feels a lot more like when they abandoned WinFS and announced that, really, everything you could do with WinFS would work fine using NTFS and a whole heck of a lot of indexing. Maybe sorta true … but feels like a big step back.
Of course, big customers – large enterprises with SQL Server databases and lots of SQL code – would rather not change their data layer, and so would prefer this move. But accommodating them assumes that they are ready to become first-version customers of the data cloud at all. And I doubt this for two reasons.
First, any move to the cloud involves a trade-off of control which some companies are loath to make even if they are confident the system will work. Which is problematic because:
Second, anyone who has dealt with big databases knows that there is no magic. Despite the quest for automagic autoscaling self-tuning databases, no one, so far as I know, has made one that does all of this for really large enterprise applications. There are just too many application-specific variables, not to mention poorly written app code that can cause trouble in proportion to the amount of resources you give it access to.
I do believe Microsoft has the engineering brainpower to tackle the problem, and is as likely as anyone to succeed. It’s just that I haven’t seen any evidence of a specific strategy or technology. Maybe if I were a bigger customer … but seriously, if Redmond had this problem solved (and it’s one of the biggest out there), they would either patent it or publish lots of white papers. Either way, it would be publicized and reviewed. A trade secret? Maybe, but which Fortune 500 CIO is going to jump on that bandwagon and the cloud and the outsourced data stuff all at the same time?
To the extent that these large database apps could be made to behave without human intervention, there is likely to be a tradeoff in resources, and when you’re paying per GB or per compute-cycle, that equals a side order of added cost to go along with the entrée of new risk.
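To make that cost point concrete: under per-unit billing, whatever headroom an automatic system reserves to keep a workload stable gets multiplied straight into the bill. The numbers below are purely illustrative assumptions, not real pricing:

```python
# Purely illustrative: cost of over-provisioning under per-unit billing.
# None of these figures come from any actual cloud price list.
price_per_gb = 0.15   # hypothetical $/GB-month
baseline_gb = 500     # what a hand-tuned deployment actually needs
overhead = 0.40       # extra headroom an automatic system might reserve

tuned_cost = baseline_gb * price_per_gb               # 75.0
auto_cost = baseline_gb * (1 + overhead) * price_per_gb  # 105.0
print(auto_cost - tuned_cost)  # → 30.0 extra per month, just for the slack
```

The exact overhead is the unknown; the point is only that in a metered model, slack is no longer free.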
The point is that the ACE/dynamic entity/REST model is well understood: it performs, and it uses resources in a predictable way. Not appropriate for every app. Not relational in the formal sense, if at all. Not easy to migrate to. But it goes like the devil. So you’re getting something concrete in exchange for your risk and your dollars. Unlike a magical SQL Server instance in the sky.
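For readers who never saw it, the flexible-entity idea can be sketched in a few lines: each entity is just a bag of name/value pairs plus a couple of system properties, and two entities in the same container need not share a schema. This is an illustrative sketch only — the class and method names are hypothetical, not SSDS’s actual API:

```python
# Illustrative sketch of a "property bag" entity model (hypothetical names,
# NOT the actual SSDS API). Each entity carries its own set of properties.

class Entity:
    def __init__(self, entity_id, kind, **properties):
        self.id = entity_id                  # system property: unique in container
        self.kind = kind                     # system property: app-defined type tag
        self.properties = dict(properties)   # flexible, per-entity bag

class Container:
    """Holds entities; queries scan property bags rather than fixed columns."""
    def __init__(self, name):
        self.name = name
        self.entities = {}

    def put(self, entity):
        self.entities[entity.id] = entity

    def query(self, **criteria):
        # Match on any property; entities lacking a property simply don't match.
        return [e for e in self.entities.values()
                if all(e.properties.get(k) == v for k, v in criteria.items())]

books = Container("books")
books.put(Entity("b1", "book", title="Dune", year=1965))
books.put(Entity("b2", "book", title="Neuromancer", year=1984, isbn="0-441-56956-0"))
print([e.id for e in books.query(year=1984)])  # → ['b2']
```

Note how the second entity carries an `isbn` the first lacks, with no schema change anywhere — that flexibility, plus simple predictable lookups, is exactly what the relational-instance-in-the-sky model trades away.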
Maybe there is magic in there, and I’ll be proven wrong. Or maybe 99% of the customers’ database needs are so small that it’s a non-issue, and Microsoft is really just competing with the thousands of hosting providers that will host actual individual SQL Server instances for you on a large server. But this change still seems to raise more questions than it answers.