Friday, October 31, 2008

Great Business Model: Pseudo-Business, Pseudo-Freemium Online Software

I can't come up with a better name for this model, but, not to worry, you'll recognize it right away. In this period of renewed discussion of "how to make money," I'm trotting out my favorite -- perhaps the best one for a startup today.

A couple of examples are YouSendIt and JotSpot (prior to its acquisition by Google).

Let's take the explanation in two parts. Pseudo-Business Software is software used to conduct business, but which is not necessarily sold directly to businesses. Put another way, it is priced and offered in such a way that individuals and small work groups inside of businesses can buy and use the software directly, without larger purchasing approval and without IT department approval. It offers a businesslike function so that it is easy to justify on an individual expense report -- and it's cheap enough that some folks may happily pay for it themselves just to be more effective at work, the same way they might shell out $25 for a DayRunner or $50 for a nice portfolio without thinking twice.

YouSendIt fits this model: it offers an oft-needed business function -- transferring large files. It lets employees bypass the tortuous and unwinnable debates with IT over why and whether attachments fail, how to share files with others, etc. Pay a few bucks and if you can get to the web at work, you're good to go. Easy to expense or even pay for on your own. It's enterprise software sold cheaply and one user at a time.

JotSpot (now Google Sites) -- in its original freemium wiki form -- fits as well. Easy to justify as project groups scale and each monthly increment is a small charge. Instantly bypass all the broken collaboration infrastructure your company can't get right.

At the same time, these products are Pseudo-Freemium software. If freemium is software that offers one level of functionality or resources for free, with more available for a price, then pseudo-freemium is like freemium except that the free version is not terrifically usable as a business solution except to make the customer comfortable with the product.

YouSendIt has, and JotSpot had, free versions. But unlike consumer freemium cases, where many users happily use the free version and never need to upgrade, these pseudo-freemium products were specified so as to be more like a mini-free-trial.

IIRC, the JotSpot free account was limited to 5 users and 20 wiki pages, while YouSendIt's free tier limits the file size to just shy of what I always seemed to need. There are countless other examples, ranging from single-user hosted source code services to online storage that offers only minimal free space.

The free version gives you the "warm fuzzy" of a free never-goes-away account and lets you see that the product works as advertised and won't embarrass you with some crazy or unprofessional aspect when you're asking your boss to sign for it on your expense report. These have always been the easiest-sell, no-brainer for-pay services that I've subscribed to. In general, they appeal to that certain purchasing area in people's minds -- next to the planners, a beer in the airport restaurant, or a nice tie -- as modest costs of being a professional that are either expensable or should just be paid for oneself.

Monday, October 27, 2008

Azure -- and the Other Clouds Players -- Should Lean Forward

Since I covered Azure pretty well two weeks ago, there's not much to add except the name and the open question of which parts of the platform can be run in-house, on AMIs, or anywhere outside of MSFT data centers (via a hosting partner). And Microsoft hasn't really addressed that either (I have questions in at PDC) so the answer appears to be "not yet, stay tuned."

Now that the semi-news is out of the way, I am a little disappointed that all the cloud players haven't leaned in more, in terms of providing added-value capabilities beyond scaling. Elastic scaling is valuable, but it's a tradeoff. You are paying significantly more to be in the cloud than you would be to host equivalent compute power on your own machines, or on VMs or app server instances at a consolidated host.

If you have reasonable projections about your capacity, then you're wasting money on the elasticity premium. You do get some nice operations/management capabilities ... but for apps that really need them, you still need to bring a bunch of your own, and you're taking on someone else's ops risks too.

For some businesses, these costs make sense. Here are some value-added features that would make the price persuasive for more people outside that core group:

  1. Relational and transaction capabilities. Microsoft does get the prize here, as they are the only ones offering this right now. Distributed transactions and even joins are expensive. So charge as appropriate. It's a meaningful step beyond the $/VM-CPU-cycle model that dominates now.
  2. Reverse AJAX (comet and friends). Here is a feature that is easy to describe, tricky to get right and multiplies the value of server resource elasticity. It's a perfect scenario for an established player to sell on-demand resources, and could be a differentiator in a field sorely lacking qualitative differentiation.
  3. XMPP and XMPP/BOSH (leveraging the reverse AJAX capability above). XMPP is clearly not just for IM anymore, and may evolve into the next generation transport for "web" services. Not to mention, having a big opinionated player involved may help at the next layer in the stack, namely how a payload+operation gets represented over XMPP for interop.

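On point 2, the heart of any comet/long-polling scheme is a server that parks each client's request until data arrives or a poll timeout expires. Here's a minimal, framework-free Python sketch of that core (the `Channel` class and its method names are my own invention, not any vendor's API):

```python
import collections
import threading

class Channel:
    """Holds messages for long-polling clients. Each poll blocks until
    a message is published or the poll window times out."""

    def __init__(self):
        self._cond = threading.Condition()
        self._messages = collections.deque()

    def publish(self, msg):
        """Called when new data arrives; wakes any waiting pollers."""
        with self._cond:
            self._messages.append(msg)
            self._cond.notify_all()

    def poll(self, timeout=30.0):
        """What an HTTP handler would call: block until data, then
        return it (or None on timeout, telling the client to re-poll)."""
        with self._cond:
            if not self._messages:
                self._cond.wait(timeout)
            return self._messages.popleft() if self._messages else None
```

The elasticity tie-in is exactly that blocked `poll` call: every connected user holds a server resource open, so demand scales with concurrent users rather than with request rate -- which is what makes it such a natural fit for on-demand capacity.
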
Those are just a few ideas that spring to mind -- I'm sure there are much better ones out there. To make the cloud more of a "pain killer" than a "vitamin" for more people, some new hard-to-DIY features are the way to go.

Monday, October 20, 2008

hquery: Separate data from HTML ... without templates!

Out of a bunch of cool presentations at last week's San Francisco Ruby meetup, my favorite was Choon Keat's demo of a working implementation of his hquery project -- a lib that lets you use Ruby, CSS, and jQuery patterns to bind data to views ... in the DOM, on the server side.

He had previously blogged about the motivation -- the core idea is that 10-15 years into the web application era, we are largely using template languages for integrating data into HTML. We've come a long way toward avoiding procedural code in our templates, enforcing MVC, etc. But aside from collectively agreeing to avoid a set of 'worst practices,' we're still inserting data into HTML in a manner reminiscent of 1998.

For well designed pages, with CSS classes and/or IDs, it should be possible to specify the data binding using CSS on the server, without any special binding tokens or markup. hquery does just that. So a designer can create full mockups with dummy data, and hquery can swap in the live data.
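To make the idea concrete -- hquery itself is Ruby, but here is a hypothetical Python sketch of the same pattern: the designer's mockup keeps its dummy rows, and the server selects nodes by class attribute and swaps in live data, with no template tokens anywhere in the markup (the `bind` function and the mockup are mine, not hquery's API):

```python
import xml.etree.ElementTree as ET

# The designer's mockup, complete with dummy data -- no template syntax.
MOCKUP = """<ul id="memos">
  <li class="memo">dummy memo one</li>
  <li class="memo">dummy memo two</li>
</ul>"""

def bind(html, cls, values):
    """Replace every element bearing class `cls` with one copy per live value."""
    root = ET.fromstring(html)
    # The first dummy row serves as the template for the live rows.
    template = root.find(".//*[@class='%s']" % cls)
    for el in list(root):
        if el.get("class") == cls:
            root.remove(el)
    for v in values:
        el = ET.fromstring(ET.tostring(template))  # clone the dummy row
        el.text = v                                # swap in the live data
        root.append(el)
    return ET.tostring(root, encoding="unicode")
```

A real implementation would want a full CSS selector engine and an HTML (not XML) parser, but the shape of the trick is the same: the mockup is the view, and the binding lives entirely in server-side code.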

The current demos and syntax might be a touch verbose, but this is just an initial proof-of-concept, and Ruby lends itself to easy rearrangement of an API if you need it.

This sort of radical division of dynamic data from HTML, while still using standards and not introducing yet-another-meta-templating-scheme, reminds me a little of the old idea of using XML+XSLT to create pages. We all know how popular that ended up.

The hquery approach seems about fifty times more accessible to the present community of developers ... so the question is: how do we make this more popular and build community interest?

Thursday, October 16, 2008

Listen to Early Windows 7 Feedback, Even From Developers

At the upcoming 2008 Professional Developers Conference, Microsoft will be showing Windows 7 to developers.

My bet is that the 160GB portable hard drives they are handing out to distribute preview bits will actually contain Virtual PC images of Windows 7 in various states or configurations. Such a setup will be more convenient to try, even if it does narrow what aspects of the OS can be seen.

In any case, Microsoft would do well to pay attention to the feedback it receives from these developers.

We know all of the reasons why geeks can make poor proxies for "real end users." Nonetheless, I recall the 2005 PDC, when Microsoft gave us the latest beta of Windows Vista. A chorus of complaints arose from many who tried the new OS. It's way too slow; it doesn't work with the hardware we have; we can't explain the 10-odd different SKUs to our customers.

Do these sound familiar? They should, because they're uncannily similar to the problems "real end users" found -- and continue to find -- with Vista.

At the time, the 'softies at the conference, who are generally open, approachable, and humble with regard to technical matters, didn't want to hear these complaints about Vista. I was rebuffed more than once: the SKUs haven't been ironed out yet; the beta build is a checked debug build, so of course it's slower. Well, maybe. But I found it to be little slower than the release build on the same hardware. Either way it was unusable.

I think everyone's learned from the Vista experience -- and that includes Microsoft, ISVs, consumers, PC builders ... and Apple.

Let's try it differently this time around, starting with feedback from PDC.

One last thing: it would make sense to release the Windows 7 preview to the general public at the same time. Why? It'll be on the file-sharing networks instantly, where there is a greater chance of folks downloading a trojaned image, etc. So it will help everyone to have an official distro from Redmond instead.

Monday, October 13, 2008

What's In Microsoft's "Strata"[?] Cloud OS


Just for fun, let's do a little educated speculation on Microsoft's "cloud os" initiative. It's not too hard to make some good guesses -- Microsoft's existing and unreleased products telegraph a lot about what they are likely assembling. For example, the semi-well-known "COOL" and Visual J++/WFC gave you most of what you needed to know to imagine the real .Net platform.

There are lots of pieces out there -- certainly enough to comprise a pretty interesting cloud stack and application model.

Since Microsoft -- and platform vendors in general -- like to go all out, let's imagine this stack reaching from real hardware up through virtualized hardware to application servers, and then to client components and the end-user's browser (or alternative) on the other end.

Let's start in the middle of this stack and work our way out.

What would the "middle" look like? Well, what makes a hosted account different from a cloud platform? Some answers: storage and bandwidth may not be elastic; clustering the app is neither automatic nor declarative, but requires programmatic and operational work; the database is typically a SQL Server instance (perhaps a mirrored failover cluster) with all of the usual capabilities and scaling constraints.

So imagine a hosted account with a few changes that address these limitations.

First, swap in an alternative implementation of sessions that supports clustering, proper caching, etc., with zero config. Add a lint-like tool to warn about code that isn't properly stateless. And add an asynchronous worker service for any long-running, background, or scheduled tasks that could be "fudged" with threads or events in a controlled Windows Server environment, but won't work that way in the cloud.
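As a hedged illustration of what such a lint-like tool might look for: this toy checker (names are mine; a real tool would presumably analyze .Net assemblies, Python just keeps the sketch short) flags module-level mutable globals -- the kind of in-process state that silently breaks once the app is cloned across cloud nodes:

```python
import ast

def find_module_state(source):
    """Return names assigned at module scope to mutable literals
    (lists, dicts, sets) -- shared in-process state that won't survive
    being cloned across nodes in a cloud deployment."""
    tree = ast.parse(source)
    flagged = []
    for node in tree.body:
        if isinstance(node, ast.Assign) and isinstance(
                node.value, (ast.List, ast.Dict, ast.Set)):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    flagged.append(target.id)
    return flagged
```

Running it on `"cache = {}\nLIMIT = 10"` flags `cache` but not the immutable constant `LIMIT` -- exactly the distinction a statelessness checker cares about.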

Next, replace the datastore with something like ... SSDS, and a LINQ provider so that in many cases code won't need to be changed at all. The interesting thing about SSDS, of course, is that unlike other non-relational cloud datastores, Microsoft has said the roadmap will offer more relational capability (subject to constraints, no pun intended). So apps that need real relational behavior might have an easier time moving to this new datastore.

So, without much new, we have a hosting stack that is more cloud-centric and less server-centric.

Now on the hardware and VM end of the stack, bear in mind also that -- to add value and sell the Server product, as well as to service enterprises which would like cloud architecture but need parts of the "cloud" to stay inside the firewall -- the whole enchilada is likely to be available as a service (or its own SKU) on Windows Server.

In fact, a number of Microsoft products related to modeling data centers, virtualization, and automated migration of services and machine images suggest that a key thrust of the "cloud os" might be that a customer can easily move services from individual servers up to a private cloud implementation and on to one (or more -- perhaps an opportunity for the hosting partners) public cloud data centers... provided they are coded to conform to the API.

ADO.Net Data Services (aka Astoria) already supports AtomPub, the format Microsoft is using or moving to for all of its Live services, so minimal wrappers (not to say minimal effort in the API design) could turn this into a platform API. A simple using directive brings in a File object and API that works with Skydrive instead of My Documents.

Last, look at the client end of things. Right now, we have the server rendering web pages, and we have web services for Silverlight clients. There is also a project (named "Volta," which has just recently gone offline while a "new version" is in the works) aimed at dynamic tier splitting and retargeting apps in terms of the client runtime. Hmmm... Sounds like a critical part of the front end of the cloud os stack.

In order to provide a RIA experience via Silverlight (or even a desktop experience for a cloud-os edition of Office), promote the client os product by offering a best-of-breed experience on Windows clients, and at the same time offer a legitimate cross-platform web-browser-consumable app, a piece like Volta is critical, and makes complete sense.

Microsoft tends to hunt big game, and I doubt they are interested in a me-too web app environment. They really intend to offer a cloud os, allowing developers to code libraries and GUIs that are outside of the web paradigm. These bits can run as .Net on Windows ... as .Net in Silverlight on Mac or (one day) Linux ... and as Javascript apps in non-.Net-capable browsers.

The big question in my mind is timing -- how far along are they on the supportable, RTM version of this stuff? Whether this is relevant -- or even becomes a reality -- will depend on how fast they can get this out of beta.

It seems that when Microsoft is quite close to production with a platform they can grab enormous mindshare (recall the release of the .Net platform). If this is an alpha look, with no promised timeline, things are a lot more tenuous. If there is a 1.0 planned before mid 2009, this could make things interesting.

Sunday, October 12, 2008

Time for a Wireless Coverage Map that Shows Utilization, Not Just "3G"

It's easy to get distracted with mobile protocol (HSDPA vs. EVDO) or "generational" system (GSM-3G vs. EDGE) speed claims. In fact, that's the most common conversation that mobile operators, hardware manufacturers, and customers have.

But it's only a piece of the puzzle. The other big ones include latency (the time required to establish a connection and get the packets flowing) and congestion (the instantaneous demand for bandwidth relative to the current capacity in a location).

Not to downplay the value of 3G+ data speeds, but it is still instructive to see how well a "slow" connection can work when congestion is low, and how badly 3G can work when congestion is high.

Apple iPhone customers have complained about the high-congestion experience. The other day I had an interesting low-congestion experience.

I was in a corner of San Rafael where my phone could only negotiate GPRS -- but I had the air to myself. Subjectively, the browsing experience was better than a typical EDGE connection on the same hardware, and similar (for modest amounts of data) to the 3G experience.

Demand on the network changes constantly as users do different things, and the effective capacity changes due to everything from weather to RF interference to upstream network congestion. So it's not easy for an operator to make a priori statements about actual speeds or actual congestion ... hence they talk about the protocols they offer and their "optimal throughput."

But congestion/capacity issues are a first-order concern in many areas, so I propose some mechanism be created so that customers and operators can have an informed negotiation about service.

I'd like to see a coverage map, for example, that doesn't just show "3G" areas in a certain color -- but also color codes the average real utilization over the past 90 days. Sort of like shopping for an airline ticket and looking at that column that shows "on-time percentage." It lets a customer separate the hypothetical performance from what can actually be expected.
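As a toy illustration of the legend I have in mind -- bucketing each sector's trailing-90-day average utilization into a display color (the thresholds here are entirely made up):

```python
def utilization_color(samples):
    """Map a sector's utilization readings (each in [0, 1], sampled over
    the trailing 90 days) to a coverage-map color. Thresholds invented
    for illustration."""
    avg = sum(samples) / len(samples)
    if avg < 0.5:
        return "green"    # plenty of headroom; rated speeds plausible
    if avg < 0.8:
        return "yellow"   # busy at peak times; expect slowdowns
    return "red"          # chronically congested; "3G" in name only
```

A customer comparing two carriers' "orange patches" could then weigh rated protocol speed against this color -- the same way an on-time percentage tempers an airline's published flight time.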

That information would also motivate operators to invest in capacity and infrastructure where the demand is, rather than trying to extend that "patch of orange" on their coverage map to one more town for sales purposes.

Wednesday, October 08, 2008

Engineering Departments Should Take Their Medicine Too

Without getting into gloomy predictions, it's clear a lot of companies will be feeling some pain soon, if they aren't already feeling it.

Software development groups at tech companies should take some medicine too -- cutting costs, becoming more competitive, improving morale, and attracting good talent at the same time.

How is that possible? Here's one way:

Unless your company is brand new, and you've got geniuses agile-ly coding your hot frog-sticker-collectors' social network, your project has a bunch of history, cruft, and mistakes in it.

It's nothing to get upset about, that's just the way of the world when developing software over the course of years in an organizational setting.

Yet many companies don't make any effort -- or actively resist any effort -- to identify those legacy problems and mitigate them. Perhaps it's fear of blame (why didn't you see this before? why didn't you say something? aren't you guys supposed to be experts?) or fear of appearing backward-looking rather than forward-looking (we'll make do with all that stuff; now can we shrink wrap and ship your latest proof-of-concept and call it next quarter's product?)

Suppose that, instead, your development group listed all the crufty bits that bug them. Stuff that maybe made sense at the time, but just isn't right any more -- wrong code, wrong architecture, wrong network protocol, wrong database, wrong format, whatever. Suppose the team got to rank these in order of annoyance factor, and impediment to productivity. Then, picking the top handful, they got to decide how to use the latest and greatest (and in many cases less expensive) technology to refactor and fix those modules.

A shortsighted manager might complain that 'if it ain't broke, don't fix it' -- it's a waste of resources.

But we know better than that.

In many projects a significant percentage of resources (up to two-thirds in extreme cases, based on my research and experience) can be spent wrestling with these "legacy cruft" issues. So, from a simple economics standpoint, it's definitely "broke" if you're spending $1 million per year on a dev group that could theoretically deliver the same functionality for $333,000, or deliver 3x as much for the same $1 million.

A project that removes these old bits becomes more competitive. Why? The competition, particularly if it's a newer company or on a newer platform, isn't running with these parking brakes holding them back. Why should your team? If you can release the brake, you can deny your competitors something they definitely view as an advantage against you.

Moreover, these moves can boost morale and make your company more attractive to prospective employees in several different ways.

Getting rid of old morale-busting code makes everyone feel good.

Using a newer technology -- not as a novelty but because it's solving a real problem better than existing code -- is appealing to developers who want to learn new skills.

Doing a refactor of critical code without breaking existing code, wrecking release schedules, or introducing excessive ops downtime is a challenging and rewarding skill, kind of like working on the engine while the plane is flying -- and top developers relish this kind of "black-diamond" assignment.

Finally, it tells everyone, inside the company and out, that this isn't Initech, where an engineer will have to work on 15-year-old tech for the next 15 years.

Tuesday, October 07, 2008

TextMarks and AppEngine Make Building SMS-Enabled Webapps Simple

TextMarks has been on my radar for a little while. It's a company offering free (ad-supported) access to SMS shortcode 41411, via a secondary keyword of your choice. They also have pro options with no ads, and shorter (or reserved) keywords.

Not having a great idea for an SMS-based app (I think I'm biased from having had web-enabled, push-email PDAs for too long), but wanting to kick the tires, I decided to build an iteration of the now-classic "to-do list" app.

Meanwhile, I've been spending a little time with Google AppEngine. When a hosting account came due for renewal, I decided to see if I could replicate it using AppEngine. AppEngine covers all the basics, and makes it easy to stream BLOBs out of the datastore as well. But large files (anything over 1MB) are not allowed in AppEngine, either as static files or as datastore objects -- too great a magnet for bandwidth or data hogs, I suppose. So those files live somewhere else.

But AppEngine makes a fine place to put a small SMS-driven app (assuming you don't need the background processes or scheduled processes that AppEngine doesn't allow).

Registering a TextMark is simple. Set the URL to the one you want called when 41411 receives your keyword, and add dynamic data from the SMS into the URL request GET params using escape sequences (e.g., \0 means "the whole message after my keyword"). So your URL might look something like http://yourapp.appspot.com/sms?msg=\0.

Go into the "Manage" tab, pick "Configuration" and turn off the various options related to "posting messages" -- these aren't necessary for your app. If you aren't planning to asynchronously send messages out to your users (hard to do with AppEngine, as mentioned above), you can also turn off the subscription options.

Now you just need a handler on AppEngine to receive those calls and do something. Whatever the handler renders will be truncated and sent back to the user as a reply SMS, so you get a semi-web-like mechanism for confirming that you've done an operation, or returning results.

Create your AppEngine app, a handler, and yaml that ties 'em together, per the tutorial. Now you can just modify the basic HelloWorld to do your bidding instead.

Here is my little memo handler -- it files memos under the phone number of the person sending the text, and forwards a list of all the memos to an email address if it receives the command "export (email address)." Don't put anything private in there, since there's no PIN/authentication (although it would be easy enough to add) ... so anyone who knows your phone number and can build a URL by hand can just ask for all your notes :)
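The core dispatch logic looks roughly like this -- a framework-free Python sketch rather than my actual handler (in the real app this sits inside an AppEngine request handler, the dict stands in for the datastore, and the export branch would call the mail API; all names here are mine):

```python
import re

# Stand-in for the datastore: phone number -> list of memo strings.
memos = {}

def handle_sms(phone, message):
    """Dispatch one incoming SMS: either an 'export <email>' command
    or a new memo to file under the sender's phone number. Returns the
    reply text that would be truncated and sent back as an SMS."""
    text = message.strip()
    match = re.match(r"export\s+(\S+@\S+)", text, re.IGNORECASE)
    if match:
        email = match.group(1)
        count = len(memos.get(phone, []))
        # Real app: send "\n".join(memos[phone]) via the mail API here.
        return "Sent %d memo(s) to %s" % (count, email)
    memos.setdefault(phone, []).append(text)
    return "Saved memo #%d" % len(memos[phone])
```

Note how the reply string doubles as the user interface -- whatever the handler returns is the confirmation SMS, which is what makes the round trip feel semi-web-like.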

Of course, it's easy to test out your handler with just the browser address bar, but TextMarks also provides a nice emulator so you can send messages to your handler -- and see what the response will look like with their 120 char (free account) limit and their ad tacked on. And there are a bunch of other neat things you can do, like create short-term stateful contexts, where a user can just text back Y/N or a response to a numbered "menu" and the messages will get to your app automagically.

Friday, October 03, 2008

zBoost Cell Extender: Refinement Post

This is a refinement, or follow-up, to my first post about cell signal boosting with zBoost:

While the base station allows decent communication over a specific area once a data or voice call is established, it seems to have little or no ability to propagate the signaling part of the GSM protocols when there is no connection established.

This symptom means that the network is unaware of a handset, and a handset thinks there is no service. As a result, it often will not even try to make a call.

Once the handset decides there's a network -- usually by picking up a very faint signal from a real tower -- it will attempt the call and then discover a near-perfect connection.

zBoost may not be responsible for this behavior -- the product docs specifically say that you have to have some real signal level in order to use the repeater. I had imagined this restriction was solely due to the zBoost device needing a tower to talk to (it cannot bridge to another backhaul) -- but in fact it may also be due to zBoost's inability to simulate the idle signaling between the handset and the tower. That is, you need a tower to make or get a call, but the zBoost can boost the data in the call stream.

So what does this all mean?

Well, basically, that if your RF visibility to a tower is marginal, as mine is, zBoost may still be able to help. But you don't want to rely on this to reach 911 or make any other kind of need-it-right-now phone call, as it might take you a few minutes of moving around or monkeying with the phone to get it to realize there's any service.

Likewise, if you absolutely, positively cannot miss an incoming call, this may not be the solution for you. Interestingly, though, incoming signaling (calls and SMS) seems to be stronger or prioritized traffic from the tower relative to idle updates, so it reaches the handset more reliably, even without any kind of booster present.