Sunday, March 30, 2008

Take Silverlight Offline with Prism

Microsoft has been cagey about taking Silverlight "offline" to create cross-platform, desktop-installable RIA-style apps to compete with Adobe AIR. But if you are interested in building an offline or occasionally-connected app with Silverlight, it's not too hard to do.

If you're targeting Windows, then it's tennis-without-a-net:

  1. You can create an "HTML Application" (HTA) and embed Silverlight
  2. Or you can create a Windows Forms app and embed Silverlight 1.0 in it as an ActiveX control (this might be possible with Silverlight 2 as well)
  3. Or you can create a Windows Forms app that hosts the WebBrowser control, and load your Silverlight 2 app in there

Playing around with that last scenario, it was easy to get full communication between the Windows Forms app (which has privileges based on the user who's running it), the DOM document inside the browser control, and the Silverlight control inside the document. On Windows, that pattern allows a lot of options.

But what if you want a cross-platform app that works the same on Windows, Mac (and Linux, if the Moonlight initiative pans out)?

Take a look at Mozilla Prism: Prism gets you the basic desktop integration for running your app. Now you just grab your Silverlight files and make them the "web site" you're going to bundle via Prism.

Out of the box, Silverlight gets you offline isolated storage in the file system. Prism's separate Mozilla profile should keep the offline storage space separate from the user's general browser cache.

And with LINQ to Objects and LINQ to XML, that may be enough for your data management needs. Silverlight doesn't come with a relational API or database like AIR does, though I expect someone will cook one up -- or write an adapter to SQL Server Express (if the odd restriction on Socket destination port numbers is relaxed). And of course Silverlight doesn't have the native PDF support that AIR does (although XPS support will presumably show up at some point).

Before any of this can happen, all the related software has to get closer to production. Silverlight (2b1) has to stop crashing Firefox (3b4) when it terminates, and that combo also needs to get better at handling things like the SHIFT key.

Monday, March 24, 2008

Sprint Keeps Fighting the Good Fight with 'Titan' Mobile Platform

Sprint has long been the most developer-friendly of the major carriers, and they're raising the Java bar with their new Titan platform.

Titan is a significant improvement over J2ME, and Sprint is doing the smart thing by making sure all Titan-capable devices run the same implementation (something Sprint actually started doing a few years back with their J2ME VM, in an attempt to ameliorate the write-once debug-everywhere problem).

But sadly, Sprint's leap forward (and Sun's, if they succeed in getting their SavaJe-derived JavaFX mobile platform shipped) is just more trouble for the mobile application industry: it's yet another incompatible platform that must be supported, but which doesn't replace any existing platform.

No one wants to build or fund a company (or application project) that can only reach a subset of Sprint subscribers (those with the Titan handsets). It's a non-starter. While the iPhone has enough sex appeal to drum up an entire investment fund just for its own apps, Titan has no hope of accomplishing the same thing.

So a "great new mobile app" can only choose to support the legacy platforms and ignore Titan, or support the legacy platforms and also support Titan. Anyone else see the problem here?

Wednesday, March 19, 2008

Stored Procedures and Code in the Cloud

For modest-sized Internet applications, the allure of cloud services has two elements.

First, there's simplicity of implementation and maintenance -- the hassle of real-world ops is the sort of problem startup CEOs dream of having, while startup engineers (and engineering budgets) are ill-equipped to deal with it. Second is the promise of easy scalability -- another problem the CEOs dream of, and the engineers secretly hope will become Somebody Else's Problem.

Storage in the cloud is conceptually easy. Especially with ActiveRecord patterns that ignore (at their own peril, but that's an article for another day) 35 years' worth of learnings about data integrity in the relational model. And for those who need to write things more complicated than 37signals' latest masterpiece, true structured data services in the cloud are coming.

What about application logic, though? There's raw EC2, which works on the level of provisioned VM images, and makes you design for clustering, manage your instances (while they're up), and keep your dynamic data somewhere else. Fabulous infrastructure but non-trivial to use.

Folks like Heroku offer value-added, application-level services on top of EC2, which get you the elasticity with less hassle.

But what about going even higher level, and defining a unit of work, or a service module, that can be deployed into a scalable container, preferably "near" the data it needs?

Real world example: in a recent project, I needed to be able to run Dijkstra's algorithm on big (250,000+ nodes) graphs in a persistent store. It would be great to use SimpleDB or SSDS (Astoria) for storage, but what about running the algorithm? It's not practical to extract a representation of the graph over the network, and then run Dijkstra on it just to find some interesting nodes each and every time. Changing the algorithm or using a different one? Maybe ... But what I really wanted to do was create a small module that I could ship over to the data, and run there. Even better, I'd like to be able to compute on the data "in place" in a storage facility, rather than extract. Conceptually a bit like a stored procedure.
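For the record, the computation itself is nothing exotic. Here's a minimal sketch of Dijkstra over an adjacency-list graph in plain Python, purely for illustration -- the names and graph representation are assumptions, not anything from the actual project:

    import heapq

    def dijkstra(adj, source):
        """Shortest-path distances from source.
        adj maps each node to a list of (neighbor, edge_weight) pairs."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale heap entry; a shorter path was already found
            for neighbor, weight in adj.get(node, ()):
                candidate = d + weight
                if candidate < dist.get(neighbor, float("inf")):
                    dist[neighbor] = candidate
                    heapq.heappush(heap, (candidate, neighbor))
        return dist

Notice that adj has to cover essentially the whole graph before the loop can do anything useful -- which is exactly why "pull 250,000+ nodes over the wire, then compute" is a non-starter on every query.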

I believe that the solution -- and an easier way to start shipping computing into a cloud facility -- is to create a module definition that one can code to, and then just upload. I think Python or Ruby would be ideal languages as they are popular, truly cross-platform, and not encumbered with IP issues. Plus the modules would be provided as source, so that they could be scanned for, e.g., insecure or computationally intensive uses of stuff like eval. In fact, given Google's investment in Python and in interesting tooling in general, they may already be most of the way there.
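To make that a little more concrete, here's a sketch of what such an uploadable module might look like. Everything in it -- the run() entry point, the store handle, its load_graph() method -- is invented for illustration; no such container or API exists today:

    # stored_module.py -- a hypothetical "ship the code to the data" module.
    # The container would receive this source, scan it (no eval, no os/exec
    # imports, etc.), and invoke run() on machines sitting next to the data.

    def run(store, params):
        """Entry point the hosting container would call.

        store  -- handle to the data, living in the same facility as this code
        params -- plain-dict arguments supplied by the remote caller
        """
        # Local read: the graph never crosses the WAN.
        graph = store.load_graph(params["graph_id"])

        # dijkstra() as sketched earlier, or whatever algorithm you ship along.
        distances = dijkstra(graph, params["source_node"])

        # Return only the interesting nodes, not the whole graph.
        return dict((node, d) for node, d in distances.items()
                    if d <= params["max_cost"])

The caller gets back a small result set; the heavy lifting happens where the data lives. That's the stored-procedure flavor I'm after, just with a friendlier language.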

I just need a better name than "stored procedures" -- that one's not getting any points for cool.

Sunday, March 16, 2008

Well, That's Embarrassing (VASGen Bug Fix)

The other day I was using VASGen, my free tool for generating ActionScript 3 code from UML via Violet, and realized the getters and setters on generated properties had the wrong visibility.

Doh!

Not sure how I missed that one before, but it was a trivial fix. Visit the VASGen project page to get the latest (0.2.1) code.

Rails is "Slow," But Railshosting.Org May Be Scaring People Away Unnecessarily

Rails is slower (by many metrics) than a lot of web frameworks. But that doesn't mean it's not both great and cost-effective for many kinds of applications.

Here's a problem, though: if you're someone thinking about putting up a small-to-moderate-sized app on Rails, and you Google "rails host," "rails hosting," "host rails," or "hosting rails," you will find RailsHosting.Org as search result #1 or #2.

RailsHosting has a section called "What is the right plan/stack for you?" and it basically paints such a terrifying picture that it might scare a lot of newcomers away. Is that hyperbole? You tell me: according to rh.org, a shared hosting plan with FastCGI is only good for 1,000 hits per day. For 2,000, you should be using a virtual dedicated server or an actual dedicated server along with Mongrel, Pound, and/or mod_proxy_balancer. 5,000+ hits/day? You need a dedicated server.

What? what?! whaaaat?!?!? If I'm a developer in an enterprise and I see numbers like this, it's game over, forget it.

Happily, these numbers aren't right. Based on my testing with actual shared hosts, it should be possible to run a site at least up to 10,000 hits/day (assuming that load ranges from 0.33x to 3x at different times of day) with a decently configured and run shared hosting plan -- even with a non-optimized app, little or no caching, etc. If the average host can't handle that, it's not a Rails problem, it's a shared hosting provider problem.
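Some quick arithmetic on why I'm comfortable with that claim: 10,000 hits/day works out to about 7 requests per minute on average, and even at the 3x peak that's roughly 21 requests per minute -- one request every three seconds or so. That is not a punishing load for a single, decently configured Rails process.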

Now, if you want 10,000 hits/hour with lots of writes per page into the same database, and peak rates higher than that, then you're gonna need a different approach. Rails may still do it for you, but it won't be plug-and-play on a shared host.

In any case, server technology, virtual datacenter technology, and "point-click-buy" options for Mongrel instances and dedicated memory have all improved since RailsHosting first put this data online.

So, notwithstanding the fact that my Samsung Blackjack (running .Net Compact 3.5) could probably serve ASP.Net faster than the average hosting provider can serve Rails, the first information kiosk on the Road to Rails should offer a less pessimistic and more realistic view!

Large File Uploads with ASP.Net Require a Config Setting

I'm just putting this out there to save someone else an extra search: last week, a colleague and I discovered that an ASP.Net app was throwing exceptions when we tried to upload files around 5 MB in size. Turns out that one's by design: here's an article that covers two relevant points:

  1. There's a configuration setting (detailed in the article, and sketched in the snippet below) which limits uploads to 4MB by default.
  2. The uploaded file needs to fit in the memory used by the worker process that's handling that particular HTTP request -- all at once; apparently, it can't be configured to 'stream' to a file object or the like. ASP.Net "health management" of the worker processes can get in the way of really large uploads because it gets anxious if the worker process memory consumption exceeds certain thresholds. There are additional config params around this feature that you can tweak if you need to.
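For reference, the relevant knob on point 1 is httpRuntime's maxRequestLength in web.config: it's specified in kilobytes, and its default of 4096 is where the 4MB ceiling comes from. A minimal sketch, raising the limit to 20 MB (see the article for the full story, including the worker-process memory thresholds):

    <configuration>
      <system.web>
        <!-- maxRequestLength is in kilobytes; 4096 (4 MB) is the default. -->
        <!-- executionTimeout (seconds) may also need a bump for slow uploads. -->
        <httpRuntime maxRequestLength="20480" executionTimeout="300" />
      </system.web>
    </configuration>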

Sunday, March 09, 2008

Giving the Right Stuff Away with Your Freemium / Free Trial

Freemium may be a neologism, but the model has been around since the days of shareware, and the free trial concept is prevalent not just in 'web 2.0,' but in everything from end-user software to enterprise software to dev tools.

After all these years, it's interesting to see how far off some vendors end up when they try to decide what exactly they want to give away and why.

Let's look at "why" first: there are lots of reasons to offer the free SKU, among them:

  1. The obvious: get users hooked somehow and get them to buy the paid SKU
  2. Build a really big user base with an appealing free product, never mind how many actually upgrade. Variant (a): your ability to give stuff away is limited by specific costs (e.g., bandwidth, burn). Variant (b): you are intentionally burning investor cash to give away more than you can "afford" to in order to build your user base.
  3. Build mindshare by giving away something (else) that has exchange value (i.e., is worth real money). This is the most unusual, but examples include sites giving away iTunes song tokens, or software companies giving away a license to a tool or server product if you join one of their sites and do some activities.

Once you know why you're giving something away, you can use that knowledge to inform what and how.

Online, there is a big difference between giving away space (e.g., remote storage of documents) and time (a trial period that ends). If you offer a time-limited trial of an online service, you're discouraging your users from making a commitment of their time or energy, or from storing their valuable data. Why? Because their initial assumption is that if they don't pay, then at the end of the trial period they will not be using your product anymore. Paradoxically, this initial "pessimistic assumption" makes them unlikely to commit enough that they really get hooked and decide the service is worth paying for.

Free space is a much better (and more popular) model: it tells users to relax, get comfortable ... as long as they don't exceed a certain allotment, they can use your service for free forever. Something about free forever makes folks more comfortable. Eventually they really love -- nay, need -- the service, but their free space allotment is full. It's a smaller cognitive leap to toss in a few bucks for some more space, bandwidth, or logins at that point.

There's a big difference between a long trial period, a short trial period, and a "number of uses" period in an offline product. A short trial period makes it difficult for people to settle in before having to take the leap. If they project-switch a lot, they could end up spending a big part of the trial period working on something unrelated and never getting to really eval the product.

A long trial will pick up some casual users who treat it as their own licensed product -- especially if, a couple of months after the trial expires, they can expect to get a beta trial of your next version. A number-of-uses period fixes some of the "short period" problems, but it can discourage users from launching your app just to play around with it, which is exactly how people get comfortable.

The idea that the trial period should be purely a tryout, and not enough for the user to accomplish a real task, is a poor one. The trial should allow a well-intentioned user to get something done, so that he or she can ascertain that the product actually solves a real problem.

There's also a big difference in how you "degrade" functionality for a free version. Online, an ad-encrusted free version is nice because it is very clear what the core product does, even if it may encourage more parasitism than you ultimately want. Removing features, on the other hand, can make the core product look weak and unappealing.

In the offline software world, degradation may take some thought. One time, I was testing video conversion products for Windows. The DirectShow architecture notwithstanding, some software is robust at processing video, even when there are issues in the video or the DShow filters, and some software explodes violently whenever anything unexpected happens. So it's necessary to thoroughly "trial" any app that claims to convert a big range of formats.

Some trial packages added a watermark or logo to the output. Other trials didn't add a watermark, but limited the length of the video clip you could process. Still others had calendar-time-limit features.

Of these approaches, the "limited-length clip" approach was a total loser: I could see the beautiful output quality produced in a few cases, but by making it impossible to convert a lot of videos for real viewing during the trial, the authors made sure I couldn't learn whether the app works in the general case.

The watermark apps were somewhat better -- I could use these for a lot of "fooling around" projects and find out if they worked or not. And the calendar-limit apps were the best: within the limit, I used them as though they were my solution ... until one or another demonstrated through failure that it was not.

In the end, the obvious tip is to think hard about your trials. But more to the point: if you're doing free trial or freemium software or services, the free/trial experience should really be designed into the customer lifecycle experience from day one.

Wednesday, March 05, 2008

Application Models, Part 2

A little while back I described the idea of an "application model."

The application model is a key concept that keeps getting muddled and confused with partially overlapping concepts like languages (e.g., Paul Graham's discussions of "languages for smart people" vs. "languages for the masses") and platforms (a legacy platform vs. a "great new" platform, etc.)

So besides being a useful term for which I'm not sure there is an equivalent, what is the role of application models?

First, application models -- not languages or platforms or vendors -- argue, persuade, and sometimes win.

Whether it's Visual Basic (the original), or C++ and OpenGL, or Rails, the question is "How well does the application model solve the world's (ok, domain's) problems for the folks who actually do the solving?"

And the cool thing is that application models are very malleable in a way that languages and platforms aren't -- because they include (well, really their complements or negations include) all kinds of APIs and libs that people cook up.

Huh? Like this: the application model for JSP/Servlet apps (including "model 2") stunk. So folks created all sorts of libs (Struts, Tapestry, etc.) and templating engines (FreeMarker, Velocity, etc.) in order to create a better-shaped "hole" -- a better application model for what were still fundamentally Java Servlet apps. Still a hassle. Nice underpinnings, great performance, needs some work. We got IoC containers and the application model became friendlier. For small, quick apps, Rails had a lot of appeal, so we got Groovy and Grails as options. In the .Net world, we have C# adopting elements of Ruby (yes, they're really Smalltalk or Lisp elements) and ASP.Net offering the "MVC Framework" which -- with LINQ -- creates a flavor of Rails.

There is no intellectual property violation going on here. Platforms and languages are adapting around good application models. Some of this adaptation is vendor driven, some is community driven, some involves new languages, some involves new tools, and some involves APIs. But the common thread is a desire to alter an application model in a particular direction.

Application models are what get mindshare and what provide for the continuity of commercial application development patterns over time. More on this point later.

Before I get there, it's worth pointing out that the application model also bears heavily on the aspects of application development apart from core feature development. When we get into these aspects, non-functional requirements, or orthogonal (really non-collinear) concerns, the application model determines (or at least posits an optimal or easiest path for) integration. How the UI attaches, how app-level security is managed, how different services are integrated -- these all fall under the sway of the app model, and if we want to make these things better, the app model is what we should be talking about.

Over time, the adaptation of application models (by all of the parties mentioned above) is the story of the evolution of commercial software. Commercial software evolution is often described as a sequence of machines, or OSes, or platforms, or languages. It most definitely is not these things, any more than it is a story of vendors fighting and dying and rising. The thread of continuity, and the real underlying story, is about people adapting application models, piece by piece, over the years.

Saturday, March 01, 2008

The Pecking Order of Free Email Services

Why is it that hotmail.com and yahoo.com email addresses are viewed with a certain disdain by technical folks, but gmail.com has a different and higher status?

Is it that Google is (or was?) more respected than Yahoo or Microsoft?

I have actually seen services that specifically say that yahoo addresses are not allowed but gmail is ok. Gmail obviously isn't less anonymous, so it's not about pretending one address is somehow traceable and another isn't.

Is it a spam filter issue? Is the Yahoo spam filter so bad that folks' emails get lost in there, when they should have landed in the Inbox? Or does it have to do with other people spamming/spoofing from real or pretend Yahoo accounts? It's hard to imagine there is a server-based solution to that problem that gmail can implement but Yahoo Mail cannot, given the open standards for email and history of the spam problem.

There's also a fashion aspect. It seems perfectly respectable for a pro geek to use a gmail address for work, while a yahoo or hotmail (is that Windows Live now?) address somehow smells a little suspect. In particular, hotmail.com is below yahoo.com in respect, and other anonymous inbox services fall even further down.

There's definitely some groupthink at work here... if you have ideas as to what/why/whence it is, please comment!