Tuesday, February 26, 2008

Thinking ahead. Not thinking ahead.

For some reason I thought about DirecTV and downtime the other day.

As in, I don't think I've had a minute of downtime on my DirecTV/TiVo in over 5 years. More reliable than the electricity where I live, the cable, the DSL, cell service, even the analog landline, which tends to stay up.

One time I stayed at a vacation spot during a huge winter storm and the DirecTV stopped working. I went outside with a broom and knocked a few inches of snow off the dish and it came right back up (the dish still had a decent covering of snow and ice, just not as much).

Satellite thing works pretty well. The client box has never crashed, acted up, or rebooted. And they haven't bricked it, even temporarily, with an update.

On an unrelated and more disappointing note, I've gotten two emails from Microsoft Exchange users at two different companies now, where the email includes a link whose target looks something like http://go-to-exchange-and-get-a-redirect-to/here?url=http://place-i-really-want-to-go

Which fails as soon as you send a link like this to someone whose web browser is outside the firewall and can't connect to the "go-to-exchange-and-get-a-redirect" part. Of course you can copy the destination and edit it yourself, but the idea of a link is that it should be clickable.
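Pulling the real target back out by hand is easy enough; here's a quick Ruby sketch, using the placeholder link from above (the URL itself is made up, of course):

require 'uri'
require 'cgi'

# Placeholder redirect link in the shape described above.
wrapped = 'http://go-to-exchange-and-get-a-redirect-to/here?url=http://place-i-really-want-to-go'
target  = CGI.parse(URI.parse(wrapped).query)['url'].first
puts target   # => http://place-i-really-want-to-go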

Surprisingly bad behavior. Even if it's optional or the result of poor Exchange administration, it should be "easy to do the right thing," and apparently it's easy to do something odd instead when configuring outgoing HTML email in Exchange.

Monday, February 25, 2008

Application Model, Part 1

As programmers, we participate in an evolving discourse around the tools, languages, platforms, and APIs that we can use. Sometimes this discourse is purely academic or reflects personal preferences; often, though, the goals are broad: higher quality software, developed more efficiently, with more reliable outcomes than what we do today.

But one critical and useful piece of terminology seems to be missing from the conversation. I propose the term 'Application Model' to describe the essential 'shape' of how a basic program would be arranged, given a particular platform, libraries, tools, and possibly even architecture.

One way to think about this is to imagine a full stack, with a non-trivial Hello World program. The non-trivial Hello World program is a basic program that exercises the core purpose of the stack in the way that the stack intends. So, for example, a simple blogging or e-commerce app might serve as the non-trivial Hello World for a web application programming stack. For a big, high-level graphics library (like WPF), maybe rendering Hello World in a specific font, extruding, rotating to an arbitrary angle, and texture mapping with a streaming video source is the non-trivial Hello World.

Now, remove the platform, libraries, etc., and, assuming it's constructed the way the stack intends, look at the shape of what's left -- of the actual Hello World app. That arrangement or shape of the code is the Application Model.

Another, more general way to think about it is to take a collection of 'typical applications' running on a stack and, for each one, consider the set-wise complement of the API/platform/language stack within that application.

The complement will be a little different for each app, but across the collection, a certain shape should emerge at the boundary.

This boundary, the edge between 'app developer' code and 'stuff that app developer code uses,' is different from the formal interface mechanisms (linking/loading; object interfaces) because it is shaped by the practices, patterns, and intents of the platform.

Let's describe a few examples in case it's all too abstract so far:

What is the 'application model' for a C app meant to run in a console? A main function where the execution starts, #includes to access additional platform/API/library capabilities, and a routine to set up signal handling. Compile/link/run. Those characteristics make up the application model. From there on out, it's features and "business logic."

How about a COM Win32 raw (non-MFC) C++ app with a little GUI? Well, that's a lot more complicated. You need the basic WinMain stuff, the GUI and message loop stuff if you want a real window and not just a MessageBox, then you need the OLE/CoInitialize stuff, and the basic COM dance (CoCreateInstance, QueryInterface) to get the COM object you want. Then you can call myObj->HelloWorld(). Compile/link/run. With the bonus that you don't need to link against the DLLs housing the COM objects; they're located and loaded at runtime.

Win32 via .Net and C#? You're back to just a Main method (static void Main) and a couple of using directives. Compile, don't link, run.

Raw J2EE Hello World? Not a ton of code, but you need a lookup service for your other services, you need to do lookups, create bindings, and ask for HelloWorld from whatever service(s) you are demo'ing. Reference other code using import statements and classpath entries. Compile, don't link, but put together deployment descriptors and subpackages, wrap it all in an EAR file, deploy, possibly configure the app server, run.

J2EE using an IoC container and libraries? Access other Java code the same way, but a lot of the other steps drop out. Maybe the packaging work is even automated (and there's less to put inside anyway), depending on which framework and/or app server you're using.

The idea is that the 'application model' represents (1) the typical development experience on the platform as well as (2) the core knowledge that a developer will need to have or acquire to be productive.

Different combinations of platform, language, and supporting frameworks create different application models -- so it's a matter of comparing entire stacks and looking at the shape of their photographic negative -- that's the space that will be inhabited by developer work.

Tuesday, February 19, 2008

Public Domain Reprints Offers a "Transactional Web" Mashup

Some time ago I wrote about how the "programmable web" has really turned out to mean the queryable web, because real transactional APIs have not been readily available.

Public web services (with a few exceptions, mainly around payments and online file storage) are a way of retrieving state. Using web services to execute useful "write" transactions into other systems has turned out to be the business of enterprise SOA, not late-night hacker camps.

The concern, of course, is over users "getting it wrong" when insufficient semantic data is available. Although that's a bogus argument (many industries and services have solid semantic models that are readily accessible from web service descriptions), most folks seem to be waiting for the legendary semantic web to come along.

But "most" isn't "all," and I was glad to see O'Reilly write about Public Domain Reprints, a transactional mashup that submits books to print-on-demand services where you can later order a copy.

Although Public Domain Reprints is just formatting and "preparing" the book -- you still need to order a copy yourself from the service of your choice -- the mere act of adding an item into a print-on-demand service's catalog is fairly exciting, given how rare these "commit data"-type open web integrations (mashups) are.

At this stage, it doesn't really matter what sort of integration mechanism is used -- it needn't be a web service; it could be emailing a PDF into a dropbox, or simulating a file upload from a web form. Obviously, having a "strong" interface (including some kind of federated authentication, so that a mashup can act on my behalf using a restricted token and not my password) will make more services feasible.

I continue to hope that we will see more services that don't just pull down a bunch of data and create a nice report, but actually reach out to services I use and improve my life by doing a little bit of work for me.

Thursday, February 14, 2008

Path of Least Resistance == Most Rails Apps Send Your Password In the Clear

The bulk of developers will follow the gradient downhill, even many of the ones who think they're rebels (I'm pretty sure this scene from Life of Brian was about Steve Jobs and the cult of iConform).

Too bad for them, right? Well, no. Too bad for us as a software industry with a reputation for troubled delivery. And too bad for us as citizens and consumers, who suffer from bad software not only in dramatic ways but in having our private data (passwords, SSNs, etc.) sprayed all over.

Elegant and minimalist principles are great; still, if you give most software developers a knife without a sheath, it's only a matter of time before the blood starts flowing from somewhere.

I was surprised when I realized that Rails has no built-in authentication module; more surprised when AWDR "teaches" you to roll your own authentication; and most surprised when I realized that acts_as_authenticated (nice, but intentionally not sophisticated) is the de facto standard for auth in Rails.

Hmmm... I wonder how many of those Rails sites I use all the time actually bother to protect my credentials? If the originating or following page isn't kept in SSL, you can't tell by looking for the little "lock icon," but you can tell by having a look at where those login forms are posting to.
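You don't even need a browser to check. Here's a rough Ruby sketch (placeholder URL, naive regex) that fetches a login page and prints where its forms post to -- if an action is a plain http:// URL, or a relative path on a page that was itself served over http, the credentials will cross the wire in the clear:

require 'net/http'
require 'uri'

# Placeholder address; point this at the login page you want to check.
html = Net::HTTP.get(URI.parse('http://example.com/login'))

# Naive scrape of the form targets on the page.
html.scan(/<form[^>]*action="([^"]*)"/i) { |action,| puts action }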

I looked at the first two pages (24 apps) on happycodr.com, a site that claims to be "the unambiguous showcase for sites designed with Ruby on Rails," and which is linked directly off of the Apps page at rubyonrails.org.

Eight of these sites are content-only and don't offer logins. Of the remaining 16, fully 12 send their passwords in the clear. Two of the four which don't are AOL properties, required by AOL to use a combination of AOL SNS or OpenID, and SSL.

Of the 14 Rails sites built by non-AOL companies and taking a password, 12 (85%) just send all their users' passwords in the clear.

Some of these organizations think that because their services may not contain sensitive data, it doesn't matter. But most people use a very limited set of passwords (or just one!) for all online sites. When someone keys a password into a site I build, I protect it because I assume that is also their online banking password, or their email password, etc.

To be fair, it is no secret that HTML forms are vulnerable in transmission, and many Rails resources point out that the easiest solution to this is to employ SSL. But clearly the message is falling on deaf ears.

And these developers aren't even trying to cleverly roll their own system. They're not hashing the username/password on the client (vulnerable to MITM, replay, impersonation, and other problems, but better than nothing); they're not using a nonce and hashing (still broken, but better). They may not even realize there's a problem.
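Just to make the distinction concrete, here's a toy Ruby sketch of the nonce-and-hash idea -- as noted, still broken and no substitute for SSL, and all the names and values here are made up:

require 'digest/sha1'

# Server side: issue a one-time nonce along with the login form.
nonce = Digest::SHA1.hexdigest("#{rand}#{Time.now.to_f}")

# Client side (this part would be JavaScript in a real form): never send the
# raw password; send a digest of the password hash combined with the nonce.
password = 'secret'                                   # what the user typed
response = Digest::SHA1.hexdigest(nonce + Digest::SHA1.hexdigest(password))

# Server side: recompute from the stored password hash and compare.
stored_hash = Digest::SHA1.hexdigest('secret')
puts response == Digest::SHA1.hexdigest(nonce + stored_hash)   # => true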

To be fair to Rails, it's not the only platform in this fix. I was stunned when Microsoft rolled out a nice set of prebuilt User/Role/Profile/Login components in ASP.Net 2.0, without building any security in. I figured these widgets would get dropped into a lot of sites without a second thought.

It's a little "wider" issue for Rails, though, because Microsoft has been strong in enterprise development, where there are IT departments and policies that may wedge SSL in there before an app makes it to production. Rails, on the other hand, is used more widely in smaller organizations, where there is not likely to be anyone with the authority to impose security on a development team.

For ASP.Net, where many developers never look under the hood of the components, and both the client-side script and server-side post-back handling are accessible inside the ASP.Net stack, I would recommend loading the login widget with the best possible pseudo-security using JavaScript crypto for non-SSL scenarios, and then still requiring the developer to actively add a web.config setting to allow running in non-SSL mode. This has the added benefit that, when the app is moved to a production environment for the first time, with a new web.config, the error will pop up again as a reminder right at deployment time.

For Rails, things are a little more complicated. It's going to come down to better education, and perhaps a little more structured thinking about what I call the "application model" (neither platform nor architecture), which I will write about next time.

A Couple of Tiny Rails SSL Helpers

Here are a couple of helper methods for ensuring forms are set to submit via https in the production environment (but not in dev), and for redirecting back out of SSL afterward. Since SSL can be resource intensive on the server, it's usually good to hop back out into cleartext unless the nature of the application (e.g., financials) warrants encrypting the whole session.

To create a form that uses SSL, just replace form_tag with form_tag_using_SSL_in_production.

In application_helper.rb:

# True only when running in the production environment.
def production?
  ENV["RAILS_ENV"] == 'production'
end

# Drop-in replacement for form_tag: in production it forces an absolute
# https URL for the form's action; everywhere else it behaves like form_tag.
def form_tag_using_SSL_in_production(form_args, &block)
  form_args[:protocol], form_args[:only_path] = 'https', false if production?
  form_tag form_args, &block
end
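For example, a login view might use it like this (the controller/action names and fields are just placeholders, not anything the helper prescribes):

<% form_tag_using_SSL_in_production :controller => 'sessions', :action => 'create' do %>
  <%= text_field_tag :login %>
  <%= password_field_tag :password %>
  <%= submit_tag 'Log in' %>
<% end %>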

form_for is the preferred helper ... if it's actually a "form for" a model object, which this one was not. I'll leave the analogous form_for_using_SSL... as an exercise for the reader.

When you're done with the relevant action processing, any content rendered is going to get sent back under the SSL connection. At some point (in my case, immediately) you want to redirect out of SSL. Just use redirect_and_drop_SSL the same way you would use redirect_to.
In application.rb:

# Counterpart for after the sensitive POST has been handled: redirect out to
# an absolute http URL so the rest of the session isn't served over SSL.
def redirect_and_drop_SSL(destination)
  destination[:protocol], destination[:only_path] = 'http', false if request.ssl?
  redirect_to(destination)
end
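For example, at the tail end of a login action (the action and destination here are placeholders; use whatever authentication your app already has):

def create
  # ... verify the submitted credentials here ...
  redirect_and_drop_SSL :controller => 'home', :action => 'index'
end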

You might be thinking this stuff is too trivial to post about, and anyone who needs to use SSL knows this stuff already. Unfortunately, that's not entirely the case, as I'll write about in my next post.

Wednesday, February 13, 2008

Whee! AOL Leaps Into the Whirling Saw Blades

You know that cliché about horror movies where the ingénue ends up off by herself in creepy surroundings, and the audience is saying "What are you thinking? Don't go in there!"?

Well, I'd like to start a MST3K takeoff where the "film" is a tech show off of G4TV or a clip reel from the vlogosphere, and we have, say, a VC, a computer scientist, Ted(!), and a couple o' engineers who have done their time at Initech in the audience.

They could all yell "sure, why not go in there... looks like fun" when someone (like AOL this week) does something colossally foolish, like trying to build yet another abstraction platform so everyone can write little apps that run on every cellphone, everywhere, like magic.

AOL says:

"The new open platform will help stimulate innovation by providing developers with ready access to the tools and source code they need to build and distribute applications across all major mobile device platforms and operating systems including BREW, Java, Linux, RIM, Symbian, and Windows Mobile."

Anyone who's ever done anything with cross-platform mobile knows that this is a really hard problem, and not the software kind (where solid R&D might yield results), but the business/social kind, where even the resources of a megacorp like AOL are a drop in the ocean.

That's ok, because they have a plan for exactly how it's gonna work. If you're playing along at home, stop reading. Close your eyes and guess the innovative architecture for pulling this rabbit out of the hat. Ok, now you can look:

"The platform will consist of three components:

  • an XML-based, next-generation markup language;
  • an ultra-lightweight mobile device client;
  • and an application server."

I think the secret sauce must be the next generation markup language. See, the next generation markup language will become sentient, take what you write, process it into extraordinarily cogent Powerpoint, email it to all the VPs of all the mobile carriers, device manufacturers, and mobile OS vendors in the world, and then wait on a queue until they all agree to change their businesses so your stuff will work.

I've written lots of times before about the mobile app ecosystem and why, for now anyway, this kind of idea is just a complete non-starter. At least startups fail fast, since they have vaguely limited resources. Then you have players like Yahoo! that find themselves in a deep hole, and say "hmm... this is so big and so deep, we can roll an excavator down here and start the real digging!" They do things like deploy an implementation of J2ME in .net bytecode, so that people can run a mediocre (I'm being polite here, guys) MIDP app on their brand new Blackjack or HTC Touch.

In my new MST3K show, I'll keep the video clips short, so we can watch, laugh, and get on with our lives.

Monday, February 04, 2008

Quick Poke: Look at MS Live Labs' Volta

If you haven't seen Microsoft's Volta project, it's worth a look.

Volta is about two big things:

(1) Blurring the lines between application tiers to allow declarative or even automated tier-splitting (yes, there are lots of issues with this, but the MS team insists it's aware of the historical legacy of the distributed computing "leaky abstraction" and won't muck things up by pretending it's not an issue).

(2) Automagic cross-compilation between, say, .Net IL and JavaScript.

Whoa, what's that last bit?!

After the initial flurry of press coverage, the team insists that running arbitrary .Net IL on any old JavaScript-capable browser is no big deal, just a convenient demo, don't focus on that part... But, seriously folks, a couple of months after the project was made available to the general public, the tier splitting is looking less and less interesting, and the cross-compilation is looking more and more interesting.

Once upon a time, crazy people did crazy JavaScript stunts. Like this demo that emulates the hardware of an entire 80s-era game system complete with game, or this similar 6502-based emulator.

Emulating a modern VM like the JVM or CLR is not practical from a performance point of view. But the JavaScript runtime as a compilation target is becoming increasingly popular: first we had GWT and Script#, which compiled Java and C# respectively to JS. Because they compiled only your own code and supplied minimal bits of the respective class libraries (Java API/.Net BCL), their use was restricted.

So here's a wicked end run around that: take anything that compiles to IL -- or any IL binary! -- and run it in a browser. According to the Volta team, it has been done in such a way that all of the object semantics from the original language are preserved. Unless they also port the Win32 API to JavaScript, it's not like we're going to see WinForms apps running on Safari. But this is being positioned as a fallback for, say, places where Silverlight isn't installed.

Go here and look at the demos. The perf is awful -- but that's not the point of the demos. So far as I can tell, this is the first commercial implementation of what we all kind of knew might be coming.

Using Google 'My Location' to Troubleshoot AT&T's Broken Network

The AT&T Wireless coverage in the San Francisco area is poor. It's not that the "more bars" commercials or the beautiful coverage map aren't technically accurate -- pushing enough buttons on a handset sooner or later results in bars on the display. Having calls go through and not get dropped, or having the data service behave, is another story entirely.

But this post isn't a rant about AT&T. None of the wireless carriers are perfect, and making changes in an asset-and-operations-intensive business is hard to do. (Think about all the legacy airlines biting the dust -- there are a lot of parallels in terms of the economics.)

This post is about looking at Google mobile maps' "my location" feature, which shows the location of your current cell tower, to take a few guesses at what might be going on.

The first thing I noticed when playing with "my location" is that, on freeways, there seems to be consistent and stable handoff between towers. Not every spot on the freeway can see a tower, which might explain the fact that within 5 miles of SF there are major freeways with dark spots, where calls predictably drop. But in the covered areas, all is orderly and the service (both voice and HSDPA) is good.

Then the trouble begins: in some suburban and downtown (SF financial district) areas, it seems as though the phone flits between towers willy-nilly, even if I'm standing still. The results are predictable: service is mediocre, and calls can drop even downtown. The data service becomes intermittent deep inside the "3G coverage area" as the phone becomes confused about whether 3G is available, whether to fall back to EDGE, or even to GPRS. By the time the phone has done the appropriate renegotiation, it gets new information and changes its mind.

Cell phone networks are famous for sporting sophisticated hysteresis mechanisms to manage tower handoffs. Too much handoff, and the switching overhead and side effects dominate... too little handoff and, well, you drop calls. When a handset isn't moving, there is little reason (aside from site congestion) to do a handoff. But how do you know the handset isn't moving? It's a Catch-22, because without external input, the network can only guess by measuring changes in reception from nearby towers. If reception fluctuates, the client and network need to figure out if the phone is really moving, which way, and how fast.
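To make the trade-off concrete, here's a toy Ruby sketch of a hysteresis rule -- purely illustrative, not how any real network does it: only hand off when a neighboring tower beats the serving tower by a margin, and keeps doing so for several consecutive readings.

# Toy handoff rule: switch only if a neighbor beats the serving tower by
# MARGIN_DB for STICKY consecutive readings; otherwise stay put.
MARGIN_DB = 6
STICKY    = 3

def next_tower(serving, readings, streak)
  best, strength = readings.max_by { |_, s| s }
  if best != serving && strength >= readings[serving] + MARGIN_DB
    streak[best] += 1
    return best if streak[best] >= STICKY
  else
    streak.clear        # the lead didn't hold, so reset the counter
  end
  serving
end

streak  = Hash.new(0)
serving = :tower_a
# Noisy reception: tower_b only briefly looks stronger, so no handoff happens.
[{ :tower_a => -80, :tower_b => -72 }, { :tower_a => -79, :tower_b => -85 }].each do |readings|
  serving = next_tower(serving, readings, streak)
end
puts serving   # => tower_a (a momentary blip doesn't trigger a handoff)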

The result is so bad in some cases that it's almost worth having a button marked "I'm in my office" or "I'm at home" so that the user can explicitly tell the network what's happening. At my house, I'm hopping between three towers (one on the other side of a small mountain, with enough S/N to grab my phone, but not enough to even initiate a call), with the result that the phone is more or less a brick. There's no consistent data or voice service (20 minutes from downtown SF!), so the only possible change is "better."

Here's the interesting part: a lot of the AT&T network problems don't seem to occur on Sprint or Verizon (I've been on all the networks, and was an "early adopter" of digital in the U.S. back when Sprint didn't have any coverage at all in large places like, ah, Chicago.) I do not believe the difference is because Sprint or Verizon have more towers, or superior engineers. I have a feeling this is due to (1) differences between GSM and CDMA (CDMA seems to be technically superior, but business factors make GSM dominant) and (2) the fact that AT&T is a historical rollup of lots of other networks (e.g., a former TDMA or AMPS tower setup may or may not be the ideal place for a GSM tower and I doubt they were all relocated) and supports too many disparate protocols (GPRS, EDGE, HSDPA, UMTS).

Every time the phone changes its mind between HSDPA, EDGE and GPRS, it seems to have to renegotiate its presence on the network. Perhaps there's a software fault in there too, because once the phone starts changing its mind, your network connectivity is shot. Sometimes until you reboot the phone. Whereas with Sprint and Verizon, it was EV-DO all the way. Some mechanism ensured the phone never even tried to fallback, and the result was an overall smoother ride.

I actually use the Google maps data sometimes to figure out whether I'll have coverage, or whether I need to reboot my phone. Probably not what was intended, but a lot more useful than the little bars.