Tuesday, May 01, 2007

Lunchrtime on Mobilecrunch

I was psyched to fire up the newsreader tonight, pull the latest mobilecrunch stories and see my own "mobile 2.0" food ordering site, lunchrtime, as the top story.

Photoshop notwithstanding, I can't resist including a screenshot while this is still the latest mobilecrunch post.



What? Why? ... check out the about page. Is this going to be the next twitter? Probably not, but if the AdWords revenue covers the cost of the web services, I'll be pretty happy. If you like it, have fun, and maybe add in some restaurants near you.

Meantime, if you're up for some geekery, one of the things I loved about building the site was leveraging so many pieces of the asp.net platform to make it a little less code, a little more action with a very small amount of effort.

The entire app includes only around 300 lines of code and some markup. The total time to build, including the user-facing functionality, some quick admin bits, and the micro CMS that I use to upload and serve pages with scanned menus like this? Something like 70 hours. Total. It doesn't look as slick as it might, but if you've read this far you know I'm a developer and not a designer.

Here are the (mostly) asp.net pieces I used so I could be lazy, do my day job, and get this running:


It's a blast -- I really don't feel like I need to do much besides wave the baton, and all this stuff just comes together and starts playing in Visual Studio.

There are a bunch of solid web platforms out there today aside from asp.net. But anyone who thinks .net is somehow cumbersome, un-agile, un-fun, or not suitable for rapidly building and modifying a modern web app should really log out of the groupthink.

Saturday, April 28, 2007

Minor Update to Notebarn

Sorry for the hassle, but I have a slightly new version of Notebarn up on the project page now, which you may want to download if you have the original.

There was a small focus-related bug that would manifest when the user returned to Notebarn after an Exchange sync. I thought it had passed a test earlier in the week (before the initial release), but I must have gotten distracted and messed up the test, because the bug was back. It passes the test now, and focus works the way it's supposed to. Honest.

The new version installs via over-the-air update-in-place ... that is, you just run the new CAB file from the project page on your device, and Windows Mobile replaces the old version with the new one.

Tuesday, April 24, 2007

Free Smartphone "Notes" with ActiveSync/DirectPush Support

A former Blackberry user, I'm now running Windows Mobile Smartphone on a Blackjack. I love most things about Smartphone. But I miss the Blackberry "notes" integration with Exchange and Outlook. I used the notes feature constantly, managing all sorts of unstructured lists, and I liked being able to work on them on the desktop and know the changes would be propagated to the Blackberry and vice versa.

So I decided to write a little notes application that would sync with Exchange. If you like this idea and want to try the app, but don't want to hear any more geekspeak, you can get the app and instructions straightaway from here.

If you're still reading, here's what I did... The Pocket Outlook Object Model, which is the first stop for manipulating Outlook/Exchange data on the Smartphone, does not support note items. I wasn't sure whether the Windows Mobile MAPI APIs covered notes, and I didn't really want to go that route as a MAPI novice, having heard many MAPI integration tales, none of them pleasant.

The Outlook Web Client accesses notes via AJAX, hence there is a web service endpoint of some flavor that can manipulate notes (Outlook can obviously get this data via HTTP too, since it can be instructed to operate in that mode). I could have sniffed/scraped that interaction ... but I wouldn't get the DirectPush I was jonesing for.

I took an easier route, and chose to store my notes as delimited strings in the body of a Task called "My Notes".

Since Exchange, Outlook, ActiveSync, and DirectPush take care of syncing my Tasks folder, I don't have to worry about that. And Tasks are first-class entities in the Pocket Outlook Object Model, so reading or writing one in C# is about 3 lines of code.
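
For the curious, here's roughly what that looks like (OK, a few more than 3 lines with the ceremony). This is a minimal sketch from memory, not the actual Notebarn source, so treat the managed POOM names (the Microsoft.WindowsMobile.PocketOutlook wrappers) as approximate:

using Microsoft.WindowsMobile.PocketOutlook;

static class NotesStore
{
    const string Subject = "My Notes";

    // Read the delimited notes out of the "My Notes" Task body
    public static string LoadNotes()
    {
        using (OutlookSession session = new OutlookSession())
        {
            foreach (Task t in session.Tasks.Items)
                if (t.Subject == Subject)
                    return t.Body;
            return string.Empty; // no "My Notes" Task yet
        }
    }

    // Write the delimited notes back; ActiveSync/DirectPush does the rest
    public static void SaveNotes(string delimitedNotes)
    {
        using (OutlookSession session = new OutlookSession())
        {
            foreach (Task t in session.Tasks.Items)
            {
                if (t.Subject == Subject)
                {
                    t.Body = delimitedNotes;
                    t.Update();
                    return;
                }
            }
            Task fresh = new Task(); // first run: create the Task
            fresh.Subject = Subject;
            fresh.Body = delimitedNotes;
            session.Tasks.Items.Add(fresh);
        }
    }
}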

This approach worked well. I built a UI for the Smartphone using TextBoxes, put in the parsing logic, and I was set. On the Outlook side (or Outlook Web), I just open up the task called "My Notes" and there are all my notes, delimited by the same string. I can edit them, save, and the changes are propagated.

Only two wrinkles showed up.

The first is that Outlook 2007 is lazy about syncing the Tasks folder with Exchange, at least when it's set up to connect over HTTP. I cut my app out of the loop completely when testing this, by running Outlook 2007 in one window and Outlook Web Client in another. The web client read and committed changes to tasks immediately, while Outlook often wouldn't refresh tasks at all; even restarting Outlook did not always work. So, bummer... for now, it's an FYI.

The other interesting bit was the syncing logic in the phone client. The goal was to have the data saved and restored (when updated from the Exchange side) as automagically as possible. I've commented the relevant code, most of which lives in the Notebarn_Activated event handler, so you can take a look at the source if you like.

Any time you switch away from the app, or edit a note in full screen mode, your notes are saved, and any time you switch back, if Exchange has updated the Task data but you haven't, your notes are reloaded from the Task. I've tested various approaches and this one has worked the best for me.
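
In outline, the logic amounts to something like this -- a sketch of the idea, with every helper name hypothetical (the real code is on the project page):

string lastSavedBody = ""; // what we last wrote to the "My Notes" Task
bool hasLocalEdits;        // set from the TextBoxes' TextChanged events

void Notebarn_Activated(object sender, EventArgs e)
{
    // If the Task body changed under us and we have nothing unsaved,
    // Exchange must have updated it -- reload the UI from the Task.
    string taskBody = NotesStore.LoadNotes(); // from the sketch above
    if (!hasLocalEdits && taskBody != lastSavedBody)
    {
        PopulateNoteTextBoxes(taskBody); // hypothetical UI refresh
        lastSavedBody = taskBody;
    }
}

void Notebarn_Deactivate(object sender, EventArgs e)
{
    // Switching away: serialize and save unconditionally
    string body = JoinNotesWithDelimiter(); // hypothetical serializer
    NotesStore.SaveNotes(body);
    lastSavedBody = body;
    hasLocalEdits = false;
}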

I hope it works for you too. The project page (with source) is here.

Thursday, April 19, 2007

When "All-You-Can-Eat" is the Wrong Plan: a Plea for Metered-Rate Software

Why can’t I license productivity software on a metered-rate basis?

I had a mini-project that required vector-art manipulation. The assets I was using were Illustrator files, and I felt Illustrator would be the best tool for the job. But I didn’t have a license for Illustrator. I’ve always wanted Illustrator, but not enough to lay out the big bucks … after all, I would only fire it up a couple of times a month. Ditto for things like Photoshop, QuarkXPress, Premiere and lots of other neat tools. If only I could install them and license them on a usage basis!

Without this sort of licensing, the options include:

  • Use some substitute app which might not be as good, as useful, or as industry-standard
  • Obtain a cracked/pirated copy of Illustrator
  • Find someone with a licensed copy and go use his
  • Install a new trial copy into a virtual machine (kinda gray-market, and a bad option for heavyweight apps anyway)

With metered licensing, here are potential advantages:

  • I get to use Illustrator, becoming (hopefully) more productive
  • My company benefits from my productivity and from lower licensing costs than if I needed to buy a full license
  • Adobe benefits because I develop my skills on their app, strengthening its hold in the market, and I don’t go install my 90-day trial of this
  • Adobe receives revenue from me where otherwise they would not

The metered approach is a total no-brainer: I can have all the apps I want installed permanently (not 30-day trials) on all my computers. At a cost of a few cents or a couple of dollars, it’s easy to pay as I go… and if I start using the app heavily, the bills add up to the point that it’s clearly cheaper to buy. Maybe the publisher is even nice enough to apply some rent-to-own logic, so once I’ve dropped a few hundred bucks, I am offered a discount to buy a permanent license.

It makes no sense to charge the same amount for Premiere or Final Cut Studio regardless of whether the customer only edits a couple of videos a year, or is a professional editor who makes a living with it. One approach is for a publisher to create lots of SKUs at different price points for different user levels (Visual Studio Express Editions, Standard, Professional, Team System, Tools for Office) but that puts a huge burden on the publisher. Just let the users install the Ferrari version, and charge ‘em for what they do with it!

The infrastructure is already there — most of these apps are available on some kind of trial basis, and the entire install can be downloaded from the publisher’s site.

The per-copy tracking infrastructure is there for many publishers — activation codes are commonplace for expensive software.

Micropayments? Aggregate the charges on my “Microsoft” or “Adobe” or cross-industry-pay-per-use account, and bill me every quarter or whenever the total is high enough to justify it.

Tracking my activity in the app? That’s easy enough (LexisNexis has done this for a long time). Don’t charge me per minute or session, that encourages me to shut down the app, and makes it really expensive for a newbie to learn. Charge by use case: if I do something that every freeware clone also does, it’s cheap… but if I’m hitting the killer differentiating feature, whatever that is, make me pay.

This is free money for software publishers. Folks who absolutely need the app will already have it (hopefully legally), while pirates just bittorrent a crack. The metered system works for all of us in between, and I think that’s orders of magnitude more people than the publishers even imagine.

Tuesday, April 17, 2007

O'Reilly Web 2.0 Expo: Thoroughly Unimpressive == Very Encouraging

I spent some time at the Web 2.0 Expo yesterday, and as tech shows go, it was pretty weak:
  • The show floor was obviously, painfully small, even for one of the smallest rooms in Moscone
  • Some of the BigCo sponsors seemed detached and irrelevant
  • The "food and libations" where hard to find, hard to pry away from vendors, and generally in short supply
  • The genuine web 2.0 plays seemed like they weren't sure why they were there; they weren't adept at engaging those of us who wanted to learn stuff that's not obvious from 5 minutes on the website
  • The tooling companies got a ton of attention, and were not prepared to leverage the attention... I went back to one tooling booth no less than four times to try and get a demo and conversation, but they were in long one-on-ones and had no bandwidth and no "backup chatters" (the ones who typically try to engage you at a show, triage you, and then if you seem important go the extra mile to find you the Chief Architect or the VP of Foobar to talk to)
  • The companies trying to sell services (consultants) have not figured out how to get attention and relevance
Wow. That sounds real bad. But I was elated, actually, and not because I'm working on a secret new thing and I wanted to feel superior to some competitors.

Instead, I realized that web 2.0 and the traditional tech conference are at odds conceptually. They don't make sense together, and the fact that this was obvious so fast means the web 2.0 ecosystem has not in fact been corrupted by so much froth that it starts looking like 1999.

I was really thrilled to see that by this yardstick there's a whole lot more runway for web 2.0 innovation before this plane floats up and away.

Lemme run through that list another time and tell you exactly what I liked:
  • Small show floor? Who needs a show floor? Web 2.0 exists on the -- gasp -- web! It's good to talk to the humans at these companies, but booth square footage is unimportant.
  • Big corps seemed irrelevant? Well, many of them are irrelevant ... a web 2.0 meme is that they became irrelevant as this wave of innovation took hold: Pricey license to get started? Whatever. Enterprise web stack with more classes in it than my app has lines of code? (I'm not exaggerating, this is a true story!) Get real. Proprietary SDK with a restrictive license agreement? No thanks, buddy.
  • Not enough free food and drink? This means there isn't hot and cold running money just yet, and the folks who absolutely must have hors d'oeuvres aren't there. Web 2.0 is more about Promax bars and BevMo (or Costco) than about catering.
  • Real web 2.0 plays seem flatfooted in the "traditional conference setting" -- of course they do. The traditional tech conference is really a marketing-oriented event. These companies were on the cluetrain from the beginning -- their marketing is via their users, and their web forums and wikis (with employee participation) are the best part of a tradeshow every day, without the nonsense. Their schwag is the free service you can use right off their website.
  • Tooling is going to be increasingly important, and the best tooling companies will get their stuff together as the market matures. I think these companies got caught a little by surprise, and still have to work out a way to charge enough to stay in business, without charging too much to... stay in business.
  • Lastly, the consultants. I believe there is a place for consulting in the new web world (after all, I work for a consulting firm) but the traditional pitch is going to have to change. There are a number of themes I think make real down-to-earth sense around web 2.0 consulting, but they'll have to wait for another post.
When JavaOne turned the corner (in a bad way) and started looking like this show, it was clear the excitement was over. Java is an enterprise-style ecosystem. (Not that you can't build small agile things in Java if you really want, but seriously, do you really want?) It thrived on enterprise-scale players making huge investments with money, hardware, teams, code. Tons of mid-size software shops flourished, selling apps that run $100 up to $800 a seat, and there was money in it because it helped build even bigger enterprise systems. Lots of money, lots of free beer.

When JavaOne went small, it was a symptom of a serious illness. But then the virus found a home in web 2.0 like some kind of endosymbiont, and here we are.

Monday, April 16, 2007

Out of the Naming Shadows

When I saw the link to Silverlight (the artist formerly known as WPF/e), I was thrilled that Microsoft is apparently starting to move toward more compelling product names. Remember the firestorm maybe 20 months ago (publicized by Scoble, though he was really just the messenger) over “Why can’t Microsoft products have cool names?” … Folks talked about the constant confusion over .net being followed by even more boring names like “Windows Presentation Foundation” and asked why “Sparkle” had to become “Expression Interactive Designer.”

At first it seemed that the Microsoft branding group got defensive, citing trademark issues, cultural sensitivity, etc. as reasons to steer away from stronger names. But now it looks like they’ve come around: first we found out that Expression Interactive Designer would be called “Blend” – this doesn’t redline my engine, but it’s definitely a step up.

Now “WPF/e = Windows Presentation Foundation Everywhere” has the much sexier name “Silverlight” which makes me want to go grab a wizard’s cloak and conjure something. That’s a pretty encouraging reaction for a technology like this one. Besides the fantasy association, there’s also the sound of “silver screen” cinematics or a spotlight, which works for this product as well.

Saturday, April 14, 2007

GrandCentral Heralds VoIP Apps (Fer Real This Time)

I've been using GrandCentral and Gizmo for about a month now, and I recommend everyone check them out. Why Gizmo and not Skype? Mainly because GrandCentral integrates directly with Gizmo without going out to a "PSTN call in number" and back.

GrandCentral is rockin'! OK, their "Beta" is a genuine beta -- I've had some calls drop, calls not go through, voicemail never pick up... they have some bugs in the software and some issues in ops. But it's still a fabulous service. And when something does go awry there's a real live human on the other end of a chat widget. She's helped me a couple of times and is a real pro. Not to mention the longer the beta goes on, the longer you can enjoy the full power of GrandCentral, including unlimited calling into the PSTN, for free!

The "one phone number to rule them all" concept has been around a long time. The question is when to stop being a skeptic and when to start believing. Photorealistic graphics, good color printing, network-based apps with rich interactivity... all were promised several times and several years before becoming everyday reality. I'm not perfect at figuring out just when "this time around" becomes the time it really works. If I were, I'd be a VC instead of a developer. But I believe the time is now for VoIP and advanced apps for VoIP. Why? Besides Skype, and corporate adoption of VoIP, I think when Verizon sues Vonage over questionable patents in order to put the brakes on, and people actually care, then the time has come. Just like mobile apps had clearly crossed the chasm when everyone worried about NTP shutting off their Blackberries.

I tried to go "all in" by making the GrandCentral number the only one on my business card (due to a clerical error it didn't completely work). With my desk and my cell phones both ringing for business calls a client or coworker would be guaranteed to find me whether I'm in a conference room, at my desk, or out for a coffee -- plus I wouldn't need to check my office voicemail remotely, or force a client to deal with "I got voicemail... I'd really love to get feedback on this issue now, but is this really important enough to call his cell?"

For me, the remaining question isn't whether these services can be hits. They can. The question is how to manage an enormous number of communications accounts, each with its own fees. I have a landline (originally for DSL, now because local calls are still nearly an order of magnitude cheaper than on cell or VoIP); a cell phone with minutes and a separate high-end data plan; a Gizmo phone that offers some free calls, but mostly charges 1.9 cents per minute (if you're keeping score, that's lower than a cell phone but higher than a landline); and GrandCentral, which is free in beta but plans to charge for connections into the PSTN. And this doesn't even include the cable modem account that gets IP to my house.

Each of these services plays a specific role and they all dovetail together to keep me connected to pretty much whatever I want. But there has to be some way to simplify this picture for mass adoption. Unfortunately, the only players in a position to do this are large telcos or cable companies, and they aren't known for innovation or customer service. In the Bay Area, for example, Comcast -- which offers "bundles" of cable Internet, TV, and VoIP -- bought radio time to hawk photo sharing as their great new thing. This is San Francisco in 2007, for godsakes, gimme a break. I'm not taking sides with AT&T either, who would also probably be happy to take $200 a month from me to sell me around half of what I need.

The other question is how number portability applies to the new services, in case this time around turns out to be more like the last time around.

Wednesday, April 11, 2007

ASP.net DB-backed Micro CMS in 50 Lines and 5 Minutes

I was working on a small web app where one area was intended to allow users to publish their own content. This area was not going to be ready for a while. In the meantime, I thought, there should be some way for users to put pages up on the site if they are inspired to do so. Images, too.

There are great lightweight CMS apps just for this, such as wikis. But with asp.net and SubSonic, I put this micro-CMS up in about 5 minutes. If you need a little area with WYSIWYG editable web content and binary uploads (images, PDFs, etc.) you might find it useful. n.b.: for my site, the created pages are meant to be publicly readable and writable. If you want to restrict access, you will need to make some adjustments for that purpose. So here's how to brew it up:

Step 1: Create a table in SQL Server 2005

CREATE TABLE DataObject (
[ID] [uniqueidentifier] NOT NULL CONSTRAINT [DF_DataObject_id] DEFAULT (newid()),
[TextContent] [ntext] NULL,
[ImageContent] [image] NULL,
[MimeType] [nvarchar](50) NULL,
[CreatedOn] [datetime] NULL,
[CreatedBy] [nvarchar](50) NULL,
[ModifiedOn] [datetime] NULL,
[ModifiedBy] [nvarchar](50) NULL,
[IsDeleted] [bit] NULL CONSTRAINT [DF_DataObject_IsDeleted] DEFAULT ((0)),
CONSTRAINT [PK_DataObject] PRIMARY KEY CLUSTERED ( [ID] ASC ) )

You'll notice a number of convention-over-configuration moves that let SubSonic do some of my work: Created/Modified On and By as well as IsDeleted.

The main data fields are TextContent as ntext (i18n clob), ImageContent (blob), and MimeType. The PK and ID is an autogenerated GUID.

Step 2: Create a web page for editing and uploading content. Grab FreeTextBox for WYSIWYG HTML editing. Then toss in controls:

<FTB:FreeTextBox ID="FreeTextBox1" runat="Server" Width="600px" />
<asp:Button id="SavePageButton" runat="server" Text="Save"
OnClick="SavePageButton_Click"/>
<asp:FileUpload ID="FileUpload1" runat="server" />
<asp:Button id="Upload" runat="server" Text="Upload"
OnClick="Upload_Click" />
<asp:Label ID="LabelForNewURL" runat="server">(none)</asp:Label>

Add line breaks and formatting to taste.

Step 3: Add some code-behind to save the page.

protected void SavePageButton_Click(object sender, EventArgs e)
{
    // Page_Load grabs this id out of the request and loads the
    // existing content into the FreeTextBox instance
    object id = Session[Globals.PAGE_EDIT_ID_CONSTANT];
    DataObject o;
    if (id == null)
        o = new DataObject();
    else
        o = new DataObject(id); // this is SubSonic foo...
                                // read all about SubSonic, which rocks!
    o.MimeType = "text/html";
    // Remove anything we don't want, like scripts
    o.TextContent = MyHtmlUtilities.CleanUpHTML(FreeTextBox1.Xhtml);
    o.Save(User.Identity.Name);
    LabelForNewURL.Text = "URL is http://www.mysite.com/Object.ashx?id=" + o.ID;
    Session[Globals.PAGE_EDIT_ID_CONSTANT] = o.ID;
}

Add some more code-behind to save an uploaded file.

protected void Upload_Click(object sender, EventArgs e)
{
    if (IsPostBack && FileUpload1.HasFile &&
        FileUpload1.PostedFile.ContentLength < MAX_LENGTH)
    {
        String fileExtension =
            System.IO.Path.GetExtension(FileUpload1.FileName).ToLower();
        if (allowedFileTypes.ContainsKey(fileExtension))
        {
            DataObject o = new DataObject();
            o.MimeType = allowedFileTypes[fileExtension];
            o.ImageContent = FileUpload1.FileBytes;
            o.Save(User.Identity.Name);
            LabelForNewURL.Text =
                "URL is http://www.mysite.com/Object.ashx?id=" + o.ID;
        }
    }
}

This code omits the definition of MAX_LENGTH, a limit to the length of uploaded files, and allowedFileTypes, a Dictionary<string, string> that maps allowed file extensions to MIME types, like so: allowedFileTypes[".png"] = "image/png";
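
If you want a starting point, example definitions might look like this (the extensions, MIME types, and size cap below are my illustrative choices, not requirements; needs using System.Collections.Generic):

const int MAX_LENGTH = 4 * 1024 * 1024; // cap uploads at 4 MB

static readonly Dictionary<string, string> allowedFileTypes = BuildAllowedTypes();

static Dictionary<string, string> BuildAllowedTypes()
{
    // map allowed extensions to the MIME type we'll serve them with
    Dictionary<string, string> d = new Dictionary<string, string>();
    d[".png"] = "image/png";
    d[".jpg"] = "image/jpeg";
    d[".gif"] = "image/gif";
    d[".pdf"] = "application/pdf";
    return d;
}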

Now you have file-upload, database storage, and WYSIWYG editing in around 20 lines of code and 5 markup tags!

Step 4: Create the Object.ashx handler that serves these pages and objects. Add a new asp.net request handler like so:

<%@ WebHandler Language="C#" Class="Object" %>

using System;
using System.Web;

public class Object : IHttpHandler {

    public void ProcessRequest (HttpContext context) {
        DataObject obj = new DataObject(context.Request["id"]);
        if (string.IsNullOrEmpty(obj.MimeType))
            context.Response.ContentType = "text/plain";
        else
            context.Response.ContentType = obj.MimeType;

        if (!string.IsNullOrEmpty(obj.TextContent))
            context.Response.Write(obj.TextContent);
        else
            context.Response.BinaryWrite(obj.ImageContent);
    }

    // IHttpHandler requires this; the handler keeps no per-request state
    public bool IsReusable {
        get { return true; }
    }
}

That's it! Of course if you wanted to build in more features here, it's easy to do all sorts of stuff, from adding an attribute for a document/file name, to sorting or searching the database table to generate product thumbnails, automagic menus or site maps, etc. Ok, including all the SQL, asp.net markup, and the C# (except braces) that's at least 51 lines.

Addendum: I had to monkey with line breaks to keep the code readable outside of RSS readers, so never mind the line counts, I think the code is fun and the point is made anyway.

Monday, April 09, 2007

Mobile RIAs with FlashLite - Evaluation

Flash Lite is a promising route to RIAs on mobile devices. Adobe has recently strengthened their commitment to reaching mobiles, including strong device support in CS3, and planning for 1 Billion Flash-Enabled Handsets. The chart in that post also suggests 2007 is the year when "Adobe addressable handsets" clearly cross the 50% mark. Commitment to move the mobile platform toward a Flex-capable and ActionScript-3-capable runtime has been announced as well.

Wow. That's great. So how well and how easily does it actually work? I set out to do a quick evaluation of getting an interactive network connected app up and running on Flash Lite.

For my target platform, I picked Flash Lite 2.1, 320x240, on Windows Mobile, for these reasons:

1. Since this spec lies toward the high end of the Flash Lite ecosystem, I knew that if major things failed on FL2.1 and a 200MHz+ processor, they would be very unlikely to work on downversion FL.

2. Real HW-level emulators are freely available for Windows Mobile, something that's not true for many mobile phones.

3. I own a compatible device, and when working with devices, at a certain point there's just no substitute for the real hardware.

For my "application," I wanted to see if I could implement a couple of use cases for a mobile 2.0 site for ordering food. This seemed like a reasonable real-world use for a Flash Lite application outside of entertainment; the use cases involve retrieving an order list from the web site, letting the user choose one, and then retrieving relevant destination restaurants where the order could be sent.

First I sketched out a simple Flash app which used the MX components for the onscreen text boxes, lists, and buttons, as well as for the SOAP web service client. I targeted Flash 7 (FL 2.x is essentially Flash 7), and built. After debugging on the PC, I tried it in the WM 5 Smartphone emulator (best enjoyed splashed over this SDK).

No joy. The movie starts to play, but then seizes up, and the standalone Flash Lite player gives the error message "ActionScript stuck" in both the emulator and on the real device. The culprit seems to be the mx.controls.List class, which is either too complex for the runtime to handle, or else triggers some low-perf detection code meant to ensure movies stop running altogether before they run badly. My web service connection via mx.services.WebService and SOAP was causing a similar problem.

This wasn't looking good, since those components date back at least to Flash MX 2004, and the hope was that a FlashLite 2.x app could be authored more or less like an MX 2004 app. Googling around, I found that the MX UI components do not, in general, work on Flash Lite 2 (a few do, but the kit as a whole cannot be assumed to work). This leaves the old-school approach of custom writing the UI widgets as part of the movie (not a bad approach for something so small) or finding widgets that would work. Since I wanted to complete this evaluation quickly, I looked for a widget kit for FL, and found Jesse Warden's Shuriken Components (see also his presentation here). For my purposes, this was perfect. Easy to use, and ran without a hitch on FL 2.

Next I needed web service connectivity. By adding this
<webServices>
<protocols>
<add name="HttpGet"/>
</protocols>
</webServices>

to an ASP.net app's web.config, you can turn on (Lo)RESTful behavior in addition to SOAP. (This switch enables not just GET but a simplified XML response format.) Next I used XML.load in ActionScript and gave it the appropriate URL to GET. This worked great on the emulator and device.
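
For reference, the server side of that exchange is nothing exotic -- just a plain asmx web method. A stripped-down sketch (the service name, namespace, and data are mine, not the real site's):

using System.Web.Services;

[WebService(Namespace = "http://example.com/orders/")]
public class OrderService : WebService
{
    // With HttpGet enabled in web.config, this answers
    //   GET /OrderService.asmx/GetOrders?user=pat
    // with simple XML (<ArrayOfString><string>...</string>...</ArrayOfString>)
    // that the phone can pull apart with XML.load.
    [WebMethod]
    public string[] GetOrders(string user)
    {
        // real code would hit the database; hard-coded for the sketch
        return new string[] { "Turkey club", "Pad thai", "Veggie burrito" };
    }
}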

Then I added a couple of lines of "business logic" in AS, to allow additional dynamic interactions, so that I could bang on it without worrying about caching in the Windows Mobile HTTP stack or in Cingular's WAP-gateway-proxied network.

So far, so good. These patterns (movie-based or Shuriken UI + XML.load for web services) should work well on the high end of FlashLite (Windows Mobile devices range from about 166 MHz to 520 MHz; I was using a 220MHz OMAP). The next step will be to find some more constrained implementations that still support the FlashLite "application" content type (e.g. a Nokia S40 device with FL 2.0) and try it out.

Thursday, April 05, 2007

Samsung Brings Software Game?

I've started using a Samsung Blackjack recently, and it's a great device. I'm not gonna review it here, because there are tons of solid reviews out there, and whole blogs and fansites just on this product.

I want to talk about the interesting partnership to be observed among the companies bringing this product out.

Samsung brings fabulous hardware to the device, which is pretty much what we expect from Samsung. Cingular brings the marketing and enough data bands (GPRS/EDGE/UMTS/HSDPA) to allow limited simultaneous voice and data, which is a genuine differentiator and makes an impression the first time you receive and respond to an email while you're on a voice call. Microsoft brings WinMo 5 Smartphone AKU 3+.

Samsung, though, has gone beyond the usual OEM role. In addition to the promotional web site and the basic drivers for the hardware, they've bundled a set of apps carefully chosen to complement the Windows Smartphone OS and improve the value and usability of the phone.

User feedback has identified a number of weaknesses in Smartphone, some of which are addressed now in WinMo 6, some not. For example, in the past, users have screamed that the core app set is missing some basic PIM apps like a notepad. Never mind that you can get notepads online -- that's a step beyond a lot of consumers. So Samsung hops in with a mini productivity suite to fill in the holes: an enhanced filesystem browser, a notepad, a calculator, and a bunch of other stuff.

The stock Smartphone home screen has taken a lot of heat, so Samsung codes up a handful of alternates with different information densities and usability features. Need to read PDFs or Office docs? Samsung has licensed and supplied a version of Picsel Viewer -- frankly, the visual quality and smoothness of this product are better than the Smartphone shell's; maybe Microsoft should buy those guys!

The partners seem to be bringing their A game to the overall ecosystem around this device too: according to CrunchGear, Cingular will unlock this GSM device for free; Microsoft is working with Cingular to release an OS upgrade to WinMo 6 this year; and Cingular jumped all over early battery complaints with a pretty straightforward process for snagging a free extended battery.

Without being overly optimistic and wondering if a major shift is happening in the smartphone world, why does everyone seem to be trying so much harder than usual in this particular collaboration?

Tuesday, April 03, 2007

Why Does Software Suck? Why Do You Think?

My friend Andy over at Security Retentive sent me this link to Cigital discussing a twist on the meme of "software does snazzy stuff but it's built like crap."

I want to add two words to the "software sucks" discussion: organizational dynamics. If you start every conversation about software quality, process, security, predictability, etc. with these two words, you'll be on the right track.

Look at technology as just one competency of an organization, then zoom out and look at the whole organization. Now ask "do I believe that in an effort demanding some amount of precision, control, and predictability, this organization can execute?" With many organizations it's immediately obvious that the answer is "no."

If that question seems too vague or unfair, then zoom back in and take a look at specific capabilities inside the company. Look at product management. Look at marketing. Look at HR. Can you keep a straight face while they talk their talk?

Put it this way: think of a company where you have observed "software is crap" in action, whether it's a software product they sell, some line-of-business software they rely on (managing their type of widget or service), or a horizontal product they use to deal with the world (CRM, accounting). Now think of their customer service, or marketing or another department. If you can't take their marketing seriously, or they preach "great customer service" while delivering something else (maybe of great value, but in any case accompanied by awful customer service), then as an organization they are fostering cognitive dissonance at a group level.

The ability to tolerate cognitive dissonance is necessary for most organizations in order to allow flexibility and to prevent entropy from creating constant crises. But more than the "magic amount" of such tolerance just translates to being too comfortable with corporate doublespeak and BS.

In any case, dissonance and software is a bad thing because the machines that run software are remarkably anal. Compilers are adamant about not tolerating dissonance in the scope of a specific application. You can slip some shenanigans in when you're working in a scripting environment, but eventually the gap will manifest somewhere. And when an "enterprise app" is made up of various subsystems and databases, organizational cognitive dissonance leads to "HAL Syndrome" -- the ailment of the computer in 2001 which eventually starts doing undesirable things because it has been secretly given two sets of logically incompatible inputs.

Basically, a necessary (not sufficient) condition for software not to suck is that the organization producing it must have the capability to know when it is being hypocritical or producing conflicting messages, to recognize there is a problem, and to set some priorities which can resolve organizational discontinuities. Without that self-reflective and real self-modifying capacity, the software output may sometimes work, but has no chance of being predictable, reliable, high quality.

Thursday, March 29, 2007

NetBeans 6 m8 and Ruby: Get Yours Now

As of yesterday, NetBeans 6.0 milestone release m8 is out.

Having known NetBeans since its very early days, I'm surprised and excited that it seems to be a serious come-from-behind competitor in the contest of world-class IDEs.

Besides the core Java IDE elements, the platform is being pushed in a bunch of ways. My favorite at the moment is the Ruby support (which works with JRuby or the regular Ruby interpreter). The team has feverishly added a bunch of debugging features over the last few weeks, and the latest Ruby-support builds snap in nicely with the m8 release. (Some of the recent IDE builds didn't snap together quite so well.)

Here's a quick screenshot showing debugging stopped at a breakpoint, with all the goodies you'd expect -- stack trace, local variables, etc.



Fun stuff! At least for IDE junkies like me.

I had been watching to see where Ruby IDEs would come out... the "free Ruby IDEs" are, well, no offense, but they're awful compared to stuff like Eclipse and Visual Studio. Arachno Ruby is better, but I didn't love it when I tried it.

I've been using a trial of SapphireSteel's Ruby In Steel, which is built on Visual Studio. Ruby In Steel is amazing, and I consider it the first really world-class Ruby IDE. But it requires VS2005 Standard edition or above, which means money and a Windows box, both of which I believe will be impediments to long-term success. The money thing is sad, because the SapphireSteel guys seem wicked smart, and it's a bummer that we're living in an age where they probably can't/won't get away with charging $200 or $300 per seat for this package. But with Eclipse, NetBeans, and Sun Java Studio Creator being free, and Microsoft's Empower program giving away the farm (a.k.a. MSDN) for under $400 to legitimate startups, it's rough.

This NetBeans add-on is at the head of the pack. It's a really nice piece of work and clearly (read the changelogs) there's a ton of energy going into it. Nice work!

Another ASP.NET Adaptive Rendering Tip

For the background to this adaptive rendering thread, you may want to refer to this previous post.

It turns out that in .net 2.0, the base "Mozilla" rendering profile sets the cookies capability to false. Why would it do this? Well, specific mozilla-derived browser profiles turn the setting back on, so it's not like asp.net thinks Firefox won't support cookies. Here's a clue, in the comments at the top of the mozilla.browser config file, showing some sample user agent strings that would match:

<!-- sample UA "Go.Web/1.1 (UP.Browser/3.1-UPG1 UP.Link/3.2; Mozilla/1.0; RIM957; Elaine/1.0 )" -->
<!-- sample UA "Mozilla/1.0[en]; Go.Web/1.1 (compatible; MSIE 2.0; HandHttp 1.1; Palm)" -->

Some old school mobile browsers were the targets here. Unfortunately, if users come to your site on, say, a Samsung Blackjack or another new device running WM5 and PIE, they end up in the same bucket. You can sniff this out by logging Request.UserAgent, Request.Browser.Browser, and Request.Browser.Type. I highly recommend doing this at least on one "entry" page for any mobile site. You may also want to log the computed capability for an attribute your app needs, like cookies (Request.Browser.Cookies).
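
Something as simple as this, dropped into Page_Load on the entry page, does the job (Trace is just one convenient sink -- use whatever logging you already have):

protected void Page_Load(object sender, EventArgs e)
{
    // Log how asp.net classified the visiting device, plus the computed
    // cookies capability, so misbucketed browsers show up in the logs
    System.Diagnostics.Trace.WriteLine(string.Format(
        "UA={0}; Browser={1}; Type={2}; Cookies={3}",
        Request.UserAgent,
        Request.Browser.Browser,
        Request.Browser.Type,
        Request.Browser.Cookies));
}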

Even though asp.net will automagically try to do cookieless sessions, I want to force cookie-based authentication on my mobile site so that users don't have to sign in every time they visit (those extra key presses are brutal on a phone). I know that more or less every mobile browser out there can support cookies now, so I'm not too worried about this decision.

The fix? Similar to the others, we'll just override this one attribute in our supplement.browser file:

<browser refID="Mozilla">
<capabilities>
<capability name="cookies" value="true" />
</capabilities>
</browser>


Another free bonus is that with cookies, asp.net doesn't have to do the URL mangling that is involved in cookieless techniques. So your users get URLs that make more sense in their history and work properly as bookmarks!

Yahoo Mail API Misses the Point

Within the last day, Yahoo has officially released the web service API for mail they demo'd at Hack Day last year. Earlier in the week, mailbox storage limits were removed. Everyone seems to be cheering, and talking about the obligatory Flickr mashup.

Before we get too drunk on champagne, let's take a look at the details. The API only allows retrieval of the mail message body for Yahoo Mail premium subscribers. This correlates to their policy of allowing POP/IMAP access only for premium subscribers too. They are missing the whole point of an open API here. There's only so much I can build if I know that Yahoo mail basic users (i.e., most people) will have degraded functionality.

The Yahoo motivation is, of course, to drive people to the web site to get their mail, where they can be exposed to ads, which keep Yahoo in business providing free mail. Ok, cool. But why not use this opportunity to explore some alternatives that keep the revenue coming, but allow Yahoo to really open the data service up, instead of having this silly $20 tax (i.e. "premium") on people too lazy to switch to a more open free service? (User inertia is a dangerous business model because it exhibits "tipping point behavior" -- one day, everyone's too lazy to switch and the next day all your best users are gone.)

So what are the more imaginative approaches Yahoo could have taken? Well, for one thing they could continue to insert text ads, or even image ads into mail sent or retrieved. They could specify terms of service that require another app (say a mail client or another web site) to display certain ad items when using Y! Mail content, or else risk losing developer key access.

They could innovate further by creating a loyalty program across Yahoo properties that informs whether a user gets certain service levels: for example, if I have lots of My Yahoo page views, or I click a lot of ads there, or I buy a ton of stuff from Yahoo Stores businesses, shouldn't that count for something? Certainly this sort of user is different from a freeloader who hypothetically never views a Yahoo web page, and just sucks his mail down into Outlook. How about if I have a full demographic profile in Yahoo, and I leave myself logged in with a persistent cookie, and I use Yahoo search a bunch ... thereby providing Yahoo a wealth of data that should raise the value of ads shown to me?

C'mon guys ... if you don't do it, someone else will.

Wednesday, March 28, 2007

Opera Mini as Abstraction Layer

Opera Mini is a free J2ME-based browser from the folks at Opera. This FAQ tells you how it works and this demo lets you try it. I've spent some time using it on my old Blackberry 7250 and it's a pretty impressive browser -- far more capable than the built-in browser on the Blackberry (although that's not saying a whole lot). How does it work its magic? The J2ME client handles user I/O and screen rendering (including a fabulous small antialiased font that is built into Opera); the page rendering, on the other hand, is offloaded to Opera's servers.

Since latency is the bane of even the fastest 3G networks, having extra proxying in the path makes the actual go-to-page browsing way slower than usual for a mobile. On the other hand, you have that warm feeling that practically any page, even moderately AJAX-y ones, will actually render attractively and usably when they do show up. Contrast this to the Blackberry browser experience, where the data starts coming down very quickly, but the device spends 30 seconds thinking about how to render it, and then ultimately produces something barely usable. Since subsequent navigations seem to perform better than initial ones, I suspect the Opera server is chasing forward links and preparing them while you read the first page. Not a bad plan.

I started wondering: Would Opera Mini serve well as a "base platform" abstraction layer for mobile web apps? If I code to O.M., and I can expect to be able to run unmodified on handsets all over the world, there's significant value in that.

What are the costs of such an approach? Hmm... let's see...

  • Like all abstraction layers (Win32/POSIX/.net/...), the end user has to get it up and running. Sounds easy, but ask anyone who's ever wanted to deploy a "lightweight" .net or Java app and assumed users would already have the runtime ready to go...

  • Opera has done a lot of heavy lifting, but what if they change their browser capabilities, and now there's another fragmented set of clients, just a layer up the stack?

  • Like all abstraction layers, there's a performance hit. In this case it's twofold: the memory and CPU usage of the Java runtime hits some devices pretty hard, and the proxy-through-Opera network access is painful even on EV-DO


In the end I think I'll take the weaselly way out for now. I wouldn't go all in, build an app, and say to the user "Opera Mini is a pre-requisite for this app. If you get it, you're golden; if not, no support for you!" I'd target a base set of browsers with some kind of adaptive rendering. But I might well make Opera Mini the first-tier tech support response for any native browser problems.

Saturday, March 17, 2007

The Eagle has Landed at 601 Townsend St.

I spent last night at Adobe's Apollo Camp. The Apollo Camp was about exposing a group of developers to the (almost) latest build of Apollo; introducing some partner apps and the insights and clever tricks these apps have already spawned; and letting us meet, chat, question, and occasionally argue with the guys who build Flash, Flex, and Apollo.

Since this is a commentary blog, I'll let the tech news blogs cover all the great content that was presented last night. Much of it is probably in the blogosphere by now; a lot more is due to be released by Adobe via Labs very soon.

Apollo is going to be an important and prominent platform. It's not perfect (Adobe certainly doesn't claim it is right now either), and the truth is it doesn't actually do all that much. But it is a kind of fabulous connective glue for the front end. Metaphorically it reminds me of OLE/COM/ActiveX in its desktop ambitions. It's not defined to be a desktop object bus per se, but it's way easier to use, and it's cross platform. It is also reminiscent of web services and RSS in the middleware realm, in that it provides a connective mechanism simple enough but expressive enough to throw the door open for all sorts of creative mixing and mashing and integrating fun.

It's interesting that we're at this juncture with Adobe/Macromedia poised to leap into the RIA lead with this tech. Credit is due to some long-range strategic thinking. For example, we were told that Flash player has for years now been designed as an express install vector for Apollo. And moving from AS2 to AS3 reminds me of the bold but necessary move from VB6 to VB.net that Microsoft made in '01.

More than a little is owed to luck, or lucky timing, as well: without the evolution of Agile and TDD, leading to effective development practices with dynamic languages (and popular day-to-day AJAX, Ruby, and Flash apps as a result), it would be hard to imagine a bright future for desktop apps built on ActionScript. But these things have come to pass, and AS3/Flash9/Flex/Apollo/Tamarin has shown up at just the right place, and just the right time.

Tuesday, March 13, 2007

Making Mobile Browsers Work Better with ASP.NET

ASP.NET is architected with a robust scheme for adaptive page rendering. This scheme allows "server controls" or metamarkup to be rendered appropriately to various browsers, and becomes particularly important when using the mobile web controls.

For an overview of how this works, take a look at ASP.NET Web Server Controls and Browser Capabilities (from MSDN/Visual Studio '05 docs)

There are also numerous ways of overriding this framework, from code (Page.ClientTarget), to XML config files, to defining new browsers or altering attributes of the browser configurations that ship with the framework.

Modification is critical for mobile web development, because the device data that ships with ASP.NET 2.0 doesn't include any of the wireless devices shipped in the past two years or so. (For a list of what is included, check out this list and walk down memory lane)

Since most users of mobile applications will be carrying a device built in the last two years, the built-in browser profiles will not cover current customer devices. The good news is that the browsers in these newer devices are more advanced than the earlier browsers, and make a credible attempt to process xhtml and javascript. So less divergence from a "basic desktop browser" profile is needed to address these clients.

Here is what I did to handle some problems with newer Motorola and Nokia devices:

If the framework cannot come up with a positive match for the User Agent string to either an individual browser definition or a family of browsers, it is bound to the "default" profile and identified by the name "Unknown".

All major desktop browsers are correctly identified, whereas many (maybe most) late-model mobile browsers are not identified. So if the browser is identified as "Unknown", we'll assume it's a new mobile device.

The framework will render for a base desktop browser, which works pretty well. But certain postbacks cause a processing error which can be circumvented by altering the "default" profile and adding the attribute "requiresPostRedirectionHandling" right into the default definition. Rather than editing the files that ship with ASP.NET (which may be impossible in a shared hosting environment anyway), the preferred approach is:

1. Add the special "App_Browsers" folder to your ASP.NET 2.0 app
2. Add a .browser file to that folder -- mine is called Supplement.browser
3. Add the attribute into an XML element that specifies this should modify the existing default definition. Here's how the whole file looks:

<browsers>
<browser refID="Default">
<capabilities>
<capability name="requiresPostRedirectionHandling" value="true" />
</capabilities>
</browser>
</browsers>

The refID attribute refers to the Default definition that you can find in [WINDIR]\Microsoft.NET\Framework\[VERSION]\CONFIG\Browsers

For my application, this fix got my app running great on the MOTO RAZR devices, where I had experienced a number of problems before.

4. I also wanted to target newer Nokia phones. They have a family of browsers which are identified even when the individual phone model is not. So I added this element inside the 'browsers' tag to fix the Nokias:

<browser refID="Nokia">
<capabilities>
<capability name="cookies" value="true" />
<capability name="preferredRenderingMime" value="application/xhtml+xml" />
<capability name="preferredRenderingType" value="xhtml-basic" />
<capability name="isColor" value="true" />
<capability name="requiresPostRedirectionHandling" value="true" />
</capabilities>
</browser>

As you can see, I noticed that these devices are xhtml capable, but the Nokia default profile was sending them WAP content. I also pointed out to ASP.NET that these devices take cookies, are all color, and handle redirects differently from desktops.

I'm sure I'll find more issues and more fixes, but these have made a big difference once I figured them out.

Incidentally, save yourself a lot of pain and do not rely on either Motorola's ADK emulator or Cingular's online (ActiveX) emulators to test web content on these devices. Many difficult bugs appeared with these emulators, but on the actual devices, everything worked great. Nokia's emulator/SDK on the other hand was pretty solid and very helpful.

Thursday, March 08, 2007

I Have a New 6-lb. Cellphone

I was traveling last week with a pretty decked-out Thinkpad T60p, and decided it would make a great smartphone. It's got a huge screen, a 512MB 400MHz ATI graphics card, 2 GB of RAM, happily runs a few VMs, development tools, databases, and DVD-quality movies all at the same time. It runs 5 hours on a charge while doing that, can idle in sleep mode for days, and stays on the network with integrated WiFi and EV-DO.

Last time I checked, smartphones don't do any of that stuff, and if you continually use them to access data, their batteries will not even last 5 hours.

So crank up Skype or Gizmo Project and it's a heck of a phone.

And yet ... at 6.5 lbs, and a foot across, I'll get some odd looks when I clip it to my belt.

Where am I going with all this? There are two things we need here:

First, a new BIOS-level powersave mode that lets me receive calls -- routed to a Bluetooth headset -- when the laptop is sleeping, maybe even with a satellite UI ... wait, someone had this idea, it's called Vista Sideshow. So I'm asking for a Vista Sideshow VoIP phone gadget.

Second, a more flexible way to link multiple telephony devices to a single phone line. I want my Sideshow PC phone and I want a $49 cellphone that I can carry around when I don't want to carry the PC -- and I want them on the same number. And I'd like to link my Blackberry onto that number too, so I have my choice of high productivity, high voice quality, or a compromise. I'll bring the hardware and the subscriptions. Wait! My second wish is already coming true. It's called GrandCentral.

Now let me get this all rigged up and I'll report back.

Monday, February 26, 2007

BitTorrent's paid movie rentals are just silly

Not the concept. But these offerings are so predictable, and so predictably bad, in their pricing that each one just adds another big delay before someone finally produces a service with real value for consumers, studios, content creators, and publishers/distributors.

Here are a couple of examples. First up, this Bittorrent service: from them, for $4, I get a 24-hour viewing period on a media file playable on some devices, with Windows Media Player.

Now, behind door #2, I have a video store where for a little over $3, I can get a physical disk that will play in far more devices, for a period of several days, most likely with better video and sound quality.

Maybe this demo (males 15-35) is too cool to go to a video store? I doubt it, but you also have Netflix. A conservative calculation (cycling 2 disks per week on a 3-at-a-time sub) yields about $2 per rental. Plus I can keep the disk as long as I like, and I have the luxury to not watch it in 24 hours if I’m busy. Although I do pay another convenience cost in that I am dealing with the Netflix queue, not an on-demand selection, this cost is essentially paid to Netflix; that is, it is a virtual subsidy of the Netflix operational model. The studios get no benefit from that at all, since the relative physical scarcity of disks is not in their model (they fix the original disk price and press as many as they can sell), but only enters into Netflix’ model (Netflix can only reasonably acquire, use, and then dispose of a modest number of disks for each film).

There are also on-demand films from satellite/cable. Cost: $4. Terms of use? Basically old-school TV-on-VHS rules: I can record the show to my TiVo and make suitable personal copies (e.g. with my DVD recorder), which last indefinitely and which I am allowed to watch whenever I want. Downside? NTSC-quality video.

Bittorrent is selling me a strictly inferior good at a higher price. Economics says they’re not going to succeed, and the studios will claim it’s because viewers are all crooked.

Not to pick on Bittorrent in particular – without doing a rundown of all of these work-alike online stores, let’s look at one more: Amazon Unbox made headlines for its onerous Terms of Service as well as its implausible pricing. Here are a couple of typical price matchups, all from amazon.com:

The Departed:
Unbox restricted download: $14.99;
actual DVD, widescreen with extras: $15.99;
BluRay high def 1080p disk $23.95

The Devil Wears Prada:
Unbox restricted download: $14.99;
DVD new $13.89;
DVD used ("very good condition") $8.76

Babel:
Download: $14.99;
DVD new: $14.24;
HDDVD or BluRay: $27.95

In some cases there are shipping charges, but there are numerous ways to avoid shipping charges on Amazon. For reference, iTunes new releases are around $12.99 and are also heavily restricted.

The point here is that not only do these ventures refuse to concede that more restrictions on media make it less valuable to the consumer, but they actually imagine that they are somehow innovating in a way that will let them charge more than the baseline cost (physical DVD/CD and accompanying rights are the baseline).

Nothing here suggests that the media should or must be "free" – only that Bittorrent president/cofounder Ashwin Navin and the studios are all yanking our chains when Navin says, "We're really hammering the studios to say, 'Go easy on this audience' ... We need to give them a price that feels like a good value relative to what they were getting for free."

Thursday, February 22, 2007

MSFT apologia

Ok, it's not a huge secret that I'm a Microsoft apologist (that is to say defender). Not that Microsoft hasn't made its share of mistakes and done some things wrong. But yesterday a friend, not a developer but a power user, lightheartedly referred to Bill et al. as "software bozos" and I felt obliged to point out a few things...

Microsoft produces great products under an unbelievable set of constraints. Customers want Microsoft stuff to work seamlessly on everything from cell phones to PCs to set-top boxes to web servers to XBox 360s; they want it to make sense to everyone from CEOs to doctors to my mom; they want it to be localized (support local language, culture, currency, calendar, phones) everywhere in the world, and to be accessible to the handicapped and to be secure even when an extremely unsophisticated user tries to do really dumb things.

They also want it to be inexpensive and to work on whatever cheap hardware you buy off the 'net and install it on (unlike, say, Apple, where the OS is only legal and supported on the hardware they're in the mood to offer this month); oh, and besides being a general-purpose operating system, customers like it that Windows is one of the most advanced 3D gaming platforms, competing with dedicated gaming consoles that cost just as much to build as a PC and need do nothing except play games...

Oh, and also, unlike pretty much any other OS I'm familiar with, customers (especially business customers) need it to be perpetually backward compatible, so that when they put a new Vista machine together today it'll still run line-of-business apps that were written for DOS 4.01 in the 80s, and somehow magically these old apps will print reports on the new color laser printers attached to the computer, that were never even dreamt of when the apps were written. And mostly this actually works.

Now let's say you live in America and you buy a new/upgrade copy of Windows every 4 years for about $200, and a new copy of Office for about $400. You're paying about $12.50 per month. And you get the security updates, and browser updates, media player, Virtual PC, development tools (if that's your thing) and all kinds of other stuff for free (or included in your $12.50 per month admission price if that's the way you want to think about it.)

I'm not sure there's anything else I pay $12.50/month for that even tries to think about solving problems on this kind of scale, let alone succeeds.

Lastly, someone will be tempted to point out that Microsoft's enormous presence in the client OS and office productivity space may inhibit all kinds of other software ecosystems from flourishing. There are a number of open questions about this. First, it is reasonable to believe that standardization at one level in a stack enables massive innovation at the next level up the stack, which would otherwise have been impractical. This goes for any platform piece -- Ethernet, Windows, *nix, Java, HTTP...

More importantly, do not assume for a minute that the open PC architecture would even exist without the dominating historical presence of Microsoft Windows. The fact that you can even sit down with an assembler and start hacking a boot image and work your way up to running literally whatever you want on a readily available PC has never been a given. Considering the attitudes of more closed OS and hardware makers in other ecosystems (like cell phones), it is entirely possible that without Microsoft and the need for backwards compatibility, just running code on a cheap mass-produced box would long ago have required signed code, a crypto key from some industry licensing group, and more cash for membership and fees than any small company is ever going to have.

Tuesday, February 20, 2007

Grepping in PowerShell

I originally wrote this for the company wiki the other day, and thought it might be useful to a wider audience. The context is parsing and processing an iTunes library.xml file (just a one-off task), which I thought might be a fun and educational opportunity to slice, dice, and ... how does that Ron Popeil commercial go? ... with PowerShell.

PowerShell is the new shell for Windows. New, and supported, but not "official" in the sense that it doesn't ship with Vista, although I'm guessing it will ship in the Longhorn Server rev.

If you're used to Unix shells, then you'll probably be floored by the power of PowerShell and somewhat annoyed by the syntax, which, despite liberal aliases to familiar things like ls, takes some getting used to.

.net framework integration means you can easily access any object in the .net base class library, and there are some special tricks that do some of this for you too. The canonical example seems to be this one, a quickie rss reader:


# download the feed and load it into an XmlDocument via the [xml] type accelerator
$wc = new-object System.Net.WebClient
$rssdata = [xml]$wc.DownloadString('http://foo.bar/rss.xml')
# element names surface as properties, so you can walk the feed directly
write-host $rssdata.rss.channel.title
$rssdata.rss.channel.item | foreach { write-host $_.title }


Since the source file is xml, I had thought the XML parsing would come in handy, but it turned out that there was no real data model to the XML. Basically, there is just a big nested map structure (key-value pairs in blocks) in the item list. Sort of XML for the "takes-void*-returns-void*" crowd. So then grep looked promising because the keys and values (and their tags) were grouped on individual lines.
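For reference, each entry in the library looks roughly like this (an illustrative fragment, not copied from a real library):


<key>Artist</key><string>Some Band</string>
<key>Album</key><string>Some Album</string>


A value means nothing except in relation to the <key> element preceding it on the same line -- which, conveniently, is also what makes the file greppable. (The 13-character skip in the regex below is exactly the length of "/key><string>".)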

Grepping is a little counterintuitive with PowerShell because the pipeline between commandlets is filled with full-on objects, not strings. If you just want text, you can use Get-Content, which provides its output as a bunch of string objects, one per line, which is convenient. Here's an example I came up with after struggling a bit to get grep-type functionality. I threw a sort and unique on here for fun:


Get-Content Library.xml | ForEach-Object { if ($_ -match [regex]"(?<=Artist\<.{13}).*(?=\<\/)" ) { $matches[0] }} | Sort-Object | Get-Unique | Out-File lib.txt


Many of these things can be abbreviated too, so if you want your script to read a little tighter, you can use


gc Library.xml | % { if ($_ -match [regex]"(?<=Artist\<.{13}).*(?=\<\/)") { $matches[0] }} | sort | gu


Isn't that sweet?

Since the regex uses zero-width lookahead and lookbehind assertions instead of extracting a marked subexpression, I'm curious if anyone has input on whether one approach is faster / better / shinier than the other.

My first guess is that they are similar, since my first cut at implementing lookahead + lookbehind would probably be to match the whole outer expression while naming the non-zero-width bit in the middle, and assigning that group's value to the match.
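For anyone who wants to race them, here's a minimal C# sketch of the two styles side by side (the input line and class name are mine; PowerShell's -match uses the same .net regex engine, so results should carry over):


using System;
using System.Text.RegularExpressions;

class RegexStyles
{
    static void Main()
    {
        // one iTunes-library-style line
        string line = "<key>Artist</key><string>Some Band</string>";

        // style 1: zero-width lookbehind/lookahead; the match itself is the value
        Match m1 = Regex.Match(line, @"(?<=Artist\<.{13}).*(?=\<\/)");
        Console.WriteLine(m1.Value);

        // style 2: match the whole span and extract a named capture group
        Match m2 = Regex.Match(line, @"Artist\<.{13}(?<value>.*)\<\/");
        Console.WriteLine(m2.Groups["value"].Value);
    }
}


Both print "Some Band"; timing them over a full library file would settle the faster/better/shinier question.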

Monday, February 12, 2007

Yipes it's Y! Pipes

Super cool: there is no reason that a human should need to handwrite HTTP/XML/mashup/filtering logic for simple cases. Even with the highest-level toolkit, it still requires time, introduces bugs, needs to be hosted...

Systems like this are about moving toward a declarative specification for extracting semantics from web services (in this case RSS).

This particular implementation is a bit fancy on the graphics, which makes it run slowly, and it seems limited to extracting data from RSS. That is, if you try it out, it expects every URL "fetch" result to look like an RSS-formatted collection of "somethings" ... which is nice, but it would be cool if you could also process XML from REST queries, or build SOAP queries as well. My first inclination was to ask for some kind of RegEx widget, but perhaps the Y! Pipes team intentionally doesn't want us going down that route ... over time they want more structure, not less, in the data. They probably feel like RegEx has already been done in the HTML scraping world, although there is certainly lots more work to do there.

If you are interested in this stuff, check out some other approaches and flavors of this notion too:

- Dapper, which tries to build web services on top of any web page as a data source. These guys have a "virtual browser" which lets you point and click your way through existing pages to build a service

- Kapow and OpenKapow -- enterprise and "free online" design tools for scraping, mixing, mashing and republishing the web

- QL2, an "old-school" enterprise software product used for industrial-strength scraping; it implements a query language so that you can treat the web data sources being used as a virtual database (!) (frighteningly enough for an "unstructured data" query tool, this system is used in some large mission-critical apps)

- YubNub: this souped-up version of wget lets you define "commands" (aka abbreviations) for issuing web queries, can substitute parameters, and pipe things together. It's arbitrarily extensible since you can always write a servlet/ashx/&c. to provide any data access or transformation you might want. On the other hand, it's more about plaintext (or human-readable anyway) than XML

Wednesday, January 31, 2007

A Little Less Code, A Little More Action

I've been working on a side project, with a very web 2.0 flavor, partly serious (I really want to use this product, so I'm building it myself) and partly tongue-in-cheek (includes many free cliches, from the GEN-U-INE web-2.0-logo-generator masthead to a name that ends in -r).

In trying to maximize my productivitah and agilitah, I've been forcing myself to write absolutely as little code as possible, and to lean heavily on framework pieces that let me get it running now, and refine/refactor/redo later:

1. ASP.net 2.0 -- the built-in support for users, roles, master pages, data binding to arbitrary objects, integrating SOAP web services, and mobile web pages is both well documented and fantastic. Been around for a few years, hardly news. Well understood, and it can scale like a mofo if I'm ever so lucky.
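To give a flavor of how far the markup alone goes, here's a sketch using the stock login controls (the controls are real ASP.net 2.0 controls; the surrounding page is hypothetical):


<asp:LoginView ID="LoginView1" runat="server">
   <AnonymousTemplate>
      <asp:Login ID="Login1" runat="server"/>
   </AnonymousTemplate>
   <LoggedInTemplate>
      Welcome back, <asp:LoginName ID="LoginName1" runat="server"/>!
   </LoggedInTemplate>
</asp:LoginView>


Dropped onto a page, that renders a working login form (or a greeting, once signed in) against the built-in membership provider, with no hand-written authentication code.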

2. ASP.net AJAX and the AJAX Control Toolkit -- these are newer and totally rock out. In my book, there are only two long-term high-productivity high-sustainability conceptual approaches to AJAX. One is the ASP.net AJAX approach (there are also libraries on other platforms that use this same method), where simple declarative markup causes automatic generation of relevant client side code, server side endpoints, object marshalling, etc.

This allows really neat tricks like the following: start with an asp:calendar tag for a calendar control. Bind any sort of logic you want to its ASP.net post-back driven events on the server -- as simple as changing the UI or as sophisticated as booking a reservation on the selected date. Now just wrap the tag inside an asp:updatepanel like this

<asp:UpdatePanel ID="UpdatePanel1" runat="server">
   <ContentTemplate>
      <asp:Calendar ID="Calendar1" runat="server"/>
   </ContentTemplate>
</asp:UpdatePanel>

...and now your post-back is done, server code run, and results rendered all in an AJAX call.
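For concreteness, here's the sort of code-behind that might sit behind that calendar (a sketch -- the handler body and ReservationLabel are mine, and the label would also live inside the UpdatePanel):


protected void Calendar1_SelectionChanged(object sender, EventArgs e)
{
    // runs server-side during the async post-back; the AJAX plumbing
    // re-renders only the UpdatePanel's contents on the client
    ReservationLabel.Text = "Booked for " +
        Calendar1.SelectedDate.ToShortDateString();
}


Wire it up with OnSelectionChanged="Calendar1_SelectionChanged" on the asp:Calendar tag and you're done.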

The other sane approach is the Google Web Toolkit / Script# approach, of writing the client app in Java or C# with static typing, interactive debugging, refactoring, etc., and then compiling to Javascript as a build step. Of course I did succumb to one use of the "unsustainable" approach -- hand-coding a particular AJAX feature I wanted -- but I think that most typical AJAX cases can be handled by one of the above structured approaches.

3. SubSonic, an ActiveRecord implementation for .net -- I'm a big fan of object-oriented design, which usually means a heavier OR/M layer mapping to a substantially distinct RDB schema. But for a simple project with a tiny, straightforward domain model, I wrote the schema and went with ActiveRecord instead. I am extremely impressed with SubSonic. It does just what you want, the code is easy to debug in the rare case something blows up, and poof! an automagic simple data-access layer with beautiful generated classes, and no code.
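If you haven't seen the ActiveRecord pattern, the experience is roughly this. Below is a hand-rolled, in-memory stand-in just to show the shape of it -- all names invented, and this is not SubSonic's actual generated code, which comes from your schema and talks to the real database:


using System;
using System.Collections.Generic;

// stand-in for a generated class: table -> class, columns -> properties
class Restaurant
{
    private static readonly Dictionary<int, Restaurant> table = new Dictionary<int, Restaurant>();
    private static int nextId = 1;

    private int id;
    private string name;

    public int RestaurantID { get { return id; } }
    public string Name { get { return name; } set { name = value; } }

    public void Save()
    {
        if (id == 0) id = nextId++;   // new record: INSERT
        table[id] = this;             // existing record: UPDATE
    }

    public static Restaurant FetchByID(int id) { return table[id]; }
}

class Demo
{
    static void Main()
    {
        Restaurant r = new Restaurant();
        r.Name = "The Corner Taqueria";
        r.Save();
        Console.WriteLine(Restaurant.FetchByID(r.RestaurantID).Name);
    }
}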

Using these three framework pieces, I've managed to put together a site with users, roles, profiles, a dozen pages, a half-dozen mobile pages with automatic device-specific rendering, data access for the actual domain objects, mashing up a couple of external services, AJAX where appropriate ... in probably less than 250 lines of hand coded C#, and less than one man-week of effort.

Time (and some free analytics data) will tell whether it's useful to anyone besides myself, and whether I should expand it with more sophisticated functionality. But of course the beautiful thing (and yet another web 2.0 cliche) is that with this little effort to build, and a few bucks a month to host ... it's a fun time, and who cares if no one else wants to use it?

Thursday, January 18, 2007

Moving On but not Skipping Out

In the interest of full disclosure, yesterday was my last official day at Skip Interaction. I'll be moving on to new projects which I hope to discuss here soon.

Most likely I will continue to support Skip's efforts in one fashion or another as the company seeks to move up to the next level of larger distribution channels, additional funding, and the fun stuff: more one-click travel functionality.

Wednesday, January 17, 2007

A Computer that You Can Program: How Novel

In a big-box toy store I noticed what might be called kids' computers. These are laptop form-factor devices made for kids that include age-appropriate pseudo-educational games.

These devices are sophisticated, with full keyboards, card slots, USB connectivity, mice, touch screens and touch pads in some cases... and benefiting from overseas production in mass quantities, they range in price from around $30 to $90. This page, if you scroll all the way to the bottom and look at the last three rows, shows the devices I'm talking about.

I'm standing there thinking, "Sweet!" ... How cool would it have been to have one of these to work on when I was a kid, programming a Color Computer 2 that cost over $500 inflation-adjusted (about $250 at the time). I learned more from programming the CoCo than from any software I could have run on it. And I believe that any child will learn more from creating with a computer than from some "shape drill" or "math drill" software. Just as I give my son blocks and Lego bricks to build with, I wondered if these inexpensive laptop wonder toys had a code mode, where the child could write a program, in Logo, BASIC, Squeak or anything else.

So far as I can tell, after reading through the manual (published online) for VTech's top-of-the-line Color Blast Notebook, there is no opportunity for programming this device. The manual for the Touch Tablet notebook (which supports PC connectivity) also shows a huge list of interesting built-in programs and utilities (including a whole category of "math and logic"), but no programming language.

With Logo and Squeak, programming can be as fun and easy as using Lego blocks. Making kids' computers impenetrable objects featuring only software that is published to them, rather than created by them, makes about as much sense as teaching kids about shapes but never giving them a crayon and letting them draw.

Tuesday, January 16, 2007

P#: Yet Another Reason I Love .net and the CLR

When Microsoft said that the CLR was designed to support many languages, each right for different tasks, I don't think they merely meant fundamentally similar languages like VB and C# (the same modulo syntax).

What's really cool is being able to cruise along in C# ... and then let Prolog go to work on your logic resolution with this Prolog implementation.

Here's a partial list of .net languages from Ada to Zonnon.

Sunday, January 14, 2007

Bruce Tate's Gettysburg Address

Bruce Tate is a consultant probably best known for working on Java, and for publishing the book Beyond Java.

I recently read Bruce's more recent From Java to Ruby, while standing in the tech section of the San Rafael, California Borders store. Sorry, Bruce, that you won't get the royalties for this, but I just couldn't put it down.

This book has a lot of great content (not that I don't have a small bone to pick here or there ... if you really think .net is still Java's separated-at-birth-twin in 2006, you need to spend more time using it). But what really struck me was the elegance, brevity, comprehensiveness, and precision of the book. A few more writers like Bruce, and we could happily live without about 90% of the IT press.

In precise, minimalist language Bruce systematically looks at where Java and Ruby came from; where they each excel; where they fall short; how they compare to their predecessors and contemporaries in various dimensions -- real, no-nonsense dimensions that enterprise architects think about; and how to get started with Ruby (if you want to), with a fabulous insight into the organizational dynamics that would foster different kinds of Ruby trials.

You get the immediate and lasting impression that this is a guy who really has a valuable perspective on the long-term evolution of software engineering practice, from the Mythical Man-Month through today's productivity push with things like RoR, and on into the future.

Friday, January 12, 2007

DIV is the new TABLE

Ok, I'm not a web designer by any stretch of the imagination, let alone an expert in building layouts using CSS. On the other hand, I've been using HTML and its offshoots since before it was the standard for the WWW. So I'm puzzled that designers spend pages debating the best way to get, say, a 3-column layout to work well in various browsers using CSS, and that there isn't a straightforward declarative way to do it.

I used to do "desktop publishing" (remember that term?) with a Mac SE, 1 MB of RAM and PageMaker. PageMaker had no problem working with all sorts of page layouts: first creating columns and then adding blocks that lived in, around, across, or through those columns, with and without "flow"... on more or less any size piece of paper ... and it could do WYSIWYG on the Mac SE's tiny monochrome screen while delivering mathematically precise PostScript for prepress work. A different but equally effective view of the world was that of QuarkXPress, where content blocks asserted their own column layout, rather than just snapping to column guides on a page. So it's really not such a hard problem in 2007.

There is a CSS3 draft published over a year ago on multi-column layout. The next step in that process appears to be another draft. I have to believe that someone is overthinking the problem. Start with column count and gutter, all uniform, and go from there. Then go back and add uneven columns, blocks that span multiple columns, flow of content between elements.
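To be fair, the draft's core really is that simple -- something like the following (per the CSS3 multi-column draft; browser support in 2007 is spotty to nonexistent, so treat it as a sketch):


#content {
   column-count: 3;   /* three uniform columns */
   column-gap: 1em;   /* with a 1em gutter */
}


That is exactly the "first things first" declaration I'm asking for; everything fancier can layer on later.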

Never mind that CSS allows (or could allow) you to flip every switch on every element in a page -- good design should let you do first things first. And the first thing for many page templates is a header-multicolumn-footer layout. CSS without core layout declarations is like a sophisticated office phone where you can't find the number pad.

Defenders of the CSS process may blame the browsers, and there is blame to go around on some well-known bugs. But an overly complex (dare I say overengineered?) spec for how CSS must work doesn't make it easy for browser makers to comply. Instead, it gives them cover. Meanwhile Microsoft and Adobe are edging in on "standard XHTML/CSS" with extremely appealing flow document alternatives. I love Flash, WPF, and WPF/E. But I'd hate to see proprietary runtimes become the only way to deliver top quality design experiences on the web.

Wednesday, December 13, 2006

Obligatory Windows Vista Post (™), Part 2

Improved high-density display support is a big part of Vista. But what about those of us with low-DPI displays?

Ok, is that supposed to be a joke? Who would have a low resolution display?

Anyone who has an LCD panel larger than 17" that runs at a native resolution of 1280x1024. Which is to say, anyone who has bought a nice desktop 18" or 19" panel in the last couple of years (excluding some widescreens, and the 19" panels at 1600x1200). The Vista "baseline" resolution is 96 DPI, which turns out to be just right for typical 17" flat panels (at 1280x1024). On these displays, UI elements which are sensitive to DPI render beautifully -- in particular, the ClearType font-smoothing technology, which is used widely in Vista, even for legacy apps.

That same ClearType logic on 18" panels (87 DPI or thereabouts) or larger produces text so blurry it's distinctly uncomfortable to read.
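For anyone checking the math, DPI here is just diagonal pixels over diagonal inches; a quick sketch (the panel sizes are examples):


using System;

class DpiCalc
{
    static double Dpi(int w, int h, double inches)
    {
        // diagonal pixels / diagonal inches
        return Math.Sqrt((double)w * w + (double)h * h) / inches;
    }

    static void Main()
    {
        Console.WriteLine(Dpi(1280, 1024, 17.0));   // ~96: the Vista sweet spot
        Console.WriteLine(Dpi(1280, 1024, 19.0));   // ~86: same pixels, blurrier ClearType
    }
}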

The display customization box where a user can specify the DPI of the display has another complementary quirk: it allows you to choose higher-than-standard pixel densities, but not lower ones. If one types in a lower pixel density (as a percentage), it "snaps" back to the default setting. Moreover, if you have two monitors with two different pixel densities (I can't be the only one in the world in this situation), there's no way to specify that. It may be the case that a video card can supply this additional option, but it's not available in the latest nVidia drivers. And what if you have two different video cards (so that a single driver is not managing both of them)? The latter possibility suggests that the OS needs to provide this option...

Any ideas out there? Am I missing the "big red button" somewhere that fixes this? (By fix, I mean offer a place to enter the precise DPI of each display, so that the OS can inform apps, and tune its own services [ClearType] accordingly.)

Finally, lest anyone wonder why a large lower-res display (such as 19" monitor at 1280x1024 native resolution) would ever be desirable:
  1. Viewing HD media content from a distance. Neither a smaller monitor nor more pixels on a bigger monitor is a significant help here.

  2. Gaming. Running a game at 1280x1024 on a larger screen is a great way to get a rockin' immersive experience without the substantial additional system load that would be required to render 1600x1200 (or widescreen) frames.

Anyway, I'm looking for answers. In the meantime I think I'll downgrade to a pair of matching 17" displays for regular Windows work.

Obligatory Windows Vista Post (™), Part 1

Ok, what's to say about Vista RTM amid the flood of coverage...

I installed Vista on my wife's laptop to do some usability testing. She is a relatively strong "typical Windows user", but not a "power user." Uses Office, the file system, corrects redeye in photos, knows what not to click on on the web; has no idea how to add user accounts, update a driver, or monkey with the control panel; doesn't know why anyone would want to do it.

As a geek, the parts of Vista I love are the parts not immediately visible. Mainly, the out-of-the-box support for .net 3.0, especially WPF and XPS. I'm always asking, "Have you seen the New York Times Reader?" I'm still mourning the loss of WinFS, and the Vista shell experience struck me as just-another-graphical-shell, with a couple of new things to see/find/learn/adjust to. My wife, on the other hand, had no problem with any of the changes (e.g. in the filesystem Explorer windows). She loves Vista. She said she found it way easier to use than XP: more intuitive and more productive (not to mention more visually attractive).

My point here is not to make a commercial for Windows Vista; this experiment compares Vista only to XP, and says nothing of how, say, OS X would stack up. Instead, what I really learned was how differently usability design / engineering / testing in Vista comes across to a "normal" user than to a geek. Every time I "discover" this usability fact, I am surprised anew, even though I shouldn't be.

I'll never stop being amazed by (1) how completely wrong we geeks can get it when we try to talk about what is usable and appealing to average tech consumer personae; (2) how brilliant the great usability engineers and designers are who can get some of this stuff right up front; and (3) how much of a mistake it is, when developing technology products, perpetually to defer serious usability study.

Thursday, November 30, 2006

Mauve has the Most RAM

Here's a recent post on the 37signals Job Board. I've seen a number of these postings lately, and I wonder: why would you specify a particular technology or platform when you haven't prototyped (let alone built out) a product yet?

When I've posted ads for jobs at Skip, notwithstanding our existing platform and the advantages of extensive experience in the relevant technologies, I've always emphasized that talent and attitude count way more than a particular language or platform. Experience with core CS concepts and perennial software development patterns/anti-patterns comes next, but is still more important than a platform or framework. Because anyone really talented knows that just as "Java is the new Cobol," .net will someday be the new MFC, and RoR will be the new ... OK, you get the point. And anyone with the proper foundation, inclined to ask the right questions, can learn new frameworks -- even new practices/methodologies (like an agile approach) -- and put them to use.

So why are Web 2.0 founders performing what I would call an extreme and misguided attempt at premature optimization?

First guesses:
  • It's boom time again in the Silly Valley, they've read a Business Week article that talks about platform X, and decided the way to become the next Google is find some X hackers
  • They don't understand the differences between the open source options, but like the idea of open source, so they've picked one... that is, they really mean "I believe open source affords my startup some opportunities or advantages, and I need an architect who is an expert in the relevant options"
  • They asked their geekiest / most successful techie friend, who said, "Just go with X nowadays, don't worry about [insert important but subtle tradeoff here]"

I have another idea: the entrepreneurs have gone a bit behind the hype to get a feel for the practices or philosophy associated with the best-known practitioners on that platform. They then imagine that the best way to find folks who subscribe to that philosophy is to decide on the platform, then hire for the platform.

So when someone insists on LAMP, for no particular reason, maybe they really mean they are hoping to engage an engineer or team that embraces agile practices, or at least agile concepts. When the entrepreneur has his mind set on Ruby, he really means "the Ruby way" (least surprise, don't use 200 lines where you could use 10, etc.). If he or she says Ruby on Rails, it probably means get-it-up-quickly, include-some-AJAX, and ActiveRecord-will-do-it-for-now.

These folks would do well to distill their methodology desires out of premature platform commitments that cascade into hiring filters. After all, with hardware virtualization and bytecode interpreters prevalent (and more coming), and web services as the de facto server-side object bus, we can do ActiveRecord on .net on Mono on Linux, or Java interoperating with Ruby on OS X, or agile development with IronPython on IIS... There is less reason than ever to close off options before any code has even been written.

Saturday, November 25, 2006

Travel Right with Skip 1.2 for Blackberry

It's been about a year since the first version of the Skip client became available for testing. Targeting connected business travelers, and inspired by the interface metaphor of classic killer Blackberry apps like email, the Skip Blackberry client was heavy on text and light on visual sophistication.

As Blackberry users migrated to the high-density color devices like the 87xx and now the Pearl, Skip started looking a lot like opening Notepad on Windows Vista. But the 1.0 release suffered from the curse of being good enough. No major bug was ever reported for the Blackberry client, and the biggest usability problem by far turned out to be folks having trouble keying in their id and password on the device (not just SureType -- even QWERTY folks have had a lot of trouble, strangely enough).

Meanwhile, 2006 turned out to be a busy year for Skip's 1.5-FTE engineering division: to get the ball rolling we needed to deliver the core Skip server app, minimal web UI, travel industry integration, Java phone client (twice), Palm/Treo client (twice), Windows Mobile (it's in test, so try it now with this OTA install; additional bits you might need can be installed OTA from here and here)... and the Blackberry client never got a proper polishing because it was just good enough never to reach the top of the "urgent" list.

So I'm glad to finally offer this substantially touched up Blackberry client to all of our loyal early adopters. Most of the changes are small bug fixes, ergonomics improvements, and UI features which will be self-explanatory to readers of this blog. The only non-obvious change is a power-user mechanism that lets you use the same client to work with a production (www.goskip.com) or test system (peridot.goskip.com) account: in the username field, simply prepend "test!" to your email address (e.g. instead of foo@bar.com, test!foo@bar.com), and the client will operate with the test system.
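Under the hood this is nothing exotic; presumably something like the following runs at sign-on (a guess at the shape of it, not the actual Skip client code):


class ServerRouting
{
    // sketch: pick production vs. test based on the optional "test!" prefix
    static string PickServer(ref string username)
    {
        if (username.StartsWith("test!"))
        {
            username = username.Substring("test!".Length);   // strip the prefix
            return "peridot.goskip.com";                     // test system
        }
        return "www.goskip.com";                             // production
    }
}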

You may notice some new UI features (like the icons) will only appear via the test server for now, as we'll be doing some additional testing before moving the server code changes into production. ETA is probably a week. Also, the first time the client syncs, it will load up the icons into its local cache, so that first sync may take a little longer than usual, especially over EDGE.

Let me know what you think, and thanks for your patience this, er, um, whole year.

Thursday, November 09, 2006

Security Questions Considered Harmful

I went to sign on to the Citibank site today, and before I could complete the sign-on, I was presented with the following required step:


This whole approach is just so darned awful.

This data isn't secret. Some is public record, like where I was born. Other things, like my favorite pet's name, are not pieces of data one normally protects. It's much easier to amass a collection of unprotected facts, and use those to pose as someone, than to compromise an actual password, an encryption scheme, etc. Moreover, most of these questions are used by many sites for the same purpose. So if I know someone's nickname, street address growing up, city of birth, mother's maiden name -- all readily available in the U.S. -- I'm that person on a lot of websites.

Personally, I use Mr. Schneier's approach of typing in random gibberish, thereby protecting myself at the cost of some convenience if I do lose my password.

While I'm on the topic -- and I'm sure someone has written authoritatively on it before, but nevertheless ... -- I am surprised how rarely people realize that their unprotected email account is the weakest link for all of their "secure" online activity.

Many sites (including Skip) email a new password to a user upon request. The gmail account that they stay logged in to, or log in to from a questionable computer in a hotel lobby, thinking, "My life is so boring, hey if someone wants to read my gmail, let them" ... or a business email account accessed on the road in the clear ... these become the easy way to get passwords.

There's nothing inherently wrong with using email to do a password reset (reset -- meaning only a temporary password is sent via email, and the user must then change it to a more protected one). Folks just need to realize that the email account -- if it is registered with sites that send passwords -- needs to be protected. Complex password, routine changes, all that...
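Implementation footnote: the temporary credential should be unguessable and single-use. A minimal .net sketch -- the class is mine, and a real system would store only a hash of the token plus an expiry:


using System;
using System.Security.Cryptography;

class ResetTokens
{
    // generate a cryptographically random one-time token to email to the user
    static string NewToken()
    {
        byte[] bytes = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(bytes);
        return Convert.ToBase64String(bytes);
    }

    static void Main()
    {
        Console.WriteLine(NewToken());
    }
}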

Wednesday, November 08, 2006

The Transactional Web

Ok, maybe a double-Z-list blog like this one ought not to start coining phrases.

On the other hand, I started thinking a little about the taxonomy of the (programmable | read-write | 2.0) web.

Three categories came to mind right away:
  • Classic mashups take two or more sources comprising different kinds of data, and combine them in a new UI or tool. This group includes things like HousingMaps (housing from craigslist + Google Maps), or BroadwayZone, which pulls in show info, hotels, transportation, etc., and uses Google Maps to provide a substrate and a UI for working with all of this data.

  • Agent apps take multiple sources of related data, and bring them together to perform an action on behalf of a user. Some are goal-seeking agents, which search a wide space and narrow it down to something tractable based on a heuristic like minimizing a price. Kayak, which interacts with a huge number of travel websites and vendors, and Ugenie, which searches e-commerce sites, are agent apps.

  • Proxy apps take one or more sources of data and bring it into a new context. 411Sync's Kayak queries use Kayak's own API to get raw data that can be formatted for the SMS service. This category also includes RSS (client) widgets, and mobile smart clients like Abidia (mobile auction), Mobio (movies) and, yes, Skip.

Being at Skip, it's probably no surprise that I find the last category the most interesting personally. This category of apps is spawning what I've started calling the transactional web -- a flavor of mashed-up app that makes real commits against external services. Not all proxy apps are involved in the transactional web; it's just that mobile proxy apps are where a lot of the transactional excitement is right now. Unlike the other categories above, these apps are doing things like buying movie tickets out of a finite inventory, checking you in to a flight, bidding in an auction.

Other kinds of apps can be "transaction mashup apps" of course -- I can imagine a travel web site that uses airline seat change APIs together with, say, SeatGuru, to automatically get your family the combination of seats you want to sit in.

But to do that, and all sorts of other amazing things, more "transactional APIs" need to be opened up beyond the B2B world they're trapped in now. Companies -- and old-economy companies in particular -- have been hesitant to open up these services as transactional APIs to the general public. There are some financial and security concerns, but they are not insurmountable, as PayPal's successful payment API has shown.

What we need now is for an airline like United to offer via API the same things it has already put onto its web site (check in, cancel, standby, upgrade, seat change, flight change, status). Each web check-in helps the airline and saves it money; the service should be published as widely as possible.
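The surface area wouldn't need to be exotic. Here's an entirely hypothetical sketch of the kind of contract I mean (nothing like this exists publicly today; every name is invented):


using System;

// a transactional contract an airline could publish as a web service
interface IFlightTransactions
{
    // read-only, same data as the web site
    string GetFlightStatus(string flightNumber, DateTime date);

    // transactional: each call commits a real change
    string CheckIn(string confirmationCode, string lastName);          // returns a boarding pass id
    bool ChangeSeat(string confirmationCode, string newSeat);          // commits against the live seat map
    bool RequestStandby(string confirmationCode, string flightNumber);
}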

OpenTable blazed an early path with web-service-standards-as-B2B-infrastructure. But, hey, there are some great apps in the heads of people outside their small group of strategic partners.

Web 2.0 has shown, if nothing else, that there are more good ideas out there for what can be done with a data set than there are in here. And the value of the data often increases the more freely it can be used by other applications in unexpected ways. I'm suggesting that the notion of a web-service-accessible data set be expanded to include the real-time seat map of an aircraft, the reservation book at a restaurant, the transaction history and current status of a Visa account, a doctor's appointment schedule ...

All the hard work for this stuff has already been done. When this switch finally gets flipped on, you're gonna see some real fireworks.