Thursday, July 24, 2008

No, CherryPal Will Not be the First (or any) Mass-Market Cloud Computer

Notwithstanding this predictable VentureBeat article about a thin-client device with a somewhat anatomically suggestive logo.

I've previously written about why thin-client is a not-now and not-soon solution.

The $249 "CherryPal" box reminded me of another twist on the problem ... one that comes into play when the thin client isn't just software (like Skyfire's screen-scraping browser) but a new hardware device.

See, here's the thing about building your own new hardware in smallish quantities: it's really expensive on a per-unit basis. Or, put another way, you can't offer a fraction of the capability per dollar that a Dell or Sony can. Your $249 thingamabob is going up against other $249 devices that have way more stuff (e.g., entire laptops) because they are produced in high-volume orders of established designs/modules/parts.

To be more precise, there is a spreadsheet you can put together that maps your bill of materials, plus any special physical design issues, into a cost per fabricated unit. It includes multipliers that will make you sad: your $2 can't-live-without-it chip might end up adding $20 to the finished product cost, depending on where it fits in, how it affects other parts, order quantities, and other stuff.

It's tempting, though, especially since the last 5-10 years have brought the ability to fabricate in China at lower cost and in much smaller quantities than would have been practical before. (Of course the costs aren't really lower, they're just externalized into a bunch of other areas ... but those are politics/economics/policy areas more than tech, so I'll leave them be for now.)

So, whereas before you might have needed to sell 700,000 units to break even on something, now 20,000 units will do it: increased temptation.
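
To make the multiplier-and-break-even math concrete, here's a toy version of that spreadsheet as JavaScript. Every figure in it is invented for illustration; real models have many more rows and uglier multipliers.

    // Toy BOM-to-break-even model. All numbers are hypothetical.
    function landedUnitCost(bom, assembly) {
      // Each part carries a multiplier covering yield loss, testing,
      // handling, and the redesign ripple it causes in neighboring parts.
      var total = assembly;
      for (var i = 0; i < bom.length; i++) {
        total += bom[i].cost * bom[i].multiplier;
      }
      return total;
    }

    var bom = [
      { name: "CPU module",                 cost: 18, multiplier: 1.5 },
      { name: "can't-live-without-it chip", cost: 2,  multiplier: 10  }, // the sad $2 -> $20 case
      { name: "everything else",            cost: 40, multiplier: 1.3 }
    ];

    var cost      = landedUnitCost(bom, 12);  // $111 per unit with these figures
    var netPrice  = 249 * 0.6;                // what's left after the channel's cut
    var nre       = 770000;                   // tooling, certifications, design spins
    var breakEven = Math.ceil(nre / (netPrice - cost));
    // ~20,000 units with these made-up numbers; crank the NRE and minimum
    // order sizes up to where they used to be and you get the old 700,000.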

But if you're CherryPal, you still need to convince someone that a $249 thin-client is more useful than a $299 laptop with 10x the horsepower, a screen, storage, and all the rest.

Wednesday, July 23, 2008

Free Idea: AJAX Desktop Publishing App

If you're hankering to join the web startup circus and have everything but an idea, here's one for you: a full-on desktop publishing app ... online. The ready acquisition market for decent online productivity apps can serve as inspiration, even as the insane complexity of the undertaking urges you on toward greater heights of caffeination and maybe some anti-depressants.

Seriously, apps that can deal with complex page layouts and can produce a variety of press-ready output (InDesign, QuarkXPress) are expensive and hard to learn, and the average person rarely needs them. It would be nice to be able to hop on a web page and do a quick design. Wizards or integrated help could go a long way. All the usual business models (ads, freemium, pay-per-storage, etc.) apply.

Now, if you build this in Flash/Flex or in Silverlight (2.0), you merely have a ton of work ahead of you. While challenging, you may feel like you're not ... well ... proving just how wicked you really are, since those two platforms give you all of the technical infrastructure you need.

So why not go all the way? Do it in JavaScript, in the browser. Yep, WYSIWYG, arbitrary page sizes and layouts, in the browser. Advanced typography too. Fun times.

Here's a hint: one of the tricky bits is measuring text line layouts, since you'll need to make sure that the view matches the backing data (from which you will be generating other "views" or output formats).

JavaScript (the browser APIs, really) doesn't give you everything you'd like. For example, one standard thing you need is to find out how many characters will fit into a column of a particular width. Or simply what size a run of text will turn out to be.

While you can't ask these things directly, the clientHeight DOM element property can be useful: it reports the actual height at which a DIV's content lays out ... so it also signals when content has flowed from one line onto a second. By hiding the DIV, rigging up a while loop, and monkeying with the content string, you can build a routine that measures text, so that you know exactly which characters are going to appear on a line.
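
A minimal sketch of that measuring routine; the function name and styling defaults are mine, not any standard API:

    // Rough sketch: how many characters of `text` fit on one line in a
    // column `width` pixels wide? visibility:hidden (not display:none)
    // keeps the element laid out, so clientHeight stays meaningful.
    function charsPerLine(text, width, font) {
      var probe = document.createElement("div");
      probe.style.cssText = "position:absolute; visibility:hidden; width:" +
                            width + "px; font:" + (font || "12px serif");
      document.body.appendChild(probe);

      probe.textContent = "X";            // innerText on older IE
      var oneLine = probe.clientHeight;   // height of a single line

      var count = 0;
      while (count < text.length) {
        probe.textContent = text.substring(0, count + 1);
        if (probe.clientHeight > oneLine) break;  // content wrapped to line two
        count++;
      }

      document.body.removeChild(probe);
      return count;  // approximate: real breaks land on word boundaries
    }

In practice you'd cache results and binary-search instead of growing the string one character at a time, but the clientHeight probe is the heart of it.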

Completing the application is left as a (possibly lucrative) exercise for the reader.

Monday, July 21, 2008

Packaging Outlook Scripts in Word Documents

Microsoft Office includes (unless you exclude it at install time) a flavor of ye olde Visual Basic (i.e. the COM-era VBA, not VB.Net) that is designed to facilitate Office automation.

If you're willing to put up with the VB syntax and conventions, you can put together mini-programs (including GUIs, menu options, toolbar buttons, etc. if you like) in even less time than it would take to boot up Visual Studio and run the Office add-in wizard or add some Office .Net assembly references.

Interestingly, distributing the resultant "macros" can be tricky... especially for Outlook. There are all sorts of suggestions depending on how widely you want to deploy your script, and how many "Are you really really sure you want to run this script?" dialogs you expect your end users to click through. Some of the approaches are quite hokey... for example copying over a client script file that will delete any other macros your user has. Eeesh.

It's possible that .vbs files and Windows Script Host are actually the "right" answer ... but here's another approach I used:

  1. Write and debug your Outlook scripts in Outlook using the built-in VBA IDE.
  2. Copy the bits you need into the dev environment in Word (this assumes your user has Word, and I'm guessing if they run Outlook they probably run Word as well).
  3. Save the Word document as a word-doc-with-executable-content (*.docm). Even if you haven't defined any GUI elements, any quick-and-dirty VB procedures you've written will show up as macros in the View->Macros dialog.
  4. Bonus: write any instructions you'd like to pass along right in the Word doc.
  5. Give the file to your users.
  6. COM-based scripting is COM-based scripting, so even though a user runs the macro in a Word doc, it can find and manipulate their Outlook items.

Not to worry, Word (and Outlook) prompt the heck out of the users when they open and try to run the code. And saying yes to your file will not require the user to grant access to any other script-bearing files.

It's not the slickest procedure in the world, but it's wicked fast and it serves the basic scripting purpose: Here's a doc with a script I wrote to convert Outlook notes into a single Task item for syncing with my Windows Mobile notes app.
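
For flavor, here's a rough sketch of that notes-to-task script, written as JScript for the Windows Script Host route mentioned above (the .docm version uses the same Outlook object-model calls in VBA syntax). The subject line is made up; the rest is the stock Outlook object model, assuming Outlook is installed:

    // notes2task.js -- run with: cscript notes2task.js
    var olFolderNotes = 12;   // Outlook object-model constants
    var olTaskItem    = 3;

    var outlook = new ActiveXObject("Outlook.Application");
    var notes   = outlook.GetNamespace("MAPI")
                         .GetDefaultFolder(olFolderNotes).Items;

    var body = "";
    for (var i = 1; i <= notes.Count; i++) {   // Outlook collections are 1-based
      body += notes.Item(i).Body + "\r\n----\r\n";
    }

    var task = outlook.CreateItem(olTaskItem); // one Task to hold them all
    task.Subject = "Merged notes for mobile sync";
    task.Body    = body;
    task.Save();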

Friday, July 18, 2008

Pardon the Glare: I Think My Tinfoil Hat is Catching the Sun

Last week, it became clear that Kevin Martin and the FCC are getting closer to laying the smackdown on Comcast for the ISP's BitTorrent "management" policies (or was it because Comcast keeps lying about it? or because Comcast has tried to game the hearings about it? so hard to keep track of now).

And now all of a sudden I'm getting 4x more BitTorrent throughput on my Comcast cable line than I ever did before.

I'm tempted to imagine a connection.

But I also suspect that positing a connection is tinfoilhattery on my part ... in particular because the bandwidth/connection patterns I see don't look like the TCP reset packet approach that Comcast had been using, and which gave them away.

I'm getting more throughput in the middle of transfers, where there had never been any resets.

So maybe I should just be happy.

Or maybe that's just how Comcast wants me to feel.

Thursday, July 17, 2008

On the Other Hand...

Yesterday I wrote about the potential of the iPhone to break the longstanding wireless app logjam in the U.S.

Today the Free Software Foundation blogs about the other logjam we might be getting into with the device, namely that the app development and publishing community is just as locked down and controlled as the DRM on iTunes tracks. Wait, actually the app side is even more locked down (since there's a low-tech path to extract songs by way of audio CDs, whereas the app infrastructure requires a jailbreak).

I'm curious whether this passage from the FSF post will have the fanboys plugging their ears and whining "I can't hear you" or just shrugging their shoulders and saying "who cares":

"Apple, through its marketing and visual design techniques, is manufacturing an illusion that merely buying an Apple makes you part of an alternative community. But the technology they use is explicitly chosen to divide people into separate digital cells, and to position Apple as sole warden. When your business depends on people paying for the privilege of being locked up, the prison better look and feel luxurious, and the bars better not be too visible."

If not a prison, it's still a walled garden. Like Verizon's mobile app vending machine (odds are you've never even heard of that one) or the old America Online. Maybe it can work, but there are better alternatives.

Wednesday, July 16, 2008

iPhone App Store: Let's Hope It's Not All Fun and Games

The iPhone phenomenon seems to be a kind of Rorschach test, where you can find whatever you're looking for ...

There are all sorts of posts on data from the App Store, looking at, for example, price points in the app catalog, or the kinds of apps available.

These data are far from perfect (though worth a look), and in any case I hope that VentureBeat's post on the prevalence of Game and Entertainment apps turns out to be wrong.

Why? Mobile apps and mobile net usage have been waiting to "break out" in the U.S. for nearly 10 years. It's tempting to agree with the sentiment that mobile net use is reaching critical mass thanks to iPhone, but everyone in the industry has made that mistake many times before.

Successful off-deck software on phones in the U.S. has been more or less limited to games. So when I read the headline "Fun! Nearly half of all the iPhone App Store apps are games or entertainment," all I could think was "let's hope that it doesn't stay that way!"

I have no problem with Super Monkey Ball per se. It's just that if all the top app activity is in games and entertainment, then the mass psychology around the iPhone / App Store ecosystem is in danger of slipping irretrievably in the direction of "diversion" and not the enormous opportunity it really is.

Before you say I should just chill out about the Monkey Ball, consider:

Blackberry, because of its email-device pedigree, was pigeonholed as a "corporate email/organizer device" years ago, even though it could do many other things. Despite RIM's attempts to sex up the brand, boost the hardware, cut prices, and improve the design, it now looks like the window is closing for RIM to make any big gains. The enormous deployed base of Blackberries never translated into a general platform for mobile net-based computing.

At the moment, all of the top 10 -- and 18 of the top 20 -- paid App Store apps (over the last 24 hours) are Game/Entertainment apps. So right now, when the frenzy is at a peak, with mass media coverage of sold-out stores and consumers showing acquaintances their hot new gadget, the eyeballs are on the Super Monkey Balls.

To a lot of people, who have never seen a 3rd-party app on a phone, and who don't totally get what this is all about, the demonstration is going to look more like a Nintendo DS than a handheld computer.

Which raises a possibility: perhaps Apple was wise to restrict iPhone 1.0 to Safari-based apps, thus forcing consumers to view the device as a mini-web-tablet, rather than as a portable entertainment device.

Refactoring: Bittersweet if it Could Easily Have Been Avoided

I've been doing some refactoring on a decent-sized app lately. Since there are lots of other people doing stuff and the app is close to a supported 1.0 release, it's like working on the engine of a car while it's cruising down the freeway. Not just tightening the cap on the coolant reservoir; more like converting the engine to run on hemp seed oil and corn cobs instead of gasoline.

On one hand, it's fun: mundane tasks that might feel academic become a lot more interesting when the task is part of the Rubik's cube of after-the-fact architectural change.

On the other hand, a lot of the refactor is an attempt to deal with unnecessary complexity in a less complex way. The underlying complexity -- not mediocre code or an initial pass at red-green development -- is really the enemy.

And here's the rub: this app has some network protocol requirements that are (1) outrageously baroque; (2) opaque and undocumented; and (3) completely unnecessary for the app to function for its users.

As a result, it's already eaten over 120 man-months (and a staggering amount of money, let's just say over 8x what you're probably thinking the labor cost was).

At least 2/3 of that time and money (maybe 75%) went to nothing the user will ever see or care about, and nothing that offers any inherent business value or competitive advantage for the owner. That 2/3 is all expended in dealing with the bizarro protocol requirements ... and the issues and complexity they spawn: the design compromises, the testing, the bug analysis and remediation, the refactoring. I'm not even counting the resulting delays and feature cuts in this cost.

Where did I get the 2/3 number? Well, the project was initially scoped without this network protocol issue. The estimates were written down. And, in the initial iteration -- about 30 man-months of work over 5 calendar months -- we tracked resource usage and found that core functionality and QA reliably took 1/3 (closely matching the estimates), while accommodating these "other" issues took the remaining 2/3. Week in, week out. No surprises. Things haven't changed since.

It's ironic because this pattern is something that often comes up when developing against a true legacy system -- the formats, protocols, and systems in place simply cannot be changed right away for the convenience (or productivity) of a new team. And yet this initiative was green-field, not legacy. It could have used any protocol in the world. There was an install base of 0 (zero) that needed compatibility with this particular system.

I almost wish I didn't know all this ... It's the one thing that puts a real damper on the refactoring fun: knowing that, with even a tiny bit of management planning and the willingness to deal with team issues, it would all have been unnecessary.

Saturday, July 05, 2008

Yet Another Microblogging App ... With a Twist

Fun as it is to argue about whether the latest Twitter outage will be the death of it, or whether identi.ca has legs, there's a whole other world of RSS/Microblogging possibilities that hasn't been exploited. Unfortunately, as I'll describe in a bit, there's a reason for that ... and an implicit call to action to fix it.

Microblogging -- or, really, the feed-oriented delivery mechanism -- could be huge for all kinds of private-domain problems that are stuck in the Web 1.0 mode of e-mail updates and web-page dashboards.

Whether it's a group trip; a job search (that my good friends but not my coworkers know about); the latest status on a project, issue, or change request; whether my cat is in or out (neighbor cats beware!); what my son is up to; a surprise party ... and on and on ... there are microblogging feeds I'd like to publish and consume that are not public. They are not 1-1 either. They are published to a group. I trust members of that group, and they can bring in others. Perhaps many people can update or write to the feed as well.

Seems awfully useful in personal and professional contexts. But there's no app that exactly does this right now. The closest things are Tumblr's "groups" -- which are cool, but don't offer feeds -- and FriendFeed's "rooms" -- which are also cool, but don't offer authenticated feeds.

So I hacked together an app during a few spare hours one weekend to do what I wanted.

You can try it if you like at http://www.statusmonster.com or its real location, http://stat.heroku.com (have I mentioned how totally freaking awesome Heroku is? well, that's a post for another day, but take my word for it, they are ninja rockstars, those guys).

This app makes it super easy to create a multitude of private RSS (Atom, really) status feeds, and share them with people. It has a mini-dashboard to watch the latest on a bunch of feeds. But I'm thinking folks don't need Yet Another Web Page to visit all the time. They need authenticated feeds, which statusmonster offers.

Wow, I'm gonna get rich and famous.

No, I'm not.

Here's the problem:

If a feed is not authenticated (like the basic feeds from statusmonster, or the "room" feeds at FriendFeed) then midstream aggregators like Bloglines or NewsGator may index these feeds for search, and/or offer them to others for subscription. This is the essential difference between syndication and publication -- a.k.a. the answer to, "Hey, how is a blog different from a homepage?" This is great for my blog, but really bad for my candid job search notes.

Ok, so we'll offer authenticated feeds. According to the Bloglines FAQ, for example, authenticated feeds are not indexed for search or exposed to other users.
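
For concreteness, "authenticated" here just means the reader presents credentials with every poll of the feed URL; HTTP Basic auth is the simplest case. A sketch in browser-style JavaScript, with a made-up URL and login:

    // Poll a Basic-auth-protected Atom feed. The URL and credentials
    // are hypothetical; a real reader stores them per subscription.
    var xhr = new XMLHttpRequest();
    // open()'s optional 4th/5th arguments carry the Basic-auth login.
    xhr.open("GET", "http://stat.example.com/feeds/job-search.atom",
             true, "alice", "s3cret");
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var atomDoc = xhr.responseXML;  // the private feed document
        // ... parse entries and update the reader's view ...
      }
    };
    xhr.send(null);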

I did that, and started trying it out with NewsGator's desktop and mobile products, Outlook's RSS reader, Bloglines, Google Reader, etc.

The power of feeds lies in the fact that the end user gets to decide how to consume and/or process the content. That is, to get the full power/potential of feeds, they need to work with pretty much every major reader/aggregator service.

And that's where the trouble starts. Reading authenticated feeds with readers is a completely hit-and-miss affair.

The best behavior I found actually came from Outlook 2007, which not surprisingly treats the feed almost as an email account. It takes your credentials one time, and then when it polls -- or when you click "send/receive" -- it supplies them behind the scenes and updates the view of read/unread items. Pretty much exactly what you want.

But it was all downhill from there.

Bloglines processed my authenticated feed, but seemed to take forever to reflect updates (much longer than other feeds), and it eventually lost sync with the backing feed. The feed still exists, same location, same credentials, but eventually Bloglines started showing its little "[!]" marker meaning "problem with feed" and never updated again.

NewsGator kind of half worked. Google Reader doesn't do authenticated feeds. And so on down the line.

So doing all this cool stuff on the existing infrastructure is not gonna happen. Big bummer because I'm convinced there's major value in these use cases, so we need to figure out how to make them a reality in a way people can actually use (and will want to use).

What are the constraints?

We don't need super end-to-end crypto any more than email does. Most folks do their email in the clear, figuring the content is not top secret, but also assuming that their employer/ISP/email provider is not gonna go and publish the email in a Google-able way.

I think that's the standard we're aiming for -- basically a better way to do stuff that nowadays might be handled by a whole lot of one-to-many emails.

A set of standards around handling authenticated feeds might be all we need. But how to enforce that, since anyone can cook up their own aggregator and ignore the standards?

Force the user agent (browser or client app) to supply some kind of secret to decode the feed? Maybe, but this is only strong in proportion to the strength of the key material, and having lots of high-entropy key material per feed makes this cumbersome and hard to use.

Something about confidentiality and syndication just doesn't mix. But we don't need another walled-garden email/messaging system, and we don't need more web pages to visit (like Tumblr's groups).

I'm working on it. Meantime, what do you think?

Friday, July 04, 2008

When Corner Cases Attack

Notwithstanding the credit given to MTV's Real World, I think reality TV took off with that Fox special that featured a deer knocking a hunter on his rear. That guy just never saw it coming.

Same thing seems to happen too often when we start making architectural adjustments around a corner case in a software application.

The real problem is how to tell when an exception to the initial model -- the corner case or edge case -- is truly a strange bird, versus when it is a big piece of the model that was missed the last time through.

Sounds like something that should be easy to resolve with proper analysis, whether of the thick-stack-of-paper variety, the OO variety, or an agile story-based style.

But it's not.

I've recently been doing some work on a project where this mistake was made. At some point in the past, this team looked at an odd duck, thought hard, and realized it was a big piece of their world. Big enough that the underlying infrastructure -- database, network services, etc. -- should be designed with sufficient generality that this corner case would become a mainstream case.

I understand how the analysis went, and why it made sense.

Only problem is, with 20/20 hindsight we can now see it was wrong. Bringing this edge case into the tent made the system an order of magnitude more complex; it should have been left as an odd hack in a dozen places. But as development happened over time, across several different module teams, the problem of the added complexity wasn't clear until it was too late.

To be a little more specific, I'll offer an analogy: you're writing a driving game or simulation. A big part of the fun is the spots where the players catch air, jumping their vehicles over obstacles to get ahead. But how to model the behavior of the cars as they fly through the air? A quick hack? or ... integrating elements of flight simulation or 3-D physics simulation?

Well, this team went the equivalent of the flight simulator route, and their hard-enough-to-code 2-D driving game has become a brutal 3-D simulation. All for a few jumps that in retrospect should have been hacked.

My point is not to complain about a historical decision that probably didn't seem bad given the info available at the time. It's to ask about how to avoid these problems in the future.

If you grant my assertion that, in some cases, no straightforward analysis reveals the problem at the moment the "how do we handle this case?" question is asked, then it seems like we need a set of milestones for a posteriori analysis.

We make a design call when we need to make a call, and then we need some specific questions -- and times or situations to ask them -- that will tell us if we're on the right track.

Specifically: what is the best and earliest way to ascertain whether we're

  1. monkeyhacking in an edge case that really demands a proper refactor
    OR
  2. doing a refactor that way overengineers a solution that really just needs a monkeyhack

What do you think?