Monday, December 31, 2007

HD or Blu? Format Wars are Over, Winner is x264

Holiday-sale $99 refurb HD-DVD players couldn't win the war for the HD-DVD camp, and the PS3 still shows no sign of ending it in BluRay's favor.

Meantime, the de facto standard online for HD disc content, legit or not, is x264.

After DivX/XviD/MPEG-4 replaced MPEG-2 as a generally practical format for video content from DVDs, it took some time before gadget makers released decent players that could load up a DivX stream off of any CD-R or DVD-R and play it (reliably).

My money says we'll have a much shorter wait this time. In fact, in the end-of-year spirit of goofy tech prognostication, I'll wager that by Christmas of next year
  1. The "official" format war between BluRay and HD-DVD will still not be meaningfully resolved
  2. But 200 bucks will get you a DVD player that can load up an x264 stream and play it back, upsampled and/or at HD resolution
  3. That same player (or maybe the $300 version) will either have a USB port, so you can plug in your 1TB external enclosure and play HD movies that you've, um, obtained, without using a disc at all, or else it'll have a network port.
  4. Game over.

Thursday, December 20, 2007

Loving RoR with NetBeans 6

I've switched to NetBeans 6 for Ruby coding, and I'll never go back to those bogus text editors.

Some time ago, I wrote about IDEs for Ruby. I'm not sure I did a great job of pointing out that for Ruby (as for JavaScript and other dynamic languages), some of the basic features we'd like from an IDE (code completion, refactoring) are non-trivial problems. The Sapphire in Steel blog has some great articles on working those problems.

The milestones for NetBeans were promising; the other free contender, the Eclipse bits formerly known as RadRails, are now part of the "Aptana IDE." Aptana is a reasonable tool for JavaScript, but last time I checked, Aptana itself was at "1.0" while the Ruby support still crashed and failed in interesting ways. The fine print points out that the Ruby/RadRails piece isn't actually 1.0. Bummer.

I'm very excited about this NetBeans release. For a detailed discussion of features, check out Roman Strobl's review (part 1 and part 2). Once the app starts and gets warmed up (which admittedly takes longer than booting the OS on my Windows server box), it does a pretty good job of the hard stuff, like refactoring. And it's rock-solid on the rest, like interactive debugging.

Unfortunately, Rails 2.0 became official at about the same time NetBeans 6 did. And Rails 2 changes a few things that NetBeans doesn't know about, like the filenames/extensions for erb templates. But it's easy to work around those things, and I'm sure an update will be forthcoming.
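The headline change there is the template rename: show.rhtml becomes show.html.erb (and .rxml becomes .xml.builder). Until an update lands, a one-off rename script is an easy workaround. Here's a minimal sketch; the script and its --dry-run flag are my own, not anything that ships with Rails, so preview before letting it touch files:

    # One-off migration sketch: rename Rails 1.x view templates to the
    # Rails 2.0 naming conventions. Run from the application root;
    # pass --dry-run to preview without touching anything.
    require 'fileutils'

    RENAMES = { '.rhtml' => '.html.erb', '.rxml' => '.xml.builder' }
    dry_run = ARGV.include?('--dry-run')

    Dir.glob('app/views/**/*.{rhtml,rxml}').each do |path|
      ext      = File.extname(path)              # ".rhtml" or ".rxml"
      new_path = path.chomp(ext) + RENAMES[ext]  # show.rhtml -> show.html.erb
      puts "#{path} -> #{new_path}"
      FileUtils.mv(path, new_path) unless dry_run
    end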

Friday, December 14, 2007

Amazon SimpleDB isn't Astoria... it Could Be, but Does it Need to Be?

A while back I wrote about Microsoft's Astoria REST-based relational data store in the cloud (or in your data center, if you want it there).

With Amazon's SimpleDB, we're a step closer to making this vision a reality. Now we're almost on track for competition in (and sooner-than-later commoditization of) the new world where you don't even need MySQL to store your stuff.

Why almost? Because SimpleDB is not a full RDBMS; it looks more like a flavor of triple store. Now, a typical (i.e., SQL-style) RDBMS can be built on top of a triple store fairly easily. So we could well see a SQL processor, JDBC drivers, and the like from the community pretty soon.
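To make that concrete, here's a toy sketch in plain Ruby (no SimpleDB API involved; the item naming is made up) of how a relational row flattens into the item/attribute/value shape SimpleDB traffics in:

    # A relational row...
    row = { :id => 42, :title => 'Blade Runner', :year => 1982 }

    # ...flattens into (item, attribute, value) triples. SimpleDB keeps
    # every value as a string, hence the to_s calls.
    item_name = "movie_#{row[:id]}"
    triples = row.reject { |k, _| k == :id }.map do |attr, value|
      [item_name, attr.to_s, value.to_s]
    end

    triples.each { |t| p t }
    # ["movie_42", "title", "Blade Runner"]
    # ["movie_42", "year", "1982"]

A SQL layer on top is then "just" a matter of translating SELECTs and JOINs into queries over those triples.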

Another way to look at that "top layer" is to take a REST API like those used by Astoria or ActiveResource [PDF link] and simply implement that. Not as expressive as hardcore SQL, but easier, and probably enough for many applications.
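For instance, with ActiveResource (which ships with Rails 2.0), client code against such a layer could look like the sketch below. The gateway URL and the Movie resource are hypothetical; only the ActiveResource calls are real:

    require 'active_resource'

    # Hypothetical REST gateway sitting in front of a SimpleDB domain.
    class Movie < ActiveResource::Base
      self.site = 'http://simpledb-gateway.example.com'
    end

    movie = Movie.find(42)   # GET  /movies/42.xml
    movie.year = '1983'      # attributes are schemaless, much like SimpleDB's
    movie.save               # PUT  /movies/42.xml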

What I don't see -- in the long run anyway -- is applications developing against thin wrappers specific to the Amazon triple store service itself. There's nothing fundamentally flawed in doing so ... it's just that, for a variety of reasons, data storage has evolved very slowly. The relational model is going on 40 years old, but still reigns supreme in terms of popularity, even if it has conceptual or technical flaws when put to work in today's applications.

Given the brilliant data storage alternatives that have fallen flat time and again, I doubt Amazon SimpleDB will change the way people talk about storing structured data. So SimpleDB doesn't need to be SQL but it will probably need to at least be RESTful.

Thursday, December 13, 2007

How Bay Area Subprime Mortgages Relate to High-Tech Startups

Earlier this week, Tom Campbell, Dean of Berkeley's Haas School of Business, gave a long interview about "the mortgage crisis" on KCBS' in-depth segment [MP3]. At one point, the conversation turns to the general paucity of housing in the Bay Area (relative to demand, anyway), and the notion that landlords may be beneficiaries as folks suddenly discover that (1) they can't afford $975,000 for that 3-bedroom house and (2) it isn't worth $975,000 anymore either.

But aren't these b-school types the ones who are always telling us to "think outside the box"? The best Tom can come up with is a kind of housing trust or co-op that lets you own, say, 25% of an expensive house while you still get to live in it (an investor group owns the rest).

That's a really cute way of pumping up housing prices and speculation even further: it lets investors who would never do a real-estate transaction themselves pool their money and spread it across a bunch of properties, while leaning on loan guarantees to cover their downside. (Doesn't this sound a little like the mortgage problem we just saw?)

The cost of housing and difficult commutes are a big brake on the tech industry; they consistently rank at or near the top of Silicon Valley business leaders' concerns about the growth of their companies and the success of the region.

In the positions I've held over the last 4 years or so, I've been responsible for hiring engineers. Not just punch-the-clock engineers, but wicked smart, willing-to-build-it-from-scratch-but-wise-enough-not-to, startup-minded engineers. And it is brutally hard. Even with robust salaries, it is difficult to get applicants in the door, let alone hired. And, no, it's not due to a shortage of U.S. geeks. Housing costs and impractical commutes seem to be the primary limit on the realistic pool of applicants.

Over time, if we don't do something about it, we'll only have the old-guard geeks (who bought homes a long time ago or live in rent-controlled places in SF or Berkeley) and immediate college grads (who are up for an adventure and happy to have roommates).

The problem is, when your only tool is sprawl, and you run out of farmland to sprawl on, you either throw in the towel (Silicon Valley) or you sprawl farther away (American Canyon, Fairfield, Tracy, even Monterey). When your only tool for traffic congestion is building more lanes, that's what you try to do, even though latent demand means that more freeway creates more traffic in the long run, not less.

If I'm so clever, what am I suggesting? We need to use a different tool, namely smart growth.

Among other things, we need significantly higher density, infill development, and more mixed-use development (residential, commercial, and light-industrial uses in the same or nearby buildings). This isn't a speculative proposal: where it has happened locally, it has been popular. Look at Santana Row, San Mateo, downtown San Rafael, even South of Market/Mission Bay SF (although arguably the planning there didn't go nearly far enough, leaving too few housing units to make any dent in affordability).

Luckily, the Bay Area does not share America's poorly grounded prejudice against living in towns. So communities like the ones I propose, on the peninsula and in the East Bay (Hayward, anyone?) would likely be embraced. And infill opportunities abound: Every time you see a big-box store in the Bay Area, whether you love them or hate them, imagine a few stories of residences on top and additional businesses lining the enormous "blank" sides of the store at street level.

Will this generate so much housing that no one will be tempted to take on debt at crazy terms they can't afford? Of course not. But if it can help keep that housing-cost-to-salary ratio from growing quite so fast for the folks we want (and need!) to work with, while bringing the ancillary benefits of denser communities, it seems like a no-brainer.

Wednesday, December 12, 2007

No Uber-Soft Launches and No Stealth Mode

Uber-Soft Launches and Stealth Mode are two common practices that are usually big red flags of impending trouble.

To be clear, an Uber-Soft Launch is not a classic, small-scale launch where you release a decent version (maybe beta) of your product, but don't blast every PR trumpet you can find until you get a first round of feedback and some perf stats. There's nothing wrong with that; it borders on a best practice.

Rather, an Uber-Soft Launch is when the CEO or entrepreneur starts hedging about whether this launch is really the product, or really the big launch, that he's been working toward and talking up: "We're going to just try this out and see what happens ..."

That statement is legit if it's intended as faux-modest understatement from a guy (or gal) who's clearly going for broke to make the product succeed. But if the firm or leader is really this wishy-washy and diffident about the launch, forget it. It's game over.

But then, in that case it doesn't matter because what the entrepreneur is really saying is that he doesn't expect to succeed so he's simply covering himself so he doesn't look silly after the failure. And that CYA attitude is one of the key things that indicates impending failure. It's a symptom of lack of conviction, a fear of failure that drives systematic bad decision-making.

Stealth Mode is a little less black-and-white, as there are a few cases where it may pay off. A few. Meaning not many. Luckily, Web 2.0 seems to involve far less "stealth mode" than Web 1.0 so it's less of a problem.

When might stealth mode be useful?

(1) A company has a specific physical or algorithmic invention (no, not Amazon one-click), and intends to patent it and to defend the patent vigorously (= has the massive cash to do so). Company wants to make sure it's documented and filed before anyone else files. If this is you, then you'd better be working toward the patent filing as fast as you can, no excuses. And get a good lawyer. If you're not ready and planning to defend, then stealth mode doesn't matter. Someone else can implement your technique if they want, and even patent it. You'll win or lose on execution and customer acquisition.

(2) A company's value is going to be based on a "network effect" play rather than a "hard-to-duplicate" play, and it already has the big PR for the launch lined up. In this case, since you know you're not "hard to duplicate," you don't want to spill the beans until you can fire off the giant PR cannons, at which point, you'll either grab a big enough chunk of network effect to sustain you, or else you'll drift. An example of this is Ning, which was in stealth mode for a long time. Marc Andreessen's celebrity and connections were the big PR blast, timed to match Ning's actual launch. But what if you're not Marc Andreessen or Kevin Rose and you're not going to make any headlines with your launch? Then you're not going to get a big "pop" when you come out of stealth mode, so really you're just:

Afraid of someone stealing your idea

But that's not a good reason for stealth. There are very few new ideas. If an entrepreneur thinks he or she has one, it almost certainly indicates insufficient research to find the people with the same or very similar idea before (and now), and consequently ignorance of why they failed (or might fail).

If this is you, get over yourself. Make it part of your "leadership agenda" to systematically find the previous incarnations of your idea -- or related ones. Analyze the heck out of them. Do better. Or be more popular. (Pick one or both).

But here's the twist, and if you're introspective then you saw this coming: you're not really afraid of someone stealing your idea, you're really

Afraid of someone not liking your idea

and you don't want to deal with that. You think that by waiting until you have perfect execution you will stun disbelievers with the beautiful product. That's just a delay/procrastination tactic. The underlying fear (of the product falling flat) will just make you want to delay and delay, making the product more and more "mature," so as to defeat nay-sayers.

Doesn't work. There will be nay-sayers. Embrace them. Love them. If you have money, make them into a focus group and pay them! Separate the whining pessimists from the ones with specific advice. Don't waste your time on the former, but realize that the latter are creating value in your company for free.

They are doing what your best product managers and designers should be doing -- finding and clearing roadblocks to adoption. Listen to them and verify what you think they're saying by checking with other real people. Then you have a real bug list to work on, not getting that drop-shadow AJAX doodad to the right shade of pink.

As an entrepreneur, you're already drinking enough Kool-Aid by necessity; if you hide from naysayers, it just leads to a Kool-Aid overdose death-spiral.

I've been involved with companies that have committed both of these sins (and many more), so of course my perspective is warped by that. But don't take my word for it. Find the companies and entrepreneurs that you look up to and want to learn from. Read their stories or go talk to them (being careful to filter out the Spiel and the 20/20 hindsight). Whatever you do, don't hide out convincing yourself you've invented cold fusion.

Friday, December 07, 2007

MozyHome Backup Fails to Backup Designated Files

I have been testing out Mozy's MozyHome remote backup product, and have found that it sometimes ignores new or changed files in the backup set. Sometimes it "discovers" these files (or changes) days or weeks later and backs them up; other times, if I modify a file "nearby" (relative to the way my backup set is constructed), it will suddenly discover all the other changed files and back them up. Still other times, no poking or prodding seems to make it back up these files.

This is a serious problem. After all, the raison d'être of the product is backup. Imagine that a home user or a business installs this solution, marks files to be backed up, observes that the backup process is running successfully and then -- after a data disaster -- learns that, well, some of the files are backed up and some simply are not.

If Mozy were a fly-by-night outfit, one might say that better due diligence is required in choosing a backup provider. But with its recent acquisition by storage giant EMC and its global contract with GE, Mozy appears to be a solid company.

The software, though not perfect, basically works. The backup and restore are straightforward. You can encrypt locally with your own key so that no one but you could ever decrypt your data in the case of a breach (although interestingly, the file names are not encrypted, so don't count on hiding the existence of my_illegal_off_balance_sheet_transactions.xls, or gifts_to_my_mistresses.doc).

But we're talking about the number one, sine qua non, only -- really -- important use case for backup. It has to back files up. Or at least, if it doesn't, it needs to tell you what failed, when, and why.

When I first discovered this problem, I realized that publicizing it could have a negative impact on Mozy's business. So instead of blogging, I contacted them directly to learn more. Unfortunately, after a few back-and-forths, including my running their diagnostic tools and sending them the reports and explaining that I was not going to give their support personnel full remote access to my box without more information, they have gone radio silent.

As a software engineer, I am fully aware that this problem could be a strange corner case. Perhaps it is so narrow that it never affects anyone besides me. But I doubt it. And, in any case, this problem is severe enough that it warrants a little investigation to determine its breadth.

Why do I doubt it is an unusual corner-case failure? Simply because my configuration of the service is so "typical." I'm running XP SP2, NTFS on a well-maintained, modern machine, on a secure home network behind NAT, with no strange services or applications of any kind running (e.g., the kind that might hook and hack kernel file system operations, or leverage alternate data streams). The affected files are not under unusually named file paths, nor do they have any funky attributes set on them.

The only things I am doing that are not defaults are using my own encryption key and adding a few files to my backup set that aren't in the "My Documents" tree. My conclusion is that whatever glitch is causing the software to miss files on my file system is likely also missing files on other people's file systems. And they have no idea.

Perhaps this problem doesn't affect other users -- but without trying to verify the bug, how can Mozy know? In my last email to them, I specifically asked if their QA team had even attempted to replicate the bug. Had they tried and failed to reproduce it? Fair enough, maybe I could offer some help. But if they haven't tried, it makes you wonder what bug report could possibly be a higher priority? Maybe if their app runs off the rails and reformats your drive, that's a higher priority. But barring active destruction of your data, or a major security bug that could compromise you to a third party, I can't think of anything.

The ultimate problem here is not even an engineering problem. Yes, there's a bug in the software, but there are bugs in almost all software. Rather, it's a process problem. How does the QA process work? How does customer support work? What steps do you take if someone reports the Really Big Bug? Is the right thing to assume they're a crackpot? Can you afford to do that and not even look into it? (Hint: No.)

</end of regular post> <free:bonus>

Since I'm really not out to get these guys, but ideally want to help, I'm gonna offer a free first step: it's really easy to tell after the fact whether this bug is manifesting, since the set of files actually backed up simply won't match the local description of the backup sets. So an easy diagnostic is to write out both lists of files and diff them. If there are deltas, you've got a problem.

So... you push out an update to the client that creates a list of the files in the active backup sets and sends it over to QA as the last step in the online backup process. Then QA just has to generate a matching file list (matching on the account id [email address] and either the date/time or the id number of the backup) from the Mozy metadata store and compare. A sketch of the comparison step is below.
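Assuming each side can dump a plain-text manifest with one file path per line, the comparison is a few lines of Ruby (both manifest file names here are mine, not Mozy's):

    # local_manifest.txt:  what the client says should be backed up.
    # server_manifest.txt: what the Mozy metadata store says was backed up.
    local_files  = File.readlines('local_manifest.txt').map { |l| l.strip }.reject { |l| l.empty? }
    server_files = File.readlines('server_manifest.txt').map { |l| l.strip }.reject { |l| l.empty? }

    missing = local_files - server_files   # marked for backup, never uploaded
    extra   = server_files - local_files   # uploaded, but no longer in the set

    puts "Missing from backup (#{missing.size}):"
    missing.each { |f| puts "  #{f}" }
    puts "On server but not in any backup set (#{extra.size}):"
    extra.each { |f| puts "  #{f}" }

    exit(missing.empty? ? 0 : 1)           # non-zero exit: you've got a problem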

Tuesday, December 04, 2007

Free ActionScript Code Generation: VASGen 0.2 Release

I finally posted a new rev of VASGen, my Violet UML ActionScript 3 Generator.

It is built against Alexandre de Pellegrin's enhanced version of Violet UML, which is designed to run standalone, as an Eclipse plug-in, or via JNLP ("Java Web Start"). This version of Violet also has a much glossier look and feel, which makes it particularly pleasant to use (thanks, Alexandre!).


The main functional enhancement to VASGen is in the area of package support. This version supports placing classes and interfaces (and nested packages) inside of packages, using the Violet modeling tool. When code is generated, the proper folders and qualified type names will be used.

If you don't want to package all of your types, or you are just sketching around inside what will end up as one package, you can specify a default package by dropping a package icon anywhere in the diagram and prefixing its name with a '+' (e.g., "+foobar").

I've tossed in a couple of bug fixes too. Details and the download are here.

If you uncover problems, I would love to hear about them and hopefully fix them.