Friday, November 30, 2007

Hazardous Attitudes: Disregarding VC Due Diligence

The FAA identifies five "hazardous attitudes" that have proven so dangerous to pilot decision making over the years that they are explained in the FAA "Pilot's Handbook of Aeronautical Knowledge" and included by reference in exams and regulations. These attitudes are Anti-authority ("Don't tell me."), Impulsivity ("Do it quickly."), Invulnerability ("It won't happen to me."), Macho ("I can do it."), and Resignation ("What's the use?").

Working with startups, I've seen entrepreneurs exhibit all of those attitudes when trying to convince others (and themselves!) that they needn't worry about VC due diligence.

I would advise those entrepreneurs -- and any wannabe entrepreneurs -- to read Rick Segal's fabulous post on due diligence. In addition to giving advice, Rick explodes the commonly held notion that VC due diligence is just a formality -- that if you get to the d.d. stage with a VC, you're all set and can just wait for the wire transfer to hit your bank account.

Due diligence is real -- Rick suggests that one or more of three deals he's currently looking at will not close because of problems at the due diligence stage. Read that sentence until you believe it. Then, before you tell yourself that it doesn't matter because you're going to bootstrap, you'll never need a VC or a corporate investor, go back and read the five hazardous attitudes again. If you think success means believing in Plan A so much that you don't bother preparing a Plan B, the odds are you'll be laughing about this failure some time down the line over a beer.

Now that we've got that out of the way...

A couple of Rick's points that deserve emphasis:

  • Financial Forecasts. Of course they'll be rosy. What's important are the assumptions. Where did you get your data? How hard did you try to get good data? Is the logic that ties the data together sound?
  • Business Thesis and Assumptions. "... [W]hat do I have to believe?  What do you believe?  And, of course, what are the assumptions behind those beliefs. ... You have to have the same story, metrics, thesis, etc, from day one." If you change your execution plan a few times, that's to be expected. If your fundamental beliefs about the space are changing faster than you can execute, you have little chance.

At some point between the kitchen table stage of your startup, and the time when you walk into a VC meeting, the following things will become relevant. Plan accordingly:

  • "Understandings" or "Gentlemen's Agreements" with investors or employees. Unusual terms can be changed or dealt with at VC time, but I wish I had a buck for every time a CEO told me these issues would just melt away because everyone would be so pleased at the prospect of making a big deal or closing a round.
  • Questionable expenditures. This can either be a few big items or systematic spending that adds up. Don't do it, and if you do, don't try to hide it or hand-wave. I've seen a VC identify a six-figure sum missing from the books, and still make the investment. The firm identified the issue and decided they would arrange the deal so as to handle it and get it under control. It wasn't a showstopper for the company, but it was the end of the line for the guy who tried to cover it up.
  • Team issues. Although Rick says that not every VC would talk to all the employees in a small company (he would), it's a good bet that your core exec team -- and probably everyone in your first 8 people -- will get a good grilling regardless of the VC. The team has to be a team, and you won't be able to fake it. This doesn't mean everyone needs to agree or be buddies, but they need to function properly as a group. If you don't have a team, the VC will figure this out, and your funding is unlikely.
  • Product. Ok, this should be obvious. You can spin the marketing claims ("a better way to manage your contacts" could be anything) but you can't fake the technical claims ("syncs up to 5,000 contacts to any device" is either true or it's not).
  • Customers. Although some VCs seem to be trying to get all the risk out of their portfolios by investing only in companies with a solid customer base, that is their problem, not your excuse to pretend you have customers that are fictional, occasional, or just plain unlikely.

If you still think this is all hypothetical, or won't affect you, then take another look at the five hazardous attitudes and ask yourself: are you really planning for success? just closing your eyes and hoping? or fooling around and enjoying the ride?

Thursday, November 29, 2007

Overclocking: I So Ought To Have Known Better

Ok, I'm not a big PC modder, but I was trying to squeeze some extra life for gaming out of a machine that isn't the latest. Using techniques I had successfully executed before, I was in the process of kicking the 3.2 GHz CPU up to 3.65 when I toasted my HDD. The thing was, I started skipping some steps I had done the previous times, and that turned out to be my downfall.

I know, the high-clock-speed chips are so 2005, because a slower CPU can have 7 cores, and execute 19 instructions per clock cycle and have so much on-die cache that you just mirror the entire system RAM into cache on startup and then pull out the DIMMs and use them to level that kitchen table with the short leg.

The fun part is seeing which component will fail from the overclocking. It's not the CPU of course. At one time it was the video card; fixed that. This time it looks like either the SATA bridge or the drive electronics themselves. Ah well. It'll totally be worth it once I get that pixel shader 3.0 action going. (I told you it was an older machine; besides, don't get me started on the awful experience I had trying Vista/DX10/PS4.)

Monday, November 26, 2007

500 Channels, er, Feeds and Nothing to Read

The blogosphere often writes about itself, and the discussion of group blogs vs. individual ones is no exception. Nevertheless, I have seen an increase in quantity and decrease in quality (per post anyway) from a number of top blogs that I read. Maybe coincidentally, these blogs are team affairs now.

It kind of snuck up on me.

Slowly I started finding more and more items each day from certain feeds, and skimming over more and more useless posts. As the post-per-day count increased, more filler appeared: less interesting, lower-quality pseudo-news (i.e., the obligatory regurgitation of tech semi-news already published elsewhere, plus two sentences of uninspired commentary).

Just to keep up with the flow, and to stop wasting my time, I started dialing feeds that had been on full-text mode down to headlines only, to make it easier to skip the garbage. And now I've started tossing a couple of these so-called A-list feeds in the trash altogether; days were going by without an impactful or truly newsworthy item.

These are commercial blogs. Like the commercial TV and mass media from which they originally tried to differentiate themselves, the publishers have elected to fill lots of airtime with bad shows, and then sell the commercial breaks.

The publishers have every right to do that. But if I wanted fake news and comment, I would watch CBS or read cbs.com (not sure if that's even their URL, that's how irrelevant they've made themselves).

But I'm not going to bash these guys (and gals) -- after all, they are digging their own graves, at least as far as being in the vanguard media and not the legacy media.

Instead I want to mention and thank a few great bloggers who have elected not to do this even though their audience size is such that they might have. I look forward to reading their work every time an item shows up in my aggregator:

Thanks, guys!

Wednesday, November 21, 2007

Lessons from Porting a MS-Office-Scale App to Flex 3/Flash 9

Earlier this year, I led a consulting team which ported a large Windows client app (similar to a member of the Microsoft Office family; think something like Visio but with different functionality) to run in the Flash 9 runtime, using Flex 2 and Flex 3 Beta, in under 6 months.

I was invited to present a (very!) quick talk on some of our "lessons learned," as part of O'Reilly Ignite at Adobe MAX in October. Alas, schedule issues made it impossible to present at MAX. So here, in Ignite style (15 seconds per slide), are the key lessons from the project:

  1. The learning curve for Flex/ActionScript 3 is very shallow for experienced software engineers, so a strong team can jump right in and be productive even without any Flex/Flash experience. Only two of our five developers had done any Flex or ActionScript work before at all. That said, Flex is still a "leaky abstraction" -- some aspects of Flash Player (e.g., deferring an operation to a subsequent frame) still bubble up -- so it's useful to have at least one engineer who is familiar with coding for Flash.
  2. There is no need to think about a Flex app as a Flash movie on steroids -- simply think of it as a VM-based app (like a C# or Java app) from day one. Bringing standard OO pattern experience is very helpful. UML/code generation tools for AS3 may be helpful too, depending on your dev process.
  3. Premature optimization is still the root of all evil. Most optimizations are a non-issue with the performance levels of Flash 9, and we found that a small number of relevant optimizations could easily be done toward the end of the project without significant refactoring pain. Network operations, which were outside of our scope, were the only cause of performance issues. Nor did we compromise in terms of space: we had tens of thousands of lines of code, hundreds of features, and most of our display drawn via custom code (the app is partly a diagramming app) ... and it compiled to a .swf in the 670 KB range.
  4. ECMAScript 4, JavaScript 2, ActionScript 3: whatever you want to call it, throwing static typing into the mix (without requiring it everywhere) rocks! Our unscientific feeling is that we enjoyed a 2-4x productivity improvement over a similar language (e.g., JavaScript) because of static analysis capabilities and compile-time error checking.
  5. E4X on a large scale is a beautiful and terrible thing.
    • The beautiful part is: it did not lead us to make a mess of our application with snippets of XML processing scattered everywhere (something we worried about at first, since we were doing a ton of XML for integration with existing systems), and the performance is amazing. We measured it, and benchmarks of XML processing speed with E4X were far and away more than adequate.
    • The terrible part is: the E4X API has some quirks and inconsistencies that are not documented, and in a number of cases we needed to cook up some ripe hacks to work around them. These issues can and should be cleaned up. E4X also misses a couple of (admittedly rarely used) aspects of the XML schema spec around attribute namespace scoping, that just happened to cause us some difficulty.
  6. The Flex framework is in serious need of a native AS3 rich text, full [X]HTML editing widget. Workarounds currently include AJAX in the browser, or AS2-based components that can't run immediately in an AS3 app's VM space.
  7. For building some applications, the async-callback approach can create substantially more chaos than it eliminates. To address this, Flash/Flex needs threads, or at least continuations. In fact, this is the first time in my career I've really seen a problem (handling certain callbacks) where continuations would be an ideal solution and not just a clever one.
  8. It is easy to mix open source Flash tools and commercial Adobe tools – my team did, and it worked well all the way around.
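Lesson 7's callback-vs-continuation point can be sketched outside of Flash entirely. Here's a hypothetical Python illustration (generators standing in for continuations; the load() function and resource names are invented for the example), showing the same three-step async flow written both ways:

```python
def load(resource, callback):
    # Stand-in for an async call (e.g., a service request). It invokes
    # the callback immediately here, but could just as well be deferred.
    callback(f"data:{resource}")

# Callback style: each step nests inside the previous one's handler,
# and the logic drifts rightward as steps are added.
def fetch_with_callbacks(done):
    def step1(a):
        def step2(b):
            def step3(c):
                done([a, b, c])
            load("c", step3)
        load("b", step2)
    load("a", step1)

# Generator style: the function reads top to bottom; each yield names
# a resource, and a driver resumes the generator with the result.
def fetch_with_generator():
    a = yield "a"
    b = yield "b"
    c = yield "c"
    return [a, b, c]

def run(gen):
    # Minimal driver: feed each yielded request through load() and
    # resume the generator with the response until it finishes.
    try:
        request = next(gen)
        while True:
            result = None
            def capture(r):
                nonlocal result
                result = r
            load(request, capture)
            request = gen.send(result)
    except StopIteration as stop:
        return stop.value
```

Both produce the same result, but only the generator version keeps the control flow linear -- which is roughly what continuation support in the Flash VM would have bought us.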

Since Flex/Flash are likely to stay key contenders for any RIA implementation project, I hope these lessons are helpful when embarking on that sort of work!

Saturday, November 17, 2007

Beware the Hail Mary Meta-Hyper-Framework

A while back I spoke with the CEO of a midsize enterprise software company. He invited me to work with a small team building a framework for customers to create applications that would run with his company's app.

I spent some time with the team, looked at the framework, and declined. As diplomatically as possible, I explained what I saw as the key problems with the initiative:

  • The term "architecture astronauts" barely does these guys justice. I think they actually used the term 6GL in conversation. They were architecture hyperspace/time-travelers.
  • At the same time, I'd call them "infrastructure spelunkers" as well. The team trusted nothing, and was reinventing everything. They were rebuilding so far down the stack I started wondering where they built their own hardware. Then I realized that was silly -- after all, they would mine their own copper before they'd use someone else's wires.
  • The initiative had been running for several years with no concrete delivery. There was no deadline for a future delivery either. Given that the predominant tools and platforms were evolving underneath this team faster than the team was moving, I suggested I did not expect they would ever release.
  • No real world app -- either inside the company or among its customers -- was using it yet. That is, there was no dogfooding or validation of any kind that the framework was learnable or made the relevant problems easier to solve than existing approaches.

The CEO appreciated my frankness, and said that from his point of view, it was a big long-shot bet:

  • For him, the 15+ man-years that it cost to work on this framework were a non-issue. With plenty of revenue and people, if this initiative went nowhere, it would not be a significant loss to the company.
  • The framework was not critical-path for anything else, so he didn't anticipate any major downside.
  • Potential upside was, he felt, huge. If this framework somehow succeeded, he could grow his company much faster than he could any other way he could envision.

Well, I thought, I wouldn't make the same choice -- I would probably look to gain the same leverage using a radically simpler framework or mechanism -- but then I wasn't looking to take on an executive role, so it wasn't really my call.

DHH, the man behind the Rails web app framework, wrote about frameworks when he explained "Why there's no Rails Inc":

"... Rails is not my job. I don't want it to be my job. The best frameworks are in my opinion extracted, not envisioned. And the best way to extract is first to actually do. ... That's really hard if your full-time job is just the extraction part since you now have to come up with contrived examples or merely live off the short bursts of consulting. For some that might work, but I find that all my best ideas and APIs come from working on a real project for a sustained period of time."

Nicely put.

Wednesday, November 14, 2007

"Wrong Date" Emails == SPAM

On a regular basis, I get emails from vaguely reputable entities (airlines?!) sent with the wrong date. For example, on Wednesday I'll get a message "sent" on Monday. Because these messages show up out of order in a mail reader, and are marked "unread" among mostly "read" messages, they catch my attention. And sometimes I read them.

It's time to call this out for what it is: an annoying spamming technique designed to try and get my attention. And, like sending all your email flagged "High Importance," it gets old fast and makes you look bad.

There are plenty of time servers on the net. Plenty of bulk mailing mechanisms that don't get several days behind in getting the mail out. And plenty of docs on how to leverage UTC and offsets in your environment of choice.

So I don't believe anymore that a "misconfigured server" or "backed up bulk remailer" is to blame.

If I were an ISP, I'd think about creating spam filter rules that tweak an entity's rating every time bulk mail comes through with a date that's way off. Do this enough, and your messages go to my users' filtered spam folder.
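The rule itself is simple to express. Here's a minimal Python sketch of the date-skew check (the 48-hour window and per-day penalty are arbitrary placeholders; a real filter would feed the score into its overall sender reputation):

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def date_skew_penalty(date_header, now=None, max_skew_hours=48):
    """Return a spam-score penalty when the Date: header is far from now."""
    now = now or datetime.now(timezone.utc)
    sent = parsedate_to_datetime(date_header)
    skew_hours = abs((now - sent).total_seconds()) / 3600
    # No penalty inside the allowed window; beyond it, penalize
    # one point per full day of skew.
    if skew_hours <= max_skew_hours:
        return 0.0
    return round((skew_hours - max_skew_hours) / 24, 2)
```

A message "sent" Friday that arrives Wednesday picks up a penalty; one that's an hour off does not. Accumulate enough penalties per sender, and their bulk mail lands in the filtered folder.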

Android? Give Me a Call When (If?) Phones Go On Sale

In this era of cluetrains, agile, getting real, perpetual beta, and all the rest, I would have thought that FUDly tactics like announcing big game-changing vaporware just to assert your position couldn't be taken seriously.

Not that Google's Android phone software is vapor (you can download it today) -- rather, the proposition of a viable phone ecosystem (devices, carriers, software -- in short, something you can actually use) is vapor.

It pains me to write this. After all, I proposed a platform-oriented MVNO (which Google may become with its possible spectrum bid), and I also stated that, to change the game, Google should support J2SE (and more), which Android does. And Google's challenging the carriers, who are the big problem in wireless software.

But I gotta call it like I see it, and announcing a big platform initiative when the first real handsets are at least a year away is a movie I've seen before.

There's an ALP announcement like this every six months; Palm announced the mythical Cobalt (with SDK!) over and over; Motorola made a big deal about its Linux phones. To be fair, Moto does ship a number of phones running Linux under the hood. But they're not really open, and neither developers nor consumers have taken much note. In early 2006 we were told MIDP 3 devices would be shipping by, er, now. Oh, and Sun's magic wand is going to bring us JavaFX mobile devices.

It's old-school (bad!) tech marketing at its worst. At best, these announcements need a bogo-coefficient to convert, say, "Just 12 months away!" to "Just 36 months away!" ... but really it's best not to believe in anything you can't buy at your local Sprint store.

Producing the SDK for public consumption now is not necessary to provide development lead time (how long did it take developers to build apps for the iPhone?). Plus, freezing APIs now just makes it that much harder to alter them when unforeseen issues crop up closer to mass production.

Although Apple is famous for taking a haughty approach toward their developers and customers, there is something to be said for putting the hardware and carrier together first, releasing the phone second, and then when you see what people really want to do with it, release an official SDK to help them.

I wish Google the best of luck with 700MHz, the carriers, the handset makers, the FCC, and the public. I'm just not sure this is off to an auspicious start.

Sunday, November 11, 2007

Agile Clients Present Challenges for Fixed-Price Consulting Firms

As more companies adopt agile development techniques, their increased desire for flexibility is causing pain for fixed-time, fixed-price consulting partners who have not yet figured out how to make the new dynamic work.

Traditionally, there are two models for technology consulting: "Time and Materials" (T&M), where the consultant bills per unit time and is only minimally responsible for the overall timeline, and "Fixed-time, Fixed-price," where the consultant is responsible for proposing a combination of scope, budget, and timeline and then delivering it.

The problem is that the initial time and price estimation models are based on a particular deliverable target. And while it's traditionally been a consulting challenge to keep change under control during an engagement, now clients are being completely upfront and saying, "We are developing alongside you and we intend to make substantial changes to the target every month, as the project develops."

With agile methods, this is not a shocker. And the client is justified in believing that if their organization can deliver new features and make spec adjustments every few weeks, surely high-priced consultants should be able to keep up.

The good consultants can keep up. The problem is figuring out a way for them to get paid for it; otherwise they won't be able to do these kinds of projects for long. Without a model that specifically addresses the possibility that expensive goal changes may arise, the consultants are forced into (a) being less agile than the client, (b) eating the cost of the changes, or (c) asking for more money or time. None of these options are good ones for the consulting relationship.

A first step toward a solution is to change agile co-development from an externality to a core part of the service delivery model. An iteration length should be defined as part of the project work statement. So far so good. The tricky part is allowing the consultants to make time or price changes at the checkpoints between iterations. After all, wasn't this whole project sold as fixed-time, fixed-cost? Sounds like having one's cake and eating it too. The reality is that both parties need to allow these changes (the client needs to change the specs; the consultant needs to change the cost) in order for the process to work.

The problem on the client's end is that development methods like agile, even if they tie in to product development processes, are often more "micro" in the budgeting scheme than the department budget or consulting budget. That is, the client's finances are not as agile as their development.

On the consultant's end, if the engagement is renegotiated -- even at a micro level -- between iterations, the overhead of managing and selling the project goes through the roof. And of course it's always possible that agreement can't easily be reached to match a spec change with a cost change. In that case, the whole engagement can fall through, adding massive risk to delivery and reputation all the way around.

While not a complete solution, a number of elements would be helpful:

  • Guidelines that throttle the rate of change between iterations via specified metrics, so that both sides have some ability to predict cost
  • Escalation chains for approval based on the amount of money or time involved, so that small changes don't need to go to a VP of Finance or the legal department every time
  • Emphasizing these agile mechanisms in the sales process, so that an agile consulting firm (which might potentially cost a little more) remains competitive with others (who might offer a lower price but not be able to speak to how they will deal with these issues).
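To make the escalation-chain idea concrete, here's a hypothetical Python sketch (the dollar thresholds and role names are invented for illustration) of routing a between-iteration change to the right approval level by cost impact:

```python
def approver_for_change(cost_delta_usd):
    """Pick the approval level for a mid-project scope/cost change."""
    if cost_delta_usd < 5_000:
        return "project managers"      # both sides' PMs sign off
    if cost_delta_usd < 25_000:
        return "engagement sponsors"   # client sponsor + consulting lead
    return "finance/legal"             # full contract amendment
```

Writing thresholds like these into the work statement up front is what keeps a $2,000 tweak from triggering the same machinery as a $200,000 change of direction.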

At this time, technology consulting opportunities abound and make good sense for clients and service providers alike. But to succeed in the long term, fixed-cost consultants need to evolve their models a few steps to keep up.

Sunday, November 04, 2007

Red Pill Shoe #2

So XSpace is actually, well, everyone in the social networking space except for Facebook.

And the Red Pill app itself is still a hypothetical, but it's getting closer and it's clear where that data would be helpful now.

Scoble's suggestion that his friends would all have to move before he'd look outside the "walled garden" of Facebook, and Don Dodge's suggestion that Facebook users won't just leave are both missing the point.

Users don't need to make up their minds now, they only have to decide that at some point in the future they might want to spend more time somewhere else. That's where Red Pill and its data come in.

Meantime, Bebo's press release headline ("[...] REVEALS PLANS TO LAUNCH FACEBOOK-COMPATIBLE DEVELOPERS PLATFORM") appears to overreach the actual article content -- namely, that Bebo will "make it easy for Facebook developers to port their applications," not necessarily host Facebook apps directly.

But there seems to be no clear reason why an OpenSocial proxy system could not actually host F8 applications. Except for maybe some terms-of-service issue. All those "fb" abbreviations and tags may have to get ROT13'ed or the like.

Saturday, November 03, 2007

Want a Real Cheap, Real Usable Windows Laptop? Don't Stop at XP, Go All the Way With Windows Server

As we move from back-to-school shopping shenanigans into holiday shopping shenanigans, a number of retailers (Wal-Mart, BestBuy, CompUSA) are selling dirt-cheap underpowered Windows Vista laptops for $300-$400 from vendors including Toshiba, Acer, and Gateway.

A few articles have been published on "upgrading" these machines from Vista Home Basic, which is more or less unusable on them, to XP Home or Pro. But why stop there? If you want to get down to business on a dirt-cheap laptop, and Linux isn't your thing, take a look at Windows Server.

I've used Windows Server 2003 on extremely old boxes (Pentium 3, 500 MHz, 384MB RAM) for testing and development purposes, and these machines are snappy. Server has a lot less eye candy and overhead than XP (I often catch it idling with only 120 MB of RAM in use), which makes it ideal for these underpowered machines.

How do you do the "upgrade"? Well, the biggest challenge is the same as with XP: the laptop vendors provide Vista drivers only, and you'll need a Win2000-XP era driver for Server 2003. So before dropping the cash, do some research and make sure you can find XP drivers for the video card (on-board or not), Ethernet, and WiFi devices at the least. Sound card support can be hard to find too.

Once you've got the drivers, it's a straightforward install. Throw in another GB of memory (these laptops are sold with 512MB) for $30 or so, then go into the system control panel, boost the virtual memory setting and tell Windows to prioritize console apps rather than services (Server does the opposite by default, as one would expect). You probably want to uninstall the high-security mode of IE, if you're planning to use IE at all. Do this under Add/Remove Programs, and then Add/Remove Windows Components.

A small number of applications differentiate the server OSes from the desktop Windows OSes and will complain (Grisoft's free AVG anti-virus product, for example, as well as some Windows Live apps). For the most part, though, all of your applications will run fine.

Now ... where to find a Server 2003 license? (The OS requires activation like XP.) At $999 for "Standard Edition" and $399 for the cheaper "Web Edition," this OS was never meant to be used with a $300 laptop. Although it would be nice if Microsoft offered a single-user non-server price, that's about as likely as Apple selling you MacOS for your Acer laptop. Good bets are companies buying new licenses of Server 2003 R2 or Server 2008 to replace their existing Server 2003 installs -- you'd be surprised what unused or retired licenses could be kicking around your own company.

Another approach is to download the Server 2008 Beta and try that. 2008 is based on the same kernel as Vista, and so should use Vista drivers rather than XP drivers. However, I haven't used Server 2008, and I don't have any immediate experience that would inform performance, installation on random hardware, etc.

While this is not a recipe for building a cheap laptop for your less-than-computer-savvy relatives, if you're already a hacker looking for an extra portable machine to do Windows work on, it's almost certainly the most bang-for-the-buck possible.