Sunday, November 30, 2008

Most Laser Printers are Razors, not Cars

Once upon a time, buying a small or midrange laser printer was like buying a car. Big upfront expenditure, lots of sweating the details, and a moderate amount of thought about gas mileage and scheduled maintenance, er, toner and fusers and all that.

Now, however, it's more like buying a razor. The core features are mostly reasonable, each "size" printer has a speed/duty cycle that determines its suitability for an installation, and the cost is so small that it's all about the consumables (blades).

So why won't vendors -- or manufacturers -- print the cost-per-page of consumables right next to the dpi, ppm, idle power consumption, and time-to-first-page?

It's easy to find 10x differences between otherwise similar printers in the cost-per-page based on the manufacturer's price for toner cartridges and their intended yield.
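To make that concrete, here's a back-of-the-envelope sketch in Python; the prices and yields are invented for illustration, not quotes from any vendor's spec sheet.

    # Hypothetical numbers, purely for illustration -- not actual printer pricing.
    def cost_per_page(cartridge_price, rated_yield_pages):
        """Consumables cost per page: cartridge price divided by rated yield."""
        return cartridge_price / rated_yield_pages

    budget_printer = cost_per_page(69.99, 1000)       # ~7.0 cents per page
    workgroup_printer = cost_per_page(89.99, 12000)   # ~0.7 cents per page

    print(f"budget:    {budget_printer * 100:.1f} cents/page")
    print(f"workgroup: {workgroup_printer * 100:.1f} cents/page")

At 5,000 pages a year, that invented gap works out to a few hundred dollars -- easily more than the price difference between the printers themselves.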

Big companies, of course, have IT purchasing folks who perform these calculations, factor in the discount they get because the CIO plays golf with the right people, and order the gear. In the case of printers, large companies are typically buying high-volume printers that are among the cheapest per page anyway.

But startups, professional practices (think doctors, accountants), small to midsize businesses -- they rarely calculate the TCO for each device. It would be helpful to have the consumables price per page listed right on the sticker, like MPG.

Saturday, November 29, 2008

For Individuals, Top Laptop Values May Be At Low-End Prices

About a year ago, when I started my latest stint doing 100% contracting, I realized I would need a laptop for meetings, client presentations, etc. Not for coding -- I've written about my views on that, and I'll never trade away my nuclear aircraft carrier of a dev box for an inflatable dinghy just so I can hang with hipsters at the coffee shop.

Since I wouldn't be using the laptop much, and historically the many company-provided laptops I've used have turned out to be poor performers, malfunction-prone, and costly to deal with, I resolved to get the cheapest new laptop I could find. (Laptops have a strange depreciation curve, with the result that a cheap new laptop is often a much better device than a used one at the same price.)

In addition to the usual holiday sales (you can guess what prompted this article now), the whole Vista-Basic-capable-not-really thing was going on, with the result that many machines were being cleared out for good, considered illegitimate for Vista and not saleable.

I snagged a Gateway floor model at BestBuy for under $300, put another gig of RAM in ($30?), snagged XP drivers, and installed Windows Server 2003 R2, since I had found that Server 2003 was very light on the hardware while offering more than all the benefits of XP.

At this price point, I figured, the laptop borders on disposable, and I could keep the TCO from getting high come what may.

Well, a year or so on I have some results to report.

It has performed far beyond my expectations (and as a developer my expectations tend to be unreasonably high).

The only negative is the comically poor build quality -- this is a machine that one must literally 'Handle With Care,' as it's built about as well as one of those tiny toy cars from a quarter vending machine. I think I could snap it in half if I tried, and a careless twist could rip the drive door off or crack the case right open. The keyboard rattles a bit on some keys.

I have a padded briefcase and the machine was never intended for "heavy duty," so that wasn't a big deal for me. And, in any case, it seems more of a reflection on Gateway than on the price point, since, e.g., Acer offers rock-bottom laptops with much higher build quality.

That issue aside, the machine has performed flawlessly. No problems with any part of it, despite being a display model. And its performance is adequate for some real programming.

The ergonomics are poor for programming (single monitor, low-ish resolution, etc.) -- but it snappily runs NetBeans/Ruby/Java; Eclipse/Java plus various mobile device emulators (e.g., Android) which I needed for a course I taught this summer; even Visual Studio 2008. I do run MySQL rather than SQLServer, in part to keep the load down.

Let's see ... what else has run well on here? ... the Symbian S60 emulators (along with dev tools) ... Sun's VirtualBox virtualization software, with an OS inside. All the usual productivity stuff (Office 2007, MindManager 8) ... Microsoft's Expression design/UI tool suite ... video encoding software and high-bitrate x264 streams from hi-def rips ... often many of these at the same time. Everything I've asked it to do, it does seamlessly.

My conclusions are that sturdier laptops may well be worth it, especially for corporate IT departments -- I'm thinking about products like the ThinkPad and Tecra lines, where the price doesn't just reflect the specs but also a sturdy enclosure, standard serviceable components, slow-evolution/multi-year-lifecycle per model etc.

But for an individual, unless you have a very specific hard-to-fill need (e.g., you want to do hardcore 3D gaming on your laptop or capture DV, a bad idea with a 4200 RPM HDD), the top end of the value equation for laptops appears to be at or near the bottom of the price range. When one considers that higher-end peripherals (e.g., a Blu-ray writer) can easily be plugged in via USB, and a faster hard drive will snap right into the standard slot, the value-price equation seems to get seriously out of whack for those $1200-$2500 machines.

That's not to say these higher-end machines are not great ... they just don't represent value at their price point. A Mercedes E-Class is a fine car, but the radio commercials that try to make it out to be some kind of value purchase are downright funny; the same applies to the high-end VAIOs, MacBook Pros, etc. Those machines are a style and brand statement for people who care about making such a statement.

This possibility is interesting because, in most products, the "optimum value" position is somewhat above the bottom end of the price range ... that is, the least expensive products are missing critical features, making them a poor value, while the high-end ones charge a "luxury premium." If laptops are different, that seems worth noting.

The usability of an ultra-cheap laptop also suggests a response to folks who commented on my earlier article, saying that companies are loath to buy both a desktop and a laptop, so if an employee needs any mobility at all, they get a laptop. It appears a good solution might be to provide a high-end desktop and an ultra-cheap laptop. At these prices, the employee's time costs more than the laptop, and my experience suggests little productivity is sacrificed in remote scenarios such as a training class or a client demo.

Tuesday, November 25, 2008

Do FlexBuilder and MXMLC Really Feature Incremental Compilation?

I use FlexBuilder in my work, and, overall, it's a decent tool. Eclipse gets a lot of points for being free; Flex SDK gets a lot of points for being free. FlexBuilder doesn't get points because it's basically the above two items glued together along with a GUI builder, and it costs real cash.

Wait, I'm off track already. The price isn't the issue for me. Rather, I want to know why FlexBuilder doesn't feature incremental compilation.

Hold up again, actually, I guess I want to know how Adobe defines incremental compilation since they insist that it is present and switched on by default in FlexBuilder.

Now, if I make any change (even spacing) to any code file -- or even a non-compiled file, like some html or JavaScript that happens to be hanging out in the html-template folder -- FlexBuilder rebuilds my entire project. And it's a big project, so even on a 3.6GHz box the job means a chance to catch up on RSS or grab more coffee.

Interesting take on incremental compilation. See, I thought the whole idea was to allow compilation of some, ah, compilation unit -- say a file, or a class -- into an intermediate format which would then be linked, stitched or, in the case of Java .class files, just ZIPped into a final form.

Besides allowing compilation in parallel, this design allows for an easy way to only recompile the units that have changed: just compare the date on the intermediate output file to the date on the source file. If the source file has changed later, then recompile it. It does not appear that this is how the tool is behaving.
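For what it's worth, here's the make-style rule I have in mind, sketched in Python; there is nothing Flex- or mxmlc-specific about it, just the timestamp comparison:

    import os

    def needs_rebuild(source_path, intermediate_path):
        """Rebuild a unit if its intermediate output is missing or older than its source."""
        if not os.path.exists(intermediate_path):
            return True
        return os.path.getmtime(source_path) > os.path.getmtime(intermediate_path)

    def stale_units(units):
        """Given (source, intermediate) pairs, return only the units needing recompilation;
        everything else is reused and merely linked into the final artifact."""
        return [src for src, out in units if needs_rebuild(src, out)]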

Perhaps this logic is already built into FlexBuilder -- mxmlc, really, since that's the compiler -- and the minutes of build time are spent on linking everything into a SWF. Since Adobe revs Flash player regularly, and many movies are compiled with new features to target only the new player, it should be possible to update the SWF format a bit in the next go-around, so that linking doesn't take egregiously long.

Apparently, at MAX this year, Adobe has started referring to the Flash "platform" -- meaning all of the related tools and tech involved around the runtime. Fair enough, it is a robust ecosystem. But "platform" kind of implies that the tools support writing real -- and big -- applications, not just a clone of Monkey Ball or another custom video player for MySpace pages.

Sunday, November 23, 2008

Software Discipline Tribalism

Unfortunately, people seem rarely able to stop at a reasoned preference -- e.g., "I like X, since X may offer better outcomes than Y" ... and too often end up, at least whenever group persuasion is involved, somewhere more dramatic, personal, extreme, and narrow -- e.g., "I'm the X kind of person who rebels against Y, since Y can offer worse outcomes."

This is as true in cultures of software development as anywhere else.

While many people have been guilty of this unproductive shift in attitudes, it seems much of the Agile development, dynamic languages, small-tools/small-process/small-companies crowd has long since fallen prey to it.

To be sure, there was much temptation to rebel.

At the start of the decade, proprietary Unix was strong; many processes came with expensive consultants, training, books, and tools ... and overwhelmed the projects they were meant to guide; many tools were expensive and proprietary. Web services were coming onto the scene and large players, with licenses and consulting hours to sell, created specs that were unwieldy for the small, agile, and less-deep-pocketed.

When the post-dot-com nuclear winter set in, small companies had no money to pay for any of that, and we got LAMP, Agile, TDD, REST, etc. Opposition to OO, which had been strong in many quarters, suddenly faded as OO was no longer identified (rightly or not) with certain problematic processes. Ironically, many new OO language fans had been ignoring the lightweight, free (speech and beer) processes that some OO advocates had been producing for years.

These have all proven to be useful tools and techniques and have created whole companies and enormous value ... but somewhere along the line, instead of being in favor of these tools and techniques because in some cases they produced better outcomes, either the leaders or the converts started thinking they were the rebels against anything enterprise, strongly typed, thoroughly analyzed, designed and well tooled.

This shift in attitudes does not help the industry ... nor even the clever consultants who lead the charge, deprecating last year's trend for a new one which they just happen to have written a book about.

We desperately need a broader perspective that integrates all of these pieces. There are things manually-written tests just won't do -- tools like Pex can help immensely, even if (or because...) they are from a big company.

Analysis and design are not bad words, while Agile can get dangerously close to simply surrendering to the pounding waves of change (and laughing at goals all the way to the bank) rather than building against the tide and trying to manage to a real outcome on a real budget.

Static languages can get hideously verbose for cases with functor-like behavior (Java and C# [pre-3.0], I'm looking at you). At the same time, go talk to some ActionScript developers -- who have had dynamic and functional for years -- and you'll see an amazing appreciation for the optional strict typing and interfaces in AS3.

REST is great, but in playing at dynamic, it turns out to be rather like C -- it's as dynamic as the strings you pipe into the compiler, and no more. Absent proper metadata, it cannot reflect and self-bind, so it sacrifices features that dynamic language developers love in their day-to-day coding.

Ironically, most of the critical elements of this "movement" -- along with open source -- are being subsumed into the big enterprise software companies at a prodigious pace. Sun owns MySQL and halfway owns JRuby; Java servers may serve more Rails apps than Mongrel/Ebb/Thin/etc. soon, Microsoft is all over TDD, IronRuby, IronPython...

I suppose the sort of tribalizing we see here is at least partly inevitable in any field. But it would serve the entire industry if that "part" could be made as small as reasonably possible. As a young industry with a poor track record and few rules, we ought to be more interested in better software outcomes than in being rebellious.

Thursday, November 20, 2008

Microsoft Pex Moves the Needle Bigtime on Software Testing and Correctness

Over two years ago, I wrote about how neither the assurances of static compiler technology nor the ardent enthusiasm and discipline of TDD (and its offshoots) represent major headway against the difficulty and complexity of large software projects.

At the time, this issue came up in the context of static languages versus dynamic languages. There still exists a political issue, although today it is more transparently about different organizations and their view of the computing business. I will revisit the successor debate in my next post.

For now, however, I want to talk about a tool. In my post of two years ago, I suggested that significantly better analysis tools would be needed in order to make real progress, regardless of your opinion about languages.

So I've been excited to see the latest tools from Microsoft Research fast-tracking their way into product releases -- tools which can really move the ball downfield as far as software quality, testing, productivity, and economy.

The most significant of these is called Pex, short for Program Explorer. Pex is a tool that analyzes code and automatically creates test suites with high code coverage. By high coverage, I mean that it attempts to cover every branch in the code and -- since throwing exceptions or failing assertions or contracts counts as a branch -- it will automatically attempt to determine all of the conditions that can trigger these occurrences.

Let me say that again: Pex will attempt to construct a set of (real, code, editable) unit tests that cover every intentional logic flow in your methods, as well as any exceptions, assertion failures, or contract failures, even ones you did not code in yourself (for example, any runtime-style error like Null Pointer or Division by Zero) or which seem like "impossible-to-break trivial sanity checks" (e.g. x = 1; AssertNotEqual(x, 0))

Moreover, Pex does not randomly generate input values, cover all input ranges, or search just for generic edge cases (e.g., MAX_INT). Instead, it takes a sophisticated approach, dynamic symbolic execution, coupled with an SMT (satisfiability modulo theories) constraint solver, to explore the space of code paths as it runs the code and derives new, significant inputs.
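Pex itself works on .NET IL and I'm not reproducing its output here, but to make the idea concrete, here is a deliberately tiny function (in Python, for brevity) and the inputs a branch-exploring, constraint-solving tool would have to discover to cover it -- a hand-worked sketch, not actual tool output:

    def classify(x, y):
        # Branches an exploration tool would need to cover, including the
        # "implicit" exception branch on the last line.
        if x > 10:
            if y == x * 2 - 5:            # solver must derive y satisfying y == 2x - 5
                raise ValueError("reachable, despite looking obscure")
            return "big x"
        if y != 0 and x % y == 0:         # guard dodges one ZeroDivisionError...
            return "divisible"
        return 100 // y                   # ...but y == 0 still reaches here and raises

    # Inputs a constraint solver could derive to cover every path, e.g.:
    #   classify(11, 17) -> raises ValueError     (11 > 10 and 17 == 2*11 - 5)
    #   classify(11, 0)  -> "big x"
    #   classify(4, 2)   -> "divisible"
    #   classify(4, 3)   -> 33                    (100 // 3)
    #   classify(4, 0)   -> raises ZeroDivisionError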

In addition, so far as I can understand from the materials I've seen, Pex's runtime-based (via IL) analysis means that it should work equally well on dynamic languages as on static ones.

To get an idea of how this might work, have a look at this article showing the use of Pex to analyze a method (in the .Net base class library) which takes an arbitrary stream as input.

For those of you who are inherently skeptical of anything Microsoft -- or anything that sounds like a "big tool" or "big process" from a "big company" -- I'll have more for you in my next article. But for now keep in mind that if Microsoft can show that the approach can work and is productive, user friendly, and fun (it is), then certainly we will see similar open source tools. After all, it appears the same exact approach could work for any environment with a bytecode-based runtime.

Last, I do recognize that even if this tool works beyond our wildest expectations, it still has significant limitations, including:

  1. reaching its full potential requires clarity of business requirements in the code, which in turn requires human decision-making and input, and
  2. for reasons of implementation and sheer complexity this tool operates at the module level, so you can't point it at one end of your giant SOA infrastructure, go home for the weekend, and expect it to produce a report of all the failure branches all over your company.

That said, here are a couple more great links:

Wednesday, November 12, 2008

BizSpark Shows Wider Microsoft View Around SaaS Innovation

Despite a lot of reporting about Microsoft's new BizSpark program, one interesting bit wasn't featured in the coverage:

Microsoft's existing Empower program -- nearly-free licenses to help developers get going on the platform -- has been around for a while and has always required that a company plan to distribute a "packaged and resalable" application targeting a Microsoft platform.

It could be an app on Vista or Server, an Office add-in, a Windows Mobile app, or one of a few other options. But it had to be packaged, at least in the sense that it was digitally bundled into an installable set of files even if it never got put in a physical cardboard box.

This requirement made some sense as far as promoting the client OS ecosystem but it disqualified any online offering. An online service had to work around the restriction: for example, by offering a small Windows Mobile app that has some interaction with the service.

But the language makes a statement -- Empower was largely about helping ISVs, new or old, develop apps on the platform, thus making the client platform stronger. Never mind that an online service that targets browsers and the iPhone might lead to Server license sales later.

BizSpark, on the other hand, takes another approach. There is much talk of online solutions -- the program is meant to dovetail with hosting providers (or Microsoft's Azure platform) to offer the server-side muscle a solution will need after its incubation period. The program is aimed exclusively at new companies -- if a firm has been in business 3 years or more, it does not qualify.

Both programs exist today and will presumably continue. So I'm not suggesting there is a big move from one view of the world to another. But there does seem to be a conscious broadening of horizons in terms of seeing where innovation is taking place and how Microsoft can be part of it.


Tuesday, November 11, 2008

On the Knocking at the Gate, VCs, and a Math Problem

Bang!

This economy may be the wake-up call to VCs (and CEOs alike) that their future isn't what they want it to be. But the murder happened years ago, and the "I don't want yes men, but strangely I don't listen to much else" hivemind has just woken up.

The IPO window isn't shutting or recently shut -- it never re-opened after the dot-com meltdown. A handful of IPOs (including Google's) doesn't make a "window." Whether you blame SarbOx, or a trend of investing in companies with a wink-wink style of "sustainable competitive advantage" that could never produce valuations high enough to warrant a public offering, there were going to be few IPOs.

The startup and VC world started getting this idea a couple of years ago, when they realized that at the smaller exit values (for the exits that were to be had), in order to get a high multiple return, the initial investments would have to be so small that the venture fund couldn't afford to service the quantity of investments. That is, the investment would have to be too small to be worth the firm's time. Uh-oh. A few innovative programs came out of that realization. But for the most part everyone acted like this was just a bad dream.

Moreover, the vaunted "get acquired" exit that appeared to be the next-best option has been rather overrated. The real acquisition numbers, for the most part, are not what investors (or founders) would like. Not to mention that an acquisition can well mean the end of the road for the business (Google is the most famous for this), which is the opposite of what founders should want, and so produces some strange incentives. Yes, YouTube and Skype ... but the curve falls off quickly.

The bad news is that this pile of trouble has been sitting in the corner stinking up the room for years.

The good news is that it's not a sudden crisis, and may well be correctable by VCs who are willing to, um, take some risk (this means getting out of their comfort zone in terms of rituals and assumptions, or expanding said zone) which is, ironically, what they are supposed to be doing for their investors.

But what about all of that advertising? Isn't there money in all of those targeted ads? Or, at least, wasn't there supposed to be until the advertising market started downhill?

In the short term, maybe ... but in the long term, the model doesn't work at the macro level and here is some math that suggests why.

Showing ads is kind of like printing money. You can show as many as you like, up to some function of your pageviews. In order for the ad economy to grow, the attention economy has to grow. That is, the aggregate amount of attention-hours spent against ad-supported pages needs to grow. OK, there's definitely evidence for that (GMail, etc.).

But what about the ratio of the ad growth rates to attention growth? Attention growth has real-world limits (number of people, amount of time, ad-blockers, desensitization to ads) while theoretical ad supply does not. The limit on attention growth does not limit real-world ad growth. For example, if I view 50 GMail pages where I used to view 15, it's entirely possible that the same amount of attention is now divided across more ads -- or that the total spent attention is even smaller.
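A toy version of that GMail example, with invented numbers, shows how attention per ad impression shrinks even while pageviews (and therefore ad inventory) grow:

    # Invented numbers: the same half hour of attention spread over more
    # ad-bearing pageviews. Ad inventory scales with pageviews; attention doesn't.
    ATTENTION_MINUTES = 30
    ADS_PER_PAGE = 3

    for pageviews in (15, 50):
        impressions = pageviews * ADS_PER_PAGE
        per_ad = ATTENTION_MINUTES / impressions
        print(f"{pageviews} pageviews -> {impressions} impressions, "
              f"{per_ad:.2f} minutes of attention per ad")

    # 15 pageviews -> 45 impressions, 0.67 minutes of attention per ad
    # 50 pageviews -> 150 impressions, 0.20 minutes of attention per ad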

In the long run, the ad-value growth is smaller than the application-value growth. So the deficit of uncaptured value for businesses relying on ad revenue grows larger over time.

We can check this analysis by looking at the numbers from another point of view. Start with the raw resource itself -- a piece of a data center that includes a unit of compute power, storage, bandwidth, and hosting.

Hosting and displaying ads is relatively less expensive (in units of the resource) than hosting application functionality. So the very resource which makes the ad-supported model plausible supports growth on the ad side that is at least as strong as growth in hosted functionality, which is the attention-harnessing product. That is, resource availability supports growing the supply of units for spending attention as fast as or faster than the supply of units for capturing attention. Again, over time, in the aggregate, more ads are powering fewer features. The value of each ad goes down.

This has nothing to do with the overall economy, consumer spending, etc. It's simply a side-effect of the coupling between the monetization mechanism and the product.

If, at the same time, the "real-world" spending that is driven by even successful ads is flat or in decline, you have an even bigger problem.

Thursday, November 06, 2008

Windows 7 Puts on a Good Show

I took Windows 7 for a quick test drive yesterday. My main goal was to see whether the performance would be so brutally bad as to make me relive my Vista experience.

For those who haven't read my Vista posts, the short version is: early Microsoft developer releases had unbearably bad performance; Microsoft made excuses ("debug build" etc.); turns out the RTM was nearly as bad. I made an honest attempt to run Vista but, as a developer, I just couldn't bear the excruciating waiting, knowing that I could be screaming along in XP. I run XP (or Server) to this day for my development.

Just the Vista-like look of the early Windows 7 bits made me anxious -- I wasn't expecting a lot. I set it up in a VMWare VM with 768 MB of RAM and no VMWare tools (= minimal video perf) in order to torture the OS. Naturally, things would be better running on the metal in a new machine designed for Win 7.

Install was very fast and seamless. I could see a difference right away in perf: even the shell in Vista runs slow, and this one was snappy. I saw the new-and-removed UAC, and I liked.

To add some more load, I installed Visual Studio 2008, which is a fairly heavy app. In addition, it was Visual Studio that had made me give up on Vista in 2007, so I thought it was fitting to try it again.

Inside Visual Studio, I opened a WPF windows project. Mucked around with the GUI editors, built, debugged ... and it cruised along nicely in the VM. Next, I set up an ASP.Net web project, and got that going in debug mode with the integrated server. Finally I started to feel some minor slowdown -- but it appeared I was running out of RAM with my 768MB VM. This was not a huge shock, since my install of Win 7 was consuming about 450MB RAM at idle, with no user apps running.

The 450MB RAM usage is a little disturbing, but, hey, even fast RAM is cheap. And my Server 2008 setup was idling at about 350MB with few services enabled, so I suppose this usage is to be expected.

Overall, I was very happy with my Win 7 preview. I could see myself actually using this as my OS and not cursing all the time, which was a pleasant surprise.

The big unanswered, and unanswerable, question is: how similar will this experience be to the final RTM of Windows 7?

On one hand, Microsoft might have released this preview "stripped down" -- either to make it run better on today's hardware, or just because the additional components with which it will ship are not yet ready for public consumption. In that case, future builds might be slower.

On the other hand, still smarting from Vista, Microsoft might adjust the previews in the opposite direction -- a sort of "under-promise, over-deliver" thing -- lest anyone see a later build and say anything except "wow this is fast."

Tuesday, November 04, 2008

On BASIC and JavaScript and 25 Years of Coding

I realized I've been putting programs together on a regular basis for 25 years now. I distinctly remember some of the earliest programs I worked on, around October 1983, and the fourth-grade teacher who let me get away with a lot more access to my school's computers than was the norm. When I somehow convinced my parents later that year to get me a machine ... things got even more out of control.

I worked with a bunch of the classic 80s "home computers": Tandy Color Computers (1, 2, and 3) and Apple II (and II+, IIe, IIc, IIgs, ...), and some not-so-classics like a CP/M machine that ran via an add-in board (with its own Z80) inside an Apple.

The languages and tools were primitive, performance was always a concern, most serious programs required at least some assembly-language component for speed and hardware access and, even if they didn't, compatibility across computer types was nearly impossible.

A lot like programming in JavaScript nowadays (I guess replace assembly language and hardware with native browser plug-in [e.g., gears] and OS APIs).

I could flog the 80s-BASIC / JavaScript analogy to death, but (1) if you read this blog, you likely can fill in the blanks yourself and (2) my goal isn't to bash JavaScript, which would be a side-effect of the drawn-out analogy.

What I find interesting is the question of why these things seem similar, and I have a hypothesis.

I have noticed that many members of my peer group, who started programming at a young age on these early microcomputers, have an affinity for tools, structured languages, and to a lesser extent models and processes. I wonder whether this affinity is not some kind of reaction against the difficulties of programming these early microcomputers in what can only be called a forced-agile approach, where debugging and testing took an inordinate proportion of the time, "releases" were frequent, and where the only evidence of working software was ... working software.

I will be the first to admit that my experiences in the years before the C/DOS/Windows/C++/Mac era make me appreciative of -- and (depending upon your perspective) perhaps overly tolerant of -- process, tools, models, definitions, infrastructure, etc., as a kind of reaction.

Let's stretch the hypothesis a little further: Gen-Y, who missed this era in computing (I would say it really ended between 1987 and 1989) will have typically had their first experience with coding in a structured, well documented, "standardized" ecosystem -- whether that was C on DOS, or Pascal on a Mac, or for those still younger perhaps C++ on Windows or even Java on Linux.

Depending on their age today, and the age at which they first took up coding, this generation had compilation, linking, structured programming, OS APIs, perhaps even OO and/or some process from the beginning. For them, it is possible to imagine, the overhead of the structure and process was something to rebel against, or at least a headache worth questioning.

Hence their enthusiasm for interpreted languages and agile approaches, and the general disdain for processes with formal artifacts, or extensive before-the-fact specs.

That's the hypothesis.

A simple study could validate at least the correlation -- by gathering data on age, age when someone started coding, the language/machine/environment they were using, perceived benefits or disadvantages in those years, and their interests today. And even the bare correlation data would be fascinating.

Considering that these approaches are often "best fits" for rather different kinds of development projects, knowing what sort of prejudices ("everything looks like a nail") individuals bring to the debate might add a lot of context to decisions about how to make software projects more successful, and how to compose the right teams and resources to execute them.