Sunday, September 30, 2007

In Software Estimation, Fewer Inputs Can Trump More Inputs. Here's Why:

This post is the fourth (the others are here, here, and here) summarizing and commenting on elements from Steve McConnell's excellent Software Estimation: Demystifying the Black Art.

There are estimation models that take a large number of inputs, and others that rely on only a few inputs. Some of the input data are more objective (e.g., the number of screens in a spec) while other data are more subjective (e.g., a human-assigned "complexity" rating for each of several reports).

One thing that Steve explains is that, when many subjective inputs are present, models with fewer inputs often generate better estimates than models with "lots of knobs to turn."

The reason for this is that human estimators are burdened with cognitive biases that are remarkably hard to break free of. The more knobs the estimator has to turn, the more opportunities there are for cognitive biases to influence the result, and the worse the overall performance.

It should be reiterated that this same problem does not apply to systems that take many objective measures (such as statistics or budget data from historical projects) as inputs.
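Steve's knobs-and-bias point lends itself to a toy Monte Carlo experiment. The sketch below is my own illustration, not a model from the book: assume each subjective input carries a small optimistic bias plus noise, and that the model multiplies the inputs together, so the bias compounds with every extra knob.

```python
import random

random.seed(42)

TRUE_EFFORT = 1000  # person-hours; the value the model is trying to estimate

def model_estimate(n_subjective_inputs):
    # Each human-assigned factor is slightly optimistic (mean 0.93)
    # with some noise; multiplying the factors compounds the bias.
    est = TRUE_EFFORT
    for _ in range(n_subjective_inputs):
        est *= random.gauss(0.93, 0.10)
    return est

def mean_error_pct(n_inputs, trials=10_000):
    errs = [abs(model_estimate(n_inputs) - TRUE_EFFORT) / TRUE_EFFORT
            for _ in range(trials)]
    return 100 * sum(errs) / len(errs)

for knobs in (1, 3, 10):
    print(f"{knobs:2d} subjective knobs -> mean estimation error {mean_error_pct(knobs):.0f}%")
```

With these (arbitrary) bias parameters, mean error climbs steadily as knobs are added, even though each individual input looks only mildly biased.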

Saturday, September 29, 2007

Lifecasting will be Prevalent, and has a Down-To-Earth Future

Lifecasting is one of those things that is interesting on a theoretical, academic, political, and artistic level. Yet, right now, it's pretty boring in "real life."

But there is a big bourgeois, commercial opportunity coming for lifecasting, and with that will come low prices, ubiquity, and all sorts of new content also relevant to the avant-garde. In the same way that cellphone cameras now catch politicians off guard and police beating protesters, the surveillance (sousveillance?) society will take a quantum leap forward.

Walgreen's sells a low-end USB web cam for $14.99 today, and for well under $100 one can get a higher-end unit. I predict that in 2-5 years, I'll be able to drop $40 in Walgreen's and get a wearable cam/mic with an 8-ounce belt-clip battery that will plug in to my cell phone ... and I'm in the lifecasting game.

This is not live-without-a-net futurism, either. I'm making a modest argument by extrapolation on the hardware side. The original lifecasting rigs were a hassle to put together with the technology of the day; the biggest challenges involved upstream mobile bandwidth, battery power, and data compression.

With EVDO Rev B, HSUPA, and WiMAX, bandwidth looks to be less of a problem than giving customers a reason to buy it. As MPEG-2 fades away in favor of MPEG-4 flavors like H.264 and 3GP, cheap hardware compression is already becoming less of an issue. In fact, many of today's low-end smartphones are most of the way there in terms of a basic lifecasting rig. Battery power will remain an issue for anyone wanting to go live 24/7. But for a few hours at a time, several ounces of lithium-ion will keep the camera, compressor, and radio humming.
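A quick sanity check on that battery claim. Every figure below is my own rough assumption, not a vendor spec, but the conclusion survives generous error bars:

```python
# Back-of-envelope: can an 8-ounce lithium-ion pack run a lifecasting rig?
OUNCES_TO_KG = 0.02835

battery_oz = 8         # the belt-clip pack from the prediction above
wh_per_kg = 150        # assumed energy density for consumer lithium-ion
rig_power_w = 2.5      # assumed draw: camera + hardware encoder + radio

battery_wh = battery_oz * OUNCES_TO_KG * wh_per_kg
hours = battery_wh / rig_power_w
print(f"{battery_wh:.0f} Wh pack -> roughly {hours:.0f} hours of continuous casting")
```

Even if the rig drew twice that power, "a few hours at a time" is comfortably within reach.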

So what are the bourgeois, commercial applications?

  • Conferences: organizers won't love broadcasts of the content, but, at least in tech, they are desperate to find some way to keep the shows compelling. I can see a lot of organizations sending one delegate to lifecast while others back home watch and interact, including talking to vendors, visiting hospitality suites, etc.
  • Meetings: an interactive lifecast of a remote meeting would be a more productive way to participate than just a conference call or even a traditional web/videoconference.
  • Social events: suppose your school reunion is far away and not nearly exciting enough to make the trek. But a friend who lives in the area goes, and you can ride along via lifecast? That could be a riot. And if it isn't, just close the browser.
  • Education: how cool would it be to sit in on some virtual flight lessons, tuned in to a lifecast from a CFI giving a real student a real lesson?
  • Remote Personal Assistant: Instead of worrying about wearable computers with smarts, you go about your business while a remote assistant tracks your lifecast. Need directions? a pickup line? a reservation? instant info on anything? Your assistant, sitting somewhere comfortable with easy access to all things cyber, can do the virtual legwork and send you what you need in real time.

The monetization platform is already here, as early adopters have jumped out ahead. Phone hardware (which will serve as the workhorse for the system, just as it does now for millions of phone-cam snapshots, videos, and mms messages) is moving at a rapid pace. That just leaves a few more parts to design and sell to complete the picture. With the amounts of VC money flowing today, I don't see that last part as a problem.

Thursday, September 27, 2007

LOC Counts to Compare Individual Productivity? No Way.

"Measuring programming progress by lines of code is like measuring aircraft building progress by weight." - Bill Gates

The other day, someone on my project team proposed ranking developers by their lines-of-code committed in Subversion. I really hope the suggestion was meant to be tongue-in-cheek. But lest anyone take LOC measures too seriously, it's worth pointing out that LOC is a bad, bad metric of productivity, and only gets worse when one tries to apply it across different application areas, developer roles, etc.

Here are a few reasons why LOC is not a good measure of development output:

  • LOC metrics, sooner or later, intentionally or unintentionally, encourage people to game the system with code lines, check-ins, etc. That alone is sufficient reason not to measure per-developer LOC, but it is actually the least bad of the problems.
  • LOC cannot say anything about the quality of the code. It cannot distinguish between an overly complex, bad solution to a simple problem and a long, complex, necessary solution to a hard problem. And yet it "rewards" poorly-thought-out, copy-paste code, which is not a desirable trait in a metric.
  • In software development, we desire nicely factored, elegant solutions -- in a perfect world, we want the least-LOC solution that still meets requirements such as readability. So a metric that rewards the opposite -- maximum LOC for each task -- is counterproductive. And since high LOC certainly doesn't necessarily mean bad code, there isn't even a negative correlation to use from the measurement.
  • In general, spending time figuring out the right way to do something, as opposed to hacking and hacking and hacking, lowers your LOC per unit time. And if you do succeed in finding a nice compact solution, then it lowers your gross LOC overall.
  • Even in the same application and programming environment, some tasks lend themselves to much higher LOC counts than others, because of the level of the APIs available. For example, in a Java application with some fancy graphics and a relational persistence store, Java2D UI code probably requires more statements than persistence code leveraging EJB 3 (based on Hibernate), simply because of the nature of the APIs. Persistence code using straight JDBC and SQL strings will require more lines still, although it's most likely the "wrong" choice for the application for all sorts of well-known reasons.
  • In the same application and environment, not every LOC is equal in terms of business value: there is core code, high-value edge-case code, low-value edge-case code. To imagine that every line of every feature is the same disregards the business reality of software.
  • You may have read that the average developer on Windows Vista at Microsoft averaged very few lines of code per day (from 50 down to about 5 depending on who you read). Is Microsoft full of lazy clueless coders? Is that why the schedule slipped? I doubt it. There were management issues, but Microsoft also worked extremely hard to get certain aspects of Vista to be secure. Do security and reliability come with added lines of code? Unlikely – in fact, industry data suggest the opposite: more code = more errors, more vulnerabilities.

But no one on our team would ever write bad code, right? So we don’t need to worry about those issues.

Not so fast… Developers, even writing “good” code, generally make the same number of errors on average per line (a constant for an individual developer). So if I write twice as many lines of code, I create twice as many bugs. Will I find them soon? Will I fix them properly? Will they be very expensive sometime down the line? Who pays for this? Is it worth it? Complex questions. Never an easy “yes.” Or, as Jeff Atwood puts it, “The Best Code is No Code At All.”
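That proportionality is trivial to state in code (the defect density here is an illustrative assumption, not an industry-certified figure):

```python
def expected_defects(loc, defects_per_kloc=15):
    # Assumed: a given developer injects defects at a roughly constant
    # rate per line, so expected defects scale linearly with LOC.
    return loc * defects_per_kloc / 1000

compact = expected_defects(2_000)    # a tightly factored solution
sprawling = expected_defects(4_000)  # a copy-paste solution to the same feature

print(compact, sprawling)  # 30.0 60.0 -- twice the lines, twice the bugs
```

Note who wins the per-developer LOC ranking: the author of the sprawling version, who also shipped twice the expected bugs.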

And beautiful, elegant, delightful code is expensive to write (because it requires thought and testing). The profitability of my firm depends on delivering software and controlling costs at the same time. We don’t fly first class and we don’t use $6000 development rigs, even if we might offer some benefit to the client or customer as a result. And we don’t write arbitrary volumes of arbitrarily sophisticated code if we can help it.

Ok, so why, then, is the LOC metric even around? If it’s such a bad idea, it would be gone by now!

Here’s why: while LOC is a poor measure of developer output, it’s easy to use, and it’s a (primitive but functional) measure of the overall complexity and cost of a system. When all the code in a large system is averaged together, one can establish a number of lines per feature, a number of bugs per line, a number of lines per developer-dollar, and a cost to maintain each line for a long, long time into the future.

These metrics can be valuable when applied to estimating similar efforts in the same organization under similar constraints. So they’re worth collecting in aggregate for that reason.

But for comparing individual productivity? I don’t think so.

Wednesday, September 26, 2007

Sun CTO: JavaFX Mobile Stack Aims to Clean up J2ME Disaster^H^H^H^H^H^H^H^H Issues

It's only mild hyperbole to call J2ME a disaster. If you've depended on ME to make a profit, it might not be hyperbole at all.

But good news may be somewhere in the pipeline. This morning I heard Sun CTO Robert Brewin speak at AJAXWorld. His talk largely concerned enterprise services, but when he mentioned that one argument in favor of Java is its ubiquity, and that Java runs on about two billion phones, I couldn't help but stand up and ask for the mic.

Since Robert was ready to acknowledge the existing hassles, my question was: does Sun plan to fix it -- e.g., by becoming more stringent about how devices are certified as Java-capable?

In his response, he punted on the J2ME aspect, suggesting developers could put pressure on device manufacturers to standardize their Java behavior. Since cellphone carriers are players in the equation, I'm awfully skeptical about that. But the cool part was that Mr. Brewin said that the place where Sun plans to really make an impact here is with JavaFX Mobile.

Since JavaFX Mobile, adapted from the Savaje Java-based OS acquired by Sun, is a full kernel-to-app stack, it should provide much better -- perhaps even 100% -- compatibility between devices.

Now let's keep our fingers crossed for full J2SE support on it.

Tuesday, September 25, 2007

Watch Everyone: Realtime Google Streetviews Coming Soon

Or so I claim. Here's why:

It would be a killer app, albeit a controversial one, for Google.

But live street-view data will be available sooner or later and the American trend is to let the private sector gather the data, then sell the data to the public sector (even when the data is highly suspect). So I'm sad to say that I doubt privacy and security advocates will suddenly triumph over the live street-view concept.

Users will contribute data from their homes or from their cars as they drive around (similar to the Personal Weather Station trend). Why would they do this? Some will do it just because they like being part of the initiative. Others might need a little more persuasion.

Hmm... Persuasion... Like what?

The obvious incentive is free mobile data service, via Google devices and spectrum, as long as the user's cam is communing with the grand data center. A lot of folks would gladly put a bug-eye camera and GPS tracker on their car roof to save upwards of $60 per month for tethered high-speed mobile net access.

Thursday, September 20, 2007

How Much Did You Just Pay for a 4-Cent Ounce of Coffee?

A little off-topic, but we all know that software construction depends more on coffee than on editors or compilers.

When I buy a cup of coffee, it's usually the drip-brewed stuff from Peet's. And I've always wondered how much of a "convenience fee" I'm paying in the store beyond the cost of brewing that same cup (same beans, strength, etc.) myself.

So I finally got around to figuring it out.

Peet's (and most gourmet coffee sellers) recommend two tablespoons of beans per 6-oz. cup of coffee. This is far more than most folks use at home, but it's not because the vendors want to pad their bottom line; rather, a strong cup of coffee similar to what is brewed at the store requires it. The 6 oz. number comes from the peculiar markings on American coffee makers, which count a "cup of coffee" as being 6 oz. (most likely because it's the volume of those formal old-fashioned china coffee cups).

The arithmetic works out this way:

  • I measured and weighed Peet's French Roast Whole Bean coffee, and determined that the beans weigh 0.325 oz. per fluid oz.
  • So a pound of these beans, at $11.95, contains 49.2 fluid ounces (this is a volume measure of beans; there's no fluid involved yet).
  • Since two tablespoons = one fluid ounce, a pound of French Roast is enough to make 49.2 "cups" of coffee.
  • The "cups" of coffee are 6 fluid ounces each, so our pound of coffee makes 295 ounces of brewed coffee.
  • Each ounce of brewed coffee costs 4.05 cents, not counting the cost of water, electricity, etc.

Peet's charges $1.75 for a 16-oz. coffee in the store, which I could have brewed for 64.8 cents -- the store price is about 2.7 times my cost. Note that the real markup is probably a lot steeper, since my calculation is based on my retail price for the roasted beans, which should be higher than the internal Peet's cost.

Starbucks? The numbers should be pretty close, since Starbucks asserts that 2 tablespoons of its beans weigh 10 grams (0.35 ounces, pretty close to the 0.325 that I measured for Peet's), and their French Roast is $10.45/lb. These numbers imply a cost of 3.81 cents per brewed fluid ounce.
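For anyone who wants to re-run or adapt the numbers, here is the same arithmetic as a short script (the prices and bean density are the measurements above; everything else is unit conversion):

```python
TBSP_PER_CUP = 2       # vendor-recommended tablespoons of beans per "cup"
FL_OZ_PER_TBSP = 0.5   # two tablespoons = one fluid ounce
CUP_FL_OZ = 6          # the "cup" marked on American coffee makers

def cents_per_brewed_oz(price_per_lb, bean_wt_oz_per_fl_oz):
    bean_volume_fl_oz = 16 / bean_wt_oz_per_fl_oz  # a pound of beans, by volume
    cups = bean_volume_fl_oz / (TBSP_PER_CUP * FL_OZ_PER_TBSP)
    brewed_oz = cups * CUP_FL_OZ
    return 100 * price_per_lb / brewed_oz

peets = cents_per_brewed_oz(11.95, 0.325)
sbux = cents_per_brewed_oz(10.45, 0.35)
print(f"Peet's: {peets:.2f} cents per brewed oz")
print(f"Starbucks: {sbux:.2f} cents per brewed oz")
```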

Monday, September 17, 2007

Estimates, Targets, and Commitments: Mix 'Em Up and You're on the Way to Pain

This post is the third (the first two are here and here) summarizing and commenting on elements from Steve McConnell's excellent Software Estimation: Demystifying the Black Art.

When doing a little searching to see who had written about this point before, I found a fabulous post that covered exactly what I wanted to cover. I also found that McConnell has put the first bit of his discussion about this topic online as a free excerpt. He comes back to these topics later in the book, but the key definitions are here [~ 1MB PDF].

Since these good folks have done the heavy lifting on my post for me, I'll proceed right to the controversial part:

One would think, given the clarity and common sense of these definitions, and the nonjudgmental approach which McConnell takes toward difficult business situations, that software companies would embrace this clarity and let the light shine in on their estimates, targets, and commitments.

But, in my experience, the opposite is true. Groups will actively obfuscate the distinctions here, or deny that one or another of the terms is distinct or relevant. I believe that the root causes of this practice are

  1. Diminished tolerance for ambiguity common in group settings and
  2. Fear of the potential reaction if it is conceded that, e.g., a target value and estimate value may be far apart

Covering up these distinctions does not affect reality, but only attempts to manage perceptions. And not everyone's perceptions will be successfully managed. Such behavior is rarely helpful and often harmful.

Friday, September 14, 2007

ASP.Net Hack for Processing SOAP Faults on Clients that Hate the HTTP 500

Occasionally you may need to execute some SOAP operations using a client that doesn't understand SOAP. If this client doesn't like the HTTP 500 that comes back when the server generates a SOAP Fault (behavior specified in the SOAP spec), you may not be able to read the content that comes back, with the actual fault XML in it.

For example, I recently needed to use URLLoader and HTTPService to access a SOAP service in Flex, because the WebService (SOAP) implementation didn't like the server's WSDL. Admittedly, the WSDL included some legacy XML schema, and was a bit unusual, but it did validate. So it seems plausible that other valid services might also be inaccessible to the Flash/Flex WebService or other SOAP clients.

With HTTPService and ActionScript 3 E4X support, it's not terribly painful to do the SOAP client work yourself... until you get a SOAP Fault. The 500 causes HTTPService and URLLoader to abort and they do not return the document content to the application.

In this instance, I couldn't alter the web service itself, and didn't have the resources to build a new HTTP client on the Flex side. That's the setup. Here's the hack, for IIS/ASP.Net services:

Concept: alter the ASP.Net processing pipeline to return OK (HTTP 200) in place of a 500, but only when the SOAP fault is the cause of the 500. The way to do this is to use an ASP.Net facility called a Soap Extension to allow programmatic reading and/or writing to the SOAP message stream as it passes into and out of the application. Find the message in the state you don't like. Alter it. Done.

  1. Create a C# DLL project in Visual Studio and pull in boilerplate code for a System.Web.Services.Protocols.SoapExtension (MSDN docs and MSDN magazine have sample code, or look here)
  2. Pull out everything you don't need (which is most of it, if you're looking at a real-life sample)
  3. Locate the ProcessMessage method, and check for the SoapMessageStage.AfterSerialize lifecycle value. In most samples, there's a switch statement on the SoapMessageStage, so you can just identify the proper branch (the others should be blank for this basic solution)
  4. Add the following code:
       // When the outgoing message carries a SOAP Fault, override the
       // HTTP 500 that ASP.Net would otherwise send with a 200 OK
       if (message.Exception != null)
           HttpContext.Current.Response.StatusCode = 200;
  5. Build. Place resultant DLL with other precompiled binaries used by the ASP.Net application.
  6. To tell ASP.Net to use this extension, add one XML tag to the application's web.config, and all SOAP calls will use your extension. Find (and/or create) the <soapExtensionTypes> tag (under <webServices>, under <system.web>), and add:

     <add type="YourSoapExtNamespace.YourSoapExtClassName, YourSoapExtAssemblyName" priority="1" group="High" />

That's it. One line of code, a little configuration, and your hack is on. For a little more context, I've put the full SoapExtension class here.

Tuesday, September 11, 2007

MS Security Laws of Limited Use if You Don't Know Who the "Bad Guy" Is

I read (via) an excellent Microsoft TechNet article called "10 Immutable Laws of Security" and it seemed to me that one big problem is in defining the "bad guy" the author is talking about in these laws.

Here are the first 4 of the laws (the ones with the term "bad guy"):

  1. If a bad guy can persuade you to run his program on your computer, it's not your computer anymore
  2. If a bad guy can alter the operating system on your computer, it's not your computer anymore
  3. If a bad guy has unrestricted physical access to your computer, it's not your computer anymore
  4. If you allow a bad guy to upload programs to your website, it's not your website anymore

Too often, my problem is not about one of the "10 Laws" coming into play, but wondering whether the agent I'm dealing with is a "bad guy." Obviously, if I'm thinking about downloading a random piece of potential malware, or letting users post to my website with arbitrary JavaScript, then the bad guy often fits the traditional definition of a malware distributor, black hat, etc.

But what about ... "legitimate" businesses like a media company that wants to install a broken DRM system with a rootkit? what about a company that means well but writes a compromised browser plug-in that I'm supposed to install? or a company (or client-company) IT admin who wants to physically "configure" my system for their VPN, virus protection, app protocols, etc.?

I don't let anyone upload programs or scripts to my website (intentionally) ... but what about all those widgets I might put on my site? The scripts those widgets pull into the client browser can't do anything to my web app that I don't let them ... but from my users' point of view, anything these widgets do to them or their data, directly or indirectly, is my fault.

It's a problem that's been discussed a lot (e.g., Windows Firewall exceptions, Vista UAC, etc.). My point is simply that many users (even many who are not extremely sophisticated) have a pretty good handle on laws 1-4, and need a better way to figure out whether a vaguely legitimate, well-meaning agent should count as a "bad guy" or not.

Friday, September 07, 2007

Software Estimation: All Estimates Are Range Estimates

This post is the second (the first is here) summarizing and commenting on elements from Steve McConnell's excellent Software Estimation: Demystifying the Black Art.

All estimates are range estimates, whether we realize it or not. If we say that a project will finish on July 14, or that, by the end of the year, 56 function points will be complete, we likely do not mean we are 99% certain that these numbers are dead on.

We are implying or suggesting some kind of range (e.g., "sometime in July"). And if we neither make the bounds of the range explicit nor specify a level of confidence that the value being estimated lies inside that range, we are probably doing more harm than good with our "estimates."

McConnell points out that in many areas of life we systematically do a bad job at constructing range estimates wide enough to include the value we're trying to estimate. He includes examples -- and an exercise for the reader -- showing that even when given specific instructions to generate an arbitrarily wide range that will include a target value, we still fail to make it wide enough.

We have a cognitive bias that causes us to mistake a wide estimate range for an unacceptably vague answer even when it is not. No wonder, then, that in real-world business scenarios, where pressures exist to create estimates that are both hyper-precise and artificially small, the estimation process comes apart right out of the gate.

How to think about ranges and estimates? Here are a few points to get started:

  1. Many values can be estimated in a software development project. Typical values to estimate include total effort or resources to implement a set of functionality, or quantity of functionality that can be implemented with fixed constraints.
  2. A point estimate is really just a very narrow range estimate -- and almost certainly an inaccurate range estimate. 
  3. How wide should a range estimate be? Wide enough that one can have a specific confidence level that the range includes the actual number being estimated.
  4. If the confidence level is fixed, then the width of the range necessarily depends on how much is known to inform the estimate (as well as on the estimation techniques being used, etc.). For example, if you know the specs for a project, it is possible (in the best case) to achieve a narrower range estimate at a given confidence level than if you don't yet know the specs at all.
  5. The amount of information known (feature details, effort to implement each feature, unexpected dependencies, etc.) increases over the course of the project.
  6. Therefore, updated range estimates can narrow over the course of the project. Initial ranges, if they are accurate, will necessarily be quite wide. McConnell refers to these narrowing intervals over time as the "Cone of Uncertainty" based on the cone-like shape that the graph (range vs. time) makes.
  7. Despite the convergence of the "cone" graph lines, the cone concept reflects a best-case scenario of mapping limited information to limited accuracy in estimation. If other estimation best practices are not followed, it is possible to do much worse than the ranges reflected in the cone.
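One concrete way to act on point 3 is to derive the range from your own history: collect the ratio of actual outcome to initial point estimate on past projects, then scale a new point estimate by low and high percentiles of those ratios. The sketch below is my own illustration with invented sample data, not a procedure from the book:

```python
# Ratios of (actual effort) / (initial point estimate) on past
# projects -- invented sample data for illustration.
past_ratios = [0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.6, 1.8, 2.1, 2.5]

def range_estimate(point_estimate, ratios, low_pct=10, high_pct=90):
    # Nearest-rank percentiles of the historical ratios give an
    # 80%-confidence range around the new point estimate.
    s = sorted(ratios)
    def pct(p):
        return s[int(round(p / 100 * (len(s) - 1)))]
    return point_estimate * pct(low_pct), point_estimate * pct(high_pct)

lo, hi = range_estimate(40, past_ratios)  # a "40 staff-week" point estimate
print(f"80%-confidence range: {lo:.0f} to {hi:.0f} staff-weeks")
```

Early in a project the historical ratios scatter widely, so an honest range is wide; it narrows only as better information feeds better point estimates -- which is the cone of uncertainty in miniature.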

Tuesday, September 04, 2007

The Google Phone Should (Will?) Run Your Regular Java SE Apps

Om's rundown of supposed Google-phone details was picked up today by engadget mobile. If the rumors are on target, and the phone is really based on a Java-on-Linux OS, with a Java API for apps, then I have one thing to say. And hopefully all those PhDs at Google thought of this a year ago.

Wanna be a real disrupter in the smartphone world, not just a me-too? Java SE.

Here's why it makes so much sense that Google's probably already done it:

  • Sun bought SavaJe in April
  • Sun's JavaFX mobile is based on the SavaJe tech
  • Google can get help from Sun on this Java-based OS (similar to what RIM built on their own)
  • But, being Google, they're in a position to ask for full J2SE support...
  • ...which exists, performs well, and which Sun can now provide, because SavaJe built and owns it! (even though they later focused more on a J2ME++++ approach)

Google and Sun can ship the device with a J2ME environment layer (similar to the J2ME-in-an-applet environments that exist online), and probably the JavaFX mobile layer too. But allowing anyone to crank up plain old Java code (of which Google already has quite a lot) on the phone would be a game-changing move.

Sunday, September 02, 2007

Carbonite Keeps Your Data Safe ... But From Whom?

Plenty of online companies have leveraged broadcast advertising (Travelocity, Amazon, and Yahoo! all mounted serious campaigns). Still, when I hear about a new startup on the radio in these days of TechCrunch-driven marketing, it seems like it deserves a look.

Carbonite is a startup offering easy-to-use offsite PC backup functionality to average end users -- they've recently started mainstream media broadcast advertising in the Bay Area.

I immediately wondered about the security of my backup data, though. As a youngster, you have all kinds of "secrets" that might cause you social embarrassment or get you grounded. As an adult, though, your PC hard drive has tax returns, medical records, legal documents, banking info ... things that you really don't want compromised. So a security breach of a service like Carbonite is particularly frightening.

Notwithstanding the company's "security" FAQs, which talk about encryption and about protecting the data in transit, I started getting a sinking feeling when the password field let me choose a six-letter all-lowercase dictionary word.

Since Carbonite has no other data about me, is that password enough to get all of my data back out? I took the app for a test drive and verified that, yes, my email address and that password gets all of my PC data back. Moreover, someone could "restore" my PC files to another machine, switch my Carbonite account over to that other machine, and then change my password, locking me out.

My password can also be reset if I forget it. But I don't lose access to my backup data. That means that my password is not required to reconstruct the private key that decrypts the data. A close check of Carbonite's web site verifies that they plan to offer an option in the future where only the end user keeps the private key. For now, however, they keep everyone's private keys on file.

I'm not a security analyst, but I'm going to toss out a few recommendations and let the experts weigh in and improve upon this as necessary:

  • User passwords (maximum risk exposure is one individual's data)
    1. Require a stronger password than "any six characters."
    2. Communicate to the end user that this password will protect access to all of their PC data, so it's worth spending a minute to pick a longer passphrase.
    3. At least encourage the user to not use the same password that he uses for every web site (namely the first one that's gonna get stolen when he sits down at a random PC in an Internet cafe and logs onto hotmail, flickr, and MySpace).
  • Maintenance of customer private keys (risk exposure is potentially all the backup data of all customers!)
    1. For power users, get that you-keep-the-key version of the app up and running ASAP.
    2. Until then, make sure best practices are maintained around the storage of the private keys and data.
    3. Presumably, the keys should not be all in the same system vulnerable to a single security compromise.
    4. The keys should not be in the same system as the data they're protecting, for the same reason.
    5. No one individual should have access to all of the keys or all of the data, on purpose or by accident.

The service definitely delivers on the ease-of-use promise, from the no-credit-card free trial to the dead-easy "it just works" client application. I just hope it doesn't come at an unreasonable cost in terms of security.