Tuesday, September 30, 2008

Credit Crisis: At Least One Great Thing Already Happening for Silicon Valley

The unfolding credit crisis may have severe implications for the region -- or the world -- a bit down the line. But it is already helping chip away at one of the Bay Area's biggest long-term problems: housing (un)affordability.

I'm going to keep to my narrow technology focus here and, fairly or not, look specifically at the region's viability as a technology and innovation hub: its ability to route investment into businesses that attract top engineers and churn out world-changing products.

Every year when the leaders of the region's large businesses meet, housing is at or near the top of their list of concerns. It's hard to hire in a place people cannot afford to live. Especially when potential employees are talented and often mobile, with a lot of options in front of them.

Owning a house in the Bay Area has been expensive for some time. But -- in the space of the last eight years or so -- it has crossed the line from "painful but affordable" to "not realistically affordable" for the technology professionals smart enough not to take one of those wild-'n'-wacky subprime loans with a handful of bogus low payments up front.

I'm going to be concrete here, and use specific numbers... numbers which may leave you gasping or slapping your head if you live elsewhere in the world, but numbers which are real.

In the wake of the dot-com crash, a large portion of the area's "starter house" stock was available for $350,000 to $500,000. These were often 2-bedroom tract homes, not always in the most desirable locations, but not in the worst either. They were no bargain, but they were affordable for the "senior software engineers" with a decent education, a half-dozen years of experience, and the willingness to save for a real down payment all that time instead of going wild with the Visa card. These engineers had seen their earning power top $100,000 (partly due to the dot-com boom), and if they avoided Aqua when the company wasn't paying, they could sock away some serious cash.

A widely held rule of thumb regarding housing affordability suggests that at most one-third -- perhaps a little more -- of gross income can go to housing. More than that, and the probability of the buyer experiencing financial hardship or outright defaulting goes up sharply.

So the senior engineer, with, say, six or seven years of professional experience, making a little over $100,000 -- exactly the "heart of the batting order" in a growing tech company's talent pool -- could afford a $400,000-$500,000 house, at around 6%, with the traditional 20% down payment. If they had a significant other who was also earning money, they could afford a little more. But not too much more if they didn't want to be dependent on those full dual incomes indefinitely.

Almost all of the folks in my "cohort" (say, within four years or so of my age) whom I know and who bought houses in the Bay Area did so in this way, at this time, and with this sort of cost.

Fast forward three years.

Easy money and bogus mortgages have caused prices to balloon. Those $400k-$500k houses become $700k-$800k houses.

Salaries have drifted up a touch as the worst "crash" years pass, but not much beyond the rate of inflation, maybe $5k-$10k per year for these mid-level positions. Certainly not enough to cover the change in housing prices.

And -- just like that -- those engineers, the early-to-mid-career core of any tech company trying to scale, the folks who know enough to use a little process and not re-invent the wheel, while still working hard and willing to take chances to innovate and make something happen, have no access to the stock of starter homes.

Beyond the larger down payment, they discover that making a "real" (i.e., based on traditional mortgage terms) monthly payment on this $750,000 house means earning over $150,000 a year. And that's well beyond the typical salary for a senior engineer or even a lead engineer / architect type role.
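For anyone who wants to check the arithmetic, here's a back-of-the-envelope sketch using the standard fixed-rate amortization formula and the round numbers above. It assumes a 30-year term (not stated explicitly above) and ignores property tax and insurance, which around here only push the required income higher.

```typescript
// Standard fixed-rate mortgage payment: M = P * r / (1 - (1 + r)^-n)
function monthlyPayment(principal: number, annualRate: number, years: number): number {
  const r = annualRate / 12; // monthly interest rate
  const n = years * 12;      // total number of payments
  return (principal * r) / (1 - Math.pow(1 + r, -n));
}

// Gross income needed if housing can eat at most one-third of it.
// Assumes the traditional 20% down and a 30-year term (the term is my assumption).
function incomeNeeded(price: number, downPct: number, rate: number, years: number = 30): number {
  const loan = price * (1 - downPct);
  return monthlyPayment(loan, rate, years) * 12 * 3;
}

// Top of the 2001-era starter-house range: $500k at ~6%
console.log(Math.round(incomeNeeded(500_000, 0.20, 0.06))); // ~86,000 -- add property tax and insurance and you're a bit over $100k

// The same house after the bubble: $750k
console.log(Math.round(incomeNeeded(750_000, 0.20, 0.06))); // ~130,000 -- with tax and insurance, comfortably over $150k
```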

And there's the story: that bubble cut off the up-and-coming generations of engineers from homeownership. Indefinitely if not permanently. If the Silicon Valley business leaders roundtable thought housing was an issue before, they are in a whole new landscape now.

But this is exactly where the credit crisis is starting to help. It was never practical to build our way out of the housing shortage: not only is buildable land scarce here, but ready credit meant each housing unit got bid up to whatever lenders were in the mood to finance.

Now that the brakes are on, we've already seen the median home price in the Bay Area drop by nearly a third from its high in '06. We need a couple more years of this -- together with lenders that want to see payments under that one-third of income mark, and a solid down payment.

When it all shakes out, perhaps some of my friends who missed that narrow window early in the decade -- and who, despite working hard, excelling, and earning serious incomes, never got a shot at owning any kind of house -- will finally get their opportunity.

And, if it's too late for them -- after all, families grow, the kids get bigger, and that worn-down starter house won't look so attractive when we're all middle-aged -- at least the next generation of geeks and whiz kids will have a reason to work in Silicon Valley.

Saturday, September 20, 2008

Low-Hanging Fruit: a Server-Side JavaScript API (or Standards, or ...)

There's a big chunk of stuff missing from JavaScript cloud-hosting platforms (like 10gen) as well as from JavaScript semi-app-servers (like Phobos).

It's called any kind of API or standard.

Hard to believe, but after several years of growing JavaScript influence, and a whole web culture that tilts towards openness and standards, all of the players -- Bungee Labs, AppJet, 10gen, Phobos, and many others -- are rolling their own little server-side platform APIs.

Standards make a platform easier to learn, understand, debate, debunk, and fix. They allow a larger community to share code and ideas, and provide a small degree of lock-in-proofing and future-proofing. Standards also allow transparent competition on the basis of implementation quality, tooling, SLA, etc., rather than obscuring those things behind incompatible facades (APIs).

New platforms on new technologies with no standards behind them can be a hard sell -- especially when they do not offer any new capabilities.

According to TechCrunch, Bungee is in a "freefall." And the interesting bit is that their CEO ascribed the recent round of layoffs to 'actual vs. anticipated rates of adoption.'

Hello, if you are trying to sell the world on your server-side JavaScript programming and deployment environment, you're not helping your 'rates of adoption' by also asking people to learn and commit to your own home-brew platform API.

Now to be fair, there aren't a lot of alternatives in the absence of a standard. But ... it would make a lot more sense for all these players to get together, create some standard APIs, and commit to using them. The APIs would cover all the basics: e.g., persistence (of object, key-value, and relational flavors), templates, request/response handling, calls out to other web services and processing of their responses, publishing SOAP services (which still remain critical in the enterprise world), and interop with other server-side environments (Java, Python, etc.).
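To make that concrete, here's a purely hypothetical sketch of what a couple of those shared interfaces might look like (written as TypeScript for the type annotations, though the point applies to plain JavaScript). Every name below is invented for illustration -- no such standard exists -- but application code written against interfaces like these could run unchanged on Bungee, AppJet, 10gen, or Phobos.

```typescript
// Hypothetical "standard" server-side JavaScript interfaces -- all names invented for illustration.

interface HttpRequest {
  method: string;
  path: string;
  params: Record<string, string>;
  header(name: string): string | undefined;
}

interface HttpResponse {
  status(code: number): HttpResponse;
  header(name: string, value: string): HttpResponse;
  write(body: string): void;
}

// The key-value flavor of persistence; object and relational flavors would get their own interfaces.
interface KeyValueStore {
  get(key: string): string | null;
  put(key: string, value: string): void;
}

// Application code depends only on the interfaces, so any compliant host could run it.
function handle(req: HttpRequest, res: HttpResponse, store: KeyValueStore): void {
  const record = store.get(req.params["id"]);
  if (record === null) {
    res.status(404).write("not found");
  } else {
    res.status(200).header("Content-Type", "application/json").write(record);
  }
}
```

The vendors would then compete on how well they implement those interfaces -- performance, tooling, SLAs -- instead of on whose incompatible facade you happened to learn first.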

Overnight, there would be a single community (and acronym!) instead of a dozen fragments. Like any standard, it would generate books, conferences, training materials -- and controversy, which is never a bad thing when you need publicity. We would see real performance tests, and get a real debate over where the JavaScript-to-SomethingElse boundary should be and why.

And these vendors would gain instant legitimacy by being founding contributors to a specific platform "trend," rather than lone voices in the woods. That legitimacy (and, via the sad logic of large companies, the "legitimacy" of being printed on the top of some conference bag) would help them appear credible to customers big enough to pay them real money.

Thursday, September 11, 2008

BYOA (Bring Your Own Analogies)

I found this brilliant label on the side of an industrial-strength wood chipper/eater/pulverizer.

There are so many other places -- especially in software development -- where a label like this (including both explicit content and implicit assumptions about the attitude of the reader) would be appropriate.

I won't ruin your fun by babbling on about all the specific cases; instead I'll leave you the pleasure and satisfaction that will come as you begin thinking of them.


Tuesday, September 09, 2008

Presentations from SF Flash Hackers August '08

A couple of weeks ago I gave two mini-presentations to the SF Flash Hackers group, on topics I've talked about here before ... but I figured I'd post the slides to SlideShare.

Here are slides about porting a large Windows app (Mindjet MindManager 7.2 with Connect) to run in the browser via Flash (developed with Flex):

 

And here are slides on generating ActionScript 3 code from UML class diagrams using my VASGen tool -- which is really a contribution building on two existing tools, the Violet UML modeler, and the Metaas ActionScript 3 meta-library (in Java):

As3 Code Gen from Uml

Thursday, September 04, 2008

Citibank Needs to Get Their PKI Act Together

While I'm thinking about security ... there has been plenty of debate over whether Firefox 3's hostility toward self-signed certs is a good idea.

Either way, this should be a non-issue in the banking world, which ought to have proper certs on any public-facing machines.

So I was more than a little surprised when I went through a workflow with Citi, where a number of links, widgets, etc., triggered the Firefox self-signed-cert blockade/warning. The problem resources were loading from subdomains like foo.citibank.com or bar.citimortgage.com.

When I did some work with [insert other extremely large bank here], everything was SSL, even internal web service and app communications, and usually with mutual authentication (though not for external customers). The bank had an entire PKI department, which controlled numerous separate CAs corresponding to dev, test, production, different business units/functions, etc.

Hooking up to most things meant sorting out the right kind of cert to present, and validating the proper cert chain for whatever the server gave you. In some situations -- e.g., doing this programmatically in Java -- it was a major hassle. Not to mention that the PKI group was cooperative but busy, so there could be delays. And the certs were set to expire on fairly short timelines, so you needed a whole process to make sure a renewed cert was in place before the old one lapsed and took your department's systems down.
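The Java keystore juggling doesn't fit in a short snippet, but for a flavor of the moving parts -- presenting a client cert and trusting only the internal CA chain -- here's roughly what one of those mutually authenticated calls looks like from Node.js, written as a TypeScript sketch. The host and file names are placeholders, not anything from the bank.

```typescript
import * as fs from "fs";
import * as https from "https";

// Placeholder file names: a client cert/key issued by the internal CA,
// plus the CA chain used to validate whatever cert the server presents.
const options: https.RequestOptions = {
  hostname: "internal-service.example.com", // placeholder host
  port: 443,
  path: "/quotes",
  method: "GET",
  key: fs.readFileSync("client-key.pem"),       // our private key
  cert: fs.readFileSync("client-cert.pem"),     // the cert we present (our side of mutual auth)
  ca: fs.readFileSync("internal-ca-chain.pem"), // trust only the internal CA, not the public roots
  rejectUnauthorized: true,                     // fail if the server's cert doesn't chain to our CA
};

https
  .request(options, (res) => {
    console.log("server authenticated, status:", res.statusCode);
    res.resume();
  })
  .on("error", (err) => {
    // An expired or missing cert shows up here -- hence the renewal process described above.
    console.error("TLS handshake or request failed:", err.message);
  })
  .end();
```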

It would have been much easier to use self-signed certs all over the place, but the bank wanted some extra protection even against rogue calls from inside the network. The policy made sense and, even if it didn't to you, your alternative was to box up your stuff and leave the building.

Of course intentionally deploying a production service to external customers with a bogus cert was so unimaginable it wouldn't have even been funny ... in order to be funny there would've had to have been some molecule of possibility in it, and there wasn't.

Can Citibank really not have controls that prevent this?

Chrome's Unusual Installation Location: Good, Bad, or Ugly?

I -- and many other folks -- have noticed that Google Chrome installs only for a single user, and does so in a way that does not require administrative privileges to run the installer.

Basically, it just drops its files into a subdirectory of the user's home directory, places its shortcuts in the user's specific Start Menu folder, Desktop folder, etc., and arranges for its GoogleUpdate.exe helper app to launch from Windows/CurrentVersion/Run under HKEY_CURRENT_USER, rather than HKEY_LOCAL_MACHINE.
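If you're curious what that looks like on your own machine, the per-user footprint is easy to inspect. Here's a quick sketch (TypeScript on Node, shelling out to the stock reg.exe tool) that dumps the per-user Run key where the updater registers itself; the exact install path under the user profile varies by Windows version, so I won't hard-code it here.

```typescript
import { execSync } from "child_process";

// Dump the per-user Run key, where GoogleUpdate.exe registers itself.
// Reading (and writing) this key needs no administrative privileges -- that's the whole point.
const runKey = "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run";
console.log(execSync(`reg query "${runKey}"`).toString());

// A machine-wide installer would instead write to the HKLM equivalent,
// which a limited account can read but not modify.
```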

This is an unusual pattern for a Windows installer, almost certainly rigged in order to allow minimal-privilege user accounts on corporate networks to install and run Chrome ... under the radar of IT or management policy, if need be.

The question is whether this is inherently a security problem.

On one hand, I've read posts pointing out that this setup leaves the executable vulnerable to any other executable running with the user's permissions: another app could replace Chrome with a compromised Chrome, and the user would never know. Then again, if Chrome can install itself this way, so can any other piece of malware -- set itself up to launch under HKCU/.../CurrentVersion/Run and sit in the background doing anything it wants (like listening to keystrokes for another HWND) -- so Chrome isn't creating a brand-new hole. Still, living inside the user's browser might make snarfing credentials and scripting their use (or taking advantage of an auth cookie being present) a lot easier. The point is that a traditional executable under Program Files should be less vulnerable, since a nonprivileged user account can't rewrite those files.

On the other hand ... this is not terribly unlike the install/run routine on *nix servers. If I'm a "regular" user, I'm not installing to /usr/bin; I'm just untarring in a local directory, possibly building, and then running the binary. Of course, a user doing this is likely more sophisticated than the average Windows user, and the smaller population of *nix end users means there's less malware targeting them at the moment.

Wednesday, September 03, 2008

Always-On JavaScript Mildly Disturbing

Google Chrome doesn't have a switch to turn off or restrict scripts. While this might be an upcoming feature, my guess is that Chrome is about "running" web 2.0 apps, and so JavaScript is considered essential.

Well, maybe that's ok if the security model around JavaScript execution is as fantastic as the comic book suggests.

On the other hand, the launch of this browser featuring a well-known security flaw (admittedly not a JavaScript flaw) makes me a little less comfortable about always-100% script execution.

Amazon EC2 to Support Windows Server AMIs

Jeff Barr, Amazon Web Services Senior Evangelist, just finished giving a talk at "The AWS Start-up Event – San Francisco" in which he showed slides listing upcoming plans at AWS, including support for Windows Server.

This is an exciting development, as Windows Server / ASP.Net make for a fantastic if potentially expensive platform. Now Microsoft has to step up to the plate and come up with a pay-as-you-go, per-cycle or per-cpu-hour licensing scheme.

One of the things that makes ASP.Net interesting is that it lives in a nice middle ground between Java, which is extremely fast (in EC2, this means less expensive per transaction) and has great "enterprise" capabilities but is cumbersome to develop with, and, say, Rails, which is quite slow and has poor enterprise app cred but is very pleasant and lightweight to develop with. ASP.Net, especially with the MVC framework and forthcoming support for Python and Ruby in addition to C# and the other .Net languages, seems to combine extreme performance, easy development, and access to as much "enterprise" as you need while offering lightweight alternatives like LINQ and SSDS.

My point isn't to make a commercial for ASP.Net, but to point out that if Microsoft can get their licensing in order, they might catch up in the cloud world through fast, cheap development cycles plus faster (and hence cheaper) runtime operation on a given machine instance than some competing platforms.

Just to be fair, cloud vendor Enomalism has run Windows Server on EC2 before, by virtue of the QEMU emulation software (on top of Linux). But if we're talking about maximizing efficiency, wasting cycles on another layer of emulation (EC2 instances are of course virtual to begin with) doesn't sound like the way to go.

Monday, September 01, 2008

Om:Yes, Gillmor:No ... Comcast's Data Transfer Cap

Om Malik nailed it at the end of one of his posts on the 250GB cap. He lists Comcast's stilted usage examples for email, music, etc., and asks "How many HD movies will fit under the cap?"

That's the right question, but not because he is worried he'll bump the cap watching video.

The reason it's the right question is that the cap has more to do with Comcast's long-term business plans -- and the cable/telco industry's plans -- than with slapping down a handful of network hogs.

One fact before we continue. As I've written before, while everyone squabbled over Blu-ray or HD DVD, the filesharing world settled on the x264 implementation and a container format for it. In that compressed format, a high-def film averages 10-15 GB.
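Run that against the 250GB cap and the answer to Om's question is on the order of twenty films a month, before the household does anything else with the connection. A quick back-of-the-envelope (the 50GB of "everything else" is just my guess):

```typescript
// How many x264-compressed HD films fit under a 250GB monthly cap?
const capGB = 250;
const filmSizeGB = 12.5;   // midpoint of the 10-15 GB range above
const otherUsageGB = 50;   // assumed monthly total for email, web, updates, backups, etc.

console.log(Math.floor(capGB / filmSizeGB));                  // 20 films if you do nothing else
console.log(Math.floor((capGB - otherUsageGB) / filmSizeGB)); // 16 films alongside ordinary usage
```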

But here's the real deal:

Comcast's cable TV and on-demand video business competes directly with Internet video (both legal and gray-market). By making the cap high enough that it won't immediately impact YouTube watchers but -- over time -- will put a serious dent in customers' ability to get HD video over the Internet, Comcast is simply crippling one product so that customers will have to buy another product from them. Don't even think about mentioning broadband competition -- if you define competition as more than one entity offering substitutable goods (i.e., similar bandwidth, QoS, etc.), then most of the U.S. has zero competition.

The 250GB cap is a way to push customers to buy the same content twice. To be specific, you'll pay to see a movie broadcast on cable TV in HD, and then pay again for some special plan that lets you watch the same content later on Hulu. Or you'll pay for Comcast "on-demand" to watch a show on your TV. But if you want to place-shift that show to your laptop, or to your phone at the gym, you'll have to pay to get enough headroom to download the same video.

This move is not just about trying to squeeze out more money. It's yet another attempt by network operators to neglect their core business and instead try to get into a content business, so they can do two things badly instead of just one.

Meanwhile, Steve Gillmor writes that bandwidth caps will mean the end of BitTorrent downloading and a shift to streaming. Steve's a smart guy, but streaming is one of those perennial non-trends like speech recognition and thin-client computing. It's only conceivable to someone who has a computer connected to their TV or who lives on their laptop.

Having played a key role in several home-entertainment / media-tech startups, two of which have been successfully acquired, I feel confident saying streaming is only a little piece of a solution, and for the foreseeable future it won't be more than that. It neglects too many consumer needs and desires, is too inflexible, and is too warped by DRM and platform lock-in issues.

Moreover, the latest round of DRM around HDMI and high-def media playback in general means that even in-home "streaming" -- i.e., from a PC to a TV or from one PC to another -- is facing setbacks just as Joe tech-savvy consumer finally starts to figure out how to use a home server or network-attached hard drive to get and watch shows.

And the set-top boxes, increasingly tied tightly to content delivery (Comcast DVR, DirecTV TiVo expiring your pay-per-view movies, etc.), are more closed than ever. Instead of encouraging an ecosystem of streaming, they force customers into an ecosystem of downloading.

I think we'll see more people gaming the 250GB cap ... using scheduling software so they can maximize the utility of the 250GB and hoard downloads. Not to mention leaving the torrent client running on a laptop, so that every time it visits someone else's network -- an office, a coffeeshop, etc. -- the downloads keep slowly but surely coming in.

One final note: the marginal cost of moving a kb of data around a big network -- that is, the cost of the power and cooling for the switches and the amortized cost of the cables and gear and ops guys divided over their lifetime data transfer -- is more or less zero when the network isn't already near peak bandwidth somewhere. So even if there is a problem, the volume-based cap is the answer to a different problem.