grid sweet spot?

Grid computing seems to be climbing the hype curve. I can’t claim to be an expert on it, but I have to say it has the smell of improbability.

Distributed computing architectures are in most cases more art than science. While some problems are easily decomposed into tiny pieces that can be solved in parallel, others clearly are not. Also, the critical bottleneck is often communication, not computation — you spend your time finding the right ways to package up the problem and connect the pieces to avoid moving data around unnecessarily.
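To make that distinction concrete, here’s a toy Python sketch (my own illustration, not from any particular grid framework): summing squares splits cleanly into independent chunks, while a running prefix sum forces each piece to wait on the one before it.

```python
def sum_of_squares_parallel(data, n_chunks=4):
    # Embarrassingly parallel: each chunk is independent, so the work
    # could be shipped to separate machines with almost no coordination.
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    partials = [sum(x * x for x in chunk) for chunk in chunks]  # parallelizable
    return sum(partials)  # one tiny reduction at the end

def prefix_sums(data):
    # Communication-bound: every element depends on the running total,
    # so a naive decomposition leaves each piece waiting on its predecessor.
    out, total = [], 0
    for x in data:
        total += x
        out.append(total)
    return out

print(sum_of_squares_parallel([1, 2, 3, 4]))  # 30
print(prefix_sums([1, 2, 3, 4]))              # [1, 3, 6, 10]
```

The first function is the kind of problem grid computing handles well; the second is the kind where packaging and data movement dominate.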

You can certainly develop algorithms for distributing the computation that adaptively address these problems; however, any time you ask a computer to do something that has a degree of “art” to it, you have to accept a certain amount of slop.

Grid computing’s sweet spot seems to lie at an improbable location in the terrain — a place where computers are too cheap to be managed directly by people but too expensive to be left idle. The improbability of this location is increased by the so-far steady march of Moore’s law and the inevitable friction you’d have to accept in the system.

When you add to this the challenges of trying to develop, debug, and manage such a grid, I think you are forced to conclude that grid computing is likely to see more theory than practice.

I think the reason it’s so popular is that it is technically cool and it appeals to certain ideals widely held by engineers (e.g., the efficient use of resources).