For a fraction of a second I thought Matt had stolen my topic. Fortunately (for him!) it turns out we’re just looking at different sides of the same coin. To wit:
Back when I started at the company, four owners and umpty-ump name changes ago, our philosophy was to do the most with the limited resources we had. There were two main reasons for that:
- We were trying to compete with other companies for an extremely limited customer base, so we had to keep the cost of our product low.
- We didn’t have much money.
In practice, that meant we’d cram as much as possible into every piece of hardware and software we had. My first workstation was a 486DX/40 that did triple duty as the company’s nameserver and mailserver; occasionally I’d have to run something in X Windows, so that would be running too; and it also became our first intranet webserver when I installed a just-released piece of software from NCSA called httpd. Our software ran on a single PC; then the next-generation version expanded the requirements to a SPARC/PC cluster, with the SPARC handling video output, system control, and applications, and the PC handling the serial ports.
And it was good.
We worked that way for about five years, vastly expanding the capabilities of the software using the same hardware we’d always had. Sure, we got new machines from time to time that had faster processors, more memory, and so on, but we’d keep the old stuff running as long as it was physically possible. Our cluster grew by another PC, but that was it.
Then came the buyout by a big Silicon Valley company, and everything changed; not immediately or all at once, of course, but gradually, like boiling a frog. We heard stories about the magical land to the west where money dripped from the ceilings, there were perks and freebies as far as the eye could see, and everyone had huge racks… of servers!
And we wantss preciouss. Shiny preciousss, nice precioussss… we waaantsssss….
And we gots, to a certain extent; never as much as the parent company, but things started showing up. And as we got more, we started to do less with it, which was the fashion at the time in the west. A three-server product cluster sprouted a dedicated GUI host, a dedicated command-and-control server, a dedicated database server, and two more workhorse PCs to do some of the heavier lifting. We started ordering quad-processor SPARCs with two gigabytes of RAM and storage space measured in fractions of terabytes. Even QA got their share: two clones of production environments for stable and destructive testing. We started installing racks of our own in the server room.
Then the dot-com bubble popped, and we got more. As the parent company shed employees, the racks of equipment they had used came to us. We were swimming in servers; I got a server of my own to play with.
Things levelled off after that; the stock crashed and suddenly we were back to being the little startup that could, maybe, if we really tried hard. But the damage had been done. There were still scads of equipment around, but none of it was used to anything resembling its capacity. Instead of sharing one underutilized machine to do three or four low-resource functions, we kept using three or four servers; every new process meant a new computer. The minimalist mentality once present had been overtaken by the habits formed during our short-lived affluence.
That’s all several years ago now. Since then I think we’ve started to get back to middle ground, a reasonably steady state where we’re not straining for resources but not neck-deep in them either. We’re making better use of the hardware we already have, and limiting the inflow to stuff we actually need. But every once in a while I’ll see a desktop computer taken from an office after an upgrade and wonder why it isn’t being used as the modern-day equivalent of my trusty old mailserver/nameserver/webserver/X terminal….