I first learned to program in 1972, using Fortran and punch-cards submitted overnight to a mainframe.

I became a full-time developer at a large oil company in 1982, when the main development environment was an IBM mainframe, fed with semi-real-time data by a hierarchy of IBM mini-computers, Apple IIs and IBM PCs. The languages I worked in were a mix of assembler, Basic, C and Cobol, and data was stored in flat files or indexed files.

The mainframe data center was a huge space with raised floors to allow for water pipes to cool the equipment and masses of cables to connect everything. As a mainframe developer, you rarely went into the data center, which was a high-security restricted area.

[Photo: mainframe data center and IBM minicomputer]

In 1982, a well-equipped desktop computer had 16KB of RAM and a floppy drive that stored 80 to 140 KB of data. A five MB (yes, MB) hard drive was the size of a large, heavy shoe-box, so expensive that it was shared between multiple desktop computers in the office, and so fragile that moving it was not recommended.

[Photo: IBM PC and Apple II]

Source code was stored on floppy disks or mainframe files, and all backups were on magnetic tape.

[Photo: 8-inch and 5-inch floppy disks]

Data communication was dial-up at 300 to 1200 baud, or a dedicated line at around 2400 baud (roughly 240 bytes per second), so we squeezed every byte out of our data files and wrote our own communication protocols to check for corrupted data.
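To give a flavour of the kind of check we rolled by hand, here is a minimal sketch in C of a one-byte additive checksum appended to each data block, similar in spirit to what XMODEM-era transfers used. The function name and the sample record are my own illustration, not the actual protocol we wrote.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of a one-byte additive checksum over a data block.
 * The sender appends the checksum byte to the block; the receiver
 * recomputes it and requests a resend on mismatch. */
static uint8_t block_checksum(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (uint8_t)(sum + data[i]);   /* wraps modulo 256 */
    return sum;
}

int main(void)
{
    /* Hypothetical fixed-format record of the kind we squeezed onto the wire. */
    const uint8_t block[] = "WELL-07,19820614,PRESSURE,2045";
    uint8_t sum = block_checksum(block, sizeof block - 1);
    printf("checksum byte: 0x%02X\n", sum);   /* receiver compares this value */
    return 0;
}
```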

You could take a computer home with you, if you were willing to lug the monitor as well, but you could not do much with it. On the desktop you could edit and compile assembler or C code, but in an enterprise environment you could often only test the code with a local mainframe connection.

To access the development and test systems, you had to physically be in the office, and often your only real view of the data was a printed report created overnight, printed at the central data center and delivered to your office the next day.

Even in the office, the cost of running a development environment on the mainframe was high, and we were aware that every compile cost money. So we desk-checked our mainframe code carefully before submitting a compile.

Each developer was basically a human lint checker, looking for coding errors before submitting the compile. Compilers were simple in those days, and a single missing period in a Cobol program could generate hundreds of cascading errors. It was not possible to break up programs in the way we are now used to, and over the years I helped to maintain a number of programs where the single program source file was more than 10,000 lines of code. Even when source code management tools became common, only one developer at a time could check out and work on these monsters.

Moving into the modern world...

A project I worked on recently had a large AngularJS front end and a Spring / Java / MySQL back end, deployed on AWS. The database was what we consider small these days, just a few GB. The source code was stored in Git, so all changes were immediately available to a geographically dispersed team.

All our developer laptops had at least 16GB RAM and 512GB solid state disks, so any developer could spin up a full local test environment, complete with a local database.

Data communication speed on the internet is now so fast that even if you are targeting mobile phones, you don't need to consider special data formats. Like everyone, we used JSON to pass data between the front end and back end, despite the considerable overhead in packet size that this entails.
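To put that overhead in perspective, here is a rough sketch (the record layout and field names are hypothetical) comparing the size of one packed binary record, of the kind we would have sent in the 1980s, against the same record encoded as JSON.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Rough illustration of JSON's size overhead versus a packed binary record.
 * The record and its fields are invented for the example. */
struct reading {
    uint32_t id;        /* 4 bytes */
    int32_t  value;     /* 4 bytes */
    uint32_t timestamp; /* 4 bytes */
};

int main(void)
{
    struct reading r = { 1042, 2045, 455000000 };
    const char *json =
        "{\"id\":1042,\"value\":2045,\"timestamp\":455000000}";

    printf("packed binary: %zu bytes\n", sizeof r);      /* 12 bytes */
    printf("JSON:          %zu bytes\n", strlen(json));  /* 46 bytes, roughly 4x */
    return 0;
}
```

On a 2400 baud line that factor of four mattered; on today's connections it is simply not worth optimising away.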

WiFi is now at least 10x faster than the fastest hard-wired terminal connections of 1985, and many offices have gigabit wired connections, so you can work productively from just about anywhere in the world. Most developers take their computers home, and work from home a lot of the time. On this project, for a few months I worked from southern Portugal while the rest of the team was in the US.
