How Will the Giants React to the Micro?
from the May 1982 issue of Practical Computing magazine
The mainframe manufacturers are finding that microcomputers - so recently derided as mere toys - are making inroads into their hitherto safe preserves. Clare Gooding examines their contrasting styles, and ponders on how the giant mainframe builders will fare among the quick-witted bandits of the micro world.
Time was when anyone working with computers had a hard time at social gatherings. If you were foolish enough to admit it, the reaction was either “Oh that’s all too technical for me, don’t know anything about it”, or worse, an inundation of stories about payroll computer errors and gas bills for £0.00. Nowadays a more likely reaction is, “We’ve got one of those at work, amazing little machine, we do everything on it”.
The computer, that great mystical, rather threatening beast, has become less remote, and almost respectable. The amazing little machine is likely to be a microcomputer. Those who dismissed Pets, Apples and Tandys as hobbyists’ toys now find them so ensconced in the business world that they speculate whether micros will eventually replace the mainframe.
The key to the rapid progress of the micro has been software availability. Instead of being limited to the programs sold by their friendly dealer, people have also written their own software.
A few years ago this would have been tantamount to blasphemy. The micro was still considered an experimental freak, nothing to do with real computing, except by an enlightened few who set about linking micros with larger “host” machines to make software development possible in a more familiar environment.
Amateur beginnings

Software houses such as CAP and Logica, who had made their killing on huge mainframe projects, were already fiddling with micros in attics and basements. At the same time do-it-yourself hobbyists began to discover the joys of Basic. Even if the results were far from perfect, they provided an alternative to the turnkey products at a price small users could afford.
As for the hardware, new potential users thought they could afford a micro where previously a bureau service or a larger machine of their own would have been out of the question.
The mini paved the way for the micro as companies like Hewlett-Packard and Wang offered cheaper hardware solutions, but the nature of software production stayed much the same. The micro arrived when the pattern of the computer market was changing in any case. Software was beginning to play a larger part, although a firm wanting to computerise would still look first at the hardware it wanted, and then find a software house or a package through the manufacturer.
In the old days someone somewhere in an organisation would realise that a computer might make the company more efficient, by doing payroll and perhaps more specialised company-related tasks.
A consultant might move into the company, spending some weeks getting familiar with existing routines. If the hardware itself had not been chosen it might be his job to specify the machine as part of the system design.
Usually an existing manual system would provide the skeleton, and some constraints, for the eventual computerised application. The consultant would confer with the systems analyst, who would translate the entire system into separate modules or programs.
Ample documentation
Each program had its own design document, a specification which set out the size and names of fields, the layout of report printouts, and so on. These were probably passed on to a fairly large team of programmers.
It was perfectly possible for programmers never to know the clients’ original aims for the system. They could spend all day shoving fields and values around without knowing what they represented: their prime concerns were, not surprisingly, far removed from those of the client company.
In the mid-seventies there were still some hangovers from the days when software had been of secondary importance and even given away free with hardware. Programmers took great pride in tweaking: devising clever routines which would run more efficiently in hardware terms.
The problem with clever-clever programming was that, however efficiently it ran, when it came to changing it or debugging at a later date no-one else could decipher what the whizz kid had thought up.
Changing skills

As hardware prices began to drop, programming became increasingly important. In most large software houses programmers were taught that documentation was essential and that all development programming should bear future maintenance in mind.
Turnkey projects became less common as companies accepted package solutions to data-processing problems. Tailoring packages to individual requirements was easier and more profitable with well-structured and documented programs than when programmers had given variables names like Fred.
All this meant a shift of skills. Specialisation had been essential before because of the size and complexity of systems. The jigsaw of hardware, operating system and programming language in a specific system design called for inside knowledge at different levels.
With big data-processing shops the job of operator was, and still is, a separate skill demanding familiarity with the ins and outs of large-scale systems software. But in the small company - a first-time user or one which had perhaps relied on a bureau before buying its own machine - the roles would be merged.
On every desk
The end-users might be people who had been with the company for some time, familiar with the business and possibly the manual system which had preceded the computer. Often operating the machine formed only a small part of their duties and it would be a matter of teaching them to treat the new system as a tool.
The new small-business systems were within the reach of many more businesses than the mainframes with their special premises and team of attendants. The mystique began to be dispelled as people saw small computers being installed in their own office premises - not behind closed doors, with special under-flooring and cooling systems, but in the same corridor, and under the care of Brenda-who-has-been-here-for-years.
When the microcomputer burst through the pages of the Sunday colour supplements into homes and businesses, people began to learn Basic and write their own programs. Operating systems like CP/M meant that people could manage their own machines, and the market realised that applications written for particular machines and operating systems in the micro market were saleable.
Computers were more widely used than ever before, and end-users expected better service, more for their money, and even access to their own information - logical enough, but impossible in the days when hardware had been so expensive that the computer had to be carefully tuned to maintain performance per penny.
Handing over power to end-users can hamper the absolute performance of the machine, but makes the people more valuable because their time is spent more efficiently. In the eighties, this has become the important part of the employment equation. People are becoming more aware of computerisation than they were when their pay slip and bank statement were the closest they ever got to a computer.
While the “Noddy programs” gather dust, the new and sophisticated applications of the micro have forced the data-processing business to take notice. Microcomputers have long been part of the furniture in universities and colleges, and they have already proved their worth at departmental level in large companies like Shell.
Those in the DP industry who had been inclined to dismiss the microcomputer as little more than a toy, far removed from real computing, have had to re-evaluate. Nonetheless software houses recognised the limitations of Basic, the native language of the average micro.
Too many people had pushed out software which worked for them, without realising how easy it is to bomb software if it does not cater for all sorts of errors. It is easy enough to write a routine to do a particular job, but much more difficult to make it watertight, bug-free and easy for the user to work with.
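As a minimal, hypothetical sketch in the Basic of the day - the prompt, variable names and messages are invented purely for illustration - the line that reads a number is trivial, while most of the listing goes on rejecting bad input:

10 REM HYPOTHETICAL EXAMPLE: ACCEPT A QUANTITY, REFUSE ANYTHING ELSE
20 INPUT "HOW MANY ITEMS"; Q$
30 Q = VAL(Q$)
40 IF Q < 1 OR Q <> INT(Q) THEN PRINT "PLEASE TYPE A WHOLE NUMBER": GOTO 20
50 PRINT "ORDER ACCEPTED FOR"; Q; "ITEMS"

Even this sketch guards only one field; a watertight package has to do the same for every entry the user could possibly make.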
Powerful tools
Micro packages became freely available but they sometimes lacked quality. Some needed extensive testing by the user and others were just so limited in power that users would become exasperated and look for something else.
The micro software market went through a similar learning cycle to the mainframe and minicomputer markets, except that micros could be linked to big host machines where the software could be developed before being run on the micro.
This gave access to the more powerful and sophisticated techniques of programming, particularly high-level languages. The more enterprising software houses concentrated on supplying those tools to the micro, and gradually Pascal, Cobol, Algol, APL and RTL/2, all highly “professional” languages, began to emerge on micros.
The other big problem was lack of sheer size and power. Even if you were lucky enough to find watertight software, the limitations of the floppy disc made themselves apparent pretty quickly if the micro was running several applications, rather than just one.
By this time, software experts whose roots were in the traditional data-processing world were well aware that the micro offered opportunities which made working in a Cobol shop with a mainframe dull by comparison. Thoroughly professional software tools, like the CIS Cobol compiler from MicroFocus, complete with development aids, had been produced. Far from dismissing the micro as a toy, most professional programmers became enthusiastic and realised that their skills were not necessarily obsolete.
The gap narrows

Everything had grown up a little since the original eight-bit micro. Technology had moved on, and hard discs solved the storage problem for microcomputers. Winchester-type hard discs, such as Corvus, meant no more fiddling around changing floppies and squeezing data into overflowing spaces.
The 16-bit machines that have been appearing on the market in the last year or so are not far removed from minicomputers: as well as mass storage, their operating systems cater for multiple users, offering the kind of facilities that used to be associated more with mini and mainframe machines.
Manufacturers had learnt the importance of operating systems to machine and software sales from the immense popularity of CP/M. In the eight-bit market, people wrote applications which ran with CP/M simply because it had the reputation of offering a wide choice of software. The cycle perpetuated itself: people bought CP/M machines because they knew that there was plenty of CP/M software out there, and programmers, sure of their market, went on writing it.
Even Digital Research, the small systems house which originated CP/M, admits that it was not necessarily the best operating system. It was ready and available when people needed it, and became recognisable and familiar. Just how tight a grip it now has is evident in that even on the more powerful 16-bit machines of the next generation, customers are asking for CP/M to be implemented, much to the amusement of those who have nurtured new operating systems into being so that the new machines can make the most of their extra power.
There is a wealth of independently-written software applications on tap for CP/M, with a range and choice which would bewilder most mainframe pundits. As a result, the micro manufacturers have evolved a different method of doing business from the original “here’s the hardware and you’d better stick to us for the software” technique. Most micro manufacturers did not attempt to supply applications. Hardware dealers could refer buyers to whole lists of independently-written software.
This off-the-shelf method of selling software like soap powder from a supermarket works far better in the micro environment than it ever did with the large machines, though there are major differences in the two markets.
To make a profit, micro software distributors have to sell in volume, and the customer has to take it or leave it. There is no question of elaborate tailoring for each customer, and packages have to be robust enough to stand on their own with the minimum of maintenance. Documentation and operating instructions have to be of a standard that would allow a comparatively naive first-time user to get the package up and running entirely on his own.
If the package does not work, or if there are problems in sorting it out, it is probably cheap enough to be thrown away. The price of the microcomputer itself has always put an upper limit on the cost of the software, however brilliantly devised and written.
In the mainframe market there can be no question of “disposable software”. It is not unusual for a full suite of financial and payroll programs, or perhaps a set of development aids, to cost well over £20,000. High initial prices are followed by heavy maintenance costs.
Large packages require constant maintenance. Payroll packages need instant updating as laws and tax regulations change. Most micro packages, if they receive any maintenance at all, will be updated through the post.
Weak excuses
Often the end-user now has control of parameters and can do a certain amount of housekeeping maintenance, but the onus is still on the supplier to make sure that software is bug-free.
Large systems for mainframes involve a lot of high-level language programming, but with high-level languages now as much in evidence on micros as on mainframes, the excuse that such programming skills are expensive can hardly account for the difference in the cost of software.
Mainframe installations do demand more skills at all the different “layers” of software. The mammoth operating systems of mainframes make writing software for them far more difficult than when dealing with TRSDOS or CP/M. Doctors or shopkeepers writing their own applications can be their own consultant, but for mainframes the roles of consultant and systems analyst remain vital. The system has to run efficiently, which may involve a systems programmer as well as the operator. No wonder it becomes expensive.
The mainframe market has been hampered by its complex and grossly inefficient operating systems, conceived when giving power to the end-user was out of the question, and portability to be avoided, since the manufacturer was anxious to keep his users well and truly locked in to his equipment.
The micro market proved that software portability was likely in the long run to benefit everyone since the more software is available, the more likely hardware is to sell, especially as the software decision has become the crucial part of buying a system. The choice of Unix, the timesharing operating system from Bell, for some 16-bit machines has opened up the possibility of software applications portable between micro and mainframe because mainframe manufacturers are also adopting Unix.
A lesson learned

Miraculously, the big boys seem to have learnt the lessons of the micro market. IBM finally put the stamp of respectability on micros by launching its own 16-bit machine last year with outside-written software: quite a U-turn for the company which originated the idea of lock-in operating systems and hairy system conversions.
IBM picked CP/M-86, and promises total compatibility with CP/M. Other applications were announced; Peachtree, for example, was approached for its financial packages to be supplied as the standard software applications with the IBM Personal Computer.
Software publishing, which gives the same service to program authors as book publishers give to novelists, has become the in thing in microcomputing. Caxton Software Publishing, which claims to be the first such London publisher, puts enormous emphasis on the quality of presentation.
The mini and mainframe markets are taking note, and organisations like Wang now actively encourage independent software suppliers. Even IBM looks with favour on suppliers of “alternative” applications.
Micros have made end-users more demanding. Data-processing managers can no longer ignore micros. The user who would once docilely accept a six-month wait for his application is now more likely to go out to buy a micro for his department.
The idea of de-skilling the use of a computer had already won acceptance in the micro field: soon people wanted to get their hands on the mainframe, too. This change was really just a process of moving the skills one step up the line. Programmers had to write software which was that much more clever so that users did not need to be.
Now the ultimate user-friendly tools are being developed at the mainframe end: speech synthesis and interpretation, expert systems, and natural-language systems which allow the users to communicate with the computer on their own terms. Some of these products, the result of artificial-intelligence research, are already being sold, but they are notoriously power-hungry and would chew up the processing power of a micro before you could do a syntactical analysis of Jack Robinson.
Not forgetting the matter of existing investment, mainframes are unlikely to be pushed out by micros simply because their immense processing power is still needed for the everyday running of companies. The micro excels as a flexible tool for the end-user, but the mainframe is still needed for the dirty work: the corporate processing of payroll and accounts.
The next step
Those micro users who declared UDI with their own departmental machine are beginning to discover that it would be very useful to be able to tap into the mainframe sometimes, for data, or sheer processing power. And the mainframes can get on with number crunching or data chewing far more efficiently if relieved of all those specialised applications. The next big issue for the computer community will be networking and telecommunications. If we can get it right, both micros and mainframes will find their niche in systems where the quality of the job matters more than the size of the mill.