I recently came across a book that I had rescued a long time ago from the discard pile of the technical library at work. Titled Multiprocessors and Parallel Processing, it was published in 1974 by John Wiley & Sons, Inc. and edited by Philip H. Enslow, Jr. What a trip down memory lane that was! One of the subtle points of the book was how minicomputers (DEC, etc.) were encroaching on mainframe computers (Burroughs, IBM, etc.). The whole microcomputer revolution that started in the late 1970s and 1980s was completely invisible to all but the most radical thinkers and future forecasters.
From a historical perspective, it’s interesting to see what subjects were covered. It begins with an overview of computer systems, including a four-page section describing “The Basic Five-Unit Computer”, with four block diagrams that illustrate the various configurations of the “Input Unit”, the “Arithmetic Logic Unit”, the “Memory Unit”, the “Control Unit”, and the “Output Unit.” I guess it was pretty revolutionary stuff in ’74, but it seems simple compared to what they pack into a dual-core processor these days.
One interesting emphasis in the book was the potential for improved reliability that multiprocessor computers could offer. It’s hard to remember that computer failures were a major source of worry in the ’60s and ’70s. There were many discussions during the Apollo missions about whether the guidance computers should be redundant systems, but the added weight, complexity, and fragility of the computers of the time made redundant systems unattractive, especially since redundant computers wouldn’t add significantly to the overall reliability. An MIT document on the Apollo Guidance system put the estimated failure rate at 0.01% per thousand hours, so the mission time of 4000 hours produced a “low” probability of success of 0.966, which meant that spare components had to be included in every flight, along with the tools and procedures to repair the computers during the mission. Current computers use components with less than 1 failure per million hours, making them one thousand times more reliable (or more).
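Figures like that 0.966 come from the standard constant-failure-rate reliability model, R(t) = e^(−λt). Here is a quick sketch of the arithmetic; note that reading the MIT figure as 0.01 failures per thousand hours is my assumption, and it only lands in the neighborhood of the quoted 0.966 rather than matching it exactly:

```python
import math

# Constant-failure-rate (exponential) reliability model: R(t) = exp(-lambda * t).
# Assumed reading of the MIT figure: 0.01 failures per thousand hours.
failure_rate_per_hour = 0.01 / 1000.0  # i.e., 1e-5 failures per hour
mission_hours = 4000

# Probability the computer survives the whole mission with no failure
reliability = math.exp(-failure_rate_per_hour * mission_hours)
print(f"P(no failure over {mission_hours} h) = {reliability:.3f}")  # prints 0.961
```

By the same model, a modern component at 1 failure per million hours gives exp(−0.004) ≈ 0.996 over the same 4000-hour mission, which is why nobody packs spare logic modules and repair tools on flights anymore.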
One thing that hasn’t changed is the motivation for Multiprocessor and Parallel Processing Systems – improving system performance. Improving performance and reliability are the subjects covered in Chapter 1, along with the basic components and structure of computer systems. Chapter 1 takes 25 pages to cover its material.
Chapter 2, Systems Hardware, spends 55 pages describing memory systems and how to share memory between multiple processors. Also covered were fault tolerance, I/O, interfacing, and designing for reliability and availability. Nearly half of the pages in the first four chapters are in Chapter 2.
Chapter 3 covers Operating Systems and Other System Software in 27 pages. It discusses the organization of multiprocessor systems (Master-Slave, Symmetric, or Anonymous processors), resource allocation and management, and special problems for multiprocessor software, such as memory sharing and error recovery.
Chapter 4, Today and the Future, is the last chapter, and it summarizes the future of multiprocessor systems in 10 pages or fewer. The final section in the chapter, The Future of Multiprocessor, includes the statement, “. . . the driving function for multiprocessors would be applications where. . . availability and reliability are principal requirements; however the limited market for special systems of that type would not be enough incentive to develope [sic] a standard product specifically for those applications.” Sounds like they didn’t expect multiprocessor and parallel processing to be a big market in the future.
Which brings us to a total of about 130 pages of a 330-page book. The remaining 200 pages are appendices describing commercially available multiprocessor systems from Burroughs (D 825 and B 6700), Control Data Corporation (CDC 6500, CYBER-70), Digital Equipment Corporation (DEC 1055 and 1077), Goodyear Aerospace STARAN, Honeywell Information Systems (6180 MULTICS System and 6000 series), Hughes Aircraft H4400, IBM (System 360 and 370), RCA Model 215, Sanders Associates OMEN-60, Texas Instruments Advanced Scientific Computer System, Sperry Rand (UNIVAC 1108, 1110, and AN/UYK-7), and Xerox Data Systems SIGMA 9 computers.
For those of you who actually read through the list, you probably noticed that it includes the MULTICS system that inspired Thompson and Ritchie to develop Unix. MULTICS may not have been a commercial success, but one of those systems continued to run until the year 2000, when the Canadian Department of National Defence in Halifax, Nova Scotia finally shut it down. It must have been quite a system if it inspired a couple of people to write a completely different operating system.
What I think is most interesting is that the majority of the book is about the hardware; system software (and the joys of parallel programming) is discussed on only 27 of the 330 total pages, and most of those pages describe how the software interacted with the hardware. At the time, everyone saw the major hurdle as the hardware, something that integrated circuits and the massive shrinking of components have almost rendered irrelevant. The amazing efforts of Intel, AMD, TI, and other chip makers have made high-performance, high-availability, high-reliability hardware available at reasonable prices, components that hardware designers could only (just barely) dream of in the 1970s.
Now it’s time for software developers to start thinking outside the box and come up with new ways to design software that is faster than the tried-and-true-slow-and-expensive-and-behind-schedule methods currently in use.