"Any clod can have the facts, but having opinions is an art."
Charles McCabe, San Francisco Chronicle

More controversy about "Computers, Complexity, and Controversy"

David Patterson and John Hennessy's response to our article, "Computers, Complexity, and Controversy," seems to indicate that they have misunderstood many important points that we tried to make. In this brief space we can only highlight their most serious misunderstandings.1,2

In our section on multiple register sets (MRS) we did not claim that RISC I's performance "has been shown to be attributable to (MRSs)."1 Nor did we state that "MRS is the 'real secret' behind RISCs."1 Our report was much different. To quote ourselves in context, we stated that "a significant amount of the performance of RISC I for procedure-intensive environments has been shown to be attributable to"2 MRSs. Our experiments showed that the performance effects due to MRS are comparable for RISCs and CISCs. Consequently, we urge RISC researchers who incorporate this mechanism to factor out its performance effects when attempting to gauge performance due to the reduced nature of their machines. Similarly, one should do the same for any other mechanism which affects performance and is independent of instruction set complexity.

In summary, it also seems necessary to restate that the goal of our MRS study was to evaluate the performance effects of MRS, not to re-evaluate the RISC I, the VAX-11, or the 68000. We simulated (using our own simulators) only procedure-intensive C benchmarks (all of which were supplied to us by Patterson) because MRS mechanisms are not exercised when procedure calls are not used. These benchmarks were appropriate for this study because they expose the performance effects of MRS, the focus of the study.
The tendency of many RISC supporters to emphasize throughput at the expense of architectural and other metrics is once again evident in the sentence, "The authors report that architectural metrics show that the VAX outperforms RISCs."1 This is an incorrect paraphrase: we never said that "the VAX outperforms RISCs." In fact, we very clearly stated that throughput per se is not being measured by the MCF architecture evaluation scheme, and that this is arguably a shortcoming of the MCF model that unfairly penalizes machines which attempt to optimize throughput by making tradeoffs across the architecture/implementation boundary, as do RISCs: "The MCF life-cycle cost models did not include execution throughput, so the RISC II performance-related features were ignored."2 MCF was not designed to be a 100-yard dash for computer systems (as RISC performance studies usually are); it was intended as a decathlon, reflecting our perception that actual systems in use need far more than large numbers of cycles per unit time in order to meet their goals.

In their response, Patterson and Hennessy have missed the most important point about performance comparisons with regard to the Intel 432. The 432's object orientation exacts an intrinsic performance cost, since it is manifested as a set of runtime checks and a large amount of additional information to be manipulated. Patterson and Hennessy state that an improved, "hypothetical 432 is still three to six times slower than the 16-bit Motorola 68000." This comparison, which we did not make, is meaningless because the two machines are not doing the same "work." To make a fair comparison, one would need either to migrate the object orientation of the 432 to the 68000 or to remove such support from the 432 (an enormous task in either case). Perhaps Patterson and Hennessy's comparison addresses the throughput costs of supporting the 432's style of object orientation, but it does not "provide evidence ... for the value of the RISC approach."2
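To make the "same work" point concrete, the loose C sketch below (hypothetical, and not the 432's actual descriptor or capability mechanism) contrasts a raw access with one mediated by a descriptor that pays for rights and bounds checks on every reference; the checked version simply executes more work per access, so raw-throughput comparisons measure different jobs.

/*
 * Loose illustration only; this is not the iAPX 432's real object model.
 * An access that goes through a descriptor and performs protection checks
 * does strictly more work per reference than a raw pointer dereference.
 */
#include <stdio.h>
#include <stddef.h>

struct descriptor {
    int     *base;    /* where the object's data lives           */
    size_t   length;  /* number of elements in the object        */
    unsigned rights;  /* bit 0 set means read permission granted */
};

/* Unchecked access: the "work" a conventional processor is asked to do. */
static int read_raw(const int *p, size_t i)
{
    return p[i];
}

/* Checked access: every reference also pays for protection and bounds. */
static int read_checked(const struct descriptor *d, size_t i, int *ok)
{
    if ((d->rights & 1u) == 0 || i >= d->length) {  /* runtime checks */
        *ok = 0;
        return 0;
    }
    *ok = 1;
    return d->base[i];
}

int main(void)
{
    int data[4] = { 10, 20, 30, 40 };
    struct descriptor d = { data, 4, 1u };
    int ok;

    printf("raw access        : %d\n", read_raw(data, 2));
    printf("checked access    : %d (ok=%d)\n", read_checked(&d, 2, &ok), ok);
    printf("checked, bad index: %d (ok=%d)\n", read_checked(&d, 9, &ok), ok);
    return 0;
}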

The Open Channel is exactly what the name implies: a forum for the free exchange of technical ideas. Try to hold your contribution to one page maximum in the final magazine format (about 1000 words). We'll accept anything (short of libel or obscenity) so long as it's submitted by a member of the Computer Society. If it's really bizarre we may require you to get another member to cosponsor your item. Send everything to Jim Haynes, Applied Sciences, UC Santa Cruz, CA 95065.


As history shows, there is not necessarily any correspondence between technical merit and success in the senses of awards, salesmanship, and commercial products. Researchers should be willing to let their work stand or fall on the basis of its own scientific worth. While it may be that many computer scientists and commercial computer designers have been convinced by the research presented in the RISC papers and dissertations, it has been our experience that a significant number have not (and for very good reasons, as we have argued in our article).

Robert P. Colwell
Charles Y. Hitchcock III
E. Douglas Jensen
H. M. Brinkley Sprunt
Carnegie-Mellon University

References

1. David Patterson and John Hennessy, "Response to 'Computers, Complexity, and Controversy,'" in "Open Channel," Computer, Vol. 18, No. 11, November 1985, pp. 142-143.
2. Robert P. Colwell, Charles Y. Hitchcock III, H. M. Brinkley Sprunt, E. Douglas Jensen, and Charles P. Kollar, "Computers, Complexity, and Controversy," Computer, Vol. 18, No. 9, September 1985, pp. 8-19.

In regard to the article "Computers, Complexity, and Controversy," I would like to make some more or less philosophical comments. This line of reasoning has not, to the best of my knowledge, been previously expressed, and I offer it openly for thought and discussion to the readers of Computer.

What is a computer? A machine that computes, right? Partially. To compute implies a strong emphasis on mathematical manipulations of numbers. However, anyone who has used a word processor can attest that computers do more than mathematical calculations. A computer, then, is a data processor, a tool used to perform any of an enormous number of processes on data: numerical, textual, and otherwise.

The versatility of a tool is (or should be) a prominent issue in its design, for versatility and usability are directly proportional. As an example, take the world's most-used materials processor: the human body. Due to its versatility it can perform any of an enormous number of processes on materials: solid, liquid, and gaseous. Sound familiar?

Any and all specialization reduces the versatility, and hence the usability, of a tool; this includes the computer! The architectural designers of CPUs, MPUs, and other components for use in a computer should take this into serious consideration. Even the choice of instruction sets is a specialization which prohibits transport of machine code from one machine to another. What about microcode, you ask? Microcode is simply a sublevel of programming. If the microcode is transferable across machines, then a degree of versatility is achieved (all well and good). If not, then versatility is reduced and the microcode becomes an administrative sponge, soaking up overhead CPU time and effort better used directly on the application program (neither well nor good).

This is one of the conceptual flaws in processor design that the pioneers of RISCs are attempting to reduce or eliminate. They first asked: what is it the users of our product, software engineers, need and use the most? Next, they removed the underused specializing frills from the product to increase its versatility. Finally, they experimented with methods for using the freed resources, namely space.

I am not saying that specialization is inherently wrong. Highly critical tasks may require highly specialized equipment. Even the human body has specialized in some instances. Still, its extreme versatility has aided it, more than any of its specializations, to survive and prosper. Increased versatility will do the same for the computer!

Daniel C. Becker
