ACM Fellow Profile
Roger Needham

Kindly elaborate on the work leading up to your achieving the distinction of ACM Fellow:
I was an academic for most of my life. I started computing research in 1957, and since 1961 I have worked in the Systems Area. I worked on Operating Systems in the 1960s. In the 1970s I led a team that designed and built a computer with special features for memory protection. Then in the late 1970s and most of the 1980s I was concerned with LANs and Distributed Systems; we built the Cambridge Distributed Computing System, with a book published by myself and Andrew Herbert. Then I was the Cambridge leader for a project called Project Universe, which was concerned with the data connection of local area networks by satellite -- a big collaboration between seven organisations. But I've been interested for a long time in various security things. Probably in 1966, I introduced the now almost-universal practice of encrypting password files with one-way functions. Then in 1977, when I was on leave at Xerox PARC, Michael Schroeder and I created the Needham-Schroeder protocol for authentication, which was one of the basic ones that Kerberos used. After all of that, I was elected in 1985 to the Royal Society, and when the ACM decided to have Fellows, they started by making Fellows of everyone who was a member of their national scientific societies, like the Royal Society in London or the National Academy of Engineering in the U.S.
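The password-file practice he describes -- storing only a one-way function of each password, never the password itself -- can be sketched in a few lines. This is an illustration of the principle only (the user name and password are invented); real systems add per-user salts and use deliberately slow hashes such as bcrypt or Argon2 rather than a bare SHA-256:

```python
# Sketch of one-way-function password storage (illustrative only).
import hashlib

password_file = {}  # what an attacker might steal: digests, not passwords

def register(user, password):
    # Store only the hash; the plaintext is never written anywhere.
    digest = hashlib.sha256(password.encode()).hexdigest()
    password_file[user] = digest

def check(user, password):
    # Re-hash the attempt and compare digests.
    digest = hashlib.sha256(password.encode()).hexdigest()
    return password_file.get(user) == digest

register("rmn", "cap-1970")
print(check("rmn", "cap-1970"))  # True
print(check("rmn", "wrong"))     # False
```

The point of the design is that a stolen password file reveals only digests, and inverting a one-way function is computationally infeasible.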

What are the best references to your work?
My 1978 paper with M.D. Schroeder, Communications of the ACM, 21 (12), 1978, 993-999; and my paper with M. Burrows and M. Abadi, 'A Logic of Authentication', 12th ACM Symposium on Operating Systems Principles (1989), Operating Systems Review, 23 (5), 1989, 1-13, and ACM Transactions on Computer Systems, 8 (1), 1990. Although for most of my career I was a practical builder of systems, the things I'm best known for are those two papers, both of a theoretical nature and both done when I was on sabbatical leave. So you can work away on a complicated system for seven years, and nobody remembers that.

Did your earlier practical work on building systems help you to abstract some fundamental principles which served as input to your theoretical papers?
I think so. I regard myself as a systems person, not an OS person, nor a communications systems person. I think all three systems require the same kind of skills.

What kind of skills?
Being able to take a broader view rather than focusing on very fine details; it's hard to be exact about it. For example, having some feeling for when something is not going to work, or when something is just too complicated.

Do you think these come with experience?
I think so. There was a well-known British computer scientist, Christopher Strachey, who died 25 years ago, who said some very wise things, and one of them was, "It is impossible to foresee the consequences of being clever, so you try to avoid it whenever you can." For much of the history of computing, people were trying to save memory.

Not CPU?
Maybe CPU, but certainly memory. Memory was a terrific constraint. People did all sorts of clever things to optimise the use of memory, which had the consequence that systems were difficult to get right and very inflexible, since they depended on an exact set of circumstances; if external circumstances changed a bit, it could be difficult to adapt the system. Somebody once said that optimisation means making something smaller and faster that does nearly the right thing. And it's very easy for a designer to give away important bits of function, because he can see that if he gives away some bits of function, the whole thing gets smaller and goes faster. Throwing away function is taking optimisation too far.

What are your current research interests?
To the extent that I have any, they are largely in security. But in my current position, I don't get time to do research. When I joined Microsoft in mid-1997, it was with the job of starting up their European research lab from nothing, and that has kept me very busy. I was glad to join Microsoft. My predecessor as Head of the Computer Lab at Cambridge, Maurice Wilkes, retired in 1980 at 67. He immediately went to the U.S. to work for Digital, and when he'd been there a while, he said to me, "If I had known what fun I was going to have in industry, I would have done it sooner," and I've never forgotten that.

Which areas of security are important?
I'll tell you what I think is important, though I don't know how to work on it: today, one of the principal sources of insecurity is the difficulty of security management. It's difficult to know whether the access controls organisations have set up really enforce the security policy those organisations have. In a big, complicated situation, they probably cannot be tested or proved. Also, security procedures are often extremely cultural, and you'll find that local managers in organisations take shortcuts because they don't know why the procedures are there. This is not a criticism of them, because the procedures ought not to be as difficult as they are.

Do you think the fault of these systems lies very much with the local policies?
Well, it's also due to the technology, because the technology is too difficult to manage -- just as running distributed systems and network management is difficult.

Is that because the problem is very difficult to solve?
I think so. I think usability of security apparatus is a very important thing. I don't quite know how to attack it, but I think it's where progress needs to be made.

You mentioned Maurice Wilkes earlier. He wrote many books, both general-interest and computing books. Have you written such books?
Wilkes and I wrote a book together in the 1970s, and I wrote a book with Andrew Herbert. Those are the only books I've done.

Do you have plans to write more?
I don't think so.

What are your current outside and community interests?
At the moment, I sail. I keep a boat on the Orwell.

Ah, the Orwell runs near Ipswich.
Yes, that's right. I've kept a boat there for many years. I used to be much more involved in the community: I was a district councillor for 15 years and a parish councillor for 20 years, but I gave that up, as I'd done them long enough.

Have the skills you gained from your community work been useful or applicable to your work as a manager or administrator?
Some of them were. But at various times in my career I have also done a fair amount of quasi-governmental work. I was a member of the University Grants Committee for a few years before it was abolished. I served on a number of Research Councils, committee reports, and so on, and I'm currently a member of the Defence Scientific Advisory Council. One of the reasons I gave up my local government job was that, when I was on the University Grants Committee, I found I didn't have time to do both as well as doing my job.

What or who were the greatest influences on you?
Probably Wilkes.

Why is that?
He was the Head of the Department I worked in for 20 years.

What qualities did he have?
Well, what one learned from Wilkes was the importance of doing research in areas that actually matter. It's very easy to do research if you think research is just finding out what nobody knows. Well, that's not good enough; if you want to do research, you want to do research that will have some influence. A lot of research is done which certainly adds to our knowledge, but adds to it in ways we didn't find very useful. If you look at the progress of computing, solutions are found by research. They are usually not complete, but quite commonly, before you answer all the outstanding questions, things are bypassed by technology. A beautiful example of this is virtual memory management. When virtual memory was first used in the 1960s, all attention was given to page-replacement algorithms. There are all sorts of intellectual questions around there which were never answered. You'd be stupid to do research on them today, because memory is so much bigger now that they just don't matter.

I read something from Dijkstra some years ago; paraphrasing, he said that the more powerful our machines get, and the more memory they have, the more complex the tasks software engineers are called upon to solve. Do you agree with this?
I think so. Well, I always advise my Ph.D. students -- I think I've had about 40 -- that good research is done with a shovel, not with tweezers: you should find an area where you can get a lot out of it fast.

What has been your own greatest influence on the field?
The security work. Needham and Schroeder was about the first readily accessible paper on authentication protocols. One of the things we did was to establish the style and notation which is still used. And the 'Logic of Authentication' paper gave rise to a small industry doing that sort of research. It grieves me to say these were much more influential than all the other practical engineering I did.
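The symmetric-key exchange from the 1978 paper can be sketched as a toy run of the five messages in the notation he mentions. Here "encryption" is modelled as a tagged tuple that only the matching key can open -- a sketch of the message flow only, not real cryptography:

```python
# Toy model of the symmetric-key Needham-Schroeder protocol (1978).
import os

def enc(key, payload):
    # Stand-in for symmetric encryption: a tagged box.
    return ("enc", key, payload)

def dec(key, box):
    tag, k, payload = box
    assert tag == "enc" and k == key, "wrong key"
    return payload

# Long-term keys A and B each share with the server S.
K_as, K_bs = os.urandom(16), os.urandom(16)

# 1. A -> S : A, B, Na          (sent in clear)
Na = os.urandom(8)

# 2. S -> A : {Na, B, Kab, {Kab, A}Kbs}Kas
K_ab = os.urandom(16)
ticket = enc(K_bs, (K_ab, "A"))
msg2 = enc(K_as, (Na, "B", K_ab, ticket))

# A decrypts, checks its nonce came back, and keeps the ticket.
na, b, k_ab, tkt = dec(K_as, msg2)
assert na == Na and b == "B"

# 3. A -> B : {Kab, A}Kbs       (B learns the session key)
k_ab_at_b, a = dec(K_bs, tkt)

# 4. B -> A : {Nb}Kab           (challenge)
Nb = 1 + int.from_bytes(os.urandom(4), "big")
msg4 = enc(k_ab_at_b, Nb)

# 5. A -> B : {Nb - 1}Kab       (response proves A holds Kab)
msg5 = enc(k_ab, dec(k_ab, msg4) - 1)
assert dec(k_ab_at_b, msg5) == Nb - 1
```

The handshake leaves A and B sharing the session key Kab without ever sending it in the clear; Kerberos follows essentially this shape.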

What do you think has made the greatest impact on software engineering?
It's difficult to say, as software engineering has been regarded as a problem for a very long time.

What do you mean?
What is good software engineering practice -- is there a characteristic discipline of software engineering? People have been asking that question for 30 years or more. There isn't a widely accepted set of practices.

There may be, but just applicable to particular sets of environments or companies.
That could very well be. As you remarked yourself a little while ago, the more computing capacity you've got, the more complicated the things you try to write. If you develop a set of software engineering techniques which would enable you to write the applications of today with great ease, you'll probably find that the applications of tomorrow can't be built with the techniques of today, because the applications of tomorrow are just bigger.

Why is that? Canít we have an underlying foundation or pattern that one can use? For example, mathematics research is still going on but mathematicians tend to build on past work.
Yes, sure, but it took a long time. A great deal of the mathematics that we use came from the eighteenth century, and after all, when Newton invented calculus, he used what we now consider a silly notation.

Wasnít calculus invented by Leibniz?
It's now Leibniz's notation that we use, but Newton was right in there. Things like that take a long time, and computing is 50 years old, more or less; serious, large implementations, no more than 40. Of course, there's a tendency, because it's a young man's subject, to re-invent the wheel at frequent intervals.

People have said that computing is a fast-moving subject, and what they mean is that the wheel of reincarnation goes faster.

So in your opinion, thereís no field or discipline that has made the greatest impact.
No. I've never been a software engineer in that sense, and if you regard software engineering as the art or science of building very large, complicated software systems, then it's not really a subject I've given much thought to.

You told me youíve built quite a number of systems.
Yes, sure. I worked on Operating Systems in the 1960s; in those days the expression "software engineering" had not been invented. We built another system in the 1970s; it was as much a hardware project as a software project. Working for Microsoft, I think of building Windows 2000 as a software engineering challenge.

When did it dawn on you that the majority of the most interesting work in the future would be software-based rather than hardware-based?
For a while, I worked in communications, and in communications the interest in hardware has been more durable: communications require things to work reliably. People talked for many years about the convergence of the communications and computing industries, which for many years failed to happen. I think the difference was cultural, but the cultural differences are going away. The communications industry is now more competitive and is starting to be quicker on its feet, and the computing industry has always had to be quick on its feet; that's causing the communications business to become more like the computing business. On the other hand, the computing business has had to move the other way, simply because of the enormous investment in software: you can't just say we ought to throw away all this software and start again, any more than you can say we ought to throw away all these telephone exchanges in Britain and have better ones.

Do you think having the communications business going the way of the computing business may impact the reliability of communications structures? Software and computing break down a lot while communications people actually try to engineer things.
This is undoubtedly true, and I think you'll probably find, if you're talking about very established communications companies like BT, that reliability is so deeply built into their culture that, even if they behave differently, they wouldn't throw it out. I would refuse to give any example of newer communications companies.

Whatís your favourite story about software engineering or development, in terms of successes or mishaps or failures?
The thing which struck me as most bizarre: when we had built our computer in the 1970s, we ran it with a temporary OS, and we found to our amazement that the OS would boot satisfactorily every third time you pressed the button. Not about every third time, but exactly every third time. This is the sort of thing you'll at first attribute to the intervention of the devil, but after a very great deal of thought, it proved to be a logic error in the computer, in the cache initialisation, and nobody could ever explain why such an error would cause such an effect.

Which computer-related areas are most in need of investment by government, business, or education?
This is a British point of view, of course. Government investment is required in equipment for education, and very good progress has been made in computers for education. In general, it doesn't appear to me to be an area where government has a very large amount to do, because industry is doing OK. I think in education itself there may be room for thought as to what one is trying to do. Are you trying to produce kids with IT skills, or are you trying to produce kids who understand things about computers? Those are two very different things. I think it's fortunate the computing industry is getting on quite nicely without the government being involved.

Yes, I agree with you. I do think that schools need to instil in students thinking skills and ways to solve problems, more than the way computers work.
I think that's true. I think understanding problems -- understanding the way to go about solving problems -- at any level of education is worthwhile. Whether you're in high school, in university, or doing a Ph.D., understanding how to solve problems is a tremendously valuable ability.

What advice do you have for computer science / software engineering students?
The important things are:
(i) understanding solving problems, and
(ii) being organised rather than being chaotic.

Why is the second one important?
Well, when you're a student, the computing exercises you work on are enormously smaller than real-world software. Somebody who is very bright but quite disorganised can do the exercises, and do them very well, without any kind of disciplined approach to their work, a discipline that's extremely necessary in real life. I think one of the huge challenges in computing education at university level, which is partly addressed by having group projects, is inculcating the importance of being organised. I think this is something that matters very much to students. So organising your thoughts and your work, and being good at communicating with others, are important, because I'm sure a great many difficulties in the correct functioning and reliability of software come from people working on two sides of an interface and not understanding the interface with each other properly. They have a mismatch of expectations, and that's where you get bad behaviour.

What is software? Is it engineering, or mathematics, or design?
I think of design as part of engineering. I think the whole of computer science is engineering. Not everybody agrees with me, of course.

What are your favourite books on software design, software architecture, and programming?
I never read books on my own subjects; it's one of the things you don't do when you've been in it for such a long time. I don't even have a computer at home.

What is the most often-overlooked risk in software engineering?
Not being able to adapt your programs to slightly changed circumstances, because one of the things that is not terribly well understood is requirements capture. A growing danger is that you have a statement of requirements, you design a system that does exactly that, and then you find that the requirements were not exactly in correct relationship with the real world, and you have built a system which you need to alter and find hard to alter. I think one of the important things in designing a system is to say, well, how are we going to ...? What's the sort of area within which we ought to adapt the system? You don't want to design a system with such rigidity in its design that you can't easily change it. I think that's a great source of problems. It all comes back to the undesirability of being clever. You look at some feature of the requirements and say, "My goodness, that feature enables me to do these tremendous optimisations," and then it turns out that the feature is not what you want.

So should one keep the user more involved?
Yes, keep the user involved, but the good designer also has some sort of feeling for what kind of variability he is going to cope with.

But what if the software designer doesn't work in that area, e.g., if a software engineer is called upon to design a mathematical application, and he/she has no background in mathematics?
Yes, sure. Suppose you're designing a database system for Cambridge University. Cambridge University has 15,000 students. It's pretty certain you have no need to design your system for 100,000, because Cambridge University will never be like that.

Never? "Never" is a dangerous term to use.
Well, not within the lifetime of a system. It would be extremely foolish to design your system to break if there were more than 16,000 students. You can't build a system which scales indefinitely; you need to know what the sensible limits are. If you were doing a system for the Cambridgeshire police, for example, you could perfectly well say you need not scale up to 16,000 policemen, as there will never be anything like 16,000 policemen in Cambridgeshire; I think there are at present 2,000.

Does that come with experience then?
I wish I knew.

What is the most-repeated mistake in software engineering?
I really don't know.

What are the research issues for the new century?
User interfaces. The sort of UI we use today was the result of research 25 years ago. Things are changing in technology now: speech is getting much better, vision is making great progress, and I think we're going to see furious experimentation in how you integrate speech and vision into using computers.

But if you regard the computer as a machine, does it need to have sensing features? You donít expect your car, as a machine, to have visual and auditory abilities.
Sure, but what's not known is the circumstances in which people will find it more convenient to communicate with their machines. I don't think it's obvious how these things will get applied; it is obvious that they will get applied. I think there will be furious experimentation. Graphical user interfaces are 25 years old; we ought to be able to do much better now.

What topics do you think will be exciting to work on in the next few years, say five to ten years?
There will be exciting work in adaptive systems and learning theory. Whether you call it A.I. is neither here nor there: adaptive systems, learning theory, understanding the world on a statistical basis.

When you read of or hear of many research areas under development, how do you judge which ones are ripe for the picking within the next ten to 15 years, and which ones will be more difficult and take longer to achieve headway?
I don't think anyone can say how one judges what a good area of work is. You hear of a good idea, you see the odd interesting paper at a conference, and you say, "Ah, that's a different approach, let's give it a try." It's a matter of always looking, always keeping your eyes open for things you ought to be doing. A lot of luck is involved: twice in my career I've been fortunate enough to write the papers which everybody has to cite because they were the first ones. I think what's important for a computing researcher is not to just sit there and wait. When I was an academic I used to interview candidates for professorships at other universities, and I always asked them what they thought their research activities would be in the next ten years; the only correct answer was, "I don't know." That's why computing research is an exciting thing: you don't know where the action is going to be. I think it was Napoleon who was asked, "What is the attribute of a good general?" He said, "March towards the sound of the guns." That's a good maxim.

How does one become good in software research?
Working with people who are good.

What are your plans for the future?
I'm 65. I have no intention of retiring. For the next two or three years, I want to see this place, i.e. Microsoft Research Europe, grow and flourish.

Industrial laboratories, which proliferated in the first quarter of the last century, have been accused of favouring conservative invention. I think it was the chemist Carl Duisberg, a director of Bayer before World War I, who said that the inventions of industrial labs have "no trace of a flash of genius." Do you agree with this, or is it different in software research labs?
I certainly hope not. One of the things I was told when I started this lab was: if every project you start succeeds, you've failed. I'm supposed to take risks, and without taking risks you can never make anything. This differs from lab to lab, of course, and from company to company. I'll quote Napoleon again, who said, "Nobody who never made mistakes ever made anything."

Thank you very much for your time, Professor Needham.
Thank you.

Profiled by Tope Omitola