ACM Fellow Profile
Eugene H. Spafford

spaf@cerias.purdue.edu
http://www.cerias.purdue.edu/homes/spaf/

Gene Spafford ("Spaf") is professor of Computer Sciences, professor of Philosophy, and Director of the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University. He is also the interim Information Systems Security Officer for Purdue University.

ACM awarded Fellow status to Spafford for "continuing scholarship and community service in promoting computing technology and networks, with particular emphasis on issues of security, ethics, safety, and responsible use." Spafford was named a Fellow of the IEEE, effective 1 January 2001, for "leadership and contributions in the field of information security." He has also been named a Fellow of the AAAS for his contributions to the field of computing.

Spafford serves as one of ACM's two members of the Board of Directors of the Computing Research Association and continues as co-chair of ACM's Committee on US Public Policy.

Spafford's research interests include computer and network security, computer crime and ethics, and the social impacts of computing.

Among other firsts, Spafford coined the term "firewall" in 1990 as applied to networks. He coauthored "Practical Unix and Internet Security" and "Web Security and Commerce" with Simson Garfinkel (both published by O'Reilly & Associates). His recent papers include:

  • J.B.D. Joshi, W.G. Aref, A. Ghafoor and E.H. Spafford, "Security Models for Web-based Applications", Communications of the ACM, vol. 44 no. 2, 2001
  • J.B.D. Joshi, W.G. Aref, A. Ghafoor and E.H. Spafford, "Digital Government Security Infrastructure Design Challenges", Computer, vol. 34 no. 2, 2001
  • T. Daniels and E.H. Spafford, "Identification of Host Audit Data to Detect Attacks on Low-level IP Vulnerabilities", Journal of Computer Security, vol. 7 no. 1, 1999
  • M.J. Atallah, K.N. Pantazopoulos, J.R. Rice, and E.H. Spafford, "Secure Outsourcing of Scientific Computations", to appear in Advances in Computers, 2001
  • E.H. Spafford and D. Zamboni, "Intrusion Detection Using Autonomous Agents", Computer Networks (Elsevier), vol. 34 no. 4, 2000

Spafford was interviewed for this profile by Greg Cooper in February 2001.



About a year ago, you were invited to the Infosecurity Summit at the White House, at which President Clinton made it clear that information security was an area of national importance and that it was taken seriously by him and his administration. Do you see the current administration according it the same level of significance so far?

I haven't seen any such interest yet from the current administration. They haven't appointed anyone senior so far in the science and technical area, so I have some qualms. However, we will need to see what develops in the coming months.

What about on-going offices like NIPC, the National Infrastructure Protection Center in the FBI, which says it's the government's focal point on critical infrastructure?

Different organizations in the executive branch claim the leadership role. A rapport, and a base of expertise, had been built up between the security community and the last administration's political appointees in the executive branch, but now those appointees are gone. There are a number of people at NSA, the FBI, and NIST who try to do the right thing, but they don't have sufficient resources or scope of authority.

Politics complicates the picture. For example, about a year ago there was a largely non-partisan effort to create an Institute for Information Infrastructure Protection. It was developing in a truly collaborative way, but as the result of a last-minute amendment placed on a budget bill, the funding went to an academic institution with little or no expertise in this area -- the institution was in the district of the senator who attached the rider.

Most of the academic centers have ended up with nothing from the Federal government towards their core needs. This doesn't bode well for the next few years. The fledgling organizations that have gotten resources either have to cannibalize the places that have existing programs and no funds, or they'll possibly exhaust their resources over several years without accomplishing anything. As someone who's been in the field a long time, I find it very discouraging.

Could the funding be made stable somehow?

The big problem is funding for core infrastructure: the continuity of the people such as systems administrators, continued access to journals, and so on, and this is particularly important for the state schools. At these places, the school administrators say, "If it was important, the government would pay for it." And government says, "If it was important, industry would be doing it." And industry says, "If the schools want to do it, they or the government should support it."

So it's a Catch-22?

Yes. Most of industry isn't looking to the far horizon; it's looking just two steps ahead. The bottom line is the next quarter. There are several issues:

There don't seem to be any recent high-profile or sector-wide damages being sought yet. Therac comes to mind out of the past, but it seems that many vendors figure they're not making life-support or air traffic control software (viz. the Java license disclaimer), so they don't have to worry about quality.

There are a number of big vendors pushing to get UCITA passed. It's an interesting strategy: instead of trying to make things better, these vendors are trying to cover their backsides and keep the ability to market shoddy products. Under UCITA, consumers could be prohibited from even writing a negative review of a software product without the company's permission. UCITA has passed in Virginia and Maryland, and is under consideration in several other states now, including Texas. Consumers remain blissfully ignorant of UCITA and similar dangers.

...despite the efforts by ACM and IEEE. What's the predisposition of practitioners to ethical behavior?

In medicine there's an awareness of the impact of the practitioner's actions on people because they see them up close. But computer specialists haven't had the perception that ethics or social concerns are important, because they don't see the human impact of their work. They never see who uses the software. If I'm writing a subroutine library, I may not even know what software it's used in.

We think "We're writing throw-away code," or "We have to get it to market right now." There's a whole sense that these other issues don't apply, and we end up with systems that ignore safety and privacy concerns. We're building systems that capture information because we can, without considering whether we should do so. We add features that the user doesn't want or need or use, because we can. And then people act surprised when those extra features contribute to failure or misuse.

Too many computer specialists don't stop to think of ethical issues, because they've been taught that it's a mathematical science -- there are supposedly no values inherent in the application of technology. But we should be designing with safety, security and privacy in mind. Practitioners usually do recognize this when they hear it at conferences or presentations, but it needs to be ingrained and pervasive from the first educational exposure to computing.

But isn't there a temptation for practitioners to fall back into the old ways when they return to work?

It's about cost. If you make software better, you may be slower to market. Even if your product is more reliable and more secure, and does what the user really needs, you may go out of business while the makers of inferior products continue to prosper. The software that most people use today reinforces that.

It's the Fast Food syndrome: people want it because it's quick [to buy] and cheap. Never mind that the taste is monotonous, and that it may give you high blood pressure or clogged arteries, contributing to your demise. Meanwhile, the vegetarian and health restaurants are harder to find. Their menus have a wider selection and the food is better for you, but it takes more time to prepare and it costs more.

We have a problem with the marketplace. There are forcing functions: you can make something cheaper or faster, increase the demand, or increase the consequential costs. The history of software engineering shows that a lot of our techniques make software better, but they usually don't make it faster -- there's a cost to bounds checking and other runtime checks. If you can't make it cheaper or affect the demand while making it better, increasing those consequential costs is all that's left to effect a change.
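As a rough illustration of the runtime cost Spafford alludes to, here is a minimal C sketch (the buffer, names, and numbers are illustrative only, not anything from the interview): the checked accessor pays for an extra comparison and branch on every call, which is exactly the kind of overhead often cited against building such checks in.

    #include <stdio.h>
    #include <stdlib.h>

    #define BUF_LEN 16
    static int buf[BUF_LEN];

    /* Unchecked access: as fast as the hardware allows, but an
       out-of-range index silently reads adjacent memory. */
    static int read_unchecked(size_t i) {
        return buf[i];
    }

    /* Checked access: one comparison and branch per call -- the
       "cost of bounds checking" mentioned above. */
    static int read_checked(size_t i) {
        if (i >= BUF_LEN) {
            fprintf(stderr, "index %zu out of range\n", i);
            exit(EXIT_FAILURE);
        }
        return buf[i];
    }

    int main(void) {
        printf("%d\n", read_checked(3));   /* in range: prints 0 */
        printf("%d\n", read_checked(99));  /* out of range: caught, program exits */
        return 0;
    }

A compiler can sometimes prove a check redundant and remove it, but the perceived per-access cost is what the argument above refers to.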

What's the most oft-repeated mistake in software engineering?

I don't think there is a single mistake, per se, but a small group of them. First, we have many people who've never been taught how to do things right. The demand for writing software is so high, and the number of places teaching how to do it right is so small. You can't blame many of the people for doing it wrong, because they've never been taught the right way.

A second part of the problem is the mindset that we must create software to be compatible with the old junk that predominates. You could create a new system that's secure and reliable, but isn't written in C or C++ and doesn't run Microsoft Word, and it wouldn't succeed. The hackers wouldn't want it because it isn't written in C, and the business users wouldn't want it because it can't run Word. So, any products we develop must incorporate the problems and vulnerabilities of the past.

Third, there's a lack of understanding of the consequences of poor quality. If we don't fix the defects, people put up with them, sometimes without realizing the vulnerabilities they have.

The combination of these things is not a mistake. Ignorance is a foundational property. The pressure by the consumer community to simply crank out people who can do something overwhelms the capacity of the places that are structured to produce people who can do things correctly and well. Until we find how to make better software with only marginal (or no) increases in cost or execution speed, it is unlikely we can stem that tide.

Look where people invest for long-term research and education: there are endowed chairs and funded centers in e-commerce and computer games, but you don't see people endowing chairs in security, and only a few in software engineering. What are the priorities of the people who have money?

The vast majority of users don't understand what's going on in their OS or CPU, and they trust the people who brought them the technology to do things right.

We have a history of technologies that seemed to solve problems and were rolled out to the public before their full impact was known: X-rays, antibiotics, DDT, and more. Experts pronounced these things safe, but there turned out to be hidden effects and hazards that became apparent only after time or overuse. This is what happens when people rely too much on technology without experience with its long-term effects. The technology is too complex for the average person to understand, and so they need to depend on the experts -- but the experts sometimes don't understand the issues, and sometimes they're not even experts! There's also a problem when the most notable experts have an emotional or financial stake in the marketing of the technology.

We're also seeing this with bioengineering: "we tested it and there were no ill effects after 10 generations." But maybe after 100 or 1000 generations there will be, or there may be some other interaction that we don't recognize yet. That is what scares many people who look at those issues. We're not doing a good job of being good stewards for the public -- not in every case. The public's eagerness to drive the new economy with the next new discovery doesn't help.

This isn't to suggest I'm against technology. Rather, I think there are consequences to pushing out technology because it's novel. We need to do a better job of thinking about long-term effects and attempt to act in the best interests of a population that isn't able to understand that the new features that excite them may also harm them.

Do you think the desire not to have stringent controls is a particularly North American attitude?

Yes, I think it is, more so than many other places. It's a cultural thing, and it extends to other areas of infrastructure, such as power distribution, telecommunications, highways, banking and others. However, other places come to mind with even more control; Singapore, for instance.

As abuses continue more people will lose confidence. There are reports of people who've suffered credit card fraud and have stopped going online because they don't know how to protect themselves. And that lack of confidence may lead to the general public clamoring for even more controls to protect them. It will hurt us all in the long run because we didn't design it right in the first place.

Lack of industry/academic interchange has been described as a problem for several years, typically in areas such as testing and verification. In the case of security and ethics, it seems that the problems arising from practitioners not hearing or heeding the warnings could be far more costly.

This tends to happen in any field when there are no consequences for making bad decisions and not staying current in the field. The ACM Code of Ethics has as one of its points the responsibility to acquire and maintain professional competence. This is recognized as being part of any profession. The problem is that many people don't see it as a profession -- they see it as a job. Right now the demand is so intense that employers are willing to hire and keep anyone who can program, and the difference doesn't matter.

Fred Brooks wrote that there's "no silver bullet" for improving programming productivity, that complexity is the business we're in. Peter Neumann wrote of the "not-so-accidental holist" and the need for a systems perspective when dealing with the risks of computers and related technology. How can practitioners begin to develop a holistic view and behavior?

First, they should join a professional society that helps to keep its members current. Second, they need to be aware of what's going on in the world. Maybe the most important aspects of being a professional are not only being technically adept, but being aware of one's place in society. I see some students who don't read even the weekly news outlets, let alone the daily ones. Awareness of our context is part of being a well-rounded individual. You have to read more than simply the trade papers. Find out what society thinks is important.

ACM, the IEEE Computer Society, Usenix, SIAM, and others put resources into producing journals and conferences designed to keep their members aware of what's going on in their fields. With a profession, you're expected to set aside time for continuing education, for going to conferences and seminars, for finding out how others do things. These are simple steps, but they're a start.

The position of Privacy Officer is becoming a corporate staple. We're also seeing creation of the Corporate Ethicist, whose office lies outside the normal chain of reporting, so anyone can take a problem or issue directly to them. These are good things and are indicative of a change in the industry. Being aware of those trends and contributing to them will help grow the profession and develop a more holistic view.

Twenty years ago we had graduate students and others who refused to take jobs working on SDI (the Strategic Defense Initiative). And people took a very strong stance about dealing with companies that did business with South Africa during apartheid. These are examples of roles we can play as individuals where our collective acts speak loudly. On the other hand, consider what happens when people simply go along with business -- the news that recently came out about IBM allegedly selling equipment to Nazi Germany for use in concentration camps has caused some significant outrage.

The tightness of the market can be to your advantage. If you're coding and you're being asked to do something unsafe, or to do some unwarranted data collection, you can try to educate your bosses, but if that doesn't work, you can vote with your feet. You don't have to work on projects that are unethical or unsafe. Consumer demand isn't the only factor in how work gets done or what gets done.

Each of us, as an individual, has the power to choose. In today's market we have more freedom to choose the right thing than ever before.

It seems that the majority of consumers, and a significant fraction of system or network managers, seek an appliance-like quality in their computing. In the face of this attitude, how can public debate on security and privacy issues be initiated so the market can be changed?

I believe that much of this is traceable to one fundamental problem we have. People who build software are better educated than average: they've had extra training, they're generally better at algorithmic thinking, reading and so on. Much of the general population is not as well-educated as we are: many are actually functionally illiterate.

Too often in the universities and companies where software people learn and work, they associate with others of similar abilities. They don't see what different people are like. When they're writing on-line help or manuals they say, "Anybody should be able to figure out how to use this..." but they don't know who "anybody" is, or what their capabilities really are. As a result, a lot of things get built and shipped that average people can't use fully. The "stupid user" help-desk stories people laugh over are actually pitiful.

We also say, well if the users are concerned about security, they can buy and install this firewall product. But it's a huge leap to get those people to even understand the need for such a thing. We have to do a much better job of understanding users and describing things to them.

And the people we're designing for may never comprehend certain things. In that case we have to make informed decisions on their behalf. We have to be good stewards and recognize that they may never appreciate the nuances of what we've built for them. This applies to security, privacy, fault tolerance, reliability -- all areas. Those users expect an appliance that works. We should be trying to design it for them, rather than expecting them to become computing experts.

You've said, "If we try to impose technological solutions as the only control on people, it will fail."

That's right. The technology isn't foolproof, and sometimes the fools are too clever. But more importantly, there are things that are better imposed by getting people to understand, or to agree, than by trying to build technology around them. Computing technology should be something to enable and enrich our lives -- not control them. That should be the goal of every computer professional. Unfortunately, not everyone working in the field is really a "professional."

There are many things that define the profession. Years ago there was the distinction between data entry clerks and programmers. Another example is the distinction between bookkeepers and accountants. In each case, one title represents a job, the other a profession, and the distinction between them is based on the trust we place in their results and their skill sets.

People who code up web pages are not necessarily professionals, although they should be aware of privacy, usability and other issues, and maybe should be supervised by someone who is a professional. The effort to license software engineers that's being seen now in some states may apply some pressure in that direction.

We have a profession already. Among other places, you can find the evidence in the kinds of work, techniques and contributions of the people who've been made Fellows of ACM or IEEE. The work is recognized not only for its technical accomplishments but also for helping to define the profession and for service to society. That the professional organizations have codes of ethics or conduct is also a sign that the profession values more than technical expertise.

Do we want people who are currently satisfied simply to get a paycheck to aspire to be part of the profession? Some of them only want to build video games and get stock options.

But as part of the profession you're doing something to make a difference. It happens when you stop focusing on writing something for yourself and start giving something back. It happens a step at a time. Members don't belong to ACM or the Computer Society because it raises their salary -- it doesn't. They participate in order to make things better for the profession and for society.

Thanks for sharing your insights with us!