Of all the important issues in modern large systems, there is general agreement about all save one: security. Talk to senior IT managers about development backlog, fourth-generation languages, and relational databases. They will know what you mean, and will agree on their importance. But ask about security, and it is a different story. Some react with contempt; some consider it a waste of time and money; others, who have had problems in the past, value security very highly indeed.

I won't dwell on the need for security in modern systems. There are many reasons, and they are understood by most people. What I want to discuss is the issue of standards in security. For several years now, the only published documents relating to computer security have been the "Orange Book" and its companion "Rainbow Series" from the American National Computer Security Center (NCSC). A new set of standards, drawn up by a group of European countries (ITSEC), was published earlier this year, and there has been some controversy over it.

To understand why this controversy has arisen, it will help to look at how our existing security standards evolved.

In the early days of computing, only one person could use the system at a time, from the master console. Security was easy: if you could reach the console, you had passed several physical identification checks. Once you had access to the computer, you could do anything, with no risk of damaging other people's data (because they didn't store any data long-term on the system).

It wasn't long before multi-tasking, multi-user systems were introduced, and we all ran the risk of overwriting other people's data, and crashing their programs. Some way of controlling access to system resources was required; and so resource allocation systems were born. Systems programmers would have to do extra work to install sign-on exits for TSO, CICS or IMS, and they made the assumption that guarding access to terminals was outside their province. At that time, most or all terminals were in the computer room, or in special terminal rooms within the computer centre.

Two major advances in technology since the early seventies have destroyed the assumptions made by the designers of these early security systems: networking between peer systems, and 'fast' modems that allow terminals to operate from remote locations.

Now, every computer system in the world, or so it seems, has network access to every other. Not only that, but people are used to this type of access and understand how to use it.

The front line of computer security is the password check. Most computer systems' weakest point is the password system; systems are either too lax about password management or, curiously enough, too strict. If a password is reasonably secure from guesswork, it will be hard for its owner to remember. If it is hard to remember, the chances are that the owner will write it down somewhere relatively open, thereby breaching the password management system.
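That trade-off can be made concrete with a back-of-the-envelope entropy calculation. The sketch below is purely illustrative; the alphabet and dictionary sizes are my own assumptions, not figures from any standard:

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy in a password chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# An 8-character random password over 62 alphanumerics is strong
# against guessing, but hard for its owner to remember:
print(round(entropy_bits(8, 62), 1))   # -> 47.6

# A memorable password drawn from ~20,000 dictionary words is weak:
print(round(math.log2(20_000), 1))     # -> 14.3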

There are a number of other security issues which are at least as important as password management. Unless all of these factors are correctly used together, the system will not be secure. It is easy to focus on just one element, at the expense of the rest.

The most widely known system that covers all of the elements that contribute to security is the NCSC system of classification. This was developed specifically for the US Department of Defense.

The standards are divided into several levels (D, C1, C2, B1, B2, B3, A1), and cover a specific set of topics (Security Policy, Accountability, Assurance, and Documentation). A component that does not address all of these issues, and so is not a complete 'computer system', can be called a 'Security Service'. Interestingly enough, methods of design, testing and source code control are all specified.

D is the lowest level, and covers all systems that don't meet the higher levels. These might be completely unprotected systems, or systems with very elaborate security that nonetheless has weaknesses.

C covers security for users who are cooperating, protecting against accidental loss of data. A C-classified system is not considered protection against a deliberate attempt at entry. The 'secure' versions of MVS and Unix are only considered to be C-level systems.

The first level that might be considered to be adequately secure against attack is B. A B1 or B2 system will be very resistant to penetration. Different components of the system will be kept logically separate when designed, so that if one part is broken, the rest is still secure, and so that it isn't possible to gather data about the system by indirect observation.

The highest NCSC level is A; an A1 system is one whose security design has been mathematically proven correct. As you might imagine, this is a suitable subject for research papers, not for real-world systems!
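Because the levels form a strict ladder, "does this system meet the required level?" reduces to a rank comparison. A minimal sketch of that idea (the `meets` helper is my own illustration, not anything defined in the NCSC documents):

```python
# NCSC evaluation classes, weakest to strongest.
NCSC_LEVELS = ["D", "C1", "C2", "B1", "B2", "B3", "A1"]

def meets(system_level: str, required_level: str) -> bool:
    """True if system_level is at least as strong as required_level."""
    return NCSC_LEVELS.index(system_level) >= NCSC_LEVELS.index(required_level)

print(meets("C2", "C1"))  # -> True: C2 subsumes C1
print(meets("C2", "B1"))  # -> False: C-level is no defence against deliberate attack
```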

Since their introduction only five years ago, the NCSC criteria have come in for a lot of criticism. They describe military security, designed to prevent exposures at all costs. That means that a slow-running system is acceptable; that complete system shutdown is the normal response to a possible intruder; and that inconvenience isn't an issue. For business users, this isn't acceptable.

So the ITSEC group in Europe has produced a White Paper on security, which is now part of the official standards of the European Community. The paper has not been discussed openly, although copies have been available to interested parties who already knew of its existence. This White Paper has been an official standard since June 1991, and will not be reconsidered for the next three years.

The ITSEC draft covers roughly the same ground as the NCSC criteria, but with extra standards for database recovery, fault tolerance, and encryption. In the introduction, it states that the reasons for 'harmonisation' are to share experience between many countries; to provide a single standard for industry; and to recognise that security concepts are international.

The criteria are based on three objectives: confidentiality, integrity of data, and system availability. The document divides into three areas that address these objectives: security functions; assurance of 'correctness' and assurance of 'effectiveness'; and the evaluation process. The latter is clearly intended to be run as a commercial service, which will be paid for by a sponsor. It makes a clear statement of intent that it recognises security products as well as systems, although no further consideration is given to security products. The emphasis is on improving security through consultancy and custom development.

A sponsor will provide a list of objectives, known as a 'security target'. This will include desired functions, and perceived threats. A required level of protection is agreed with the evaluator. The sponsor must provide all resources required to conduct the test, and must even conduct the test itself.

The ITSEC standard is a paper drawn up by a consultancy, for the advantage of consultancies. It bears as much relevance to commercial security interests as it does to a banana. In general, it draws very heavily on the Orange Book. Early drafts of the paper included direct quotes from, and references to, the Orange Book. The NCSC issued a separate volume on password management; ITSEC ignores the issue altogether. Alternatives to password systems are also ignored; encryption (one of those alternatives) is mentioned, but only in vague terms.

The general European business community had an opportunity to build a new acceptance of security issues in the IT industry. To my regret, they have now lost that opportunity.