Center for High Assurance Computing Systems (CHACS) Publications

1994

Costich, O., J. D. McLean, and J. P. McDermott, "Confidentiality in a Replicated Architecture Trusted Database System: A Formal Model," Proc. IEEE Computer Security Foundations Workshop VII, IEEE CS Press, IEEE Cat. 94TH0686-6, ISBN 0-8186-62340-1, June, 1994, pp. 60-65. PostScript
Unlike previous approaches to developing a trusted database system, the replicated architecture approach provides access control at a high level of assurance through replication of data and operations. We present a model of the SINTRA replicated architecture trusted database system which shows how the logical (users') view of the system and its security policy is translated into the physical structure and operations of the SINTRA system. We formalize the intended security policy for replicated architecture and demonstrate that a high level of assurance can be obtained solely from replication with virtually no change to the structure of the underlying database systems or the security kernel.
Costich, O., M. H. Kang, and J. N. Froscher, "The SINTRA Data Model: Structure and Operations," Proceedings of the 8th Annual IFIP WG 11.3 Working Conference on Database Security, Bad Salzdetfurth, Germany, August 1994.
Relational database systems are based on a powerful abstraction: the relational data model with the relational algebra and update semantics. If the database design (i.e., the way the data is organized) satisfies criteria provided by this foundation, users have assurance that they can retrieve information in a consistent, predictable way. Multilevel secure database systems must not only provide assurance that information is protected based on its sensitivity, but should be based on a data model as sound and complete as the conventional relational model. In this paper, we present a data model with a relational algebra and update semantics for a multilevel secure database system whose protection mechanisms are provided by the replicated architecture. The approach is to systematically describe the effects of treating security labels as data and to define explicitly the semantics of these data labels for relational database operations. We also briefly compare the SINTRA data model to earlier ones from the SeaView project and their derivations.
Froscher, J. N., M. H. Kang, J. P. McDermott, O. Costich, and C. E. Landwehr, "A Practical Approach to High Assurance Multilevel Secure Computing Service," submitted to the Tenth Annual Computer Security Applications Conference, Orlando, FL, Dec., 1994. PostScript
Current projects to provide MLS computing services rarely seem to exploit advances in related fields. Specifically, the concepts of data distribution, replication, and interoperation are currently receiving much attention in the commercial database system sector but have yet to be applied to the delivery of MLS computing services. This paper explains how these concepts might be applied to help deliver MLS computing services relatively quickly and cheaply, and how they can ease integration of legacy systems and new technology into future MLS cooperative, distributed computing environments.
Goldschlag, David, "A Formal Model of Several Fundamental VHDL Concepts," Proc. COMPASS '94, IEEE Press, 1994.

Goldschlag, David, "Mechanically Verifying Safety and Liveness Properties of Delay Insensitive Circuits," to appear in Formal Methods in System Design, Fall 1994.

Heitmeyer, C.L. and N. Lynch, "The Generalized Railroad Crossing: A Case Study in Formal Verification of Real-Time Systems", NRL Memorandum Report 7619, Washington, DC, 1994.

A new solution to the Generalized Railroad Crossing problem, based on timed automata, invariants and simulation mappings, is presented and evaluated. The solution shows formally the correspondence between four system descriptions: an axiomatic specification, an operational specification, a discrete system implementation, and a system implementation that works with a continuous gate model. The details of the proofs, which were omitted from the conference paper due to lack of space, are included.
Clements, P.C., C.L. Heitmeyer, B.G. Labaw, and A.T. Rose, "MT: A Toolset for Specifying and Analyzing Real-Time Systems", Proceedings of the Real-Time Systems Symposium, IEEE Computer Society, Raleigh-Durham, North Carolina, 1994.
This paper introduces MT, a collection of integrated tools for specifying and analyzing real-time systems using the Modechart language. The toolset includes facilities for creating and editing Modechart specifications. Users may symbolically execute the specifications with an automatic simulation tool to make sure that the specified behavior is what was intended. They may also invoke a verifier that uses model checking to determine whether the specification implies (satisfies) any of a broad class of safety assertions. To illustrate the toolset's capabilities, as well as several issues that arise when formal methods are applied to real-world systems, the paper includes specifications and analysis procedures for a software component taken from an actual Navy real-time system.
Heitmeyer, C.L. and N. Lynch, "The Generalized Railroad Crossing: A Case Study in Formal Verification of Real-Time Systems", Proceedings, 15th IEEE Real-Time Systems Symposium, Dec 7-9, 1994.
A new solution to the Generalized Railroad Crossing problem, based on timed automata, invariants and simulation mappings, is presented and evaluated. The solution shows formally the correspondence between four system descriptions: an axiomatic specification, an operational specification, a discrete system implementation, and a system implementation that works with a continuous gate model.
Heitmeyer, C.L., "The Role of HCI in CASE Tools Supporting Formal Methods", Proceedings, Workshop on Software Engineering an human-Computer Interaction, Sorrento, Italy, May 16-17, 1994.
This paper describes a number of issues in human-computer interaction that arose in two projects which are developing CASE tools to support formal methods. It also proposes a research agenda.
Kang, M. H., J. N. Froscher, and Ravi Mukkamala, "Architectural Impact on Performance of a Multilevel Database System," Proc. Tenth Annual Computer Security Applications Conference, Orlando, FL, Dec., 1994. PostScript
There are many known approaches for multilevel secure database systems. Since protection and assurance are the primary concerns in MLS databases, performance has often been sacrificed. Motivated by performance concerns, a replicated architecture approach which uses a physically distinct backend database management system for each security level is being investigated. This paper reports on behavior and performance issues for the replicated architecture approach. In particular, we compare the performance of the Secure INformation Through Replicated Architecture (SINTRA) MLS database system to that of a typical conventional (non-secure, single-level) database system. After identifying the performance bottlenecks in SINTRA, we present solutions that can alleviate them.
Kang, M. H., H. Dietz, and B. Bhargava, "Multiple-query Optimization at Algorithm-level", Data & Knowledge Engineering Journal, Vol. 14, pp. 57-75, Elsevier Science B.V., 1994.
Database multiple-query optimization can be achieved by analyzing multiple-query sequences at a level below that used by current optimizers, but above the low-level executable code. In this paper, the concept of the "algorithm-level" representation of a database program is defined, and optimization techniques that can be applied to the algorithm-level representation are discussed. Some techniques extend existing concepts, while others are new. We also show that multiple-query optimization can be performed across update queries.
Kang, M. H., J. N. Froscher, J. McDermott, O. Costich, and R. Peyton, "Achieving Database Security through Data Replication: The SINTRA Prototype," Proc. 17th National Computer Security Conference, Baltimore, MD, Sept., 1994, pp. 77-87. PostScript
There are several proposed approaches for multilevel secure (MLS) database systems which protect classified information. The SINTRA (Secure INformation Through Replicated Architecture) database system, which is currently being prototyped at the Naval Research Laboratory, is a multilevel trusted database system based on a replicated data approach. This approach uses physical separation of classified data as a protection measure. Each database contains data at a given security level and replicas of all data at lower security levels. Project goals include good performance and full database capability. For practical reasons (e.g., ease of evaluation, portability) the SINTRA database system uses as many readily-available commercial components as possible. In this paper, security constraints and the rationale for the SINTRA prototype are described. We also present the structure and function of each component of the SINTRA prototype: the global scheduler, the query preprocessor, and the user interface. A brief description of the SINTRA recovery mechanism is also presented.
Kang, M. H., O. Costich, and J. N. Froscher, "Using Object Modeling Techniques To Design MLS Data Models," in Security for Object-Oriented Systems, B. Thuraisingham, R. Sandhu, and T.C. Ting, eds., Springer-Verlag, London (ISBN 3540198776), 1994.
The expressiveness of the data model has a significant impact on the functionality of the resulting database system. The more general the data model, the less is lost when the conceptual model is mapped onto a particular data model. In this paper, we explain how MLS data models can lead to a loss of database functionality, or to an inability to model some real-world phenomena, if data models are not kept general and independent of other considerations. We also present our positions with respect to developing MLS data models for MLS database systems using the object modeling technique (OMT).
Landwehr, C. E., A. R. Bull, J. P. McDermott, and W. S. Choi, "A Taxonomy of Computer Program Security Flaws, with Examples," ACM Computing Surveys, Vol. 26, No. 3 (Sept. 1994), pp. 211-254. PostScript [Earlier version published as NRL Formal Report 9591]

Landwehr, C. E., "Hidden Safety Requirements in Large-scale Systems," in Proc. 13th World Computer Congress, IFIP Congress 94, Vol. 3, K. Duncan and K. Krueger, eds., Elsevier Science B.V. (North-Holland) pp.295-302. PostScript

To avoid hidden safety problems in future large-scale systems, we must be able to identify the crucial assumptions underlying the development of their components and to enunciate straightforward rules for safe component interconnection.
McDermott, J. "The B2/C3 problem: How big buffers overcome covert channel cynicism in trusted database systems." Proc. 8th Annual IFIP Conference on Database Security. Bad Salzdetfurth, Germany, August 1994. PostScript
We present a mechanism for communication from low to high security classes that allows partial acknowledgments and flow control without introducing covert channels. By restricting our mechanism to the problem of maintaining mutual consistency in replicated architecture database systems, we overcome the negative general results in this problem area. A queueing theory model shows that big buffers can be practical mechanisms for real database systems.
McLean, J.D., "Quantitative Measures of Security", Preprints of the Fourth International Working Conference on Dependable Computing for Critical Applications, San Diego, CA, 1994.
One of the most striking properties of the "Trusted Computer System Evaluation Criteria" and its international successors is that none of these documents contains any attempt to relate their evaluation levels to a measure of how much effort must be expended to break into a system. As a consequence, it is impossible to evaluate rationally the marginal benefit of spending the extra money necessary to obtain a higher rating rather than a lower one. One reason for this gap between evaluation levels and cost of system penetration is the difficulty of quantifying penetration costs. In this paper I hope to shed some light on the questions of what is needed, what we have, and what it would be useful to have in the future.
McLean, J., "Security Models," Encyclopedia of Software Engineering (ed. John Marciniak), Wiley & Sons, Inc., 1994. PostScript

McLean, J.D., "Assurance Risk Assessment, and Fuzzy Logic", Proceedings of the 17th National Computer Security Conference, Baltimore, MD, 1994, pp 483-486, 1994.

To evaluate the effectiveness of techniques used to build secure systems, some sort of quantitative measure of penetration resistance is desirable. However, fuzzy logic is the wrong way to go, since a single fuzzy metric of system security hides the information that was used to generate the metric, and since there is an inherent danger of giving quantitative fuzzy metrics more credence than they really deserve. Certainly with respect to confidentiality, and possibly with respect to integrity and availability as well, information-theoretic approaches, though not perfect, are more suitable. Money would be better spent furthering such approaches rather than developing fuzzy new ones.
McLean, J., "A General Theory of Composition for Trace Sets Closed Under Selective Interleaving Functions," Proceedings of the 1994 IEEE Symposium on Research in Security and Privacy, IEEE Press, 1994. PostScript

Meadows, C.A., "Tradeoff Areas in Secure System Development", Proceedings of CSESAW '94, Naval Surface Warfare Center, July 1994.

In this paper we identify several areas in which the satisfaction of security requirements can affect the cost and performance of a system, and describe what is known about tradeoffs in these areas. The areas we investigate include both features offered by the system and the procedures that are involved in building the system.
Meadows, C.A., "The Feasibility of Quantitative Assessment of Security" Proceedings of DCCA4, Springer-Verlag, To Appear, 1994.
We discuss the feasibility of quantitative assessment of security, and outline several areas in which quantitative assessment may be possible and of practical use.
Meadows, C.A., "The Need for a Failure Model for Security", Proceedings of DCCA4, Springer-Verlag, To Appear, 1994.
We discuss the necessity and practicality of constructing a failure model of security similar to that for fault-tolerance.
Catherine Meadows, "A Model of Computation for the NRL Protocol Analyzer," Proceedings of the 1994 Computer Security Foundations Workshop, IEEE Computer Society, June, 1994. PostScript
In this paper we develop a model of computation for the NRL Protocol Analyzer by modifying and extending the model of computation for Burrows, Abadi, and Needham (BAN) logic developed by Abadi and Tuttle. We use the results to point out the similarities and differences between the NRL Protocol Analyzer and BAN logic, and discuss the issues this raises with respect to the possible integration of the two.
Richard Kemmerer, Catherine Meadows, and Jonathan Millen, "Three Systems for Cryptographic Protocol Analysis," Journal of Cryptology, Vol. 7, No. 2, 1994.
Three experimental methods have been developed to help apply formal methods to the security verification of cryptographic protocols of the sort used for key distribution and authentication. Two of these methods are based on Prolog programs, and one is based on a general-purpose specification and verification system. All three combine algebraic with state-transition approaches. For purposes of comparison, they were used to analyze the same example protocol with a known flaw.
Catherine Meadows, "The NRL Protocol Analyzer: An Overview," Proceedings of the 2nd Conference on the Practical Applications of Prolog, Association for Logic Programming, April, 1994. PostScript
The NRL Protocol Analyzer is a special-purpose verification tool, written in Prolog, that has been developed for the analysis of cryptographic protocols that are used to authenticate principals and services and distribute keys in a network. In this paper we give an overview of how the Analyzer works and describe its achievements so far. We also show how our use of the Prolog language benefited us in the design and implementation of the Analyzer.
Moskowitz, I. S. and M. H. Kang, "Covert Channels -- Here to Stay?" Proc. COMPASS '94, Gaithersburg, MD, IEEE Press, IEEE Cat. 94CH3415-7, ISBN 0-7803-1855-2, June, 1994, pp. 235-243.
We discuss the difficulties of satisfying high-assurance system requirements without sacrificing system capabilities. To alleviate this problem, we show how trade-offs can be made to reduce the threat of covert channels. We also clarify certain concepts in the theory of covert channels. Traditionally, a covert channel's vulnerability has been measured by its capacity. We show why a capacity analysis alone is not sufficient to evaluate the vulnerability and introduce a new metric referred to as the "small message criterion".
Moskowitz, I.S., and M. H. Kang, "Discussion of a Statistical Channel", in Proc. IEEE Information Theory and Statistics Workshop, Alexandria, VA, 1994, p. 95.
This paper deals with a new type of covert channel problem that arose when we designed a multilevel secure (MLS) computer system using a quasi-secure, asynchronous communication device called the Pump. We call this new type of covert channel a statistical channel. It is our hope to get feedback from experts who work in the intersection of information theory and statistics.
Moskowitz, Ira and A. Miller, "Simple Timing Channels," Proc. 1994 IEEE Symposium on Research in Security and Privacy, IEEE Press, 1994. PostScript

Payne, C. N., A. P. Moore, and D. M. Mihelcic, "An Experience Modeling Critical Requirements," Proc. COMPASS '94, Gaithersburg, MD, IEEE Press, IEEE Cat. 94CH3415-7, ISBN 0-7803-1855-2, June, 1994, pp. 245-256. PostScript

Previous work at NRL demonstrated the benefits of a security modeling approach for building high assurance systems for particular application domains. This paper introduces an application domain called "selective bypass" that is prominent in certain network security solutions. We present a parameterized modeling framework for the domain and then instantiate a confidentiality model for a particular application, called the External COMSEC Adaptor (ECA), within the framework. We conclude with lessons we learned from modeling, implementing and verifying the ECA. Our experience supports the use of the application-based security modeling approach for high assurance systems.
McHugh, J., C.N. Payne, and C. Martin, "Assurance Mappings", a chapter in the Handbook for the Computer Security Certification of Trusted Systems, Naval Research Laboratory, to appear.

Payne, C.N., "Security Policy Model", a chapter in the Handbook for the Computer Security Certification of Trusted Systems, Naval Research Laboratory, to appear.

Rose, A.T., M.A. Perez, and P.C. Clements, "Modechart Toolset User's Guide", NRL Memorandum Report NRL/MR/5540--94-7427, 14 February 1994.

This document describes how to use the Modechart Toolset (MT). MT is a set of tools designed to facilitate the specification, modeling, and analysis of real-time embedded systems using the Modechart language. MT supports the creation, modification, and storage of Modechart specifications. It also supports the analysis of Modechart specifications via a consistency and completeness checker, a simulator and a verifier.
Paul Syverson and Catherine Meadows, "Formal Requirements for Key Distribution Protocols," Proceedings of Eurocrypt '94, to appear. PostScript
We discuss generic formal requirements for reasoning about two-party key distribution protocols, using a language developed for specifying security requirements for security protocols. Earlier work has typically considered formal analysis of already-developed protocols. Our goal is to present sets of formal requirements for various contexts which can be applied at the design stage as well as to existing protocols. We use a protocol analysis tool we have developed to determine whether or not a specific protocol has met some of the requirements we specified. We show how this process uncovered a flaw in the protocol and helped us refine our requirements.
Paul Syverson, "A Taxonomy of Replay Attacks," Proceedings of the Computer Security Foundations Workshop VII, Franconia, NH, 1994, IEEE CS Press (Los Alamitos, 1994). PostScript
This paper presents a taxonomy of replay attacks on cryptographic protocols in terms of message origin and destination. The taxonomy is independent of any method used to analyze or prevent such attacks. It is also complete in the sense that any replay attack is composed entirely of elements classified by the taxonomy. The classification of attacks is illustrated using both new and previously known attacks on protocols. The taxonomy is also used to discuss the appropriateness of particular countermeasures and protocol analysis methods to particular kinds of replays.
Paul F. Syverson and Paul C. van Oorschot, "On Unifying Some Cryptographic Protocol Logics," Proceedings of the 1994 IEEE Computer Society Symposium on Research in Security and Privacy, Oakland, CA, May 1994, IEEE CS Press (Los Alamitos, 1994). PostScript
We present a logic for analyzing cryptographic protocols. This logic encompasses a unification of four of its predecessors in the BAN family of logics including BAN itself. We also present a model-theoretic semantics with respect to which the logic is sound. The logic herein captures all of the desirable features of its predecessors and more; nonetheless, it accomplishes this with no more axioms or rules than the simplest of its predecessors.
Syverson, P.F., "An Epistemic Logic of Situations", Theoretical Aspects of Reasoning About Knowledge (TARK 1994) ed. by Ronald Fagin, Pacific Grove CA, Morgan Kaufman Pub. Inc., pp. 109-121, March '94.
In this paper we present a first-order epistemic logic that incorporates the essentially finite character of what is actually known by any knower. Our logic and language allow us to represent familiarity with individuals, including individual situations. It is also a logic of limited awareness in the manner of [Fagin & Halpern 88]. It is adequate for the syntactic characterization of the shared-situation account of common knowledge.