versión On-line ISSN 0717-5000
CLEIej vol.14 no.3 Montevideo dic. 2011
Semantics for Interactive Sequential Systems and Non-Interference Properties
An interactive system is a system that allows communication with the users. This communication is modeled through input and output actions. Input actions are controllable by a user of the system, while output actions are controllable by the system. Standard semantics for sequential systems [1, 2] are not suitable in this context because they do not distinguish between the different kinds of actions. Applying an approach similar to the one used in , we define semantics for interactive systems. In this setting, a particular semantics is associated with a notion of observability. These notions of observability are used as parameters of a general definition of non-interference. We show that some previous versions of the non-interference property, based on trace semantics, weak bisimulation and refinement, are actually instances of the observability-based non-interference property presented here. Moreover, this allows us to show some results in a general way and to provide a better understanding of the security properties.
Keywords: process theory, semantics, interactive systems, interface automata, non-interference, secure information flow, refinement, composition.
Received: 2011-03-30 Revised: 2011-10-06 Accepted: 2011-10-06
An interactive system is a system that allows communication with the users. Usually, to carry out this communication, the system provides an interface that is used by them. Through the interface, the user sends messages to the system and receives messages from it. Interface Automata (IA) [3, 4, 5] is a light-weight formalism that captures the temporal aspects of interactive system interfaces. In this formalism, the messages sent by the user are represented as input actions, while the received messages are represented as output actions.
Interface structure for security (ISS)  is a variant of IA where there are two different types of visible actions. One type carries public or low-confidentiality information and the other carries private or high-confidentiality information. For simplicity, we call them low and high actions, respectively. Low actions are intended to be accessed by any user, while high actions can only be accessed by those users having the appropriate clearance. In this context the desired requirement is the so-called non-interference property . In the setting of ISS, a bisimulation-based notion of non-interference has been considered, more precisely, the so-called BSNNI and BNNI properties . Informally, these properties state that users with no appropriate permission cannot deduce any kind of confidential information or activity by only interacting through low actions. Since it is expected that a low-level user cannot distinguish the occurrence of high actions, the system has to behave the same when high actions are not performed as when high actions are considered as hidden actions. To formalize the idea of “behaving the same”, the concept of weak bisimulation is used.
In  it was argued that the BSNNI/BNNI properties are not quite appropriate to formalize the concept of secure interface. To illustrate this point, the following two examples are presented: in the first one (Figure 1), the system satisfies neither BNNI nor BSNNI, but we show that it could be considered secure since no information is actually revealed to low users. The main problem is the way in which weak bisimulation relates output transitions. On the other hand, the second example (Figure 2) shows that weak-bisimulation-based security properties may fail to detect an information leakage through input transitions.
Figure 1 models a credit approval process of an on-line banking service using an ISS. As usual, outputs are suffixed by ! and inputs by ?. At the initial state , a client can request a credit (cred_req?). The credit approval process can be carried out locally or by delegating it to an external component. This decision is modeled by a non-deterministic choice. If it is locally processed (loc_ctrl!), an affirmative or negative response is given to the client (yes!/no!) and the process returns to the initial state. On the other hand, if the decision is delegated (ext_ctrl!), the process waits until it receives a notification that the control is finished (done?), returning then to the initial state. Besides, in the initial state, an administrator can configure the system to do only local control (only_loc?). This action is high and is not visible to low users. (We underline private/high actions.) In state , the administrator can configure the system to return to the original configuration using action only_loc_off?.
The Credit Request does not satisfy the BSNNI property (nor the BNNI property) and hence it is considered insecure in this setting. The system behaves differently depending on whether the private action only_loc? is performed or not. If only_loc? is not executed, after action cred_req?, it is possible to execute action ext_ctrl!. This behavior is not possible after the action only_loc?. Notice nevertheless that output actions are not visible for the user until they are executed. Then, from a low user perspective, the system behavior does not seem to change: the same input is accepted at states and , and then, the low user cannot distinguish whether the observation of loc_ctrl! is a consequence of the unique option (at state ) or it is just an invariable decision of the Credit Request Process (at state ). Hence we expect the system to be classified as secure by the formalism.
We consider this example to be secure because a user who has no knowledge of the current state does not know exactly which output action can be executed by an interface: he can observe the output actions only when they are executed.
On the other hand, a user may try to guess the behavior of the system by performing input actions: wrong inputs will be rejected/ignored; otherwise, they will be accepted. Based on this fact, the following example shows that weak bisimulation based non-interference may fail to detect an information leakage.
Figure 2 depicts the component that executes the external control. In the initial state, the interface waits for input ext_ctrl? from the Credit Request Process. After this stimulus, a response about the credit request is given. If the credit is denied (ext_no!), the client can either ask for a decision review (review?) or accept the decision (accept?). In both cases, the decision is processed by the component (process;). This action is internal and is not visible to users (hidden/internal actions are suffixed by a semicolon). The process finishes with action done!, returning to the initial state. If the credit is approved (ext_yes!), the client can accept or decline the credit (accept?/decline?). The decision is processed, the component informs that the task is done and it returns to the initial state. As in the first example, the behavior of the component can be modified by an administrator, who can configure the interface to reject all credit requests (reject_all?). For this reason, if reject_all? is received at the initial state, after an input action ext_ctrl?, the process can only execute action ext_no!. At this point, clients are not allowed to ask for a decision review. Then, at state , the interface accepts only input action accept?. However, based on the client records, the review may be enabled; this is represented with the internal transition ; notice that the reached state accepts both input actions accept? and review?. In any case, after the client response, the result is processed, the component informs that the task is done, and the process is restarted.
Suppose that the bank requires that the client cannot detect whether the external process is denying all credit requests. Since a low user cannot see output actions until they are executed, he cannot differentiate between the executions  and . If we compare states  and  under weak bisimulation, both states can execute the same visible transitions and no security problem is detected. Notice that at state , the process cannot respond immediately to a review? input, but it can execute  (recall allow; is an internal action). In fact, low users can distinguish state  from : testing the interface at state , the low user can find out that input action review? is not enabled, while at  it is. Hence, we consider that the interface is not secure.
These observations are based on the fact that input and output actions are conceptually very different. Input actions are controllable by the user while output actions are controllable by the system. Therefore, some behavior one would expect from input actions may be inappropriate for outputs and vice-versa. For instance, the assumption that “wrong inputs will be rejected/ignored; otherwise, they will be accepted” in the second example above, makes no sense if applied to outputs because the malicious user is interested in collecting all possible information rather than in rejecting it.
In  and , a deep study of semantics for sequential systems is carried out, but these works do not take into account systems where both kinds of actions coexist. In their setting all actions are controlled by one entity: the user or the system. For example, in failure trace semantics a user executes (input) actions until one action is rejected by the system; in this case the user has the control of which action is executed. A different case is trace semantics, where the system has the control of the actions and the user can only observe the executions of the system. Also in stronger semantics, for example with global testing, the control belongs to one entity. For instance, weak bisimulation equivalence is also called observational equivalence and its intuitive notion is “two systems are observationally equivalent if they cannot be distinguished by an observer”, i.e., the user observes and the system executes (controls) the actions. Notice the subtlety in this case: global testing allows the user to force the system to execute all possible executions, but which actions can be executed in each state is controlled/defined by the system.
In this work we define semantics for systems where both kinds of actions coexist: actions controlled by the user (input actions) and actions controlled by the system (output actions). We have used an approach similar to the one used in . First we define types of observations, the records of information that a user can make. Second, we define a notion of observability as a set of types of observations. Each notion of observability is a particular semantics. This approach is simple, elegant and allows us to be exhaustive: once the types of observations and the notions of observability are defined, one has all the possible semantics that could be defined.
These new semantics are suitable to study secure information flow properties over ISS. Moreover, the definition of non-interference presented in this work has a notion of observability as a parameter. This generalization through types of observations provides a framework to prove generic theorems that extend to families of security properties. In addition, the approach subsumes previous definitions of non-interference for ISS, in particular the one based on traces , the one based on weak bisimulation  and the one based on refinement .
We also focus our attention on non-interference based on refinement. We give simple sufficient conditions to ensure compositionality. We also provide two algorithms. The first one determines if an ISS satisfies the refinement-based non-interference property. The second one determines if an ISS can be made secure by controlling some input actions and, if so, synthesizes the secure ISS. Both algorithms are polynomial in the number of states of the ISS under study. These results are relevant because they could be adapted to other instances of non-interference based on notions of observability.
This paper is an extension of . In  we introduced non-interference based on refinement to resolve some shortcomings in the non-interference properties based on weak bisimulation. The approach based on notions of observability shows that the shortcomings do not exist because the properties should be considered in different contexts. We explain this in the last section of the paper.
Organization of the paper. In section 2 we recall the definitions of IA, composition and ISS. In section 3 we define the types of observations, the notion of observability and the set of observable behaviors of an IA. In section 4 we present the notion of non-interference based on notions of observability. We show that the approach subsumes previous definitions of non-interference for ISS and we prove some general properties of non-interference. In section 5 we review the definitions of non-interference based on refinement, and we show that these definitions are also subsumed by the new approach. We study compositionality in this setting and define two algorithms: one to check whether an interface satisfies the property and the other to derive a secure interface from a given (non-secure) interface by controlling input actions. Section 6 concludes the paper.
Definition 1. An Interface Automaton (IA) is a tuple where: (i) is a finite set of states with being the initial state; (ii) , , and are the (pairwise disjoint) finite sets of input, output, and hidden actions, respectively, with ; and (iii) is the transition relation that is required to be finite and input deterministic (i.e. implies for all and ). In general, we denote , , , etc. to indicate that they are the set of states, input actions, transitions, etc. of the IA .
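As a concrete reading of Definition 1, the following sketch encodes an IA and checks the disjointness and input-determinism side conditions. This is an illustrative encoding with our own names, not part of the formal development.

```python
class IA:
    """Minimal Interface Automaton container (illustrative sketch)."""

    def __init__(self, states, q0, inputs, outputs, hidden, trans):
        assert q0 in states
        # Action sets must be pairwise disjoint.
        assert inputs.isdisjoint(outputs)
        assert inputs.isdisjoint(hidden)
        assert outputs.isdisjoint(hidden)
        # Input determinism: at most one transition per (state, input action).
        seen = set()
        for (q, a, _) in trans:
            if a in inputs:
                assert (q, a) not in seen, "not input-deterministic"
                seen.add((q, a))
        self.states, self.q0 = states, q0
        self.inputs, self.outputs, self.hidden = inputs, outputs, hidden
        self.trans = trans
```

Constructing an IA with two input transitions on the same action from the same state raises an `AssertionError`, mirroring the input-determinism requirement of the definition.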
As usual, we denote whenever , if there is s.t. , and if this is not the case. An execution of is a finite sequence s.t. , and for . An execution is autonomous if all its actions are output or hidden (the execution does not need stimulus from the environment to run). If there is an autonomous execution from to and all actions are hidden, we write . Notice this includes the case . We write if there are and s.t. . Moreover, denotes or and . We write if there is s.t. and . A trace from is a sequence of visible actions such that there are states such that is an execution. The set of traces of an IA , notation , is the set of all traces from the initial state of .
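The set of traces can be enumerated directly from the transition relation, abstracting away hidden actions; since IA may have cycles, the sketch below bounds the trace length. Names and representation (plain `(state, action, state)` triples) are our own simplification.

```python
def traces(trans, q0, hidden, max_len=3):
    """Visible traces of length <= max_len from q0.

    trans: iterable of (state, action, state) triples.
    hidden: set of internal actions; they contribute nothing to a trace.
    """
    visited = {((), q0)}          # pairs (trace so far, current state)
    stack = [((), q0)]
    while stack:
        tr, q = stack.pop()
        for (p, a, q2) in trans:
            if p != q:
                continue
            tr2 = tr if a in hidden else tr + (a,)
            if len(tr2) <= max_len and (tr2, q2) not in visited:
                visited.add((tr2, q2))
                stack.append((tr2, q2))
    return {tr for (tr, _) in visited}
```

For example, with transitions `0 --a?--> 1 --tau--> 2 --b!--> 0` and `tau` hidden, the traces up to length 2 are `()`, `('a?',)` and `('a?', 'b!')`.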
Composition of two IA is only defined if their actions are disjoint except when input actions of one of the IA coincide with some of the output actions of the other. Such actions are intended to synchronize in a communication.
The product of two composable IA and is defined pretty much as CSP parallel composition: (i) the state space of the product is the product of the sets of states of the components, (ii) only shared actions can synchronize, i.e., both components should perform a transition with the same synchronizing label (one input, and the other output), and (iii) transitions with non-shared actions are interleaved. Besides, shared actions are hidden in the product.
- with ;
- , , and ; and
- if any of the following holds:
- , , and ;
- , , and ;
- , , and .
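The three clauses above can be sketched on bare transition sets as follows. This is illustrative only: the full definition also re-classifies the synchronized actions as hidden, which we omit here, and all names are our own.

```python
def product(trans1, trans2, shared):
    """Product of two composable IA over a set of shared actions (sketch).

    Shared actions synchronize: both components fire a transition with the
    same label. Non-shared actions are interleaved against every state of
    the other component. Product states are pairs.
    """
    states1 = {q for (q, _, _) in trans1} | {q for (_, _, q) in trans1}
    states2 = {q for (q, _, _) in trans2} | {q for (_, _, q) in trans2}
    result = set()
    for (p, a, p2) in trans1:
        if a in shared:
            # clause (ii): synchronize with a matching transition
            for (q, b, q2) in trans2:
                if b == a:
                    result.add(((p, q), a, (p2, q2)))
        else:
            # clause (iii): interleave over the other component's states
            for q in states2:
                result.add(((p, q), a, (p2, q)))
    for (q, b, q2) in trans2:
        if b not in shared:
            for p in states1:
                result.add(((p, q), b, (p, q2)))
    return result
```

With one shared action `m` (an output of the first component and an input of the second), the two transitions fuse into a single synchronized transition of the product.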
There may be reachable states on for which one of the components, say , may produce an output shared action that the other is not ready to accept (i.e., its corresponding input is not available at the current state). Then violates the input assumption of and this is not acceptable. States like these are called error states.
If the product does not contain any reachable error state, then each component satisfies the interface of the other (i.e., the input assumptions) and thus they are compatible. Instead, the presence of a reachable error state is evidence that one component is violating the interface of the other. This may not be a major problem as long as the environment is able to refrain from producing an output (an input to ) that leads the product to the error state. Of course, it may be the case that does not provide any possible input to the environment and reaches autonomously (i.e., via output or hidden actions) an error state. In such a case we say that is incompatible.
Definition 5. Let and be composable IA and let be its product. A state is an incompatible state if there is an error state reachable from through an autonomous execution. If a state is not incompatible, it is compatible. If the initial state of is compatible, then and are compatible.
Finally, if two IA are compatible, it is possible to define the interface for the resulting composition. Such an interface is the result of pruning all input transitions of the product that lead to incompatible states, i.e., states from which an error state can be autonomously reached.
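The incompatible-state computation and the pruning step can be sketched as a backward fixpoint over autonomous (output/hidden) transitions. The error states are given as input here, and all names are our own.

```python
def incompatible(trans, errors, autonomous):
    """States from which an error state is reachable through autonomous
    (output or hidden) transitions only, computed as a backward fixpoint."""
    bad = set(errors)
    changed = True
    while changed:
        changed = False
        for (p, a, q) in trans:
            if a in autonomous and q in bad and p not in bad:
                bad.add(p)
                changed = True
    return bad

def prune(trans, errors, autonomous, inputs):
    """Drop every input transition that leads to an incompatible state."""
    bad = incompatible(trans, errors, autonomous)
    return {(p, a, q) for (p, a, q) in trans
            if not (a in inputs and q in bad)}
```

For instance, if state 2 is an error state and state 1 reaches it via a hidden `tau`, then state 1 is incompatible and the input transition into it is pruned.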
An Interface Structure for Security (ISS) is an IA where visible actions are divided into two disjoint sets: the high action set and the low action set. Low actions can be observed and used by any user, while high actions are intended only for users with the appropriate clearance.
If necessary, we will write and instead of and , respectively, and write instead of with and .
Extending the definition of composition of IA to ISS is straightforward.
Semantic equivalences for sequential systems with silent moves are studied in , resulting in 155 notions of observability and a complete comparison between them. Unfortunately, these results cannot be applied straightforwardly in the IA context. For example, the machines studied in  have no notion of input and output actions over the same machine. Moreover, in  there is no notion of the internal structure of the analyzed machine. This situation has forced the authors to talk about definite and hypothetical behaviors of the machine. Despite these differences, we use  as a reference to define different semantics for IA. To avoid the distinction between definite and hypothetical behaviors, we use the transition relation of the IA to present the set of observable behaviors.
First we define the types of observations, the records of information that can be made by the user. Second, we define a notion of observability as a set of types of observations. Each notion of observability defines a particular semantics. Third, using the transition relation of the IA, we define the semantics of each type of observation and therefore a semantics for each possible notion of observability.
Given a system, a type of observation is a piece of information that can be recorded by a user with respect to the interface. To define our types of observations we consider the following assumptions. Input and output actions are observable when they are executed. Inputs are executed by a user, while outputs are executed by the interface. Then, input actions are controllable by the user and output actions are controllable by the interface. Internal transitions are controllable by the interface. In some cases, internal transitions can be detectable by the user, but the user cannot distinguish between different internal actions. A user can observe how the interface interacts with another user or he can be the one who interacts. If the user is interacting, the interface can behave in different ways as a result of some violation of its input assumptions: () it does not show any error and continues with the execution; () it stops the execution and shows an error to the user; () it shows an error to the user and continues with the execution; () finally, an interface could provide a special service to inform which inputs are enabled in its current state. In this way, the user can avoid input assumption violations. Notice that cases (), () and () determine, at the semantic level, a sort of input-enabledness. In these cases we fix the behavior of input actions that are not defined in a particular state. The last four assumptions do not increase the expressive power of the model; as a consequence, they can be implemented in any IA. For example: let  be an IA; the assumption  can be implemented with self-loops with action  for every state  and . Using the same reasoning, we assume an interface could provide a service to detect the end of an execution, where the end is reached when no more transitions are possible. In addition, a user can make copies of the interface with the objective of studying the interface in more detail. Finally, a user can do global testing.
Under this assumption it is possible to say that a particular observation will not happen.
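The self-loop implementation of the "ignore wrong inputs" assumption mentioned above can be sketched as follows (our own names; a self-loop is added for every input not enabled at a state).

```python
def input_enable(trans, states, inputs):
    """Make an IA input-enabled: add a self-loop for every input action
    that is not already enabled at a state (wrong inputs are ignored)."""
    enabled = {(p, a) for (p, a, _) in trans}
    extra = {(q, a, q) for q in states for a in inputs
             if (q, a) not in enabled}
    return trans | extra
```

After the transformation every state accepts every input, so at the semantic level no input can be rejected.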
Based on these assumptions, we introduce the following types of observations:
-  The executions of external actions are detectable.
-  The case where internal transitions are detectable is denoted with . Otherwise .
-  The session is terminated by the user. This is possible at any time. After this, no more records are possible.
-  If a user only observes the actions that are executed by an interface and cannot send stimuli to it, then there is no interaction. We denote this with . The case where the interaction is possible is denoted by .
-  The user interacts with the system and the interface stops the execution whenever it receives an input action that is not enabled. In this case, the stop is observable.
-  Suppose the previous type, but now whenever the interface receives an input action that is not enabled, the error is reported to the user and the execution continues.
-  To avoid the error of sending an input action that is not enabled, the interface can provide a method to check what input actions are enabled in its current state. In this case, the observation includes the set of enabled inputs.
-  This type is used if it is detectable when an interface reaches a final state, i.e. no more activity is possible.
-  Suppose the user has a machine to make an arbitrary number of copies of the system. These copies reveal more information about the interface because one can observe different executions of the same interface. If the user makes  copies and in each copy executes  for , this observation is denoted with .
-  It is possible to test the interface under all possible conditions. This allows to ensure that a particular observation is not possible; then a user can make an observation whenever an execution of the system is not possible.
The types of observations studied here are not the same as those studied in . On one hand, we decided to skip some types for the sake of simplicity. For example, we did not include -replication nor continuous copying, which are different forms of making copies of the system. We did not include the notion of stable state; this avoids the inclusion of some variants of the types of observations presented here. On the other hand, we have added new features. First, we differentiate between a user that interacts with the interface and a non-interacting user. Second, the knowledge of the internal structure of the interface allows us to know exactly when an internal action could be executed and to define whether the internal transitions are observable or not. This is a relevant feature in the context of security, because it could be used to represent covert channels.
A set of types of observations defines a notion of observability, see Definition 9. The notion of observability determines what information can be observed by a user. This has to be consistent: for example, the types of observations “a user cannot interact with the interface” () and “a user can detect that the input sent was not enabled” () cannot belong to the same notion of observability. Note that the definition of notion of observability ensures consistency.
Condition (1) ensures that input and output actions are always visible and that the user can terminate the session whenever he wants. Condition (2) ensures that internal transitions are either detectable or not. Condition (3) ensures that a user can interact with the interface () or not (), and if he interacts, he will do it in one particular way.
In  other kinds of restrictions were added to simplify the study of which semantics make more distinctions: for example, conditions such as “if  then ” were added. This reflects the fact that if the interface stops when a disabled input is received, all observations that one can make in this scenario can be made on the same machine configured to continue when the error occurs. Since we are not interested in studying which semantics are coarser than others, we omit these conditions.
Semantics. First we define all possible observations as a set of logic formulas called execution formulas. Then the set of observable behaviors of an IA is the set of execution formulas that are satisfied by the initial state of the interface.
Definition 10. The set of execution formulas for an IA is the smallest set satisfying rules in Table 1.
Definition 11. Given an IA  and a notion of observability , the satisfaction relation is defined for each type of observation in  by the clauses in Table 2. The observable behavior of an IA  with notion of observability  is
First we introduce a general notion of non-interference. Informally, non-interference states that users with no appropriate permission cannot deduce any kind of confidential information or activity by only interacting through low actions. Since it is expected that a low-level user cannot distinguish the occurrence of high actions, the system has to behave the same when high actions are not performed or when high actions are considered as hidden actions. Hence, restriction and hiding are central to our definitions of security.
- the restriction of in by where iff and .
- the hiding of in by .
Given an ISS define the restriction of in by and the hiding of in by .
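On plain transition sets, restriction and hiding can be sketched as follows. Names are our own; following the definition above, hiding does not remove transitions but re-partitions the action sets, moving high actions into the hidden set.

```python
def restrict(trans, high):
    """Restriction: remove every transition labeled with a high action."""
    return {(p, a, q) for (p, a, q) in trans if a not in high}

def hide(inputs, outputs, hidden, high):
    """Hiding: move high actions from the visible sets to the hidden set."""
    return inputs - high, outputs - high, hidden | high
```

Restriction thus deletes high behavior outright, while hiding keeps it but makes it invisible, which is exactly the distinction the two non-interference variants below exploit.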
- is strong non-deterministic non-interference (-SNNI) if .
- is non-deterministic non-interference (-NNI) if .
Notice the difference between the two definitions. -SNNI formalizes the security property as we described so far: a system satisfies -SNNI if a low-level user cannot distinguish (up to notion of observability ) by means of low level actions (the only visible ones) whether the system performs high actions (so they are hidden) or not (high actions are restricted). In the definition of -NNI only high input actions are restricted since the low-level user cannot provide this type of actions; instead high output actions are only hidden since they still can autonomously occur. The second notion is considered as it seems appropriate for IA where only input actions are controllable.
The approach of non-interference based on notions of observability generalizes other notions of non-interference for IA, for example Non-deterministic Non-Interference (NNI) and Strong Non-deterministic Non-Interference (SNNI), both based on trace equivalence, and Bisimulation NNI (BNNI) and Bisimulation SNNI (BSNNI), both based on bisimulation equivalence. To prove our statement, we recall the definitions of trace equivalence, weak bisimulation and the non-interference properties.
- for all and , implies that there exists s.t. and ; and
- for all and , implies that there exists s.t. and .
We say that and are bisimilar, notation , if there is a bisimulation between and . Moreover, we say that two ISS and are bisimilar, and write , whenever the underlying IA are bisimilar.
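As an illustration, weak bisimilarity between two states of a small transition system can be checked by a naive greatest-fixpoint computation: start from the full relation and remove pairs whose strong moves cannot be weakly matched. This is a sketch with our own names and representation, not the formal development of the paper.

```python
def weakly_bisimilar(trans, hidden, s, t):
    """Naive greatest-fixpoint check for weak bisimilarity of s and t."""
    states = {q for (q, _, _) in trans} | {q for (_, _, q) in trans} | {s, t}

    def closure(qs):
        # states reachable via zero or more hidden transitions
        qs, frontier = set(qs), list(qs)
        while frontier:
            p = frontier.pop()
            for (x, b, y) in trans:
                if x == p and b in hidden and y not in qs:
                    qs.add(y)
                    frontier.append(y)
        return qs

    def weak(q, a):
        # states reachable by hidden* a hidden* (just hidden* if a is hidden)
        pre = closure({q})
        if a in hidden:
            return pre
        mid = {y for (x, b, y) in trans if x in pre and b == a}
        return closure(mid)

    R = {(p, q) for p in states for q in states}

    def matched(p, q):
        # every strong move of p is weakly matched by q, successors related
        return all(any((p2, q2) in R for q2 in weak(q, a))
                   for (x, a, p2) in trans if x == p)

    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            if (p, q) in R and not matched(p, q):
                R.discard((p, q))
                R.discard((q, p))  # keep R symmetric
                changed = True
    return (s, t) in R
```

For example, `s0 --tau--> s1 --a--> s2` is weakly bisimilar to `t0 --a--> t1`, since the hidden step is absorbed by the weak matching.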
- satisfies strong non-deterministic non-interference (SNNI) if .
- satisfies non-deterministic non-interference (NNI) if .
- satisfies bisimulation-based strong non-deterministic non-interference (BSNNI) if .
- satisfies bisimulation-based non-deterministic non-interference (BNNI) if .
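The trace-based properties above compare the low behavior of the restricted system with that of the system where high actions are hidden. A bounded sketch of the SNNI check (the length bound, names and representation are our own simplifying assumptions):

```python
def visible_traces(trans, q0, hidden, max_len):
    """Visible traces of length <= max_len; hidden steps contribute nothing."""
    visited = {((), q0)}
    stack = [((), q0)]
    while stack:
        tr, q = stack.pop()
        for (p, a, q2) in trans:
            if p != q:
                continue
            tr2 = tr if a in hidden else tr + (a,)
            if len(tr2) <= max_len and (tr2, q2) not in visited:
                visited.add((tr2, q2))
                stack.append((tr2, q2))
    return {tr for (tr, _) in visited}

def snni_traces(trans, q0, hidden, high, max_len=4):
    """Bounded trace-based SNNI: low traces with high actions restricted
    must coincide with low traces when high actions are hidden."""
    restricted = {(p, a, q) for (p, a, q) in trans if a not in high}
    return visible_traces(restricted, q0, hidden, max_len) == \
           visible_traces(trans, q0, hidden | high, max_len)
```

A system where a low output is only reachable after a high input fails the check, since hiding the high input exposes a low trace that restriction removes.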
We now show how to represent these notions of security with notions of observability.
Proof. First we prove (2). For this, we have to show that for all states and it holds iff . Suppose and . Let a function defined as:
We define in general for all since we will make use of it again later. We proceed by complete induction. In the base case then because and since is an observation for every state . By induction suppose that if then, if and it holds . Let ; we do a case analysis according to the shape of the formula. Suppose with . implies and (see and in Table 2). Since there is a state such that . By induction , therefore . Now let . Since for all , by induction . Therefore . Now suppose then and , by induction . Therefore , i.e. . The other cases are outside of the observation defined by . The symmetric case is analogous.
Let and . We have to show that there is such that and . Since we have . Let be . If for all it holds then there is (as consequence of ). Then for any it holds (at least one fails). But then contradicting . The symmetric case is analogous.
To prove (1) we show that given two IA and it holds iff . We reduce this to prove iff . This proof is straightforward. __
The relation between -SNNI and -NNI depends on the notion of observability . In general, we can only ensure that -NNI is not stronger than -SNNI for all .
Proof. Let the following ISS with . Notice is always -NNI. On the other hand is not -SNNI: if then and ; if then and . __
This result is not novel. In , it is shown that SNNI is stronger than NNI. Therefore, as trace semantics is the coarsest sensible semantics on labeled transition systems, it is natural that the result holds for all other semantics. Theorem 2 only formalizes this fact for IA semantics.
The other relations depend on  and are stated in the following two theorems, preceded by an auxiliary lemma.
Proof. The proof is straightforward by induction in where and is the function defined in (1). __
Proof. If is -SNNI then . Notice is obtained by removing some hidden transitions from , then by Lemma 1, and therefore . On the other hand is obtained by removing some hidden transitions from then by Lemma 1. Both inclusions imply . __
Proof. Define as ISS in Figure 3 with . Clearly is -SNNI for all . Suppose : if then while ; if then while . Then is not -NNI for any such that . The case is analogous. __
The approach based on notions of observability also allows us to show that security properties are not preserved by composition.
Proof. Let and be the ISS depicted in Figure 4. Both interfaces are -(S)NNI for every notion of observability , but their composition is not. If then , while if then . In any case, and . Then the composition is not -(S)NNI. __
In , we presented definitions of non-interference based on refinement. The new versions of non-interference were introduced to solve some shortcomings detected in the definitions of non-interference based on bisimulation of , i.e., BSNNI and BNNI. In this section we review the results obtained.
To address the shortcomings detected in the B(S)NNI properties, a variation of non-interference based on refinement was introduced. These variants are obtained from the definitions of BSNNI and BNNI by replacing weak bisimulation with a new relation. Under this new relation, two states and are related if they are able to receive the same input actions; in addition, for every output transition that can execute , the state can execute zero or more hidden transitions before executing the same output; finally, all hidden transitions that can execute can be “matched” by with zero or more hidden transitions. In all cases, the reached states also have to be related. In this way, state does not reveal new visible behavior w.r.t. the state . Formally:
We say that is refined (strictly on inputs) by , or that refines (strictly on inputs) to , notation , if there is a SIR s.t. . Let and be two ISS; we write if the underlying IA satisfy .
The definition of SIR is based on the definition of refinement of ; only restriction (b) is new with respect to the original version. Non-interference properties based on refinement are defined in terms of this relation; they are called SIR-NNI and SIR-SNNI.
This new formalization of security ensures that, under the presence of high-level activity, no new information is revealed to low users w.r.t. the system with only low activity, because the interface (resp. ) is refined by .
Now we show that there is a notion of observability such that -(S)NNI is equivalent to SIR-(S)NNI. To prove the result we need the following theorem:
Proof. For this, we have to show that for all states and it holds that iff . Suppose and . Let be the function defined in (1). We proceed by complete induction. In the base case, then , because and, since is an observation for every state, . Inductive case: by induction, suppose that if then, if and , it holds . Let ; we do a case analysis according to the shape of the formula. Suppose . Since , then . Moreover, implies and therefore , using induction. Cases and are handled as the respective cases in the proof of Theorem 1.
Let . Case : we have to show that there is such that and . If then , and therefore , because and . Let be such that ; notice that is unique because IA are input deterministic. If , there is . This implies , and we get a contradiction. In the case , we have to show that there is such that and ; this proof is similar to the previous one. Now let ; we have to show that there is such that and . Let be . If for all it holds that , then there is . Then for any it holds that (at least one fails). But then , contradicting . Case is analogous. __
Now we are able to show the statement.
Two properties about SIR-NNI and SIR-SNNI were introduced in . The first one: if an ISS is SIR-(S)NNI, then it is (S)NNI. This is straightforward using their respective equivalent definitions in terms of notions of observability, i.e., -(S)NNI and -(S)NNI. The second one: if an ISS is SIR-SNNI, then it is SIR-NNI. This is a particular case of Theorem 3.
Theorem 5 shows that non-interference properties are not preserved for all notions of observation . This implies that the SIR-SNNI and SIR-NNI properties are not preserved by composition.
Despite this, we give sufficient conditions to ensure that the composition of ISS results in a non-interferent ISS (always with respect to SIR-SNNI and SIR-NNI). Basically, these conditions require that (i) the component ISS are fully compatible, i.e. no error state is reached in the composition (in any way, not only autonomously), and (ii) they do not use confidential actions to synchronize. This is stated in the following theorem.
Proof. Define by iff and , with being a SIR between and , and similarly for . We show that is a SIR between and , where .
Suppose . We proceed by case analysis on the different transfer properties of Def. 17. For case (a), suppose and . Then there is such that and . As a consequence of the absence of error states in the product, we can ensure and . The case is analogous. In the same way we prove that condition (b) holds. For condition (c), let and . Then there is such that and . Let be a state s.t. . Notice that all internal transitions used to reach in can be executed in . Then and . The case is analogous. We finally prove that condition (d) holds. Cases and are similar to the previous ones. Suppose now that , where is an internal action resulting from a synchronization between and on a common action . Notice . W.l.o.g., suppose and . Repeating the previous reasoning, we can ensure that there is a state such that and . __
This result is useful when we develop all the components of a complex system. As we have total control over each component's design, it is possible to achieve full compatibility. In this way, to ensure that the composed system is secure, we only have to develop secure components s.t. every high action of a component is a high action of the final system. This result can also be used when we are not in control of all components, i.e., when we want to use components not developed by us. The idea is simple: given two ISS, define the high actions used in the communication process as low and check if the resulting ISS satisfies the hypothesis of Theorem 7.
This result is based on the fact that actions used in the synchronization become hidden in the composition, so the confidentiality level of those actions is not important.
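As an illustration of the full-compatibility condition, the sketch below (our own encoding, reusing `state -> [(action, kind, successor)]`; it is deliberately simplified and only explores output-driven moves of the closed pair, ignoring hidden moves and environment inputs) searches the product of two interfaces for error states, i.e. pairs where one component emits a shared action the other is not ready to receive.

```python
def product_error_states(a_trans, b_trans, a0, b0, shared):
    """Explore the product of two interfaces from (a0, b0) and collect
    pairs where one side outputs a shared action that the other side
    does not currently accept as an input (the incompatibility case)."""
    def enabled(trans, s, kind):
        return {a: t for a, k, t in trans.get(s, []) if k == kind}

    errors, seen, stack = set(), {(a0, b0)}, [(a0, b0)]
    while stack:
        p, q = stack.pop()
        succs = []
        # outputs of the first component, then of the second
        for flip, s, trans, other, otrans in (
                (False, p, a_trans, q, b_trans),
                (True, q, b_trans, p, a_trans)):
            ins_other = enabled(otrans, other, "in")
            for a, t in enabled(trans, s, "out").items():
                if a in shared:
                    if a in ins_other:   # shared output synchronizes
                        succs.append((ins_other[a], t) if flip
                                     else (t, ins_other[a]))
                    else:                # unaccepted shared output: error
                        errors.add((p, q))
                else:                    # autonomous output
                    succs.append((other, t) if flip else (t, other))
        for n in succs:
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return errors
```

An empty result on the reachable product corresponds to the "no error state is reached in the composition" requirement; checking that the shared actions contain no confidential action is then a simple set intersection.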
As we have seen, the composition of secure interfaces may yield a new insecure interface. This may happen when the components are already available but were designed independently and were not meant to interact. The question that arises, then, is whether there is a way to derive a secure interface out of an insecure one. To derive the secure interface, we adapt the idea used to define ISS composition (see Def. 6); i.e., we restrict some input transitions in order to avoid insecure behavior. We then obtain a system that offers fewer services than the original one but is secure. In this section we present an algorithm to derive an ISS satisfying SIR-SNNI (or SIR-NNI) from a given ISS whenever possible. Since the method is similar in both cases, we focus on SIR-SNNI.
This algorithm is based on the algorithm presented in  to derive interfaces that satisfy BSNNI/BNNI, which in turn is based on the algorithm for bisimulation checking of . The differences between both algorithms are a consequence of the definition of SIR, but the idea behind the procedure is the same. The new algorithm works as follows: given two interfaces and , the second without high actions, (i) is semi-saturated by adding all weak transitions ; (ii) a semi-synchronous product of and is constructed, where transitions synchronize whenever they have the same label and satisfy some particular conditions; (iii) whenever there is a mismatching transition, a new transition is added to the product, leading to a special fail state; (iv) if reaching a fail state is inevitable then ; if there is always a way to avoid reaching a fail state, then . We later define semi-saturation, the semi-synchronous product, and what it means to inevitably reach a fail state. In this way, given an ISS , we can check if ; if the check succeeds, then satisfies SIR-SNNI (see Theorem 8). If it does not succeed, then we provide an algorithm to decide whether can be transformed into a secure ISS by controlling (i.e., pruning) input transitions. This decision mechanism categorizes insecure interfaces into two different classes: the class of interfaces that can surely be transformed into a secure one, and the class for which this is not possible.
The algorithm to synthesize the secure ISS (once it is decided that this is possible) selects an input transition to prune, prunes it, and checks whether the resulting ISS is secure. If it is not, a new input transition is selected and pruned. The process is repeated until a secure interface is obtained. This process is shown to terminate (see Theorem 9).
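The prune-and-check loop can be sketched as follows. Here `check` and `eliminable` are placeholders standing in for the SIR-relation test and the eliminable-candidate set defined later in this section, and the transition encoding is the illustrative one used above; none of the names come from the paper.

```python
def remove_transition(trans, victim):
    """Return a copy of the transition map without `victim`,
    given as (source_state, (action, kind, target))."""
    src, triple = victim
    return {s: [t for t in ts if not (s == src and t == triple)]
            for s, ts in trans.items()}

def synthesize_secure(iss, check, eliminable):
    """Repeatedly prune one eliminable input transition until the
    security check passes.  `check(iss)` must return "pass",
    "may_pass" or "fail"; `eliminable(iss)` the prunable candidates."""
    verdict = check(iss)
    if verdict == "fail":
        return None          # no amount of pruning can help
    while verdict != "pass":
        cands = eliminable(iss)
        iss = remove_transition(iss, cands[0])
        verdict = check(iss)
    return iss
```

Termination is guaranteed because the transition set is finite and each iteration removes one transition, mirroring the argument of Theorem 9.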
Checking Strict Inputs Refinement. Distinct labels for internal actions do not play any role in a SIR relation. Then, to simplify, we replace all labels of internal actions with two new ones: and . The label is used to represent an internal transition that can be removed; in our context, an internal action can be removed when it is a high input action that was hidden in order to check for security. The label is used to identify internal actions that cannot be removed. This is formalized in the following definition, which includes self-loops with and for future simplifications.
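This relabeling step is simple enough to sketch directly. The marker names `"tau_r"` and `"tau_f"` are our own placeholders for the paper's two internal labels, and the sketch omits the self-loops that the formal definition adds.

```python
# Illustrative markers for the two internal labels of the definition.
REMOVABLE, FIXED = "tau_r", "tau_f"

def relabel_internal(trans, removable):
    """Replace every hidden label by REMOVABLE when the transition came
    from hiding a high input (listed in `removable` as (src, action, dst)
    triples) and by FIXED otherwise."""
    return {s: [((REMOVABLE if (s, a, t) in removable else FIXED)
                 if k == "hid" else a, k, t)
                for a, k, t in ts]
            for s, ts in trans.items()}
```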
A natural way to check weak bisimulation is to saturate the transition system, i.e., to add a new transition to the model for each weak transition , and then to check strong bisimulation on the saturated transition system. Applying a similar idea, we can check if there is a SIR relation. We add a transition whenever , with an output action. We call this process semi-saturation.
Given an ISS , its semi-saturation, , is the ISS obtained by saturating the underlying IA.
The last definition ensures that: if then iff .
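The saturation step can be sketched as follows, again in the illustrative encoding used earlier. We simplify by taking a weak output to be a run of hidden steps followed by the output; the paper's definition should be consulted for the exact shape of the weak transitions added.

```python
def semi_saturate(trans):
    """Add a direct transition s --o--> t for every output o reachable
    from s through zero or more hidden steps."""
    def hidden_closure(state):
        seen, stack = {state}, [state]
        while stack:
            s = stack.pop()
            for a, k, t in trans.get(s, []):
                if k == "hid" and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    sat = {s: list(ts) for s, ts in trans.items()}
    for s in trans:
        for r in hidden_closure(s):
            for a, k, t in trans.get(r, []):
                if k == "out" and (a, k, t) not in sat[s]:
                    sat[s].append((a, k, t))
    return sat
```

After saturation, matching an output in the product no longer requires following hidden prefixes, which is what lets the product construction work with single steps.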
Following  and , the definition of the synchronous product follows from the conditions of the relation being checked, in this case SIR. First we recapitulate these conditions, and then we present the formal definition. If , then for two states and s.t. , every output/hidden action that can execute has to be simulated by (possibly using internal actions); on the other hand, is not forced to simulate output/hidden actions of . Finally, both states have to simulate all input actions that can be executed by the other one without previously performing any internal action. All these restrictions become evident from the definition of SIR. When a condition is not satisfied, a transition to a special state fail is created. Taking this into account, we define the semi-synchronized product.
Let us show how we can use the synchronous product to check and derive, whenever possible, a SIR relation. If there is a state such that , then it is evident that . Moreover, suppose the synchronous product only has states and and the transition . If , as the progress from is autonomous, there is no way to control the execution of and hence no way to avoid . Then, we say that fails the SIR-relation test. On the other hand, if , one state offers a service that the other does not. In this case, by removing the input transition (the interface offers fewer services), we avoid the transition in the synchronous product and we get two states such that ; moreover, we get two interfaces related by a SIR relation. In this case, we say that may pass the SIR-relation test. In a more complex synchronous product, the “failure” in the state has to be propagated backwards appropriately to identify pairs of states that cannot be related. This propagation is done through the definition of two different sets: and . The set contains those pairs that are not related by a refinement and for which there is no set of input transitions to prune so that the pair may become related by the refinement. On the other hand, contains pairs of states that are not related but will be related if some transitions are pruned. States not in belong to the set . All pairs in are related by a SIR relation.
- where is defined in Table 3. If , we say that the pair fails the SIR relation test.
- where is defined in Table 4. If , we say that the pair may pass the SIR relation test.
- . If , we say that the pair passes the SIR relation test.
If the initial state of the underlying IA of an ISS passes (may pass, fails) the SIR relation test, we say that passes (may pass, fails) the SIR relation test.
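The backward propagation can be sketched as a fixpoint computation. This is a deliberate simplification of Tables 3 and 4 (which we do not reproduce): here a pair fails if a fail state is reachable through a move the environment cannot control (output or hidden), and may pass if fail states are only reachable through prunable input transitions.

```python
def classify(prod, fail_states):
    """prod: product state -> [(action, kind, successor)].
    Returns (fails, may_pass, passes) as sets of product states."""
    fails = set(fail_states)
    changed = True
    while changed:              # least fixpoint of the failure condition
        changed = False
        for s, ts in prod.items():
            if s not in fails and any(
                    t in fails and k in ("out", "hid") for a, k, t in ts):
                fails.add(s)
                changed = True
    may_pass = {s for s, ts in prod.items() if s not in fails
                and any(t in fails for a, k, t in ts)}
    passes = set(prod) - fails - may_pass
    return fails, may_pass, passes
```

The real definitions refine this considerably (e.g., how a may-pass successor affects its predecessors), but the shape is the same: failure propagates backwards along uncontrollable transitions only.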
The proof of the following lemma is based on the proof of the algorithm to check bisimulation in ; for this reason, we only present a proof sketch. Our proof deviates a little from the original as a consequence of the fact that not all mismatching transitions are problematic.
Proof sketch. Since , we only have to prove that (i) implies and (ii) if then . The proof of (i) is by induction on in and . The proof of (ii) is straightforward after showing that, given a state , then:
- if and then there is a state s.t. there is a transition and .
- if then there is a state s.t. there is a transition and .
The proof of both statements is by case analysis on , always obtaining a contradiction. __
Using this lemma, we can verify if an interface is SIR-SNNI, since is SIR-SNNI if is refined by . Notice that we cannot use and to create a semi-synchronized product; in general, does not satisfy and it is not semi-saturated. This can be solved by marking in and then semi-saturating the interface, i.e., we work with instead of . Similarly, does not satisfy . Since is used to represent the internal actions that can be removed, we solve this problem by marking in , i.e., we replace by . Therefore, verifying that satisfies SIR-SNNI amounts to checking whether passes the refinement test. Applying similar reasoning, if we are interested in verifying SIR-NNI, we can check if passes the SIR-relation test. We thus have a decision algorithm to check whether an ISS satisfies SIR-SNNI or SIR-NNI. We state it in the following theorem.
- satisfies SIR-SNNI iff passes the SIR-relation test.
- satisfies SIR-NNI iff passes the SIR-relation test.
Synthesizing Secure ISS. In the following, we show that if a synchronized product may pass the SIR-relation test, then there is a set of input transitions that can be pruned so that the resulting interface is secure. First, we need to select the candidate input actions to be removed. So, if is an ISS such that may pass the SIR-relation test, the set (see Table 5) is the set of eliminable candidates.
All transitions in are involved in a synchronization that connects a source pair that may pass the SIR-relation test and a failing target. This can happen in four different situations. The first one is the basic case, in which one of the components of the pair can perform a low input transition that cannot be matched by the other. The following two cases are symmetric and consider the case in which both sides can perform an equally labeled low input transition but end up in a failing state. The last case includes high input actions that are hidden in the synchronized product and always reach a pair that fails. Notice that if may pass the bisimulation test then .
An important result is that no new failing pair of states is introduced by removing eliminable candidates. Moreover, if a pair of states fails in the synchronous product of the original ISS and is also present in the synchronous product of the reduced ISS, then it also fails in the latter. This ensures that a synchronous product that may pass the SIR-relation test will not fail after pruning. In a sense, Lemma 4 below states that the sets and remain invariant.
Lemma 4. Let be an ISS s.t. may pass the SIR-relation test. Let be an ISS obtained by removing one transition in from (i.e. , provided , and unreachable states are removed from ). Then it holds that: (i) ; (ii) (Subindices in , , etc. indicate that these sets were obtained from the synchronous product .)
(Case ). Clearly . Suppose is the transition that is removed. By induction on we show for all . This implies and then . Suppose . By definition, action and . Then and therefore belongs to . Then . Now suppose . Then and . Notice that as a consequence of . By the induction hypothesis , then and we get and .
(Case .) We show by induction on that for all . Let . Moreover, w.l.o.g., suppose . Since , the transition cannot be removed, and since , it holds that . For the induction case, suppose w.l.o.g. that . Then . Since is reachable in and , every pair is reachable in . By the induction hypothesis, and then . __
The following theorem is the main result of this section. Notice that its proof defines the algorithm to prune input actions and obtain a secure interface. A similar result holds for SIR-NNI.
Proof. We only report a proof sketch. The complete proof follows in the same way as the proof of Theorem 4.10 in . Let be an ISS obtained from by removing one transition from the set . Lemma 4 ensures that may pass or passes the SIR-relation test. If passes the SIR-relation test, we stop. If may pass the SIR-relation test, we repeat the process until we obtain an ISS that passes the test. Since the transition set is finite, in the worst case we will continue the process until obtaining an ISS with an empty set of eliminable candidates. If this ISS may pass the SIR-relation test, we get a contradiction with the fact that the set of eliminable candidates is empty; therefore, this ISS has to pass the test. Finally, is composed of the transitions removed along the way. __
In this work, we have presented semantics for interactive sequential systems. In this way, we have extended the work of  and  to models where the control of the actions is shared between the user and the system. To reduce complexity, we did not include all types of observations presented in , thus limiting ourselves to working with a subset of them. We do not foresee major problems in extending our theory to the types of observations we left out.
The approach of defining non-interference security properties through types of observations gives important insight into the security model, in particular into the characteristics of the attacker. For instance, if the attacker can make use of covert channels, then the type of observation should be chosen. Another example is the type of observation , which can be interpreted as the system detecting an attack and aborting the execution. In this way, the types of observations define a catalogue for characterizing the attackers that could be considered.
This general definition encompasses previous definitions of non-interference for ISS. We found notions of observability that represent (S)NNI, B(S)NNI and SIR-(S)NNI (Theorem 1 and Lemma 2). This approach also provides a better understanding of the security properties. In , SIR-(S)NNI is introduced to resolve some shortcomings found in B(S)NNI; but in fact, these shortcomings do not exist, because the properties should be considered in different contexts. B(S)NNI should be considered in a context where an attacker can only observe how the system behaves. On the other hand, SIR-(S)NNI should be considered in a context where the attacker can interact through the interface. This becomes obvious when we look at the notions of observability used to represent each property: for B(S)NNI and for SIR-(S)NNI. Notice that B(S)NNI has the no-interaction type , while in SIR-(S)NNI the interaction is explicit due to the type .
In addition, the different types of observations provide a simple way to choose the appropriate notion of security. For example, consider interface in Figure 5. One could argue that there is still an information leakage, because the execution of action is evidence that the high user has not interacted with the interface. If this information is sensitive and the attacker interacts with the interface, one could use the notion of observability to detect this kind of problem. Notice that this notion of observability is stronger than the notion used for B(S)NNI.
Future Work. We have identified two research lines along which to continue this work. In the first place, the types of observations presented in  that have been omitted here should be addressed, and a deeper study comparing the different semantics should be carried out to gain a better understanding of them. Second, we also plan to study how the new semantics for interactive systems affect the different models with both input/controllable and output/uncontrollable actions, and the results obtained for them.