
CLEI Electronic Journal

On-line version ISSN 0717-5000

CLEIej vol.14 no.3 Montevideo Dec. 2011

 

Semantics for Interactive Sequential Systems and Non-Interference Properties
Matias Lee
Universidad Nacional de Córdoba, Fa.M.A.F. - CONICET,
Córdoba, Argentina,
lee@famaf.unc.edu.ar
Pedro R. D’Argenio
Universidad Nacional de Córdoba, Fa.M.A.F. - CONICET,
Córdoba, Argentina,
dargenio@famaf.unc.edu.ar



Abstract


An interactive system is a system that allows communication with its users. This communication is modeled through input and output actions. Input actions are controllable by a user of the system, while output actions are controllable by the system. Standard semantics for sequential systems [1, 2] are not suitable in this context because they do not distinguish between the different kinds of actions. Applying an approach similar to the one used in [2], we define semantics for interactive systems. In this setting, a particular semantics is associated with a notion of observability. These notions of observability are used as parameters of a general definition of non-interference. We show that some previous versions of the non-interference property, based on trace semantics, weak bisimulation and refinement, are actually instances of the observability-based non-interference property presented here. Moreover, this allows us to show some results in a general way and to provide a better understanding of the security properties.



Keywords: process theory, semantics, interactive systems, interface automata, non-interference, secure information flow, refinement, composition.


Received: 2011-03-30 Revised: 2011-10-06 Accepted: 2011-10-06

1 Introduction


An interactive system is a system that allows communication with the users. Usually, to carry out this communication, the system provides an interface that is used by them. Through the interface, the user sends messages to the system and receives messages from it. Interface Automata (IA) [3, 4, 5] is a light-weight formalism that captures the temporal aspects of interactive system interfaces. In this formalism, the messages sent by the user are represented as input actions, while the received messages are represented as output actions.

Interface structure for security (ISS) [6] is a variant of IA where there are two different types of visible actions. One type carries public or low confidential information and the other carries private or high confidential information. For simplicity, we call them low and high actions, respectively. Low actions are intended to be accessed by any user, while high actions can only be accessed by those users having the appropriate clearance. In this context, the desired requirement is the so-called non-interference property [7]. In the setting of ISS, a bisimulation-based notion of non-interference has been considered, more precisely, the so-called BSNNI and BNNI properties [8]. Informally, these properties state that users with no appropriate permission cannot deduce any kind of confidential information or activity by only interacting through low actions. Since it is expected that a low-level user cannot distinguish the occurrence of high actions, the system has to behave the same when high actions are not performed or when high actions are considered as hidden actions. To formalize the idea of “behave the same”, the concept of weak bisimulation is used.

In [9] it was argued that the BSNNI/BNNI properties are not quite appropriate to formalize the concept of a secure interface. To illustrate this point, the following two examples are presented: in the first one (Figure 1), the system satisfies neither BNNI nor BSNNI, but we show that it could be considered secure since no information is actually revealed to low users. The main problem is the way in which weak bisimulation relates output transitions. On the other hand, the second example (Figure 2) shows that weak bisimulation based security properties may fail to detect an information leakage through input transitions.




Figure 1: Credit approval process of an on-line banking service




Figure 1 models a credit approval process of an on-line banking service using an ISS. As usual, outputs are suffixed by ! and inputs by ?. At the initial state s1, a client can request a credit (cred_req?). The credit approval process can be carried out locally or by delegating it to an external component. This decision is modeled by a non-deterministic choice. If it is locally processed (loc_ctrl!), an affirmative or negative response is given to the client (yes!/no!) and the process returns to the initial state. On the other hand, if the decision is delegated (ext_ctrl!), the process waits until it receives a notification that the control is finished (done?), returning then to the initial state. Besides, in the initial state, an administrator can configure the system to do only local control (only_loc?). This action is high and is not visible to low users. (We underline private/high actions.) In state s5, the administrator can configure the system to return to the original configuration using action only_loc_off?.

The Credit Request Process does not satisfy the BSNNI property (nor the BNNI property) and hence it is considered insecure in this setting. The system behaves differently depending on whether the private action only_loc? is performed or not. If only_loc? is not executed, after action cred_req? it is possible to execute action ext_ctrl!. This behavior is not possible after the action only_loc?. Notice nevertheless that output actions are not visible to the user until they are executed. Then, from a low user perspective, the system behavior does not seem to change: the same input is accepted at states s1 and s5, and then the low user cannot distinguish whether the observation of loc_ctrl! is a consequence of the unique option (at state s6) or just a choice made by the Credit Request Process (at state s2). Hence we expect the system to be classified as secure by the formalism.

We consider this example to be secure because a user does not know exactly which output action can be executed by an interface if he has no knowledge of its current state: he can observe the output actions only when they are executed.

On the other hand, a user may try to guess the behavior of the system by performing input actions: wrong inputs will be rejected/ignored; otherwise, they will be accepted. Based on this fact, the following example shows that weak bisimulation based non-interference may fail to detect an information leakage. 




Figure 2: External Control Process in an on-line banking service




Figure 2 depicts the component that executes the external control. In the initial state, the interface waits for input ext_ctrl? from the Credit Request Process. After this stimulus, a response about the credit request is given. If the credit is denied (ext_no!), the client can either ask for a decision review (review?) or accept the decision (accept?). In both cases, the decision is processed by the component (process;). This action is internal and is not visible to users (hidden/internal actions are suffixed by a semicolon). The process finishes with action done!, returning to the initial state. If the credit is approved (ext_yes!), the client can accept or decline the credit (accept?/decline?). The decision is processed, the component informs that the task is done, and it returns to the initial state. As in the first example, the behavior of the component can be modified by an administrator, who can configure the interface to reject all credit requests (reject_all?). For this reason, if reject_all? is received at the initial state, then after an input action ext_ctrl?, the process can only execute action ext_no!. At this point, clients are not allowed to ask for a decision review. Then, at state t11, the interface accepts only input action accept?. However, based on the client records, the review may be enabled; this is represented with the internal transition t11 --allow;--> t13; notice that state t13 accepts both input actions accept? and review?. In any case, after the client response, the result is processed, the component informs that the task is done, and the process is restarted.

Suppose that the bank requires that the client cannot detect whether the external process is denying all credit requests. Since a low user cannot see output actions until they are executed, he cannot differentiate between the executions t1 --ext_ctrl?--> t2 --ext_no!--> t3 and t9 --ext_ctrl?--> t10 --ext_no!--> t11. If we compare states t3 and t11 under weak bisimulation, both states can execute the same visible transitions and no security problem is detected. Notice that at state t11, the process cannot respond immediately to a review? input, but it can execute t11 --allow;--> t13 --review?--> t14 (recall allow; is an internal action). In fact, low users can distinguish state t3 from t11: testing the interface at state t11, the low user can find out that input action review? is not enabled, while at t3 it is. Hence, we consider that the interface is not secure.

These observations are based on the fact that input and output actions are conceptually very different. Input actions are controllable by the user while output actions are controllable by the system. Therefore, some behavior one would expect from input actions may be inappropriate for outputs and vice-versa. For instance, the assumption that “wrong inputs will be rejected/ignored; otherwise, they will be accepted” in the second example above, makes no sense if applied to outputs because the malicious user is interested in collecting all possible information rather than in rejecting it. 

In [1] and [2], a deep study of semantics for sequential systems is carried out, but they do not take into account systems where both kinds of actions coexist. In their setting, all actions are controlled by one entity: the user or the system. For example, in failure trace semantics a user executes (input) actions until one action is rejected by the system; in this case the user controls which action is executed. A different case is trace semantics, where the system controls the actions and the user can only observe the executions of the system. Also in stronger semantics, for example with global testing, the control belongs to one entity. For instance, weak bisimulation equivalence is also called observational equivalence and its intuitive notion is "two systems are observationally equivalent if they cannot be distinguished by an observer", i.e., the user observes and the system executes (controls) the actions. Notice the subtlety in this case: global testing allows the user to force the system to execute all possible executions, but which actions can be executed in each state is controlled/defined by the system.

In this work we define semantics for systems where both kinds of actions coexist: actions controlled by the user (input actions) and actions controlled by the system (output actions). We use an approach similar to the one used in [2]. First, we define types of observations, the information records that can be made by a user. Second, we define a notion of observability as a set of types of observations. Each notion of observability determines a particular semantics. This approach is simple, elegant, and allows us to be exhaustive: once the types of observation and the notions of observability are defined, one has all the possible semantics that could be defined.

These new semantics are suitable to study secure information flow properties over ISS. Moreover, the definition of non-interference presented in this work has a notion of observability as a parameter. This generalization through types of observations provides a framework to prove generic theorems that extend to families of security properties. In addition, the approach subsumes previous definitions of non-interference for ISS, in particular the one based on traces [9], the one based on weak bisimulation [6] and the one based on refinement [9].

We also focus our attention on non-interference based on refinement. We give simple sufficient conditions to ensure compositionality. We also provide two algorithms. The first one determines if an ISS satisfies the refinement-based non-interference property. The second one determines if an ISS can be made secure by controlling some input actions and, if so, synthesizes the secure ISS. Both algorithms are polynomial in the number of states of the ISS under study. These results are relevant because they could be adapted to other instances of non-interference based on notions of observability.

This paper is an extension of [9]. In [9] we introduced non-interference based on refinement to resolve some shortcomings of the non-interference properties based on weak bisimulation. The approach based on notions of observability shows that these shortcomings do not really exist, because the properties should be considered in different contexts. We explain this in the last section of the paper.

Organization of the paper.  In Section 2 we recall the definitions of IA, composition and ISS. In Section 3 we define the types of observations, the notion of observability and the set of observable behaviors of an IA. In Section 4 we present the notion of non-interference based on notions of observability. We show that the approach subsumes previous definitions of non-interference for ISS and we prove some general properties of non-interference. In Section 5 we review the definitions of non-interference based on refinement, and we show that these definitions are also subsumed by the new approach. We study compositionality in this setting and define two algorithms: one to check whether an interface satisfies the property and the other to derive a secure interface from a given (non-secure) interface by controlling input actions. Section 6 concludes the paper.

2 Interface Automata and Interface Structures for Security


In the following, we define Interface Automata (IA) [3, 4] and Interface Structures for Security (ISS) [6], and introduce some notation.

2.1 Interface Automata


Definition 1. An Interface Automaton (IA) is a tuple S = ⟨Q, q0, AI, AO, AH, -→⟩ where: (i) Q is a finite set of states with q0 ∈ Q being the initial state; (ii) AI, AO and AH are the (pairwise disjoint) finite sets of input, output and hidden actions, respectively, with A = AI ∪ AO ∪ AH; and (iii) -→ ⊆ Q × A × Q is the transition relation, required to be finite and input deterministic (i.e., (q,a,q1), (q,a,q2) ∈ -→ implies q1 = q2 for all a ∈ AI and q, q1, q2 ∈ Q). In general, we write QS, AIS, -→S, etc. to indicate the set of states, input actions, transitions, etc. of the IA S.


As usual, we write q -a→ q′ whenever (q,a,q′) ∈ -→, q -a→ if there is q′ s.t. q -a→ q′, and q -a→⁄ if this is not the case. An execution of S is a finite sequence q0 a0 q1 a1 ... qn s.t. qi ∈ Q, ai ∈ A and qi -ai→ qi+1 for 0 ≤ i < n. An execution is autonomous if all its actions are output or hidden (the execution needs no stimulus from the environment to run). If there is an autonomous execution from q to q′ and all its actions are hidden, we write q ⇒ε q′. Notice this includes the case q = q′. We write q ⇒a q′ if there are q1 and q2 s.t. q ⇒ε q1 -a→ q2 ⇒ε q′. Moreover, q ⇒â q′ denotes q ⇒a q′, or a ∈ AH and q = q′. We write q ⇒ε-a→ if there is q′ s.t. q ⇒ε q′ and q′ -a→. A trace from q0 is a sequence of visible actions a0, a1, ... such that there are states q1, q2, ... for which q0 ⇒a0 q1 ⇒a1 q2 ... is an execution. The set of traces of an IA S, written Traces(S), is the set of all traces from the initial state of S.
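The weak transitions and traces above can be sketched operationally. The following is a minimal illustration of ours (not code from the paper), encoding the transition relation of an IA as a set of (state, action, state) triples, and using the paper's drawing convention that hidden action names end in a semicolon:

```python
def eps_closure(trans, q):
    """States reachable from q through hidden actions only (q =eps=> q')."""
    seen, stack = {q}, [q]
    while stack:
        s = stack.pop()
        for (p, a, r) in trans:
            if p == s and a.endswith(';') and r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def weak_step(trans, q, a):
    """All q' with q =a=> q': an eps-closure, one a-step, then an eps-closure."""
    targets = set()
    for s in eps_closure(trans, q):
        for (p, b, r) in trans:
            if p == s and b == a:
                targets |= eps_closure(trans, r)
    return targets

def traces(trans, q0, k):
    """All traces of visible actions from q0, bounded by length k."""
    visible = {a for (_, a, _) in trans if not a.endswith(';')}
    result, frontier = {()}, {((), q0)}
    for _ in range(k):
        nxt = set()
        for (tr, q) in frontier:
            for a in visible:
                for q2 in weak_step(trans, q, a):
                    nxt.add((tr + (a,), q2))
        result |= {tr for (tr, _) in nxt}
        frontier = nxt
    return result
```

For instance, with trans = {('q0','tau;','q1'), ('q1','a?','q2'), ('q2','b!','q0')}, traces(trans, 'q0', 2) yields (), ('a?',) and ('a?','b!'). The bound k is only needed because the trace set of a cyclic IA is infinite.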

Composition


Composition of two IA is only defined if their actions are disjoint except when input actions of one of the IA coincide with some of the output actions of the other. Such actions are intended to synchronize in a communication.


Definition 2. Let S and T be two IA, and let shared(S,T) = AS ∩ AT be the set of shared actions. We say that S and T are composable whenever shared(S,T) = (AIS ∩ AOT) ∪ (AOS ∩ AIT). Two ISS S = ⟨S, AhS, AlS⟩ and T = ⟨T, AhT, AlT⟩ are composable if S and T are composable.


The product of two composable IA S and T is defined pretty much as CSP parallel composition: (i) the state space of the product is the product of the sets of states of the components, (ii) only shared actions can synchronize, i.e., both components should perform a transition with the same synchronizing label (one input, and the other output), and (iii) transitions with non-shared actions are interleaved. Besides, shared actions are hidden in the product.


Definition 3. Let S and T be composable IA. The product S ⊗ T is the interface automaton defined by:

  • QS⊗T = QS × QT with q0S⊗T = (q0S, q0T);
  • AIS⊗T = (AIS ∪ AIT) - shared(S,T), AOS⊗T = (AOS ∪ AOT) - shared(S,T), and AHS⊗T = AHS ∪ AHT ∪ shared(S,T); and
  • (qS,qT) -a→S⊗T (q′S,q′T) if any of the following holds:
    • a ∈ AS - shared(S,T), qS -a→S q′S, and qT = q′T;
    • a ∈ AT - shared(S,T), qT -a→T q′T, and qS = q′S;
    • a ∈ shared(S,T), qS -a→S q′S, and qT -a→T q′T.
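As a rough sketch of the product construction of Definition 3 (under an assumed tuple encoding (Q, q0, AI, AO, AH, trans) of each IA, which is ours and not the paper's):

```python
def product(S, T):
    """Product of two composable IA per Definition 3 (illustrative encoding)."""
    (QS, q0S, AIS, AOS, AHS, tS) = S
    (QT, q0T, AIT, AOT, AHT, tT) = T
    shared = (AIS | AOS) & (AIT | AOT)   # composability makes these I/O pairs
    AI = (AIS | AIT) - shared
    AO = (AOS | AOT) - shared
    AH = AHS | AHT | shared              # shared actions become hidden
    trans = set()
    for (p, a, q) in tS:                 # interleave non-shared S-moves
        if a not in shared:
            trans |= {((p, r), a, (q, r)) for r in QT}
    for (p, a, q) in tT:                 # interleave non-shared T-moves
        if a not in shared:
            trans |= {((r, p), a, (r, q)) for r in QS}
    for (p, a, q) in tS:                 # synchronize on shared actions
        if a in shared:
            trans |= {((p, p2), a, (q, q2)) for (p2, b, q2) in tT if b == a}
    states = {(s, t) for s in QS for t in QT}
    return (states, (q0S, q0T), AI, AO, AH, trans)
```

For a sender with output m and a receiver with input m, the product has the single synchronized transition ((s0,t0), m, (s1,t1)) with m hidden.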

There may be reachable states of S ⊗ T in which one of the components, say S, may produce a shared output action that the other is not ready to accept (i.e., its corresponding input is not available at the current state). Then S violates the input assumptions of T and this is not acceptable. Such states are called error states.


Definition 4. Let S and T be composable IA. A product state (qS,qT) ∈ QS⊗T is an error state if there is an action a ∈ shared(S,T) s.t. either a ∈ AOS, qS -a→S and qT -a→⁄T, or a ∈ AOT, qT -a→T and qS -a→⁄S.


If the product S ⊗ T does not contain any reachable error state, then each component satisfies the interface of the other (i.e., the input assumptions) and thus they are compatible. Instead, the presence of a reachable error state is evidence that one component is violating the interface of the other. This may not be a major problem as long as the environment is able to refrain from producing an output (an input to S ⊗ T) that leads the product to the error state. Of course, it may be the case that S ⊗ T does not provide any possible input to the environment and reaches autonomously (i.e., via output or hidden actions) an error state. In such a case we say that S ⊗ T is incompatible.


Definition 5. Let S and T be composable IA and let S ⊗ T be their product. A state (qS,qT) ∈ QS⊗T is an incompatible state if there is an error state reachable from (qS,qT) through an autonomous execution. If a state is not incompatible, it is compatible. If the initial state of S ⊗ T is compatible, then S and T are compatible.
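Definitions 4 and 5 suggest a simple fixpoint computation: find the error states, then close them backwards under autonomous (output or hidden) steps. A sketch of ours, assuming each component is encoded as a tuple (Q, q0, AI, AO, AH, trans):

```python
def error_states(S, T, shared):
    """Product states where a shared output of one side has no matching input."""
    def enabled(trans, q):
        return {a for (p, a, _) in trans if p == q}
    (QS, _, _, AOS, _, tS) = S
    (QT, _, _, AOT, _, tT) = T
    return {(qS, qT)
            for qS in QS for qT in QT for a in shared
            if (a in AOS and a in enabled(tS, qS) and a not in enabled(tT, qT))
            or (a in AOT and a in enabled(tT, qT) and a not in enabled(tS, qS))}

def incompatible_states(AI, trans, errors):
    """Backward closure of error states under autonomous (non-input) steps."""
    bad, changed = set(errors), True
    while changed:
        changed = False
        for (p, a, q) in trans:
            if q in bad and a not in AI and p not in bad:
                bad.add(p)      # p reaches an error state autonomously
                changed = True
    return bad
```

The closure deliberately ignores input transitions: a state from which an error is reachable only via inputs is still compatible, because the environment can refrain from providing those inputs.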


Finally, if two IA are compatible, it is possible to define the interface for the resulting composition. Such an interface is the result of pruning all input transitions of the product that lead to incompatible states, i.e., states from which an error state can be autonomously reached.


Definition 6. Let S and T be compatible IA. The composition S ∥ T is the IA that results from S ⊗ T by removing all transitions q -a→S⊗T q′ s.t. (i) q is a compatible state in S ⊗ T, (ii) a ∈ AIS⊗T, and (iii) q′ is an incompatible state in S ⊗ T.
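The pruning in Definition 6 then amounts to dropping input transitions that cross from a compatible state into an incompatible one; a short sketch of ours, with the product encoded as a tuple (Q, q0, AI, AO, AH, trans):

```python
def compose(prod, incompatible):
    """Remove input transitions of the product that lead into incompatible states."""
    (Q, q0, AI, AO, AH, trans) = prod
    pruned = {(p, a, q) for (p, a, q) in trans
              if not (p not in incompatible and a in AI and q in incompatible)}
    return (Q, q0, AI, AO, AH, pruned)
```

Output and hidden transitions are kept even when they enter incompatible states: by Definition 5, a compatible state has no autonomous path to an error state, so such transitions can only originate in states that were already incompatible.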


2.2 Interface Structure For Security


An Interface Structure for Security is an IA where visible actions are divided into two disjoint sets: the high action set and the low action set. Low actions can be observed and used by any user, while high actions are intended only for users with the appropriate clearance.


Definition 7. An Interface Structure for Security (ISS) is a tuple ⟨S, Ah, Al⟩ where S = ⟨Q, q0, AI, AO, AH, -→⟩ is an IA and Ah and Al are disjoint sets of actions s.t. Ah ∪ Al = AO ∪ AI.


If necessary, we will write AhS and AlS instead of Ah and Al, respectively, and write AX,m instead of AX ∩ Am with X ∈ {I,O} and m ∈ {h,l}.

Extending the definition of composition of IA to ISS is straightforward.


Definition 8. Let S = ⟨S, AhS, AlS⟩ and T = ⟨T, AhT, AlT⟩ be two ISS. S and T are composable if S and T are composable. Given two composable ISS, S and T, their composition S ∥ T is defined by the ISS ⟨S ∥ T, (AhS ∪ AhT) - shared(S,T), (AlS ∪ AlT) - shared(S,T)⟩.


3 Observability


Semantic equivalences for sequential systems with silent moves are studied in [2], resulting in 155 notions of observability and a complete comparison between them. Unfortunately, these results cannot be applied straightforwardly in the IA context. For example, the machines studied in [2] have no notions of input and output actions over the same machine. Moreover, in [2] there is no notion of the internal structure of the analyzed machine. This situation forced the authors to talk about definite and hypothetical behaviors of the machine. Despite these differences, we use [2] as a reference to define different semantics for IA. To avoid the distinction between definite and hypothetical behaviors, we use the transition relation of the IA to present the set of observable behaviors.

First, we define the types of observation, information records that can be made by the user. Second, we define a notion of observability as a set of types of observations. Each notion of observability defines a particular semantics. Third, using the transition relation of the IA, we define the semantics of each type of observation and therefore a semantics for each possible notion of observability.

Given a system, a type of observation is a piece of information that can be recorded by a user with respect to the interface. To define our types of observations we consider the following assumptions: input and output actions are observable when they are executed. Inputs are executed by a user, while outputs are executed by the interface. Then, input actions are controllable by the user and output actions are controllable by the interface. Internal transitions are controllable by the interface. In some cases, internal transitions can be detected by the user, but the user cannot distinguish between different internal actions. A user can observe how the interface interacts with another user, or he can be the one who interacts. If the user is interacting, the interface can behave in different ways as a result of some violation of its input assumptions: (i) it does not show any error and continues with the execution; (ii) it stops the execution and shows an error to the user; (iii) it shows an error to the user and continues with the execution; (iv) finally, an interface could provide a special service to inform which inputs are enabled in its current state. In this way, the user can avoid input assumption violations. Notice that cases (i), (ii) and (iii) determine, at the semantic level, a sort of input-enabledness. In these cases we fix the behavior of input actions that are not defined in a particular state. Assumptions (i)-(iv) do not increase the expressive power of the model; as a consequence, they can be implemented in any IA. For example, let S be an IA: assumption (i) can be implemented with self-loops labelled a? for every state s ∈ QS and a? ∈ AI - I(s). Using the same reasoning, we assume an interface could provide a service to detect the end of an execution, where the end is reached when no more transitions are possible.
In addition, a user can make copies of the interface with the objective of studying it in more detail. Finally, a user can do global testing. Under this assumption it is possible to say that a particular observation will not happen.
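The self-loop implementation of assumption (i) mentioned above can be sketched as follows (an illustrative encoding of ours: a state set, an input alphabet, and a transition set of triples):

```python
def input_enable(Q, AI, trans):
    """Add a self-loop (s, a?, s) for every input a? not enabled at state s."""
    enabled = {q: {a for (p, a, _) in trans if p == q} for q in Q}
    return trans | {(q, a, q) for q in Q for a in AI if a not in enabled[q]}
```

After this transformation every input is accepted at every state, and a violated input assumption simply leaves the state unchanged, exactly the behavior assumption (i) describes.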

Based on these assumptions, we introduce the following types of observations:

  • [a] The executions of external actions in AI ∪ AO are detectable.
  • [ε, ε⁄] The case where internal transitions are detectable is denoted with ε. Otherwise, ε⁄.
  • [T] The session is terminated by the user. This is possible at any time. After this, no more records are possible.
  • [⇄⁄, ⇄] If a user only observes the actions that are executed by an interface and cannot send stimuli to it, then there is no interaction. We denote this with ⇄⁄. The case where interaction is possible is denoted by ⇄.
  • [F] The user interacts with the system and the interface stops the execution whenever it receives an input action that is not enabled. In this case, the stop is observable.
  • [FT] As in the previous type, but now whenever the interface receives an input action that is not enabled, the error is reported to the user and the execution continues.
  • [RT] To avoid the error of sending an input action that is not enabled, the interface can provide a method to check which input actions are enabled in its current state. In this case, the observation includes the set X of enabled inputs.
  • [0] This type is used if it is detectable when an interface reaches a final state, i.e., when no more activity is possible.
  • [∧] Suppose the user has a machine to make an arbitrary number of copies of the system. These copies reveal more information about the interface because one can observe different executions of the same interface. If the user makes N copies and in each copy executes ϕi for i ∈ {1,...,N}, this observation is denoted with ∧Ni=1 ϕi.
  • [¬] It is possible to test the interface over all possible conditions. This allows one to ensure that a particular observation is not possible; then a user can make an observation ¬ϕ whenever ϕ is not a possible execution of the system.

The types of observations studied here are not exactly those studied in [2]. On the one hand, we decided to skip some types for the sake of simplicity. For example, we did not include η-replication nor continuous copying, which are different forms of making copies of the system. We also did not include the notion of stable state; this avoids the inclusion of some variants of the types of observations presented here. On the other hand, we have added new features. First, we differentiate between a user that interacts with the interface and a non-interacting user. Second, the knowledge of the internal structure of the interface allows us to know exactly when an internal action could be executed, and to define whether internal transitions are observable or not. This is a relevant feature in the context of security, because it could be used to represent covert channels.

A set of types of observations defines a notion of observability, see Definition 9. The notion of observability determines what information can be observed by a user. This has to be consistent: for example, the types of observations “a user cannot interact with the interface” (⇄⁄) and “a user can detect that the input sent was not enabled” (F) cannot belong to the same notion of observability. Note that the definition of notion of observability ensures consistency.


Definition 9. A set V is a notion of observability (for IA) if V ⊆ {a, ε, ε⁄, 0, ⇄, ⇄⁄, T, F, FT, RT, ∧, ¬} and V satisfies the following conditions:

1. {a, T} ⊆ V;
2. |{ε, ε⁄} ∩ V| = 1;
3. |{⇄, ⇄⁄, F, FT, RT} ∩ V| = 1.

Condition (1) ensures that input and output actions are always visible and that the user can terminate the session whenever he wants. Condition (2) establishes whether internal transitions are detectable or not. Condition (3) ensures that a user can interact with the interface (⇄, F, FT, RT) or not (⇄⁄), and if he interacts, he does so in one particular way.
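The three conditions of Definition 9 translate directly into a membership check; a small sketch using string tokens for the types of observations (the token names are ours, not the paper's):

```python
# Tokens: 'eps'/'no-eps' stand for the pair (eps, eps-slash); 'inter'/'no-inter'
# for the interaction pair; the rest mirror the symbols of Definition 9.
TYPES = {'a', 'eps', 'no-eps', '0', 'inter', 'no-inter',
         'T', 'F', 'FT', 'RT', 'and', 'not'}

def is_notion_of_observability(V):
    """Check conditions (1)-(3) of Definition 9 over a set of type tokens."""
    return (V <= TYPES
            and {'a', 'T'} <= V                                        # (1)
            and len({'eps', 'no-eps'} & V) == 1                        # (2)
            and len({'inter', 'no-inter', 'F', 'FT', 'RT'} & V) == 1)  # (3)
```

For example, {'a', 'T', 'no-eps', 'no-inter'} corresponds to a trace-like notion where internal moves are invisible and the user only observes, without interacting.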

In [2], other kinds of restrictions were added to simplify the study of which semantics make more distinctions: for example, conditions such as "if FT ∈ V then F ∈ V" are added. This reflects the fact that if the interface stops when a disabled input is received, all observations that one can make in this scenario can also be made on the same machine configured to continue when the error occurs. Since we are not interested in studying which semantics are coarser than others, we omit these conditions.

Semantics.  First we define all possible observations as a set of logic formulas called execution formulas. Then the set of observable behaviors of an IA is the set of execution formulas that are satisfied by the initial state of the interface.




T ∈ L        0 ∈ L        ⁄a ∈ L for all a ∈ AI

if ϕ ∈ L and a ∈ AI ∪ AO ∪ {ε}, then aϕ ∈ L
if ϕ ∈ L and a ∈ AI, then ⁄aϕ ∈ L
if ϕ ∈ L and X ⊆ AI, then Xϕ ∈ L
if ϕi ∈ L for all i ∈ I, then ∧i∈I ϕi ∈ L
if ϕ ∈ L, then ¬ϕ ∈ L

Table 1: Recursive rules for definition of execution formulas.



Definition 10. The set of execution formulas L for an IA S = ⟨Q, q0, AI, AO, AH, -→⟩ is the smallest set satisfying the rules in Table 1.




(T)    q |= T         for all q ∈ Q
(0)    q |= 0         if q -a→⁄ for all a ∈ A
(a)    q |= aϕ        if a ∈ AI ∪ AO and ∃q′ ∈ Q: q -a→ q′ and q′ |= ϕ
(ε⁄)    q |= ϕ         if a ∈ AH and ∃q′ ∈ Q: q -a→ q′ and q′ |= ϕ
(ε)    q |= εϕ        if a ∈ AH and ∃q′ ∈ Q: q -a→ q′ and q′ |= ϕ
(⇄)    q |= aϕ        if a ∈ AI - I(q) and q |= ϕ
(F)    q |= ⁄a         if a ∈ AI - I(q)
(FT)   q |= ⁄aϕ        if a ∈ AI - I(q) and q |= ϕ
(RT)   q |= Xϕ        if X = I(q) and q |= ϕ
(∧)    q |= ∧i∈I ϕi   if q |= ϕi for all i ∈ I
(¬)    q |= ¬ϕ        if q ⁄|= ϕ

Table 2: Semantic of the observations



Definition 11. Given an IA S = ⟨Q, q0, A^I, A^O, A^H, →⟩ and a notion of observability V, the satisfaction relation |=_V ⊆ Q × L is defined for each type of observation in V by the clauses in Table 2. The observable behavior of an IA S with notion of observability V is O_V(S) = {ϕ ∈ L : q0 |=_V ϕ}.
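To make the satisfaction relation concrete, the following Python sketch checks satisfaction over a finite IA for a fragment of Table 2 (clauses (T), (a), (ε⁄), (∧), (¬)). The IA is represented as a set of (source, action, target) triples plus the set of hidden actions; the tuple encoding of formulas and all names are illustrative assumptions, not notation from the paper.

```python
# Hedged sketch: satisfaction check for the fragment {T, a, eps-invisible, and, not}
# of Table 2. Hidden steps are invisible: a state satisfies a formula if some
# state in its hidden-closure satisfies it directly (clause (eps-slash)).

def closure(q, trans, AH):
    """States reachable from q through zero or more hidden transitions."""
    seen, stack = {q}, [q]
    while stack:
        p = stack.pop()
        for (q1, a, q2) in trans:
            if q1 == p and a in AH and q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return seen

def sat(q, phi, trans, AH):
    """q |= phi, where hidden steps may be taken invisibly first."""
    return any(_direct(p, phi, trans, AH) for p in closure(q, trans, AH))

def _direct(p, phi, trans, AH):
    kind = phi[0]
    if kind == "T":                         # clause (T): true in every state
        return True
    if kind == "act":                       # clause (a): visible action prefix
        _, a, rest = phi
        return any(sat(q2, rest, trans, AH)
                   for (q1, b, q2) in trans if q1 == p and b == a)
    if kind == "and":                       # clause (and): all conjuncts hold
        return all(sat(p, f, trans, AH) for f in phi[1])
    if kind == "not":                       # clause (not)
        return not sat(p, phi[1], trans, AH)
    raise ValueError("unsupported formula: %r" % (phi,))
```

For instance, on the IA s0 -h→ s1 -a!→ s2 with h hidden, the formula a!T is satisfied at s0 because the hidden step is invisible.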

4 Non-interference based on Notions of Observability.


First we introduce a general notion of non-interference. Informally, non-interference states that users with no appropriate permission cannot deduce any kind of confidential information or activity by only interacting through low actions. Since it is expected that a low-level user cannot distinguish the occurrence of high actions, the system has to behave the same when high actions are not performed or when high actions are considered as hidden actions. Hence, restriction and hiding are central to our definitions of security.


Definition 12. Given an IA S and a set of actions X ⊆ A^I_S ∪ A^O_S, define:

  • the restriction of X in S by S\X = ⟨Q_S, q0_S, A^I_S - X, A^O_S - X, A^H_S, →_{S\X}⟩, where q -a→_{S\X} q′ iff q -a→_S q′ and a ∉ X.
  • the hiding of X in S by S∕X = ⟨Q_S, q0_S, A^I_S - X, A^O_S - X, A^H_S ∪ X, →_S⟩.

Given an ISS S = ⟨S, A^h_S, A^l_S⟩, define the restriction of X in S by S\X = ⟨S\X, A^h_S - X, A^l_S - X⟩ and the hiding of X in S by S∕X = ⟨S∕X, A^h_S - X, A^l_S - X⟩.
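Both operations of Definition 12 are purely syntactic and can be implemented directly on a finite IA. The sketch below uses a minimal ad-hoc representation (the class name IA and its fields are illustrative, not from the paper): restriction deletes the X-labelled transitions, while hiding keeps them but moves X to the hidden alphabet.

```python
# Hedged sketch of Definition 12 on a finite IA. The representation
# (frozen dataclass with explicit alphabets and a transition set) is an
# illustrative assumption.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class IA:
    states: frozenset
    init: str
    AI: frozenset      # input actions
    AO: frozenset      # output actions
    AH: frozenset      # hidden actions
    trans: frozenset   # set of (source, action, target) triples

def restrict(S, X):
    """S\\X: remove X from the visible alphabets and drop X-labelled transitions."""
    return replace(S, AI=S.AI - X, AO=S.AO - X,
                   trans=frozenset(t for t in S.trans if t[1] not in X))

def hide(S, X):
    """S/X: move X from the visible alphabets to the hidden actions; keep transitions."""
    return replace(S, AI=S.AI - X, AO=S.AO - X, AH=S.AH | X)
```

Note the asymmetry that drives the non-interference definitions below: hiding preserves all behavior (only its visibility changes), whereas restriction removes behavior.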


Definition 13. Let S = ⟨S,Ah,Al⟩ be an ISS and V  a notion of observability, then:

  • S is V strong non-deterministic non-interference (V-SNNI) if O_V(S∕A^h) = O_V(S\A^h).
  • S is V non-deterministic non-interference (V-NNI) if O_V(S∕A^h) = O_V((S\A^{h,I})∕A^{h,O}).

Notice the difference between the two definitions. V-SNNI formalizes the security property as described so far: a system satisfies V-SNNI if a low-level user cannot distinguish (up to the notion of observability V), by means of low-level actions (the only visible ones), whether the system performs high actions (so they are hidden) or not (high actions are restricted). In the definition of V-NNI, only high input actions are restricted, since the low-level user cannot provide this type of action; high output actions are only hidden, since they can still occur autonomously. The second notion is considered as it seems appropriate for IA, where only input actions are controllable.

The approach of non-interference based on notions of observability generalizes other notions of non-interference for IA: for example, Non-deterministic Non-Interference (NNI) and Strong Non-deterministic Non-Interference (SNNI), both based on trace equivalence, and Bisimulation NNI (BNNI) and Bisimulation SNNI (BSNNI), both based on bisimulation equivalence. To prove this statement, we recall the definitions of trace equivalence, weak bisimulation and the non-interference properties.


Definition 14. Let S  and T  be two IA. S  and T  are trace equivalent, notation S ≈T T  , if Traces(S ) = Traces(T)  . We say that two ISS S and T are trace equivalent, and write S ≈T T , whenever the underlying IA are trace equivalent.
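For finite examples, trace equivalence can be tested by enumerating visible traces up to a bound; the sketch below treats hidden actions as contributing nothing to the trace. The bounded comparison is only an approximation of trace equivalence in general (it is exact for acyclic IA once k exceeds the longest path); the representation and names are illustrative assumptions.

```python
# Hedged sketch: the set of visible traces of length <= k of a finite IA,
# given as a set of (source, action, target) triples and a set AH of hidden
# actions. Hidden steps extend the exploration but not the observed trace.

def traces(init, trans, AH, k):
    seen = {((), init)}        # (visible trace so far, current state)
    stack = [((), init)]
    out = {()}                 # the empty trace is always observable
    while stack:
        tr, q = stack.pop()
        for (q1, a, q2) in trans:
            if q1 != q:
                continue
            tr2 = tr if a in AH else tr + (a,)
            if len(tr2) <= k and (tr2, q2) not in seen:
                seen.add((tr2, q2))
                out.add(tr2)
                stack.append((tr2, q2))
    return out
```

Comparing traces of S∕A^h and S\A^h with this function decides trace-based SNNI on small acyclic examples: hiding a high action keeps its continuation observable, while restricting it removes the continuation altogether.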


Definition 15. Let S  and T  be two IA. A relation R ⊆ QS × QT  is a (weak) bisimulation between S  and T  if s0 R t0   and, for all s ∈ QS  and t ∈ QT  , s R t  implies:

  • for all a ∈ A_S and s′ ∈ Q_S, s -a→_S s′ implies that there exists t′ ∈ Q_T s.t. t =â⇒_T t′ and s′ R t′; and
  • for all a ∈ A_T and t′ ∈ Q_T, t -a→_T t′ implies that there exists s′ ∈ Q_S s.t. s =â⇒_S s′ and s′ R t′.

We say that S  and T  are bisimilar, notation S ≈ T  , if there is a bisimulation between S  and T  . Moreover, we say that two ISS S and T are bisimilar, and write S ≈ T , whenever the underlying IA are bisimilar.
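On a finite IA, the largest weak bisimulation can be computed as a greatest fixpoint: start from all pairs of states and repeatedly discard pairs violating one of the two transfer conditions above. The sketch below is a naive version suitable only for tiny examples; names and representation are illustrative assumptions.

```python
# Hedged sketch: largest weak bisimulation over the states of one finite IA,
# by greatest-fixpoint pruning. trans is a set of (source, action, target)
# triples; AH is the set of hidden actions.

def eps_reach(q, trans, AH):
    """States reachable from q via zero or more hidden transitions."""
    seen, stack = {q}, [q]
    while stack:
        p = stack.pop()
        for (q1, a, q2) in trans:
            if q1 == p and a in AH and q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return seen

def weak_succ(q, a, trans, AH):
    """Targets of q ==a-hat==>: eps* a eps* for visible a, eps* for hidden a."""
    pre = eps_reach(q, trans, AH)
    if a in AH:
        return pre
    mid = {q2 for p in pre for (q1, b, q2) in trans if q1 == p and b == a}
    return {r for m in mid for r in eps_reach(m, trans, AH)}

def weak_bisim(states, trans, AH):
    R = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            ok = all(any((s2, t2) in R for t2 in weak_succ(t, a, trans, AH))
                     for (q1, a, s2) in trans if q1 == s)
            ok = ok and all(any((s2, t2) in R for s2 in weak_succ(s, a, trans, AH))
                            for (q1, a, t2) in trans if q1 == t)
            if not ok:
                R.discard((s, t))
                changed = True
    return R
```

Two IA can be compared by running this on their disjoint union and checking that the pair of initial states survives the pruning.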


Definition 16. Let S = ⟨S,Ah,Al⟩ be an ISS.

1.
S satisfies strong non-deterministic non-interference (SNNI) if S\A^h ≈_T S∕A^h.
2.
S satisfies non-deterministic non-interference (NNI) if (S\A^{h,I})∕A^{h,O} ≈_T S∕A^h.
3.
S satisfies bisimulation-based strong non-deterministic non-interference (BSNNI) if S\A^h ≈ S∕A^h.
4.
S satisfies bisimulation-based non-deterministic non-interference (BNNI) if (S\A^{h,I})∕A^{h,O} ≈ S∕A^h.

We now show how to represent these notions of security with notions of observability.


Theorem 1. Let S = ⟨S,Ah,Al⟩ be an ISS then

1.
S is (S)NNI iff S is V-(S)NNI with V = {a, T, ε⁄, ⇄⁄}.
2.
S is B(S)NNI iff S is V-(S)NNI with V = {a, T, ε⁄, ⇄⁄, ∧, ¬}.


Proof. First we prove (2). For this, we have to show that for all states s ∈ Q_S and t ∈ Q_T it holds that s ≈ t iff O_V(s) = O_V(t). (⇒) Suppose s ≈ t and ϕ ∈ O_V(s). Let f : L → ℕ be the function defined as:

f(T) = f(0) = f(⁄a) = 0
f(aϕ) = f(⁄aϕ) = f(Xϕ) = f(¬ϕ) = f(ϕ) + 1                    (1)
f(∧_{i∈I} ϕ_i) = max_i(f(ϕ_i)) + 1

We define f on all of L since we will make use of it again later. We proceed by complete induction. In the base case f(ϕ) = 0, so ϕ = T because V = {a, T, ε⁄, ⇄⁄, ∧, ¬}, and since T is an observation of every state, ϕ ∈ O_V(t). For the inductive step, suppose that if s ≈ t then, whenever f(ϕ) ≤ k and ϕ ∈ O_V(s), it holds that ϕ ∈ O_V(t). Let f(ϕ) = k + 1; we do case analysis according to the shape of the formula. Suppose ϕ = aϕ′ with a ∈ A^I ∪ A^O. Then s |= aϕ′ implies s =a⇒ s′ and ϕ′ ∈ O_V(s′) (see (a) and (ε⁄) in Table 2). Since s ≈ t, there is a state t′ such that t =a⇒ t′ and s′ ≈ t′. By induction ϕ′ ∈ O_V(t′), therefore aϕ′ ∈ O_V(t). Now let ϕ = ∧_i ϕ_i. Since f(ϕ_i) ≤ k for all i, by induction ϕ_i ∈ O_V(t). Therefore ϕ = ∧_i ϕ_i ∈ O_V(t). Finally, suppose ϕ = ¬ϕ′; then f(ϕ′) = k and s ⁄|= ϕ′, and by induction t ⁄|= ϕ′. Therefore t |= ¬ϕ′, i.e. ϕ ∈ O_V(t). The other cases are outside of the observations defined by V. The symmetric case is analogous.

(⇐) Let O_V(s) = O_V(t) and s -a→ s′. We have to show that there is t′ such that t =a⇒ t′ and O_V(s′) = O_V(t′). Since O_V(s) = O_V(t) we have t =a⇒. Let Q be {t′ : t =a⇒ t′}. If for all t′ ∈ Q it holds that O_V(s′) ≠ O_V(t′), then for each t′ there is ϕ_{t′} ∈ O_V(s′) - O_V(t′) (as a consequence of (¬)). Then for any t′ ∈ Q it holds that ∧_{q∈Q} ϕ_q ∈ O_V(s′) - O_V(t′) (at least one ϕ_q fails). But then a∧_{q∈Q} ϕ_q ∈ O_V(s) - O_V(t), contradicting O_V(s) = O_V(t). The symmetric case is analogous.

To prove (1), we show that given two IA S and T it holds that S ≈_T T iff O_V(S) = O_V(T). We reduce this to proving that ϕ ∈ Traces(S) iff ϕT ∈ O_V(S). This proof is straightforward. ∎


The relation between V-SNNI and V-NNI depends on the notion of observability V. In general, we can only ensure that V-NNI is not stronger than V-SNNI, for all V.


Theorem 2. For every notion of observability V there is an ISS S such that S is V-NNI and S is not V-SNNI.



Proof. Let S be the following ISS: s0 -H!→ s1 -a→ s2, with a ∈ A^I ∪ A^O. Notice that S is always V-NNI. On the other hand, S is not V-SNNI: if ε⁄ ∈ V then aT ∈ O_V(S∕A^h) and aT ∉ O_V(S\A^h); if ε ∈ V then εaT ∈ O_V(S∕A^h) and εaT ∉ O_V(S\A^h). ∎


This result is not novel. In [8], it is shown that SNNI is stronger than NNI. Since trace semantics is the coarsest sensible semantics on labeled transition systems, it is natural that the result holds for every other semantics. Theorem 2 only formalizes this fact for IA semantics.

The other relations depend on V; we state them in the following two theorems, preceded by an auxiliary lemma.


Lemma 1. Let S be an IA and V a notion of observability such that {0, ¬} ∩ V = ∅. Let S′ be an IA obtained by removing a set of internal transitions from S. Then O_V(S) ⊇ O_V(S′).



Proof. The proof is straightforward by induction on f(ϕ), where ϕ ∈ O_V(S′) and f is the function defined in (1). ∎


Theorem 3. Let S be an ISS and V  a notion of observability such that {0,¬ }∩ V = ∅ . If S is V  -SNNI then S is V  -NNI.



Proof. If S is V-SNNI then O_V(S∕A^h) = O_V(S\A^h). Notice that S\A^h is obtained by removing some hidden transitions from (S\A^{h,I})∕A^{h,O}, hence O_V((S\A^{h,I})∕A^{h,O}) ⊇ O_V(S\A^h) by Lemma 1, and therefore O_V((S\A^{h,I})∕A^{h,O}) ⊇ O_V(S∕A^h). On the other hand, (S\A^{h,I})∕A^{h,O} is obtained by removing some hidden transitions from S∕A^h, so O_V(S∕A^h) ⊇ O_V((S\A^{h,I})∕A^{h,O}) by Lemma 1. Both inclusions imply O_V(S∕A^h) = O_V((S\A^{h,I})∕A^{h,O}). ∎



 

[Diagram of the ISS S used in the proof of Theorem 4.]

Figure 3: S is V-SNNI does not imply S is V-NNI if V ∩ {0, ¬} ≠ ∅.

 



Theorem 4. For every notion of observability V such that V ∩ {0, ¬} ≠ ∅ there is an ISS S such that S is V-SNNI and S is not V-NNI.



Proof. Define S as the ISS in Figure 3, with a ∈ A^I ∪ A^O. Clearly S is V-SNNI for all V. Suppose ε⁄ ∈ V: if ¬ ∈ V then a¬a ∈ O_V((S\A^{h,I})∕A^{h,O}) while a¬a ∉ O_V(S\A^h); if 0 ∈ V then a0 ∈ O_V((S\A^{h,I})∕A^{h,O}) while a0 ∉ O_V(S\A^h). Then S is not V-NNI for any V such that V ∩ {0, ¬} ≠ ∅. The case ε ∈ V is analogous. ∎


The approach based on notions of observability also allows us to show that security properties are not preserved by composition.


 

[Diagrams of the ISS S and T and their composition S ∥ T used in the proof of Theorem 5.]

Figure 4: V  -SNNI and V  -NNI properties are not preserved by composition.
 



Theorem 5. For every notion of observability V there are ISS S and T such that S and T are V-(S)NNI and composable, and the composition S ∥ T is not V-(S)NNI.



Proof. Let S and T be the ISS depicted in Figure 4. Both interfaces are V-(S)NNI for every notion of observability V, but S ∥ T is not. If ε⁄ ∈ V then b?T ∈ O_V((S ∥ T)∕A^h), while if ε ∈ V then εb?T ∈ O_V((S ∥ T)∕A^h). In either case, O_V((S ∥ T)\A^h) = O_V(((S ∥ T)\A^{h,I})∕A^{h,O}) and b?T, εb?T ∉ O_V((S ∥ T)\A^h). Then S ∥ T is not V-(S)NNI. ∎

5 Non-interference based on refinement.


In [9], we presented definitions of non-interference based on refinement. The new versions of non-interference were introduced to solve some shortcomings detected in the definitions of non-interference based on bisimulation of [6], i.e. BSNNI and BNNI. In this section we review the results obtained.


 

[Diagrams of the two interfaces discussed in the text.]

Figure 5: In these interfaces, BSNNI and BNNI are not appropriate properties to denote security.
 



To address the shortcomings detected in the B(S)NNI properties, a variation of non-interference based on refinement was introduced. These variants are obtained from the definitions of BSNNI and BNNI by replacing weak bisimulation by a new relation. Under this new relation, two states s and t are related if they are able to receive the same input actions; in addition, for every output transition that t can execute, the state s can execute zero or more hidden transitions before executing the same output; finally, all hidden transitions that t can execute can be “matched” by s with zero or more hidden transitions. In all cases, the reached states have to be related as well. In this way, state t does not reveal new visible behavior w.r.t. state s. Formally:


Definition 17. Given two IA S and T, a relation ≽ ⊆ Q_S × Q_T is a Strict Input Refinement (SIR) of S by T if q0_S ≽ q0_T and for all q_S ≽ q_T it holds:

(a)
∀a ∈ A^I_S, q′_S ∈ Q_S: if q_S -a→_S q′_S then ∃q′_T ∈ Q_T: q_T -a→_T q′_T and q′_S ≽ q′_T;
(b)
∀a ∈ A^I_T, q′_T ∈ Q_T: if q_T -a→_T q′_T then ∃q′_S ∈ Q_S: q_S -a→_S q′_S and q′_S ≽ q′_T;
(c)
∀a ∈ A^O_T, q′_T ∈ Q_T: if q_T -a→_T q′_T then ∃q′_S ∈ Q_S: q_S =ε⇒_S -a→_S q′_S and q′_S ≽ q′_T;
(d)
∀a ∈ A^H_T, q′_T ∈ Q_T: if q_T -a→_T q′_T then ∃q′_S ∈ Q_S: q_S =ε⇒_S q′_S and q′_S ≽ q′_T.

We say that S is refined (strictly on inputs) by T, or that T refines (strictly on inputs) S, notation S ≽ T, if there is a SIR ≽ s.t. S ≽ T. Let S and T be two ISS; we write S ≽ T if the underlying IA satisfy S ≽ T.
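The greatest relation satisfying the transfer conditions (a)-(d) of Definition 17 can be computed on finite IA by iterated pruning, analogously to a bisimulation fixpoint. The sketch below is a naive illustration; the representation (transition sets, shared alphabets AI/AO/AH) and all names are assumptions for the example.

```python
# Hedged sketch: greatest fixpoint of the SIR transfer conditions (Def. 17)
# between two finite IA. S ≽ T holds iff the pair of initial states survives.

def eps_reach(q, trans, AH):
    """States reachable from q via zero or more hidden transitions."""
    seen, stack = {q}, [q]
    while stack:
        p = stack.pop()
        for (q1, a, q2) in trans:
            if q1 == p and a in AH and q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return seen

def succ(trans, q, a):
    return {q2 for (q1, b, q2) in trans if q1 == q and b == a}

def sir(QS, transS, QT, transT, AI, AO, AH):
    R = {(s, t) for s in QS for t in QT}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            ok = True
            for (q1, a, s2) in transS:       # (a): inputs of S, matched strongly
                if q1 == s and a in AI:
                    ok = ok and any((s2, t2) in R for t2 in succ(transT, t, a))
            for (q1, a, t2) in transT:
                if q1 != t:
                    continue
                if a in AI:                  # (b): inputs of T, matched strongly
                    ok = ok and any((s2, t2) in R for s2 in succ(transS, s, a))
                elif a in AO:                # (c): outputs of T, matched by => a ->
                    ok = ok and any((s2, t2) in R
                                    for p in eps_reach(s, transS, AH)
                                    for s2 in succ(transS, p, a))
                else:                        # (d): hidden moves of T, matched by =>
                    ok = ok and any((s2, t2) in R
                                    for s2 in eps_reach(s, transS, AH))
            if not ok:
                R.discard((s, t))
                changed = True
    return R
```

For instance, with S a single state without transitions and T performing a hidden step followed by an output, the initial pair is pruned via condition (c), reflecting that S cannot match T's output.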


The definition of SIR is based on the definition of refinement of [5]; only restriction (b) is new with respect to the original version. Based on this relation, the refinement-based non-interference properties, called SIR-NNI and SIR-SNNI, are defined.


Definition 18. Let S be an ISS. (i) S is SIR-based strong non-deterministic non-interference (SIR-SNNI) if S\A^h ≽ S∕A^h. (ii) S is SIR-based non-deterministic non-interference (SIR-NNI) if (S\A^{h,I})∕A^{h,O} ≽ S∕A^h.


This new formalization of security ensures that, under the presence of high-level activity, no new information is revealed to low users w.r.t. the system with only low activity, because the interface S\A^h (resp. (S\A^{h,I})∕A^{h,O}) is refined by S∕A^h.

Now we show that there is a notion of observability V such that V-(S)NNI is equivalent to SIR-(S)NNI. To prove the result we need the following theorem:


Theorem 6. Given two IA S and T, S is refined strictly on inputs by T, i.e. S ≽ T, iff O_V(S) ⊇ O_V(T), with V = {a, T, ε⁄, RT, ∧}.



Proof. For this, we have to show that for all states s ∈ Q_S and t ∈ Q_T it holds that s ≽ t iff O_V(s) ⊇ O_V(t). (⇒) Suppose s ≽ t and ϕ ∈ O_V(t). Let f : L → ℕ be the function defined in (1). We proceed by complete induction. In the base case f(ϕ) = 0, so ϕ = T because V = {a, T, ε⁄, RT, ∧}, and since T is an observation of every state, ϕ ∈ O_V(s). Inductive case: suppose that if s ≽ t then, whenever f(ϕ) ≤ k and ϕ ∈ O_V(t), it holds that ϕ ∈ O_V(s). Let f(ϕ) = k + 1; we do case analysis according to the shape of the formula. Suppose ϕ = Xϕ′. Since t |= Xϕ′, we have t |= ϕ′. Moreover, s ≽ t implies I(s) = I(t), and therefore s |= Xϕ′ using induction. The cases aϕ′ and ∧_i ϕ_i are analogous to the respective cases in the proof of Theorem 1.

(⇐) Let O_V(s) ⊇ O_V(t). Case t -a?→ t′: we have to show that there is s′ such that s -a?→ s′ and O_V(s′) ⊇ O_V(t′). If s -a?→⁄ then I(s) ≠ I(t), and therefore O_V(s) ⊉ O_V(t) because t |= I(t)T and s ⁄|= I(t)T. Let s′ be such that s -a?→ s′; notice that s′ is unique because IA are input deterministic. If O_V(s′) ⊉ O_V(t′) there is ϕ′ ∈ O_V(t′) - O_V(s′). This implies a?ϕ′ ∈ O_V(t) - O_V(s), and we get a contradiction. In the case s -a?→ s′, we have to show that there is t′ such that t -a?→ t′ and O_V(s′) ⊇ O_V(t′); this proof is similar to the previous one. Let now t -a!→ t′; we have to show that there is s′ such that s ⇒ -a!→ s′ and O_V(s′) ⊇ O_V(t′). Let Q be {s′ : s ⇒ -a!→ s′}. If for all s′ ∈ Q it holds that O_V(s′) ⊉ O_V(t′), then there is ϕ_{s′} ∈ O_V(t′) - O_V(s′). Then for any s′ ∈ Q it holds that ∧_{q∈Q} ϕ_q ∈ O_V(t′) - O_V(s′) (at least one ϕ_q fails). But then a∧_{q∈Q} ϕ_q ∈ O_V(t) - O_V(s), contradicting O_V(s) ⊇ O_V(t). The case t -a;→ t′ is analogous. ∎


Now we are able to show the statement.


Lemma 2. An ISS S is SIR-(S)NNI iff S is {a, T, ε⁄, RT, ∧}-(S)NNI.



Proof. If S is SIR-SNNI then S\A^h ≽ S∕A^h. By Theorem 6 we have O_V(S\A^h) ⊇ O_V(S∕A^h). On the other hand, by Lemma 1 we have O_V(S∕A^h) ⊇ O_V(S\A^h). Finally, O_V(S∕A^h) = O_V(S\A^h). The case where S is SIR-NNI is analogous. ∎


Two properties about SIR-NNI and SIR-SNNI were introduced in [9]. The first one: if an ISS is SIR-(S)NNI then it is (S)NNI. This is straightforward using their respective equivalent definitions via notions of observability, i.e. {a, T, ε⁄, RT, ∧}-(S)NNI and {a, T, ε⁄, ⇄⁄}-(S)NNI. The second one: if an ISS is SIR-SNNI then it is SIR-NNI. This is a particular case of Theorem 3.

5.1 Composition


Theorem 5 shows that non-interference properties are not preserved by composition for every notion of observability V. This implies that the SIR-SNNI and SIR-NNI properties are not preserved by composition.

Despite this, we give sufficient conditions to ensure that the composition of ISS results in a non-interferent ISS (always with respect to SIR-SNNI and SIR-NNI). Basically, these conditions require that (i) the component ISS are fully compatible, i.e. no error state is reached in the composition (in any way, not only autonomously), and (ii) they do not use confidential actions to synchronize. This is stated in the following theorem.


Theorem 7. Let S = ⟨S, A^h_S, A^l_S⟩ and T = ⟨T, A^h_T, A^l_T⟩ be two composable ISS such that shared(S,T) ∩ (A^h_S ∪ A^h_T) = ∅. If S ⊗ T has no reachable error states and S and T satisfy SIR-SNNI (resp. SIR-NNI) then S ∥ T satisfies SIR-SNNI (resp. SIR-NNI).



Proof. Define ≽ by (s_r, t_r) ≽ (s_a, t_a) iff s_r ≽_S s_a and t_r ≽_T t_a, with ≽_S being a SIR between S\A^h_S and S∕A^h_S, and similarly for ≽_T. We show that ≽ is a SIR between (S ∥ T)\A^h and (S ∥ T)∕A^h, where A^h = (A^h_S ∪ A^h_T) - shared(S,T) = A^h_S ∪ A^h_T.

Suppose (s_r, t_r) ≽ (s_a, t_a). We proceed by case analysis on the different transfer properties of Def. 17. For case (a), suppose (s_r, t_r) -a?→ (s′_r, t_r) and s_r ≽_S s_a. Then there is s′_a such that s_a -a?→ s′_a and s′_r ≽_S s′_a. As a consequence of the absence of error states in the product, we can ensure (s_a, t_a) -a?→ (s′_a, t_a) and (s′_r, t_r) ≽ (s′_a, t_a). The case (s_r, t_r) -a?→ (s_r, t′_r) is analogous. In the same way we prove that condition (b) holds. For condition (c), let (s_a, t_a) -a!→ (s′_a, t_a) and s_r ≽_S s_a. Then there is s′_r such that s_r ⇒ -a!→ s′_r and s′_r ≽_S s′_a. Let ŝ be a state s.t. s_r ⇒ ŝ -a!→ s′_r. Notice that all internal transitions used to reach ŝ in S\A^h can be executed in (S ∥ T)\A^h. Then (s_r, t_r) ⇒ (ŝ, t_r) -a!→ (s′_r, t_r) and (s′_r, t_r) ≽ (s′_a, t_a). The case (s_a, t_a) -a!→ (s_a, t′_a) is analogous. We finally prove that condition (d) holds. The cases (s_a, t_a) -ε→ (s′_a, t_a) and (s_a, t_a) -ε→ (s_a, t′_a) are similar to the previous one. Suppose now (s_a, t_a) -ε_c→ (s′_a, t′_a), where ε_c is an internal action resulting from a synchronization between S and T on a common action c. Notice that c ∈ A^l_S ∩ A^l_T. W.l.o.g. suppose s_a -c?→ s′_a and t_a -c!→ t′_a. Repeating the previous reasoning, we can ensure there is a state t̂ such that (s_r, t_r) ⇒ (s_r, t̂) -c;→ (s′_r, t′_r) and (s′_r, t′_r) ≽ (s′_a, t′_a). ∎


This result is useful when we develop all the components of a complex system. As we have total control of each component's design, it is possible to achieve full compatibility. In this way, to ensure that the composed system is secure, we only have to develop secure components s.t. every high action of a component is a high action of the final system. This result can also be used when we are not in control of all components, i.e. when we want to use components not developed by us. The idea is simple: given two ISS, define the high actions used in the communication process as low, and check whether the resulting ISS satisfy the hypothesis of Theorem 7.


Corollary 1. Let S = ⟨S, A^h_S, A^l_S⟩ and T = ⟨T, A^h_T, A^l_T⟩ be two composable ISS. Let S′ = ⟨S, A^h_S - shared(S,T), A^l_S ∪ shared(S,T)⟩ and T′ = ⟨T, A^h_T - shared(S,T), A^l_T ∪ shared(S,T)⟩. If S ⊗ T has no reachable error states and S′ and T′ satisfy SIR-SNNI (resp. SIR-NNI) then S ∥ T satisfies SIR-SNNI (resp. SIR-NNI).


This result is based on the fact that the actions used in the synchronization become hidden in the composition, so the confidentiality level of those actions is not important.

5.2 Deriving Secure Interfaces


As we have seen, the composition of secure interfaces may yield a new insecure interface. This may happen when the components are already available but were designed independently and were not meant to interact. The question that arises, then, is whether there is a way to derive a secure interface out of an insecure one. To derive the secure interface, we adapt the idea used to define ISS composition (see Def. 6); i.e. we restrict some input transitions in order to avoid insecure behavior. We then obtain a composed system that offers fewer services than the original one but is secure. In this section we present an algorithm to derive an ISS satisfying SIR-SNNI (or SIR-NNI) from a given ISS, whenever possible. Since the method is similar in both cases, we focus on SIR-SNNI.

This algorithm is based on the algorithm presented in [6] to derive interfaces that satisfy BSNNI/BNNI, which in turn is based on the algorithm for bisimulation checking of [10]. The differences between both algorithms are a consequence of the definition of SIR, but the idea behind the procedure is the same. The new algorithm works as follows: given two interfaces V and V′, the second without high actions, (i) V is semi-saturated by adding all weak transitions ⇒ -a→; (ii) a semi-synchronous product of V and V′ is constructed, where transitions synchronize whenever they have the same label and satisfy some particular conditions; (iii) whenever there is a mismatching transition, a new transition is added to the product leading to a special fail state; (iv) if reaching a fail state is inevitable then V ⋡ V′; if there is always a way to avoid reaching a fail state, then V ≽ V′. We later properly define semi-saturation, the semi-synchronous product, and what it means to inevitably reach a fail state. In this way, given an ISS S, we can check whether S\A^h ≽ S∕A^h; if the check succeeds, then S satisfies SIR-SNNI (see Theorem 8). If it does not succeed, then we provide an algorithm to decide whether S can be transformed into a secure ISS by controlling (i.e. pruning) input transitions. This decision mechanism categorizes insecure interfaces in two different classes: the class of interfaces that can surely be transformed into a secure one, and the class in which this is not possible.

The algorithm to synthesize the secure ISS (once it is decided that this is possible) selects an input transition to prune, prunes it, and checks whether the resulting ISS is secure. If it is not, a new input transition is selected and pruned. The process is repeated until a secure interface is obtained. This process is shown to terminate (see Theorem 9).
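The synthesis loop just described can be sketched generically. In the sketch below, `eliminable` and `is_secure` are hypothetical callbacks standing in for the set EC(S) of eliminable candidates and the SIR-relation test defined later; they are assumptions for illustration, not names from the paper.

```python
# Hedged sketch of the synthesis loop: repeatedly prune one eliminable input
# transition until the security check passes, or report failure when no
# candidate remains. Transitions are (source, action, target) triples.

def synthesize(transitions, eliminable, is_secure):
    trans = set(transitions)
    while not is_secure(trans):
        candidates = eliminable(trans)
        if not candidates:
            return None        # the ISS cannot be made secure by pruning
        trans.discard(next(iter(candidates)))
    return trans
```

Termination holds because each iteration strictly decreases the finite transition set, mirroring the argument behind Theorem 9.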

Checking Strict Input Refinement.  Different labels for internal actions do not play any role in a SIR relation. Hence, to simplify, we replace all labels of internal actions with two new ones: ε and ε′. The label ε′ is used to represent an internal transition that can be removed; in our context, an internal action can be removed because it is a high input action that was hidden in order to check for security. The label ε is used to identify internal actions that cannot be removed. This is formalized in the following definition, which includes self-loops with ε and ε′ for future simplifications.


Definition 19. Let S be an IA and B ⊆ A^H_S. Define S marking B, or the marking of B in S, as the IA S_B = ⟨Q_S, q0_S, A^I_S, A^O_S, {ε, ε′}, →_{S_B}⟩, where →_{S_B} is the least relation satisfying the following rules:

  q -a→_{S_B} q′   if q -a→_S q′ and a ∈ A^I_S ∪ A^O_S
  q -ε→_{S_B} q   and   q -ε′→_{S_B} q   for every q
  q -ε′→_{S_B} q′   if q -a→_S q′ and a ∈ B
  q -ε→_{S_B} q′   if q -a→_S q′ and a ∈ A^H_S - B

Given an ISS S, the marking of B in S, notation S_B, is the ISS obtained after marking B in the underlying IA.

A natural way to check weak bisimulation is to saturate the transition system, i.e., to add a new transition q -a→ q′ to the model for each weak transition q =a⇒ q′, and then check strong bisimulation on the saturated transition system. Applying a similar idea, we can check whether there is a SIR relation: we add a transition q -a→ q′ whenever q ⇒ -a→ q′, with a an output action. We call this process semi-saturation.


Definition 20. Let S be an IA such that A^H_S = {ε, ε′}. The semi-saturation of S is the IA S̄ = ⟨Q_S, q0_S, A^I_S, A^O_S, {ε, ε′}, →_S̄⟩, where →_S̄ is the smallest relation satisfying the following rules:

  q -a→_S̄ q′   if q -a→_S q′
  q -a→_S̄ q″   if q -ε→_S q′, q′ -a→_S̄ q″ and a ∈ A^O_S

Given an ISS S, its semi-saturation, S̄, is the ISS obtained by semi-saturating the underlying IA.


The last definition ensures that, for a ∈ A^O, q ⇒ -a→_S q′ iff q -a→_S̄ q′.
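The saturation rule of Definition 20 can be computed by a simple fixpoint iteration. In the sketch below, the set `eps` of hidden labels that trigger saturation is left as a parameter (the definition uses the label ε); the representation and names are illustrative assumptions.

```python
# Hedged sketch of semi-saturation (Definition 20): add q -a-> q'' whenever
# q takes a saturating hidden step to q' and q' -a-> q'' is already in the
# saturated relation, with a an output action; iterate to a fixpoint.

def semi_saturate(trans, AO, eps):
    sat = set(trans)
    changed = True
    while changed:
        changed = False
        for (q, e, q1) in list(sat):
            if e not in eps:
                continue
            for (p, a, q2) in list(sat):
                if p == q1 and a in AO and (q, a, q2) not in sat:
                    sat.add((q, a, q2))
                    changed = True
    return sat
```

Because the second premise of the rule refers to the saturated relation itself, the fixpoint automatically closes over chains of hidden steps.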

Following [6] and [10], the definition of the synchronous product follows from the conditions of the relation being checked, in this case SIR. First we recapitulate these conditions and then we present the formal definition. If S ≽ T then, for two states s ∈ Q_S and t ∈ Q_T s.t. s ≽ t, every output/hidden action that t can execute has to be simulated by s (possibly using internal actions); on the other hand, t is not forced to simulate output/hidden actions of s. Finally, both states have to simulate every input action that the other one can execute, without previously performing any internal action. All these restrictions become evident from the definition of SIR. When a condition is not satisfied, a transition to a special state fail is created. Taking this into account, we define the semi-synchronous product.


Definition 21. Let S be a semi-saturated IA and T be an IA such that A^X_S = A^X_T = A^X for X ∈ {I, O} and A^H_S = A^H_T = {ε, ε′}. The semi-synchronous product of S and T is the IA S × T = ⟨(Q_S × Q_T) ∪ {fail}, (q0_S, q0_T), A^I, A^O, {ε, ε′}, →_{S×T}⟩, where →_{S×T} is the smallest relation satisfying the following rules:

  (q_S, q_T) -a→_{S×T} (q′_S, q′_T)    if q_S -a→_S q′_S and q_T -a→_T q′_T
  (q_S, q_T) -ε′→_{S×T} (q′_S, q′_T)   if q_S -ε′→_S q′_S and q_T -ε→_T q′_T
  (q_S, q_T) -ε′→_{S×T} (q′_S, q′_T)   if q_S -ε→_S q′_S and q_T -ε′→_T q′_T
  (q_S, q_T) -a→_{S×T} fail            if q_S -a→_S, q_T -a→⁄_T and a ∈ A^I
  (q_S, q_T) -a→_{S×T} fail            if q_S -a→⁄_S and q_T -a→_T

Given S = ⟨S, A^h, A^l⟩ and T = ⟨T, A^h, A^l⟩, with S and T satisfying the conditions above and A^m_S = A^m_T = A^m for m ∈ {l, h}, the semi-synchronous product of S and T is defined by the ISS S × T = ⟨S × T, A^h, A^l⟩.

Let us show how we can use the synchronous product to check and derive, whenever possible, a SIR relation. If there is a state (q_S, q_T) such that (q_S, q_T) -a→_{S×T} fail, then it is evident that q_S ⋡ q_T. Moreover, suppose the synchronous product only has the states (q_S, q_T) and fail and the transition (q_S, q_T) -a→_{S×T} fail. If a ∈ A^O, as the progress from (q_S, q_T) is autonomous, there is no way to control the execution of a! and hence no way to avoid q_S ⋡ q_T. Then, we say that (q_S, q_T) fails the SIR-relation test. On the other hand, if a ∈ A^I, one state offers a service that the other does not. In this case, by removing the input transition a (the interface offers fewer services), we avoid the transition (q_S, q_T) -a→_{S×T} fail in the synchronous product and we get two states such that q_S ≽ q_T; moreover, we get two interfaces related by a SIR relation. In this case, we say that (q_S, q_T) may pass the SIR-relation test. In a more complex synchronous product, the “failure” in the state (q_S, q_T) has to be propagated backwards appropriately to identify pairs of states that cannot be related. This propagation is done through the definition of two different sets: Fail and May. The set Fail contains those pairs that are not related by a refinement and for which there is no set of input transitions to prune so that the pair may become related by the refinement. On the other hand, May contains pairs of states that are not related but will be related if some transitions are pruned. States not in Fail ∪ May belong to the set Pass. All pairs in Pass are related by a SIR relation.



Fail^0 = {(q_S, q_T) : (q_S, q_T) -a→_{S×T} fail, a ∉ A^I} ∪ {fail}
Fail^{k+1} = Fail^k ∪ {(q_S, q_T) : a ∈ A^O ∪ A^H, q_T -a→_T q′_T, (∀q′_S : (q_S, q_T) -a→ (q′_S, q′_T) : (q′_S, q′_T) ∈ Fail^k)}

Table 3: The Fail  set.





May^0 = ∪_{q -a→ q′ ∈ (→_S ∪ →_T)} May^0_{q -a→ q′}        May^{k+1} = May^k ∪ ∪_{q -a→ q′ ∈ (→_S ∪ →_T)} May^{k+1}_{q -a→ q′}

May^0_{q -a→ q′} = {(q_S, q_T) : (q = q_S ∨ q = q_T), a ∈ A^I, (q_S, q_T) -a→_{S×T} fail}
May^{k+1}_{q_S -a→ q′_S} = {(q_S, q_T) ∉ Fail : a ∈ A, q_S -a→ q′_S, (∀q′_T : (q_S, q_T) -a→ (q′_S, q′_T) : (q′_S, q′_T) ∈ Fail ∪ May^k)}
May^{k+1}_{q_T -a→ q′_T} = {(q_S, q_T) ∉ Fail : a ∈ A, q_T -a→ q′_T, (∀q′_S : (q_S, q_T) -a→ (q′_S, q′_T) : (q′_S, q′_T) ∈ Fail ∪ May^k)}

Table 4: The definition of the May set.



Definition 22. Let S × T  be a synchronous product. We define the sets Fail,May, Pass ⊆ QS×T  respectively by:

  • Fail = ∪_{i=0}^∞ Fail^i, where Fail^i is defined in Table 3. If q ∈ Fail, we say that the pair q fails the SIR-relation test.
  • May = ∪_{i=0}^∞ May^i, where May^i is defined in Table 4. If q ∈ May, we say that the pair q may pass the SIR-relation test.
  • Pass = Q_{S×T} - (May ∪ Fail). If q ∈ Pass, we say that the pair q passes the SIR-relation test.

If the initial state of the underlying IA of an ISS S × T passes (may pass, fails) the SIR relation test, we say that S × T passes (may pass, fails) the SIR relation test.
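The Fail set of Table 3 is a least fixpoint and can be computed by the usual chaotic iteration. The sketch below assumes the product transitions are given as (pair, action, target) triples with the distinguished target "fail", and that the transition relation of the T component is passed separately, since Table 3 quantifies over it; all names are illustrative assumptions.

```python
# Hedged sketch: least fixpoint of Table 3. A pair with a non-input edge to
# fail is in Fail^0; a pair is added later when some output/hidden move of
# its T component leads only to failing pairs.

def fail_set(prod_trans, transT, AI):
    pairs = {p for (p, a, t) in prod_trans if p != "fail"}
    pairs |= {t for (p, a, t) in prod_trans if t != "fail"}
    # Fail^0: pairs with an uncontrollable (non-input) edge to fail
    fail = {p for (p, a, t) in prod_trans if t == "fail" and a not in AI}
    fail.add("fail")
    changed = True
    while changed:
        changed = False
        for p in pairs - fail:
            qT = p[1]
            for (q1, a, qT2) in transT:
                if q1 != qT or a in AI:
                    continue
                succs = {t for (src, b, t) in prod_trans
                         if src == p and b == a and t != "fail" and t[1] == qT2}
                if succs and all(t in fail for t in succs):
                    fail.add(p)
                    changed = True
                    break
    return fail
```

The May set can be computed with an analogous iteration over Table 4 once Fail is available; pairs outside both sets form Pass.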


The proof of the following lemma is based on the proof of the algorithm to check bisimulation in [10]; for this reason we only present a proof sketch. Our proof deviates a little from the original as a consequence of the fact that not all mismatching transitions are problematic.


Lemma 3. A semi-synchronous product S × T passes the SIR-relation test iff S ≽ T.



Proof sketch. Since (May ∪ Fail) ∩ Pass = ∅, we only have to prove that (i) (q_S, q_T) ∈ May ∪ Fail implies q_S ⋡ q_T, and (ii) if (q_S, q_T) ∈ Pass then q_S ≽ q_T. The proof of (i) is by induction on k in May^k and Fail^k. The proof of (ii) is straightforward after showing that, given a state (s, t) ∈ Q_{S×T} ∩ Pass:

1.
if s -a→ s′ and a ∈ A^I, then there is a state t′ s.t. there is a transition (s, t) -a→ (s′, t′) and (s′, t′) ∈ Pass.
2.
if t -a→ t′, then there is a state s′ s.t. there is a transition (s, t) -a→ (s′, t′) and (s′, t′) ∈ Pass.

The proof of both statements is by case analysis on a, always reaching a contradiction. ∎


Using this lemma, we can verify whether an interface is SIR-SNNI, since S is SIR-SNNI iff S\A^h is refined by S∕A^h. Notice that we cannot use S\A^h and S∕A^h directly to create a semi-synchronous product: in general, S\A^h does not satisfy A^H = {ε, ε′} and it is not semi-saturated. This can be solved by marking ∅ in S\A^h and then semi-saturating the interface, i.e. we work with the semi-saturation of (S\A^h)_∅ instead of S\A^h. Similarly, S∕A^h does not satisfy A^H = {ε, ε′}. Since ε′ is used to represent the internal actions that can be removed, we solve this problem by marking A^{h,I} in S∕A^h, i.e. we replace S∕A^h by (S∕A^h)_{A^{h,I}}. Therefore, verifying that S satisfies SIR-SNNI amounts to checking whether P_S, the semi-synchronous product of the semi-saturation of (S\A^h)_∅ and (S∕A^h)_{A^{h,I}}, passes the SIR-relation test. Applying a similar reasoning, if we are interested in verifying SIR-NNI, we can check whether the semi-synchronous product of the semi-saturation of ((S\A^{h,I})∕A^{h,O})_∅ and (S∕A^h)_{A^{h,I}} passes the SIR-relation test. We thus have a decision algorithm to check whether an ISS satisfies SIR-SNNI or SIR-NNI. We state it in the following theorem.


Theorem 8. Let S = ⟨S,Ah,Al⟩ be an ISS.

1.
S satisfies SIR-SNNI iff $\overline{(S \backslash A^h)_\emptyset} \times (S / A^h)_{A^{h,I}}$ passes the SIR-relation test.
2.
S satisfies SIR-NNI iff $\overline{((S \backslash A^{h,I}) / A^{h,O})_\emptyset} \times (S / A^h)_{A^{h,I}}$ passes the SIR-relation test.




Table 5: Set of eliminable candidates.



Synthesizing Secure ISS.  In the following, we show that if a synchronized product $P_S$ may pass the SIR relation test, then there is a set of input transitions that can be pruned so that the resulting interface is secure. First, we need to select the candidate input transitions to be removed. So, if $S$ is an ISS such that $P_S$ may pass the SIR-relation test, the set $EC(S) \subseteq {\to} \cap (Q \times A^I \times Q)$ (see Table 5) is the set of eliminable candidates.

All transitions in $EC(S)$ are involved in a synchronization that connects a source pair that may pass the SIR-relation test and a failing target. This can happen in four different situations. The first one is the basic case, in which one of the components of the pair can perform a low input transition that cannot be matched by the other. The next two cases are symmetric and consider the situation in which both sides can perform the same low input transition but end up in a failing pair. The last case includes high input actions that are hidden in the synchronized product and always reach a pair that fails. Notice that if $P_S$ may pass the SIR-relation test then $EC(S) \neq \emptyset$.
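Table 5's formal definition did not survive extraction here, but the basic case just described can be illustrated concretely. The sketch below is a hypothetical encoding of ours covering only the first of the four situations: it collects the low-input product edges that step from a non-failing pair straight into a failing one, from which the corresponding transition of $S$ can then be projected:

```python
def basic_eliminable_candidates(product, fail, low_inputs):
    """Illustrative sketch of the 'basic' case only: low-input
    synchronizations leading from a pair outside fail into fail.
    product maps a pair to a list of (action, successor-pair)."""
    candidates = set()
    for pair, moves in product.items():
        if pair in fail:
            continue  # the source pair must still be able to pass
        for (a, target) in moves:
            if a in low_inputs and target in fail:
                candidates.add((pair, a, target))
    return candidates
```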

An important result is that removing eliminable candidates introduces no new failing pair of states. Moreover, if a pair of states fails in the synchronous product of the original ISS and is also present in the synchronous product of the reduced ISS, then it also fails in the latter. This ensures that a synchronous product that may pass the SIR-relation test will not fail after pruning. In a sense, Lemma 4 below states that the sets $\mathit{Fail}$ and $\mathit{Pass} \cup \mathit{May}$ remain invariant.


Lemma 4. Let $S$ be an ISS s.t. $P_S$ may pass the SIR-relation test. Let $S'$ be an ISS obtained by removing one transition in $EC(S)$ from $S$ (i.e. ${\to_{S'}} = {\to_S} - \{q \xrightarrow{a} q'\}$, provided $q \xrightarrow{a} q' \in EC(S)$, and unreachable states are removed from $S'$). Then it holds that: (i) $\mathit{Fail}_{P_{S'}} = \mathit{Fail}_{P_S} \cap Q_{P_{S'}}$; (ii) $(\mathit{Pass}_{P_S} \cup \mathit{May}_{P_S}) \cap Q_{P_{S'}} = \mathit{Pass}_{P_{S'}} \cup \mathit{May}_{P_{S'}}$. (Subindices in $\mathit{Fail}_{P_S}$, $\mathit{May}_{P_S}$, etc. indicate that these sets were obtained from the synchronous product $P_S$.)



Proof. We only show (i); (ii) is an immediate consequence of (i).

(Case $\subseteq$.) Clearly $Q_{P_{S'}} \subseteq Q_{P_S}$. Suppose $q \xrightarrow{b?} q' \in EC(S)$ is the transition that is removed. By induction on $k$ we show $\mathit{Fail}^k_{q_a \xrightarrow{a} q'_a,\, P_{S'}} \subseteq \mathit{Fail}^k_{q_a \xrightarrow{a} q'_a,\, P_S}$ for all $k$. This implies $\mathit{Fail}^k_{P_{S'}} \subseteq \mathit{Fail}^k_{P_S}$ and then $\mathit{Fail}_{P_{S'}} \subseteq \mathit{Fail}_{P_S}$. Suppose $(q_r, q_a) \in \mathit{Fail}^0_{q_a \xrightarrow{a} q'_a,\, P_{S'}}$. By definition, $a \notin A^I \cup \{\varepsilon'\}$ and $(q_r, q_a) \xrightarrow{a} \mathit{fail}$. Then $a \neq b?$ and therefore $(q_r, q_a) \xrightarrow{a} \mathit{fail}$ belongs to $P_S$. Then $(q_r, q_a) \in \mathit{Fail}^0_{q_a \xrightarrow{a} q'_a,\, P_S}$. Suppose now $(q_r, q_a) \in \mathit{Fail}^{k+1}_{q_a \xrightarrow{a} q'_a,\, P_{S'}}$. Then $a \notin A^I \cup \{\varepsilon'\}$ and $(\forall q'_r : (q_r, q_a) \xrightarrow{a} (q'_r, q'_a) : (q'_r, q'_a) \in \mathit{Fail}^k_{P_{S'}})$. Notice that $\{(q'_r, q'_a) : (q_r, q_a) \xrightarrow{a}_{P_S} (q'_r, q'_a)\} = \{(q'_r, q'_a) : (q_r, q_a) \xrightarrow{a}_{P_{S'}} (q'_r, q'_a)\}$ as a consequence of $b? \in A^I \cup \{\varepsilon'\}$. By induction hypothesis $\mathit{Fail}^k_{P_{S'}} \subseteq \mathit{Fail}^k_{P_S}$, then $\forall q'_r : (q_r, q_a) \xrightarrow{a} (q'_r, q'_a) : (q'_r, q'_a) \in \mathit{Fail}^k_{P_S}$ and we get $(q_r, q_a) \in \mathit{Fail}^{k+1}_{q_a \xrightarrow{a} q'_a,\, P_S}$ and $(q_r, q_a) \in \mathit{Fail}^{k+1}_{P_S}$.

(Case $\supseteq$.) We show by induction on $k$ that $\mathit{Fail}^k_{P_{S'}} \supseteq \mathit{Fail}^k_{P_S} \cap Q_{P_{S'}}$ for all $k$. Let $(q_r, q_a) \in \mathit{Fail}^0_{P_S} \cap Q_{P_{S'}}$. Moreover, w.l.o.g. suppose $(q_r, q_a) \in \mathit{Fail}^0_{q_a \xrightarrow{a} q'_a,\, P_S}$. Since $a \notin A^I$, the transition $q_a \xrightarrow{a} q'_a$ cannot be removed and, since $q_r \not\xrightarrow{a}$, it holds that $(q_r, q_a) \in \mathit{Fail}^0_{q_a \xrightarrow{a} q'_a,\, P_{S'}} \subseteq \mathit{Fail}^0_{P_{S'}}$. For the inductive case, suppose w.l.o.g. $(q_r, q_a) \in \mathit{Fail}^{k+1}_{q_a \xrightarrow{a} q'_a,\, P_S} \cap Q_{P_{S'}}$. Then $(\forall q'_r : (q_r, q_a) \xrightarrow{a} (q'_r, q'_a) : (q'_r, q'_a) \in \mathit{Fail}^k_{P_S})$. Since $(q_r, q_a)$ is reachable in $S'$ and $a \notin A^I$, every pair $(q'_r, q'_a)$ is reachable in $S'$. By induction hypothesis, $(q'_r, q'_a) \in \mathit{Fail}^k_{P_{S'}}$ and then $(q_r, q_a) \in \mathit{Fail}^{k+1}_{q_a \xrightarrow{a} q'_a,\, P_{S'}} \subseteq \mathit{Fail}^{k+1}_{P_{S'}}$. □
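The stratified sets $\mathit{Fail}^k$ used in this proof suggest a straightforward least-fixpoint computation. The following is an illustrative sketch under our own hypothetical encoding, where a pair fails on a non-input move of the abstract side that the other side cannot (eventually) match:

```python
def fail_set(states, product, trans_t, inputs):
    """Iterate to the least fixpoint of the Fail^k stratification:
    a pair fails if the abstract side has a non-input move that is
    either unmatched (Fail^0) or all of whose matches fail (Fail^{k+1})."""
    fail = set()
    changed = True
    while changed:
        changed = False
        for (s, t) in states:
            if (s, t) in fail:
                continue
            for (a, t2) in trans_t.get(t, []):
                if a in inputs:
                    continue  # input moves do not create failures here
                succ = [p for (b, p) in product.get((s, t), [])
                        if b == a and p[1] == t2]
                if not succ or all(p in fail for p in succ):
                    fail.add((s, t))
                    changed = True
                    break
    return fail
```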


The following theorem is the main result of this section. Notice that its proof defines the algorithm to prune input actions and obtain a secure interface. A similar result holds for SIR-NNI.


Theorem 9. Let $S$ be an ISS such that $P_S$ may pass the SIR relation test. Then there is an input transition set $\to_\chi$ such that, if $S'$ is the ISS obtained from $S$ by removing all transitions in $\to_\chi$, then $S'$ is SIR-SNNI.



Proof. We only report a proof sketch; the complete proof follows in the same way as the proof of Theorem 4.10 in [6]. Let $S'$ be an ISS obtained from $S$ by removing one transition from the set $EC(S)$. Lemma 4 ensures that $S'$ may pass or passes the SIR relation test. If $S'$ passes the SIR relation test, we stop. If $S'$ may pass the SIR relation test, we repeat the process until we obtain an ISS that passes the test. Since the transition set is finite, in the worst case we continue until obtaining an ISS with an empty set of eliminable candidates. If this ISS may pass the SIR-relation test, we get a contradiction with the fact that the set of eliminable candidates is empty; hence this ISS has to pass the test. Finally, $\to_\chi$ is the set of transitions removed along the way. □
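The proof's pruning procedure can be phrased directly as a loop. In this hypothetical sketch, `eliminable` and `passes_test` stand for a computation of $EC(S)$ and for the SIR-relation test, both assumed given:

```python
def prune_until_secure(transitions, eliminable, passes_test):
    """Repeatedly drop one eliminable candidate until the test passes.

    Terminates because the transition set is finite; per Theorem 9,
    the candidate set cannot be empty while the product only 'may pass'.
    """
    trans = set(transitions)
    removed = set()
    while not passes_test(trans):
        candidates = eliminable(trans)
        if not candidates:
            raise RuntimeError("no eliminable candidates left")
        choice = candidates.pop()
        trans.discard(choice)
        removed.add(choice)
    return trans, removed
```

The returned `removed` set plays the role of $\to_\chi$ in the theorem.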

6 Concluding remarks


In this work, we have presented semantics for interactive sequential systems. In this way we have extended the work of [1] and [2] to models where control of the actions is shared by the user and the system. To reduce complexity, we did not include all the types of observations presented in [2], limiting ourselves to a subset of them. We do not foresee major problems in extending our theory to the types of observations we left out.

The approach of defining non-interference security properties through types of observations gives important insight into the security model, in particular about the characteristics of the attacker. For instance, if the attacker can make use of covert channels, then the type of observation $\varepsilon$ should be chosen. Another example is the type of observation $F$, which can be interpreted as the system detecting an attack and aborting the execution. In this way, the types of observations define a catalogue to characterize the attackers that could be considered.

This general definition encloses previous definitions of non-interference for ISS. We found notions of observability that represent (S)NNI, B(S)NNI and SIR-(S)NNI (Theorem 1 and Lemma 2). This approach also provides a better understanding of the security properties. In [9], SIR-(S)NNI was introduced to resolve some shortcomings found in B(S)NNI, but in fact these shortcomings do not exist because the properties should be considered in different contexts. B(S)NNI should be considered in a context where an attacker can only observe how the system behaves. On the other hand, SIR-(S)NNI should be considered in a context where the attacker can interact through the interface. This is obvious when we see the notions of observability used to represent each property: $\{a, T, \not\varepsilon, \not\rightleftarrows, \wedge, \neg\}$ for B(S)NNI and $\{a, T, \not\varepsilon, RT, \wedge\}$ for SIR-(S)NNI. Notice that B(S)NNI has the no-interaction type $(\not\rightleftarrows)$ while in SIR-(S)NNI the interaction is explicit due to the type $(RT)$.

In addition, the different types of observations provide a simple way to choose the appropriate notion of security. For example, consider interface $S$ in Figure 5. One could argue that there is still an information leakage, because the execution of action $a!$ is evidence that the high user has not interacted with the interface. If this information is sensitive and the attacker interacts with the interface, one could use the notion of observability $V = \{a, T, \not\varepsilon, RT, \wedge, \neg\}$ to detect this kind of problem. Notice this notion of observability is stronger than the one used for B(S)NNI.

Future work.  We have identified two research lines to continue this work. First, the types of observations presented in [2] that have been omitted have to be addressed, and a deeper study comparing the different semantics should be carried out to get a better understanding of them. Second, we also plan to study how the new semantics for interactive systems affect the different models with both input/controllable and output/uncontrollable actions and the results obtained for them.

References


[1]   R. J. van Glabbeek, “The linear time – branching time spectrum I: The semantics of concrete, sequential processes,” in Handbook of Process Algebra. Elsevier, 2001, pp. 3–99.

[2]   R. J. van Glabbeek, “The linear time – branching time spectrum II: The semantics of sequential processes with silent moves,” in Procs. of CONCUR ’93, ser. LNCS, vol. 715. Springer, 1993, pp. 66–81.

[3]   L. de Alfaro and T. A. Henzinger, “Interface theories for component-based design,” in EMSOFT, ser. LNCS, T. A. Henzinger and C. M. Kirsch, Eds., vol. 2211. Springer, 2001.

[4]   L. de Alfaro and T. Henzinger, “Interface automata,” in ESEC / SIGSOFT FSE. ACM Press, 2001, pp. 109–120.

[5]   L. de Alfaro and T. A. Henzinger, “Interface-based design,” in Engineering Theories of Software-Intensive Systems, ser. Nato Science Series, M. B. et al., Ed. Springer, 2005, pp. 83–104.

[6]   M. Lee and P. R. D’Argenio, “Describing secure interfaces with interface automata,” Electron. Notes Theor. Comput. Sci., vol. 264, no. 1, pp. 107–123, 2010.

[7]   J. A. Goguen and J. Meseguer, “Security policies and security models,” in IEEE Symposium on Security and Privacy, 1982, pp. 11–20.

[8]   R. Focardi and R. Gorrieri, “Classification of security properties (Part I: Information flow),” in Procs. of FOSAD 2000, ser. LNCS, vol. 2171. Springer, 2001, pp. 331–396.

[9]   M. Lee and P. R. D’Argenio, “A refinement based notion of non-interference for interface automata: Compositionality, decidability and synthesis,” in SCCC, 2010, pp. 280–289.

[10]   J.-C. Fernandez and L. Mounier, ““On the fly” verification of behavioural equivalences and preorders,” in Procs. of CAV ’91, ser. LNCS, vol. 575. Springer, 1991, pp. 181–191.
