Model checking ontology‐driven reasoning agents using strategy and abstraction

We present a framework for the modelling, specification, and verification of ontology-driven multi-agent rule-based systems (MASs). We assume that each agent executes in a separate process and that agents communicate via message passing. The proposed approach makes use of abstract specifications to model the behaviour of some of the agents in the system and exploits information about the reasoning strategy adopted by the agents. Abstract specifications are given as Linear Temporal Logic (LTL) formulas which describe the external behaviour of the agents, allowing their temporal behaviour to be compactly modelled. Both abstraction and strategy have been combined in an automated model checking encoding tool, TOVRBA, for rule-based multi-agent systems, which allows the system designer to specify information about agents' interaction, behaviour, and execution strategy at different levels of abstraction. The TOVRBA tool generates an encoding of the system for the Maude LTL model checker, allowing properties of the system to be verified.

Second, the proposed approach makes use of abstract specifications to model the behaviour of some of the agents in the system and exploits information about the reasoning strategy adopted by the agents. Abstract specifications are given as Linear Temporal Logic (LTL) formulas, which describe the external behaviour of the agents (the response time behaviour of the agent), allowing their temporal behaviour to be compactly modelled. We explain how our abstraction approach gives both correct and complete results.
Third, to illustrate the scalability of our approach, we reimplemented an example scenario introduced in the work of Alechina et al 7 and provide a more detailed complementary analysis of previously presented results, 8 and we present results for a more complex multi-agent home health care monitoring alarm system adapted from the work of Paganelli and Giuli. 9 The remainder of this paper is structured as follows. In Section 2, we provide an overview of ontologies and how agents are modelled using ontology-driven rules, followed by the basics of the model checking technique and the Maude LTL model checker. In Section 3, we present a scalable compositional modelling and verification framework for distributed agents. In Section 4, we briefly describe a prototyping tool, TOVRBA, for translating ontology-based specifications of the agents into Maude. In Section 5, we present the Maude encoding. In Section 6, we model a home health care monitoring system and present some experimental results using TOVRBA; the scalability of the new approach is also illustrated using a distributed reasoning problem, which can be easily parameterised to increase or decrease the problem size. We discuss related work in Section 8 and conclude in Section 9.

Ontology-driven Horn clause rules
Ontologies and rules play a central role in the design and development of Semantic Web applications. An ontology is an explicit formal specification of a conceptualization, which defines certain terms of a domain and the relationships among them. 10 The Web ontology language OWL is a semantic markup language for ontologies that provides a formal syntax and semantics for them. The W3C has declared two different standardizations for OWL, ie, OWL 1 and OWL 2. 11 Both the description logic-based OWL 1 and OWL 2 are decidable fragments of First Order Logic (FOL); however, the expressive power of OWL 1 is strictly limited to certain tree structure-like axioms. 12 For instance, a simple rule livesIn(?x,?y), locatedIn(?y,?z) → hasCountry(?x,?z) cannot be modeled using OWL 1 axioms. Although OWL 2 can express this country rule indirectly, many rules still cannot be modeled using OWL 2 axioms. Function-free Horn clause rules remove such restrictions while remaining decidable, but they are restricted to universal quantification and no negation. A combination of OWL 2 with rules therefore offers a more expressive formalism for building Semantic Web applications. Several proposals have been made to combine rules with ontologies. We use one of them, SWRL, which extends OWL DL by adding new axioms, namely, Horn clause rules. Although SWRL was proposed as an extension for OWL 1, it can be used as a rule extension for OWL 2. 13 We combine a set of SWRL rules with the set of OWL 2 RL axioms and facts to build our ontology. Since OWL 2 RL is based on DLP, the set of axioms and facts of an OWL 2 RL ontology can be translated to Horn clause rules. 12 Translations of some of the OWL 2 RL axioms and facts into rules are given in Table 1. In the second column, complete DL statements are given, which are constructed from the corresponding OWL 2 RL axioms and facts to illustrate the translation. The translation of SWRL rules is straightforward because they are already in the Horn clause rule format.
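As a concrete illustration of this DLP-style translation, the sketch below (Python, not part of TOVRBA; the tuple encoding of axioms and the function name are our own assumptions) maps a few of the axiom patterns from Table 1 to plain text Horn clause rules:

```python
# Illustrative sketch of the DLP-based translation: OWL 2 RL axioms,
# given here as simple tuples, are turned into plain-text Horn clause
# rules in the style of Table 1. The encoding is hypothetical.

def axiom_to_rules(axiom):
    """Return the Horn clause rule(s) equivalent to one OWL 2 RL axiom."""
    kind = axiom[0]
    if kind == "SubObjectPropertyOf":          # P ⊑ Q
        _, p, q = axiom
        return [f"{p}(?x,?y) -> {q}(?x,?y)"]
    if kind == "EquivalentProperties":         # P ≡ Q: rules both ways
        _, p, q = axiom
        return [f"{p}(?x,?y) -> {q}(?x,?y)", f"{q}(?x,?y) -> {p}(?x,?y)"]
    if kind == "TransitiveObjectProperty":     # P+ ⊑ P
        _, p = axiom
        return [f"{p}(?x,?y), {p}(?y,?z) -> {p}(?x,?z)"]
    if kind == "SymmetricObjectProperty":      # P ≡ P-
        _, p = axiom
        return [f"{p}(?x,?y) -> {p}(?y,?x)"]
    raise ValueError(f"unsupported axiom: {kind}")

print(axiom_to_rules(("TransitiveObjectProperty", "locatedIn")))
# ['locatedIn(?x,?y), locatedIn(?y,?z) -> locatedIn(?x,?z)']
```

SWRL rules would bypass this step entirely, since they are already Horn clauses.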

Model checking using Maude
The model-based verification approach uses model checking techniques, which are based on the semantics of the specification language.
Applying model checking to a design comprises three components. First, a detailed description M (model) of the system has to be given in the description language of the model checker. Second, a property φ of the system has to be given by means of some property specification language, eg, linear time logic (LTL) or computation tree logic (CTL). The expressive power of LTL and CTL is not comparable: while there are properties that can be expressed in both LTL and CTL, there also exist properties that can be expressed in LTL but not in CTL, and vice versa. 14 pp 30-31 Third, once the model M and the system property φ are given, a model checker will check whether or not M ⊧ φ.

TABLE 1 Translation of OWL 2 RL axioms and facts into rules (excerpt)

Axiom                      DL statement   Rules
EquivalentProperties       P ≡ Q          Q(x, y) → P(x, y); P(x, y) → Q(x, y)
ObjectInverseOf            P ≡ Q⁻         P(x, y) → Q(y, x); Q(y, x) → P(x, y)
TransitiveObjectProperty   P⁺ ⊑ P         P(x, y), P(y, z) → P(x, z)
SymmetricObjectProperty    P ≡ P⁻         P(x, y) → P(y, x)
The third phase is completely automatic. Thus, the model checking problem can be stated simply as follows: given a formula φ of some logical language and a model M, determine whether or not φ is valid in M. In Maude, 6 a rewrite theory ℛ = (Σ, E, R) consists of a signature Σ, a set E of equations, and a set R of rules. The static part of a system is specified in an equational sublogic of rewriting logic (membership equational logic) by means of the equations E. The system dynamics (concurrent transitions or inferences) are specified by means of the rules R, which rewrite terms, representing parts of the system, into other terms. The rules in R are applied modulo the equations in E. Maude computes the normal form of a term by applying equations from left to right iteratively; then, an applicable rewrite rule is arbitrarily chosen and applied from left to right. Thus, data types are defined algebraically by equations, and the dynamic behaviour of a system is defined by rewrite rules, which describe how a part of the state can change in one step. A rewrite theory is often nondeterministic and can exhibit many different behaviours. In Maude, a term is a constant, a variable, or the application of an operator to a list of argument terms. A ground term is a term containing no variables, only constants and operators. Like any other model checking tool, verification in Maude requires a system specification and a property specification.
The system specification is provided by a rewrite theory, whereas the property specification is given by LTL formulas.
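The division of labour between equations and rewrite rules can be illustrated with a deliberately simplified sketch (Python, plain string rewriting only; real Maude matches terms modulo axioms such as associativity and commutativity, which this toy does not attempt):

```python
# Toy illustration of Maude-style evaluation: a term is first reduced
# to normal form with equations (applied left to right until no change),
# and only then is one applicable rewrite rule applied. Equation and
# rule sets below are hypothetical string-rewriting examples.

def normalize(term, equations):
    """Apply equations left to right until a fixpoint (normal form)."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in equations:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)
                changed = True
    return term

def rewrite_step(term, equations, rules):
    """One step: normalize, apply the first matching rule, re-normalize."""
    term = normalize(term, equations)
    for lhs, rhs in rules:
        if lhs in term:
            return normalize(term.replace(lhs, rhs, 1), equations)
    return term  # no rule applies: the term is stable

eqs = [("0 + n", "n")]    # static structure: defined by equations
rls = [("idle", "busy")]  # dynamics: defined by rewrite rules
print(rewrite_step("0 + n ; idle", eqs, rls))  # n ; busy
```

The point of the sketch is only the ordering: equations define data (applied to exhaustion), rules define transitions (applied one step at a time).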

A MODELLING AND VERIFICATION FRAMEWORK OF DISTRIBUTED AGENTS
We adapt the model of distributed agents presented in the work of Alechina et al. 7 A distributed reasoning system consists of n_Ag (≥ 1) individual reasoners or agents. Each agent is identified by a value in {1, 2, … , n_Ag}, and we use variables i and j over {1, 2, … , n_Ag} to refer to agents. An agent in the system is either concrete or abstract. Each concrete agent has a program, consisting of Horn clause rules, and a working memory, which contains facts (ground atomic formulas) representing the initial state of the system. The logical model presented in the work of Alechina et al 7 is based on a propositional language; however, the restriction to propositional rules is not a very drastic assumption: if the rules do not contain function symbols and we can assume a fixed finite set of constant symbols, then any set of first-order Horn clauses and facts can be encoded as propositional formulas. In this framework, concrete agents in a system may also use different conflict resolution strategies. The behaviour of each abstract agent is represented in terms of a set of temporal epistemic formulas. That is, abstract specifications are given as LTL formulas, which describe the external behaviour or the response time behaviour of some of the agents in the system. The overall rationale for choosing this abstract agent notion is discussed below in Section 3.1. The agents (concrete and abstract) execute synchronously. We assume that each agent executes in a separate process and that agents communicate via message passing. We further assume that each agent can communicate with multiple agents in the system at the same time. In the following sections, we describe in more detail how we model the concrete and abstract agents.

Managing complexity through strategy and abstraction
We would like to be able to verify properties of systems consisting of arbitrary numbers of complex communicating reasoners. However, our experience in the works of Alechina et al 7,15 has indicated that verifying such large complex reasoning systems is infeasible with current model checking techniques. The most straightforward approach to defining the global state of a multi-agent system is as a (parallel) composition of the local states of the agents. At each step in the evolution of the system, each agent chooses from a set of possible actions. The actions selected by the agents are then performed in parallel, and the system advances to the next state. In a multi-agent system composed of n (≥ 1) agents, if each agent i can choose between performing at most m (≥ 1) actions, then the system as a whole can move in m^n different ways from a given state at a given point in time. Along with the state space size, model checking performance is heavily dependent on the branching factor of states in the reachable state space as well as on the solution depth of a given problem. In general, the model checking algorithm for reachability analysis performs a breadth-first exploration of the state transition graph. When checking invariant (safety) properties, the model-checker will either determine that no states violate the invariant by exploring the entire state space, or will find a state violating the invariant and produce a counter-example.* However, even with state-of-the-art BDD-based model-checkers, memory exhaustion can occur when computing the reachable state space due to the large size of the intermediate BDDs (because of the high branching factor). The performance of model checking based on depth-first search can also vary dramatically from good to very poor. In both cases, verification of true formulas takes longer than verification of false formulas, since a model checker will find a counterexample faster than it can explore the whole model.
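A quick back-of-the-envelope computation shows why this branching is problematic; the agent and action counts below are purely illustrative:

```python
# The blow-up claimed above: with n agents each choosing among up to m
# actions, a synchronous system has up to m**n successors of one state.

def successors(n_agents, m_actions):
    """Upper bound on the one-step branching factor of the composed system."""
    return m_actions ** n_agents

print(successors(3, 4))   # 64 possible moves from a single state
print(successors(10, 4))  # 1048576: why naive composition does not scale
```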
To overcome this problem, our modelling approach abstracts from some aspects of system behaviour to obtain a system model that is tractable for a standard model-checker. Abstract specifications are given as Linear Temporal Logic (LTL) formulas, which describe the external behaviour of some of the agents, allowing their temporal behaviour to be compactly modelled. Conversely, reasoning strategies allow the detailed specification of the ordering of steps in the agent's reasoning process. The decision regarding which agents to abstract and how their external behaviour should be specified rests with the modeller/system designer. Specifications of the external (observable) behaviour of abstract agents may be derived from, eg, assumed characteristics of as-yet-unimplemented parts of the system, assumptions regarding the behaviour of parts of the overall system the designer does not control (eg, quality of service guarantees offered by an existing web service), or from the prior verification of the behaviour of other (concrete) agents in the system.

Ontology-driven rules
The use of first-order rules increases the expressiveness of the framework in the work of Alechina et al 7 and makes it easier to model complex real world scenarios. To formally represent a domain model, we use an OWL 2 RL ontology augmented with SWRL rules, which is ultimately translated into a set of Horn clause rules to design the desired multi-agent system, following the concept presented in Section 2.1. Section 4 provides a more detailed discussion of the translation process. However, the verification framework is standalone, and rules need not be derived only from ontologies; a system designer can model and write a set of rules to construct the systems using any other approach. The use of ontology-driven rules simply provides a more natural way to think about and model real world rules. In addition, existing tools, including Protégé, 17 support the design of OWL 2 RL and SWRL based ontologies, making it easier to model rule-based agents using semantic rules.

Description of concrete agents
The two main components of rule-based agents are the knowledge base (KB), which contains a set of first-order Horn-clause rules, and the working memory (WM), which contains a set of facts that constitute the current (local) state of the system. The state of an agent also contains a communication counter, which is discussed below. Another component of a rule-based system is the inference engine which reasons over rules when the application is executed.
The inference engine may use reasoning strategies to handle cases in which multiple rule instances are eligible to fire. The agents use the refractory rule firing technique, ie, each rule instance is fired only once. In Listing 1, we specify the abstract syntax for concrete agents' rules using a BNF. In this notation, the terminals are quoted, the non-terminals are not quoted, alternatives are separated by vertical bars, and components that can occur zero or more times are enclosed in braces followed by a superscript asterisk symbol ({…}*). In other words, the rules of a concrete agent have the plain text format < n : P_1, P_2, … , P_n → P >, where n is a constant that represents the annotated priority of the rule and the P_i's and P are first-order atoms. If an agent i has this rule, the antecedents P_1, P_2, … , P_n match facts in the agent's working memory, and the consequent P is not in the agent's working memory in a given state s, then the agent can fire the matching rule instance, which adds the consequent to the agent's working memory in the successor state s′.
*Even with on-the-fly model-checking, 16 the model checker has to explore the state space at least until the solution depth.
Listing 1 Abstract syntax for concrete agent's rules
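The firing condition just described (antecedents present, consequent absent, refractory firing) can be sketched as follows; the Python encoding of rules and working memories is our own illustration, not the Maude encoding used later:

```python
# Illustrative sketch of one deduction step of a concrete agent with
# refractory firing: a rule instance fires only if all antecedents are
# in the working memory and the consequent is not already there, so the
# same instance can never fire twice. Ground atoms are plain strings.

def fire_once(rule, wm):
    """rule = (priority, antecedents, consequent); wm = set of facts.
    Returns the successor working memory, or None if the rule is blocked."""
    _, antecedents, consequent = rule
    if all(a in wm for a in antecedents) and consequent not in wm:
        return wm | {consequent}
    return None

# example rule in the spirit of Section 2 (names are illustrative)
country = (1, ["livesIn(alice,hanoi)", "locatedIn(hanoi,vietnam)"],
           "hasCountry(alice,vietnam)")
wm = {"livesIn(alice,hanoi)", "locatedIn(hanoi,vietnam)"}
wm2 = fire_once(country, wm)          # consequent added
assert fire_once(country, wm2) is None  # refraction: instance blocked
```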

Model of communication
We assume a simple query-response scheme based on asynchronous message passing for agent communication. Each agent's rules may contain two distinguished communication atoms, ie, Ask(i, j, P) and Tell(i, j, P), where i and j are agents and P is an atomic formula not containing an Ask or a Tell. Ask(i, j, P) means ''i asks j whether P is the case'' and Tell(i, j, P) means ''i tells j that P'' (i ≠ j). The positions in which the Ask and Tell primitives may appear in a rule depend on which agent's program the rule belongs to. Agent i may have an Ask or a Tell with arguments (i, j, P) in the consequent of a rule, for example, < n : P_1, P_2, … , P_n → Ask(i, j, P) >, whereas agent j may have an Ask or a Tell with arguments (i, j, P) in the antecedent of a rule; for example, < n : Tell(i, j, P) → P > is a well-formed rule (we call it a trust rule) for agent j that causes it to believe i when i informs it that P is the case. No other occurrences of Ask or Tell are allowed. When a rule has either an Ask or a Tell as its consequent, we call it a communication rule. All other rules are known as deduction rules. These include rules with Asks and Tells in the antecedent, as well as rules containing neither an Ask nor a Tell.
We assume that the state of each agent i contains a communication counter, which starts with value 0 and is incremented by 1 each time the agent interacts (sends or receives a message) with another agent. After the counter reaches its limit, say n_C(i), agent i cannot perform any more communication actions. The exchange of information between agents works as follows: if an Ask(i, j, P) (or a Tell(i, j, P)) is in agent i's working memory in a given state, Ask(i, j, P) (or Tell(i, j, P)) is not in the working memory of agent j, and agent j has not exceeded its communication bound, then in the successor state, Ask(i, j, P) (or Tell(i, j, P)) can be added to agent j's working memory and its communication counter incremented.
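The bounded message-copying step can be sketched as follows (an illustrative Python fragment; the dictionary layout of an agent's state is our own assumption):

```python
# Illustrative sketch of the bounded copy step: a message in the
# sender's working memory is copied into the receiver's working memory
# only if it is not already there and the receiver has not reached its
# communication bound n_C; copying increments the receiver's counter.

def copy_message(msg, sender_wm, receiver):
    """receiver = {'wm': set, 'count': int, 'bound': int}.
    Returns True iff the message was copied."""
    if (msg in sender_wm and msg not in receiver["wm"]
            and receiver["count"] < receiver["bound"]):
        receiver["wm"].add(msg)
        receiver["count"] += 1
        return True
    return False

j = {"wm": set(), "count": 0, "bound": 1}
print(copy_message("Tell(1,2,P)", {"Tell(1,2,P)"}, j))  # True: copied
print(copy_message("Tell(1,2,Q)", {"Tell(1,2,Q)"}, j))  # False: bound hit
```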

Possible actions of an agent
The semantics of the agents' language is based on transition systems and follows the approach of Alechina et al. 7 We view the process of producing new facts from existing facts as a sequence of states of an agent, starting from an initial state and producing the next state by one of the following actions.
- Rule: firing a matching rule instance in the current state;
- Comm: if agent i has an Ask(i, j, P) (or a Tell(i, j, P)) in its current state, then agent j can copy it to its next state, provided j's communication counter has not exceeded the value n_C(j);
- Idle: which leaves the agent's configuration unchanged.
That is, each transition (the result of an action) corresponds to a single execution step and takes an agent from one state to another. States consist of the rules, facts, and communication counter of the agent. A step of the whole system is composed of the actions of each agent, performed in parallel. We measure the time requirements for a problem as the number of such system steps. The key idea underlying the logical approach to rule-based systems presented in the work of Alechina et al 7 is to define a formal logic that axiomatizes the set of transition systems, which is then used to state various properties of the systems.
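A synchronous system step can be sketched as the parallel composition of one action per agent; the sketch below is illustrative only, with a trivial action-selection policy standing in for the Rule/Comm/Idle choice:

```python
# Illustrative synchronous step: every agent performs exactly one action
# per step, and all next local states are computed against the same
# current global state before being installed together.

def system_step(locals_, choose_action):
    """locals_: list of local states; choose_action(i, locals_) returns
    agent i's next local state, read against the current global state."""
    return [choose_action(i, locals_) for i in range(len(locals_))]

# toy policy: each agent moves to the maximum of all current local
# counters plus one (every agent acts in the same step)
step = system_step([0, 2, 1], lambda i, ls: max(ls) + 1)
print(step)  # [3, 3, 3]
```

Because every agent reads the same current global state, the result is one system step, which is exactly the unit in which time requirements are measured above.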

Reasoning strategies
We assume that each concrete agent has a reasoning strategy (or conflict resolution strategy) which determines the order in which rules are applied when more than one rule matches the contents of the agent's working memory. The framework (and the TOVRBA tool presented in

RAKIB AND FARUQUI
Listing 2 Temporal epistemic formulas for abstract agents

Section 4) supports a set of standard conflict resolution strategies often used in rule-based systems, including Rule ordering, Depth, Breadth, Simplicity, and Complexity. 18-20 Different agents in the system may use different types of reasoning strategies. To allow the implementation of reasoning strategies, each atom of a rule is associated with a time stamp, which records the cycle at which the atom was added to the working memory. To achieve this, the internal configurations of the rules in the Maude specification (cf, Section 5) follow the given syntax, where the t_i's and t represent the time stamps of atoms. When a rule instance of the above rule is fired, its consequent atom (a ground instance of P) is added to the working memory with time stamp t = t′ + 1, ie, t is replaced by t′ + 1, where t′ is the current cycle time of the system.
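As an illustration of time-stamp-based conflict resolution, the sketch below implements a Depth-like strategy that prefers rule instances matching the most recently derived facts; breaking ties by the higher annotated priority is our own assumption for this sketch, not a rule fixed by the framework:

```python
# Illustrative Depth-like conflict resolution. Each rule instance in
# the conflict set carries its annotated priority and the time stamps
# of the facts it matched; Depth prefers the instance whose matched
# facts are most recent, with higher priority as an (assumed) tie-break.

def depth_choose(conflict_set):
    """conflict_set: list of (priority, antecedent_timestamps) pairs.
    Returns the instance selected for firing."""
    return max(conflict_set, key=lambda inst: (max(inst[1]), inst[0]))

cs = [(1, [0, 2]),   # matched only old facts
      (3, [5, 1]),   # matches a fact from cycle 5, priority 3
      (2, [5, 4])]   # matches a fact from cycle 5, priority 2
print(depth_choose(cs))  # (3, [5, 1])
```

The other strategies named above would replace only the key function, eg, Breadth would prefer the oldest matched facts.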

Abstract agents
An abstract agent consists of a working memory and a behavioural specification. The behaviour of abstract agents is specified using a subset of LTL formulas extended with belief operators. The general form of the formulas used to represent the external behaviour of an abstract agent i is given in Listing 2. In the formulas, X is the ''next step'' temporal operator, X^≤n is a sequence of n (or fewer) X operators, G is the temporal ''in all future states'' operator, and B_i for each agent i is a syntactic epistemic operator used to specify agent i's ''beliefs,'' ie, the contents of its working memory. Formulas of the form X^≤n φ_1 describe agents that produce a certain message or input to the system within n time steps. These formulas (partly) describe the proactive behaviour of an agent. For example, the formula X^≤n B_i Tell(i, j, P), which describes the abstract behaviour of agent i, produces a Tell(i, j, P) within n time steps. That is, i tells j about P proactively by generating a Tell(i, j, P) message in the interval [1, n], thinking that it might be useful for j. In other words, i tells j about P without being asked. A formula φ_1 of the form B_i Ask(i, j, P) or B_i Tell(i, j, P) results in communication with the other agent as follows: when the beliefs appear (as an Ask or a Tell) in the abstract agent i's working memory, they are also copied to agent j's working memory at the next step. A formula φ_1 of the form B_i P represents a belief involving an atom P (other than an Ask or a Tell), which may also appear in the abstract agent i's working memory within n time steps. This is not critical to how abstract agents communicate; rather, it describes abstract agent i's own behaviour.
The G(φ_2 → X^≤n φ_3) formulas describe agents that are always guaranteed to reply to a request for information within n time steps. We interpret the formula G(B_i Ask(j, i, P) → X^≤n B_i Tell(i, j, P)) as follows: if t is the time stamp at which abstract agent i came to believe Ask(j, i, P) (ie, Ask(j, i, P) appears in agent i's working memory), then the atom Tell(i, j, P) must appear in the working memory of agent i within t + n steps.
The atom Tell(i, j, P) is then copied to agent j's working memory at the next step. The other possible combinations of Ask and Tell in place of φ_2 and φ_3 in the G(φ_2 → X^≤n φ_3) formulas can be interpreted in a similar way. The language described above for the abstract agents is independent of the language of the concrete agents. Note, however, that we do not need the full language of LTL (eg, the Future (F) or Until (U) operator) in order to specify these abstract agents. This is because a formula such as, eg, F B_i Ask(j, i, P), which states that the atom Ask(j, i, P) must appear in agent i's working memory at some time in the future, represents a form of temporal indeterminacy that is not very helpful in our context.
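The response-time guarantee G(B_i Ask(j, i, P) → X^≤n B_i Tell(i, j, P)) can be checked against a single finite run as sketched below (Python, with working memories as sets of atom strings; this illustrates the intended semantics and is not the Maude encoding):

```python
# Illustrative check of a response-time guarantee against one finite
# run, represented as a list of working-memory snapshots per time step:
# once `ask` first appears at step t, `tell` must appear in some
# snapshot within the next n steps (t+1 .. t+n).

def respects_guarantee(run, ask, tell, n):
    """True iff the run satisfies the bounded-response guarantee."""
    for t, wm in enumerate(run):
        if ask in wm:
            window = run[t + 1 : t + n + 1]
            return any(tell in wm2 for wm2 in window)
    return True  # ask never believed: the guarantee holds vacuously

run = [set(),
       {"Ask(j,i,P)"},
       {"Ask(j,i,P)"},
       {"Ask(j,i,P)", "Tell(i,j,P)"}]
print(respects_guarantee(run, "Ask(j,i,P)", "Tell(i,j,P)", 2))  # True
print(respects_guarantee(run, "Ask(j,i,P)", "Tell(i,j,P)", 1))  # False
```

In the framework itself, the model checker quantifies over all runs of the system rather than inspecting one run, but the per-run condition is the one sketched here.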

Specifying systems at different levels of abstraction
In our framework, we assume that an agent in the system is either completely concrete or completely abstract. The representation of agents in the system is divided into two classes based on their behavioural specification. The system designer may have complete control over the internal behaviour of some agents in the system; the concrete agents class contains those agents. The remaining agents belong to the abstract agents class. In this step, the designer identifies which agents belong to which class, determines the number of agents to place in each class, and specifies their possible interactions. An agent can interact with one or more agents in the system, but not every agent necessarily interacts with every other agent. The designer can consider the following possible levels of system information in order to design and verify system properties.
1. The system designer may have detailed design information about the internal behaviour of some agents in the system including the initial facts in their working memories, their rules, and the reasoning strategy. The remaining agents in the system are modelled using temporal epistemic formulas.

2. The system designer may have information about all the agents in the system, including the initial facts in their working memories and their rules, but no information at all about their reasoning strategy. This design gives the worst case model.

3. The system designer may have detailed information about all the agents in the system, including the initial facts in their working memories, their rules, and the reasoning strategy.
Both approaches (strategy and abstraction) have been combined in a prototyping tool TOVRBA for rule-based multi-agent systems, which allows the designer to specify information about agents' interaction, behaviour, and execution strategy at different levels of abstraction. The TOVRBA tool generates an encoding of the system for the Maude LTL model checker, allowing properties of the system to be verified.

Discussion of the abstraction approach
Our modelling approach presented above abstracts from some aspects of system behaviour to obtain a system model that is tractable for a standard model-checker. Our use of abstraction is, however, different from classic approaches in model-checking, such as the works of Clarke et al 21 and Cousot and Cousot, 22 which used a mapping between an abstract transition system and a concrete program. Depending on this mapping, verification results may be correct but not complete. By a correct (or conservative) abstraction, it is usually meant that if a formula is true in the abstract system, then it is true in the concrete system (but if a formula is false in the abstract system, it may not be false in the concrete system). In contrast, our approach uses a very specific kind of abstraction, which replaces a concrete agent with an abstract one that implements guarantees of its response time behaviour. If those guarantees are correct, then our approach gives both correct and complete results. Complete (or exact) abstraction means that a formula is true in the abstract system if and only if it is true in the concrete system. Agents can be modelled as abstract if their response time guarantees have already been verified or if the system designer is prepared to assume them.

A PROTOTYPING TOOL TOVRBA
We use the Protégé 17 ontology editor and knowledge-base framework to build the ontologies augmented with SWRL rules while modelling a domain. The SWRL editor is integrated with Protégé and permits the interactive editing of SWRL rules. In order to encode an ontology-driven rule-based system using a Maude 6 specification and formally verify its interesting properties using LTL model checking, we first need to translate the ontology in the OWL/XML format into a set of simple plain text Horn clause rules. We developed a translator that takes as input an OWL 2 RL ontology in the OWL/XML format (an output file of the Protégé editor) and translates it into a set of plain text Horn clause rules. First, we take an OWL 2 RL ontology as input and invoke a DL reasoner to compute a complete class hierarchy. Then, we parse the inferred ontology, which yields a set of OWL 2 RL axioms and facts. We use the OWL API 23 to parse the ontology and extract the set of axioms and facts. The design of the OWL API is directly based on the OWL 2 Structural Specification, and it treats an ontology as a set of axioms and facts, which are read using the visitor design pattern. The DLP-based translation rules (Section 2.1) are then recursively applied to generate an equivalent plain text Horn clause rule for each axiom and fact. We also extract the set of SWRL rules using the OWL API; these are already in the Horn clause rule format. First, the atoms with their corresponding arguments associated with the head and the body of a rule are identified, and we then generate a plain text Horn clause rule for each SWRL rule using these atoms. The translated Horn clause rules of an ontology are then used to create the agents of a multi-agent rule-based system using the Maude specifications. We then automatically verify interesting properties of the system using the Maude LTL model checker. The high-level architecture of the TOVRBA tool is shown in Figure 1.

MAUDE ENCODING
We chose the Maude 6 rewriting system and its LTL model checker because it can model check systems whose states involve arbitrary algebraic data types. The only assumption is that the set of states reachable from a given initial state is finite. This simplifies the modelling of the agents' (first-order) rules and reasoning strategies. For example, the variables that appear in a rule can be represented directly in the Maude encoding, without having to generate all ground instances resulting from possible variable substitutions.
We take advantage of Maude's modular structuring mechanisms to implement our system design. We use a generic functional module and a set of functional and system modules to represent the system. The overall picture of our implementation is shown in Figure 2. Throughout this paper, we use verbatim text to represent the specification of the agents in Maude. Thus, an agent i corresponds to i in the Maude specification. Similarly, Ask(i, j, P) has the same meaning as Ask(i, j, P), and so on.


FIGURE 1 The TOVRBA tool architecture

Implementation of agent modules
We model each concrete (and abstract) agent using a functional module ConcreteAgent-i (and AbstractAgent-i), each of which imports the same ACM module. This is just to maintain consistency of the shape of each agent's configuration. However, eg, the sort RepTime is of no use for concrete agents, and its value is always empty for them.

Implementation of the MAS module
Computation steps of multi-agent systems are represented by transitions, which take systems from one configuration to subsequent ones. Each agent in the system has its own local state, and the composition of all these local states comprises the configuration (global state) of the multi-agent system. In every configuration (global state), all agents proceed simultaneously. Each agent computes its next local configuration, possibly depending on the current local configurations of the other agents in the system. However, there is an alternative interleaved execution model, where at most one agent is allowed to act at any one time. It depends on the modelled system which execution model (interleaved or synchronous) is more realistic. If we count the time steps required by a system of agents to derive something, the interleaved model gives rather pessimistic results because only one agent can ''think'' at any single step while the rest are waiting. This makes sense if the agents run on the same processor. However, if, as in most of our examples, agents run on different processors and can ''think'' in parallel, a synchronous model is more realistic.
The MAS module imports all the agent modules and contains both functions and rewrite rules, which are used to implement the dynamic behaviour of the system. The structure of the MAS module is given in Listing 4. The parallel composition of agent configurations in the system is achieved using the _||_ operator. In the MAS module, we declare a sort masConfig to represent the global configuration of the system. We then define an operator <_,_> whose first argument is the composition of all the local configurations of the system and the second argument is a phase, and it returns an element of sort masConfig. The masConfig moves through communication and execution phases. The communication phase simply says that, if there is something to be communicated then do it, and then return to the execution phase.
The inference engine of concrete agents and the partial behaviour of abstract agents are implemented using a set of rules, ie, Generate, Choice, Apply, Idle, and Communication. The Generate rule causes each agent to generate its conflict set. The Choice rule causes each agent to apply its reasoning strategy, the Apply rule causes each agent to execute the rule instances selected for execution, the Idle rule executes only when there are no rule instances to be executed (the application of the Idle rule advances the cycle time of agent i, leaving everything else unchanged), and communication among agents is achieved using the Communication rule. When agents communicate with each other, one agent copies the communicated fact from another agent's working memory. Copying is only allowed if the fact to be copied is not already in the working memory of the agent intending to copy it and that agent has not exceeded its communication counter limit. For the sake of brevity, we do not describe the encoding in any further detail here; we refer the interested reader to the work of Rakib. 24

RAKIB AND FARUQUI
Listing 4 Structure of MAS module

Verifying system properties
Model checking in Maude involves a Maude specification of a system together with a property of interest. A property φ is an LTL formula interpreted as a property of computations of the system (linear sequences of states generated by application of rewrite rules). A simple path from a given initial state s to a state satisfying a property φ is a list of rules together with a state s′ satisfying φ, such that applying the rules starting from s leads to s′. One way to find a simple path is to model check the assertion that from s no state can be reached satisfying φ: modelCheck(s, ∼ F φ).
If there is a reachable state satisfying φ, a counterexample will be returned; the counterexample contains the list of rules applied. Given a system module, say MAS, and an initial state, say s of sort masConfig, we can model check different LTL properties beginning at this initial state by doing the following:
• defining a new module, ModelCheck-MAS, that includes the module MAS and Maude's built-in module MODEL-CHECKER as submodules;
• giving a subsort declaration, masConfig < State, where State is a sort in the module MODEL-CHECKER;
• defining the syntax of the (target) state predicates we wish to use by means of constants and operators of sort Prop, a subsort of the sort Formula (ie, LTL formulas) in the module MODEL-CHECKER;
• defining the semantics of the state predicates by means of equations.
The ModelCheck-MAS system module defined in Listing 5 shows how we can define state predicates whose semantics are given by appropriate equations. In the state predicate semantics defined in Listing 5, the masConfig says that agent i's working memory contains a ground atom P. The remaining information of the configuration is specified using Maude's on-the-fly variable declarations. Note, however, that the initial state must contain information given by ground terms only. In the ModelCheck-MAS module, the initial system state is represented using init, where all the '_' placeholders used in the configuration represent ground terms. Once the semantics of each of the state predicates has been defined, given an initial state init, we can model check any LTL formula, say φ, involving such predicates. We do so by executing in Maude the command reduce modelCheck(init, φ), where φ could be, eg, [] success, <> success, <>∼ success, etc ([] stands for the global LTL operator G and <> stands for the future LTL operator F). Two things can then happen: if the property holds, we get the result true; if it does not, we get a counterexample.

Listing 5 Structure of ModelCheck-MAS module
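Since Listing 5 itself is not reproduced in this excerpt, the following hypothetical Maude sketch shows how the four steps above fit together. The sorts masConfig, State, Prop, and Formula, the subsort declaration, and the modelCheck command come from the text and Maude's MODEL-CHECKER module; the state predicate success, the term structure agent(…, wm(…)), the working-memory constructor _;_, and init are illustrative assumptions.

```maude
mod ModelCheck-MAS is
  protecting MAS .
  including MODEL-CHECKER .    *** Maude's built-in LTL model checker

  subsort masConfig < State .  *** global configurations become model-checker states

  op success : -> Prop .       *** hypothetical state predicate

  var C  : Configuration .     *** the rest of the global configuration
  var Ph : Phase .
  var W  : WM .                *** the rest of agent 1's working memory

  *** success holds when agent 1's working memory contains the ground atom goal
  eq < agent(1, wm(goal ; W)) || C , Ph > |= success = true .

  op init : -> masConfig .     *** ground initial state (definition omitted)
endm

*** then, eg:  reduce modelCheck(init, <> success) .
```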

Analysis of the Maude implementation
When implementing reasoning strategies that involve time stamps of atoms, it is convenient to be able to associate a time stamp with each pattern. To achieve this, we declared the sort TimeWM in the above encoding. However, in the encoding, we maintain both the sorts TimeWM and WM simultaneously; in this section, we explain why. Suppose that each agent uses TimeWM as its only working memory. When agents generate their conflict sets, they check whether the consequents of rule instances are already present in their working memory; if so, those rule instances are not added to their agendas. Similarly, when an agent fires a rule instance or receives a message from another agent, it makes sure the resulting atoms are not already present in its working memory. For example, suppose an atom P with time stamp t1 has already been added to the working memory of an agent i.

CASE STUDY 1: HOME HEALTH CARE MONITORING ALARM SYSTEM
In this section, to illustrate the application of the framework, we consider the following scenario of a home health care monitoring alarm system adapted from the work of Paganelli and Giuli. 9 We built a home health care ontology from the scenario using OWL 2 RL and SWRL in Protégé. 17 A fragment of the ontology is depicted in Figure 3.
The dotted lines represent object/data properties between classes, and the solid lines represent ''subclass'' relations. A snapshot of an individual of the class ''Patient'' is given in Figure 5B, which shows the object and data properties associated with ''Tracy.'' The static behaviour of the system is captured using OWL 2 RL, and the dynamic behaviour is captured using SWRL rules; some SWRL rules are given in Figure 5A. The prototyping tool TOVRBA translates the ontology into a set of Horn clause rules, which are then used to create the agents of a multi-agent rule-based system in the Maude specification. The system consists of several concrete and abstract agents.
The concrete agents in the system include a number of home healthPC agents, pc_i, and a central Health Planner, p. Each pc_i agent is connected to two body sensor agents: a Blood pressure monitoring agent, b_i, and a Heart rate monitoring agent, h_i. The agents b_i and h_i are modelled as abstract agents. All the home healthPC agents pc_i can communicate with the agent p, which is located at the health centre. The agent p can also communicate with various other agents in the system, including doctors, nurses, relatives of patients, and an emergency operator. The overall picture of the system is depicted in Figure 4.
The abstract agents b_i and h_i measure the Blood pressure and Heart rate of a patient and inform the corresponding home healthPC, pc_i, using messages of the form Tell(h_k, pc_i, hasHeartRateFreq(?p, ?v)).
Upon receiving the Blood pressure and Heart rate information from the body sensor agents, the agent pc_i derives an alarm level by firing a sequence of rules from its knowledge base, including the rules shown in Figure 5.
The alarm level can be VeryLow, Low, Medium, or High, depending on the blood pressure and heart rate measurement values. The agent pc_i then sends the alarm level information to the agent p for the patient's health planning. In this system, the doctors, d_i, the nurses, n_i, the relatives of patients, r_i, and an emergency operator, e, are modelled as abstract agents. These abstract agents can notify the agent p of their availability status by sending messages of the form Available, NotAvailable, or Busy. If the agent p receives only Busy or NotAvailable messages within a fixed time interval, then the agent p alerts an emergency operator. At the same time, the agent p alerts the relative of the patient, but an acknowledgement is not required.
The Blood pressure and Heart rate sensor agents in the system generate measurement values at different times in the interval [1,5]. For example, the agent b_i generates blood pressure information for a patient named Tracy with systolic blood pressure 130mmHg using the following formula:

In this experiment, the rule priorities (from higher to lower) of the central Health Planner are assigned corresponding to the alarm levels High, Medium, Low, and VeryLow, respectively. The experimental results are reported in Table 2: for the 1-patient scenario, the system generates a Medium alarm; for the 2-patient scenario, the system generates a Medium alarm for one patient and a High alarm for the other; and for the 3-patient scenario, the system generates Medium alarms for two patients and a High alarm for the third. For ease of illustration, we modelled one doctor, one nurse, and one relative for each patient. In the one patient scenario, two concrete agents are modelled using 16 and 36 rules, respectively; three abstract agents are modelled using one LTL formula each; and the other two abstract agents are modelled using two LTL formulas each. In the two patient scenario, three concrete agents are modelled using 16, 16, and 72 rules, respectively; four abstract agents are modelled using one LTL formula each; and the other seven abstract agents are modelled using two LTL formulas each. In the three patient scenario, four concrete agents are modelled using 16, 16, 16, and 108 rules, respectively; four abstract agents are modelled using one LTL formula each; nine abstract agents are modelled using two LTL formulas each; and one abstract agent is modelled using three LTL formulas. The Maude encoding can be found online.
We verify the following properties of the system:

The above property specifies that the healthPC classifies the alarm level as Medium in n time steps, while the message counter value of the healthPC is m, when the systolic blood pressure, diastolic blood pressure, and heart rate values are 130mmHg, 85mmHg, and 30bps, respectively.
The above property says that whenever a patient's alarm level is classified (in this case, Medium), the patient's home healthPC informs the planner p, and the planner receives the classification message within n time steps. However, when we assign a value to n that is less than 7 in Prop1, less than 2 in Prop2, or less than 3 in Prop3, the properties are verified as false and the model checker returns counterexamples. Similarly, when we assign a value to m that is less than 3, Prop1 is verified as false. This also provides some assurance of the correctness of the encoding, in that the model checker does not return true for arbitrary values of n and m. Note that verification of true formulas takes longer than verification of false formulas, since a model checker finds a counterexample faster than it can explore the whole model. For example, when the model checker returns counterexamples, it spends 0.04 seconds for the one patient scenario, 0.04 seconds for the two patient scenario, and 0.2 seconds for the three patient scenario. It should also be noted that the value of n depends on the experimental setup. For example, the value of n is 3 when verifying Prop3 for the three patient scenario in which the planner has to contact the emergency operator for one patient with a Medium alarm (because it received Busy acknowledgements from both the doctor and the nurse), while for the other patient with a Medium alarm it received a positive acknowledgement from the doctor. However, the value of n is 4 when verifying Prop3 for the three patient scenario in which the planner has to contact the emergency operator for both patients with Medium alarms (because it received Busy acknowledgements from the doctor and nurse for both patients). The results are summarised in Table 2.

CASE STUDY 2: A SYNTHETIC DISTRIBUTED REASONING PROBLEM
To illustrate the scalability of our approach, we reimplemented an example scenario introduced in the work of Alechina et al, 7 for which preliminary results were reported in another work of the same authors. 8 In this scenario, a system of communicating reasoners attempts to solve a distributed reasoning problem in which the set of rules and facts describing the agents' knowledge bases is constructed from a complete binary tree. For example, a complete binary tree with 8 leaf facts has the following set of rules: In the work of Alechina et al, 7 variations on this synthetic ''binary tree'' problem have been used, with the A_i being the leaves and the goal formula being the root of the tree (see Figure 6). As already mentioned, we use ontology-driven rules to exploit an ontology and SWRL rules in designing a rule-based multi-agent system, which facilitates capturing and designing critical elements of a real-world application.
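The rule listing referred to above is not reproduced in this excerpt. For a complete binary tree with 8 leaf facts A1, …, A8, each rule derives a parent node fact from its two child facts, so the rule set presumably takes the following shape (the internal fact names B_i, C_i, and the root D1 are illustrative):

```text
A1, A2 -> B1    A3, A4 -> B2    A5, A6 -> B3    A7, A8 -> B4
B1, B2 -> C1    B3, B4 -> C2
C1, C2 -> D1
```

The goal formula is then the root fact D1, and the problem size is parametrised simply by the number of leaf facts.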
This synthetic distributed reasoning problem, which is not based on ontologies, is considered here because it can be easily parametrised by the number of leaf facts to increase or decrease the problem size.

Analysis of experimental results
In the work of Alechina et al, 7 the results of various experiments on the binary tree problems using the Mocha model checker 25 are reported. In the simplest case of a single agent, the largest problem that could be verified using Mocha had 128 leaf facts, as shown in Table 3. Using our TOVRBA tool, however, we are able to verify a system with 2048 leaf facts. This was modelled as a single concrete agent with varying numbers of facts and rules; the experimental results are summarised in Table 4 (#Agents = 1). In the case of multi-agent systems, the exchange of information between agents was modelled using a Comm operation, which requires special communication rules. In the work of Alechina et al, 7 using Mocha, we were able to verify a multi-agent system consisting of two agents with 16 leaf facts. An invariant property of the form AG¬(B_1 φ ∨ B_2 φ) (where φ represents the root node) was verified when the odd position node facts were assigned to one agent and the even position node facts to the other agent in the system. In our reimplementation, communication between agents is achieved using Ask and Tell actions. Figure 7 depicts an experimental performance comparison of the strategy-based and non-strategy-based encodings, which indicates that much larger systems can be verified using our new approach.

RELATED WORK
The idea of integrating ontologies and multi-agent systems has been explored in numerous research efforts. 2,29 Gâteau 30 proposed a smart IoT middleware for comfort management that integrates ontologies and multi-agent systems using JaCaMo 31 for multi-agent programming; the ontology plays a vital role in selecting the best action in the multi-agent system whenever an event occurs. There has also been considerable work on rule-based agents and on model checking multi-agent systems. Subercaze and Maret 1 presented a semantic agent model that allows SWRL programming of agents. A Java interpreter has been developed that communicates with the knowledge base using the Protégé-OWL API; the prototype tool takes advantage of the Java-based agent platform JADE, which provides agent registration, service discovery, and message passing, and the framework supports FIPA-ACL for agent communication. Mousavi et al 32 presented an ontology-driven reasoning system based on the BDI agent model. 33 In contrast to Jadex (which uses an XML format to represent agents' plans, beliefs, and goals), their framework uses an ontology (in OWL format) to represent agents' beliefs, plans, and events. The Java-based tool JADE was used to implement the agents, and Protégé OWL was used to create the ontology. To illustrate the use of the framework, a simple Mobile Workforce Brokering System (a multi-agent system that automates the process of allocating tasks to mobile workforces) was modelled for simulation. In another work, 34 the Datalaude system is presented, which essentially implements a Datalog interpreter in Maude. However, the encoding of rules and the rule execution strategy are very different from those proposed in this paper, in that Datalaude uses functional modules and implements a backward chaining rule execution strategy.
The aim of the Datalaude project is not to analyse Datalog programs as such, but to provide a fast and 'declarative' (in the sense of functional programming) specification of memory management in Java programs. The example application in the aforementioned work 34 uses Datalog facts to represent information about references, and some simple rules ensure transitivity of the reference relation. While a number of ontology-driven modelling and reasoning approaches 1,32 have been developed for multi-agent systems, to our knowledge, tools for the automated formal verification of such systems are lacking. In the work of Rakib and Ul Haque, 35 we used the technique presented in this paper to model and verify resource-bounded context-aware systems; however, all the agents used in that case study were modelled as concrete agents. In the literature, there have been many other approaches to alleviating the state space explosion problem, including verification approaches based on compositional reasoning. 36 In compositional reasoning, a property φ to be verified is decomposed into sub-properties that describe the behaviour of small components of the system. The sub-properties are verified for the corresponding components; the system then satisfies φ if all the sub-properties are satisfied locally and their conjunction implies φ. In contrast, our approach to verification using abstraction does not decompose φ into sub-properties: the property φ is verified in the whole system. However, we construct the system using a hierarchical composition in which the LTL properties can be previously verified properties of non-abstract versions of an abstract agent or set of abstract agents.

CONCLUSIONS AND FUTURE WORK
In this paper, we have proposed an approach to modelling, specifying, and verifying response time guarantees of ontology-driven multi-agent rule-based systems. To design ontologies, we use the OWL 2 RL language because it is more expressive than RDFS and is suitable for the design and development of rule-based systems; an OWL 2 RL ontology can be translated into a set of Horn clause rules based on DLP. 12 Furthermore, we express more complex rule-based concepts using SWRL, which allows us to write rules over OWL concepts. We showed how the Maude LTL model checker can be used to verify desired system properties, including response-time guarantees of the form: if the system receives a query, then a response will be produced within n time steps. We described the results of experiments on a simple health care monitoring system, and we presented a strategy-based efficient encoding of rule-based multi-agent systems for LTL, compared with our previously presented encoding for CTL. In future work, we plan to evaluate our approach on more real-life examples of Semantic Web and rule-based systems and to enhance our framework for designing and verifying situation-aware ambient intelligence systems.