
Hossein Hojjat: Publications and manuscripts

For details on each publication, click on the publication [number]. For pdf, click on the publication title. Publication numbers are for reference, not for (bean)counting purposes.
[24] Life on the Edge: Unraveling Policies into Configurations. 13th ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS). 2017. [ bib | http | list ]
Current frameworks for network programming assume that the network comprises a set of homogeneous devices that can be rapidly reconfigured in response to changing policies and network conditions. Unfortunately, these assumptions are incompatible with the realities of modern networks, which contain legacy devices that offer diverse functionality and can only be reconfigured slowly. Additionally, network service providers need to walk a fine line between providing flexibility to users, and maintaining the integrity and reliability of their core networks. These issues are particularly evident in optical networks, which are used by ISPs and WANs and provide high bandwidth at the cost of limited flexibility and long reconfiguration times. This paper presents a different approach to implementing high-level policies, by pushing functionality to the edge and using the core merely for transit. Building on the NetKAT framework and leveraging linear programming solvers, we develop techniques for analyzing and transforming policies into configurations that can be installed at the edge of the network. Furthermore, our approach can be extended to incorporate constraints that are crucial in the optical domain, such as path constraints. We develop a working implementation using off-the-shelf solvers and evaluate our approach on realistic optical topologies.

[23] The FMCAD 2016 graduate student forum. 16th Conference on Formal Methods in Computer-Aided Design (FMCAD). 2016. [ bib | http | list ]
The FMCAD Student Forum provides a platform for graduate students at any career stage to introduce their research to the wider Formal Methods community and solicit feedback. In 2016, the event took place in Mountain View, California, as an integral part of the FMCAD conference. Ten students were invited to give a short talk and present a poster illustrating their work. The presentations covered a broad range of topics in the field of verification and synthesis, including automated reasoning, model checking of hardware, software, and hybrid systems, verification and synthesis of networks, and application of artificial intelligence techniques to circuit design.

[22] Optimizing horn solvers for network repair. 16th Conference on Formal Methods in Computer-Aided Design (FMCAD). 2016. [ bib | http | list ]
Automatic program repair modifies a faulty program to make it correct with respect to a specification. Previous approaches have typically been restricted to specific programming languages and a fixed set of syntactic mutation techniques, e.g., changing the conditions of if statements. We present a more general technique based on repairing sets of unsolvable Horn clauses. Working with Horn clauses enables repairing programs from many different source languages, but also introduces challenges, such as navigating the large space of possible repairs. We propose a conservative semantic repair technique that only removes incorrect behaviors and does not introduce new behaviors. Our proposed framework allows the user to request the best repairs: it constructs an optimization lattice representing the space of possible repairs, and uses a novel local search technique that exploits heuristics to avoid searching through sub-lattices with no feasible repairs. To illustrate the applicability of our approach, we apply it to problems in software-defined networking (SDN), and demonstrate how it is able to help network operators fix buggy configurations by properly filtering undesired traffic. We show that interval and Boolean lattices are effective choices of optimization lattices in this domain, and we enable optimization objectives such as modifying the minimal number of switches. We have implemented a prototype repair tool, and present preliminary experimental results on several benchmarks using real topologies and realistic repair scenarios in data centers and congested networks.
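The repair-by-removal idea can be made concrete on a propositional toy. The sketch below is our illustration, not the paper's tool or algorithm; the clause names (at_s1, at_bad, etc.) are hypothetical. It encodes forwarding behavior as Horn clauses, treats a derivable false as an unsolvable system, and searches for a smallest set of removable clauses whose deletion restores solvability, i.e., a repair that only removes behaviors:

```python
# Toy "repair by removing behaviors" over propositional Horn clauses
# (our illustration, not the paper's algorithm).  A clause is
# (body, head); head None encodes a goal clause body -> false.
from itertools import combinations

def derives_false(clauses):
    """Forward chaining: is false derivable (system unsolvable)?"""
    facts = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= facts:
                if head is None:
                    return True          # goal clause fired: unsolvable
                if head not in facts:
                    facts.add(head)
                    changed = True
    return False

def minimal_repair(clauses, removable):
    """Smallest subset of removable clauses whose deletion makes the
    system solvable (a conservative repair: behaviors are only removed)."""
    for k in range(len(removable) + 1):
        for drop in combinations(removable, k):
            if not derives_false([c for c in clauses if c not in drop]):
                return list(drop)
    return None

# Hypothetical network: traffic reaches s1, s1 forwards to s2,
# s2 forwards to a forbidden host -> policy violation (false).
facts  = [((), "at_s1"), (("at_s1",), "at_s2"), (("at_s2",), "at_bad")]
policy = [(("at_bad",), None)]           # reaching at_bad is forbidden
print(minimal_repair(facts + policy, removable=facts[1:]))
```

Dropping a single forwarding clause suffices here: the repair filters the undesired traffic before it reaches the forbidden host, a miniature of the paper's "filter undesired traffic" scenario.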

[21] Event-driven network programming. 37th annual ACM SIGPLAN conference on Programming Language Design and Implementation (PLDI). 2016. [ bib | http | list ]
Software-defined networking (SDN) programs must simultaneously describe static forwarding behavior and dynamic updates in response to events. Event-driven updates are critical to get right, but difficult to implement correctly due to the high degree of concurrency in networks. Existing SDN platforms offer weak guarantees that can break application invariants, leading to problems such as dropped packets, degraded performance, security violations, etc. This paper introduces event-driven consistent updates that are guaranteed to preserve well-defined behaviors when transitioning between configurations in response to events. We propose network event structures (NESs) to model constraints on updates, such as which events can be enabled simultaneously and causal dependencies between events. We define an extension of the NetKAT language with mutable state, give semantics to stateful programs using NESs, and discuss provably-correct strategies for implementing NESs in SDNs. Finally, we evaluate our approach empirically, demonstrating that it gives well-defined consistency guarantees while avoiding expensive synchronization and packet buffering.

[20] Efficient Synthesis of Network Updates. 36th annual ACM SIGPLAN conference on Programming Language Design and Implementation (PLDI). 2015. [ bib | list ]
Software-defined networking (SDN) is revolutionizing the networking industry, but current SDN programming platforms do not provide automated mechanisms for updating global configurations on the fly. Implementing updates by hand is challenging for SDN programmers because networks are distributed systems with hundreds or thousands of interacting nodes. Even if initial and final configurations are correct, naively updating individual nodes can lead to incorrect transient behaviors, including loops, black holes, and access control violations. This paper presents an approach for automatically synthesizing updates that are guaranteed to preserve specified properties. We formalize network updates as a distributed programming problem and develop a synthesis algorithm based on counterexample-guided search and incremental model checking. We describe a prototype implementation, and present results from experiments on real-world topologies and properties demonstrating that our tool scales to updates involving over one-thousand nodes.
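The transient hazards the paper targets can be seen on a three-switch line topology. The sketch below is our simplification, not the paper's counterexample-guided synthesis: the switch names and configurations are invented, and a simple downstream-first update order stands in for the synthesized schedule. It checks that every intermediate configuration still delivers packets to the destination:

```python
# Toy ordered network update (our simplification, not the paper's
# synthesis algorithm).  On a line topology, installing new next-hops
# downstream-first keeps every transient configuration loop-free and
# free of black holes.
def forwards_to_dst(cfg, src, dst, max_hops=10):
    node = src
    for _ in range(max_hops):
        if node == dst:
            return True
        node = cfg.get(node)     # None models a black hole (dropped packets)
        if node is None:
            return False
    return False                 # hop bound exceeded: a forwarding loop

old = {"s1": "s2", "s2": "s3", "s3": "d"}   # hypothetical initial config
new = {"s1": "s3", "s2": "s1", "s3": "d"}   # hypothetical final config

cfg = dict(old)
for switch in ["s3", "s1", "s2"]:           # downstream-first on new paths
    cfg[switch] = new[switch]
    assert all(forwards_to_dst(cfg, s, "d") for s in cfg)

# Updating s2 first instead would create the transient loop s1 -> s2 -> s1.
print("all transient configurations deliver to d")
```

Naively updating s2 before s1 makes s2 point at s1 while s1 still points at s2, a transient loop; finding an order (or showing none exists) for thousands of interacting switches is what motivates automated synthesis.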

[19] The Homeostasis Protocol: Avoiding Transaction Coordination Through Program Analysis. SIGMOD. 2015. [ bib | list ]
Datastores today rely on distribution and replication to achieve improved performance and fault-tolerance. But correctness of many applications depends on strong consistency properties, something that can impose substantial overheads, since it requires coordinating the behavior of multiple nodes. This paper describes a new approach to achieving strong consistency in distributed systems while minimizing communication between nodes. The key insight is to allow the state of the system to be inconsistent during execution, as long as this inconsistency is bounded and does not affect transaction correctness. In contrast to previous work, our approach uses program analysis to extract semantic information about permissible levels of inconsistency and is fully automated. We then employ a novel homeostasis protocol to allow sites to operate independently, without communicating, as long as any inconsistency is governed by appropriate treaties between the nodes. We discuss mechanisms for optimizing treaties based on workload characteristics to minimize communication, as well as a prototype implementation and experiments that demonstrate the benefits of our approach on common transactional benchmarks.

[18] Network Updates for the Impatient: Eliminating Unnecessary Waits. 1st Workshop on PL and Verification Technology for Networking. 2015. [ bib | list ]
[17] On recursion-free Horn clauses and Craig interpolation. Formal Methods in System Design. 2014. [ bib | http | list ]
One of the main challenges in software verification is efficient and precise analysis of programs with procedures and loops. Interpolation methods remain among the most promising techniques for such verification. To accommodate the demands of various programming language features, over the past years several extended forms of interpolation have been introduced. We give a precise ontology of such extended interpolation methods, and investigate the relationship between interpolation and fragments of constrained recursion-free Horn clauses. We also introduce a new notion of interpolation, disjunctive interpolation, which solves a more general class of problems in one step compared to previous notions of interpolants, such as tree interpolants or inductive sequences of interpolants. We present algorithms and complexity results for the construction of interpolants, as well as the corresponding decision problems for recursion-free Horn fragments. Finally, we give an extensive empirical evaluation using a solver for (recursive) Horn problems, in particular comparing the performance of tree interpolation and disjunctive interpolation for constraints modelling software verification tasks.
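The basic notion underlying all these variants, a binary Craig interpolant, can be checked by brute force on a toy integer example. The formulas A, B, and the candidate interpolant I below are our illustration, verified only over a small finite domain, not a general-purpose procedure:

```python
# Checking the Craig-interpolant definition on a toy example (ours,
# not from the paper).  A(x, y) and B(y, z) are jointly unsatisfiable;
# I must be implied by A, inconsistent with B, and mention only the
# shared variable y.
from itertools import product

A = lambda x, y: y == x + 1 and x <= 0    # "left" formula over {x, y}
B = lambda y, z: y > 2 and z == y         # "right" formula over {y, z}
I = lambda y: y <= 1                      # candidate interpolant over {y}

dom = range(-5, 6)                        # brute-force over a small domain
assert all(I(y) for x, y in product(dom, dom) if A(x, y))        # A => I
assert not any(I(y) and B(y, z) for y, z in product(dom, dom))   # I & B unsat
print("y <= 1 is an interpolant for (A, B) on this domain")
```

Tree and disjunctive interpolation generalize this binary case to systems of such constraints, which is what makes the Horn-clause correspondence natural.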

[16] Horn Clauses for Communicating Timed Systems. Workshop on Horn Clauses for Verification and Synthesis (HCVS). 2014. [ bib | list ]
Languages based on the theory of timed automata are a well established approach for modelling and analysing real-time systems, with many applications in both industrial and academic contexts. Model checking for timed automata has been studied extensively during the last two decades; however, even now industrial-grade model checkers are available only for a few timed automata dialects (in particular Uppaal timed automata), exhibit limited scalability for systems with large discrete state space, or cannot handle parametrised systems. We explore the use of Horn constraints and off-the-shelf model checkers for analysis of networks of timed automata. The resulting analysis method is fully symbolic and applicable to systems with large or infinite discrete state space, and can be extended to include various language features, for instance Uppaal-style communication/broadcast channels and BIP-style interactions, and systems with infinite parallelism. Experiments demonstrate the feasibility of the method.

[15] Classifying and Solving Horn Clauses for Verification. 5th Working Conference on Verified Software: Theories, Tools and Experiments (VSTTE). 2013. [ bib | list ]
As a promising direction to overcome difficulties of verification, researchers have recently proposed the use of Horn constraints as intermediate representation. Horn constraints are related to Craig interpolation, which is one of the main techniques used to construct and refine abstractions in verification, and to synthesise inductive loop invariants. We give a classification of the different forms of Craig interpolation problems found in the literature, and show that all of them correspond to natural fragments of (recursion-free) Horn constraints. For a logic that has the binary interpolation property, all of these problems are solvable, but have different complexity. In addition to presenting the theoretical classification and solvability results, we present a publicly available collection of benchmarks to evaluate solvers for Horn constraints, categorized according to our classification. The benchmarks are derived from real-world verification problems. The behavior with our tools as well as with the Z3 prover indicates the importance of Horn clause solving as distinct from the general problem of solving quantified constraints by quantifier instantiation.

[14] Disjunctive Interpolants for Horn-Clause Verification. 25th international conference on Computer Aided Verification (CAV). 2013. [ bib | list ]
One of the main challenges in software verification is efficient and precise compositional analysis of programs with procedures and loops. Interpolation methods remain among the most promising techniques for such verification, and are closely related to solving Horn clause constraints. We introduce a new notion of interpolation, disjunctive interpolation, which solves a more general class of problems in one step compared to previous notions of interpolants, such as tree interpolants or inductive sequences of interpolants. We present algorithms and complexity results for the construction of disjunctive interpolants, as well as their use within an abstraction-refinement loop. We have implemented Horn clause verification algorithms that use disjunctive interpolants and evaluate them on benchmarks expressed as Horn clauses over the theory of integer linear arithmetic.

[13] Accelerating Interpolants. Automated Technology for Verification and Analysis (ATVA). 2012. [ bib | list ]
We present Counterexample-Guided Accelerated Abstraction Refinement (CEGAAR), a new algorithm for verifying infinite-state transition systems. CEGAAR combines interpolation-based predicate discovery in counterexample-guided predicate abstraction with an acceleration technique for computing the transitive closure of loops. CEGAAR applies acceleration to dynamically discovered looping patterns in the unfolding of the transition system, and combines overapproximation with underapproximation. It constructs inductive invariants that rule out an infinite family of spurious counterexamples, alleviating the problem of divergence in predicate abstraction without losing its adaptive nature. We present theoretical and experimental justification for the effectiveness of CEGAAR, showing that inductive interpolants can be computed from classical Craig interpolants and transitive closures of loops. We present an implementation of CEGAAR that verifies integer transition systems. We show that the resulting implementation robustly handles a number of difficult transition systems that cannot be handled using interpolation-based predicate abstraction or acceleration alone.

[12] A Verification Toolkit for Numerical Transition Systems (Tool Paper). 16th International Symposium on Formal Methods (FM). 2012. [ bib | list ]
This paper presents a publicly available toolkit and a benchmark suite for rigorous verification of Integer Numerical Transition Systems (INTS), which can be viewed as control-flow graphs whose edges are annotated by Presburger arithmetic formulas. We present FLATA and ELDARICA, two verification tools for INTS. The FLATA system is based on precise acceleration of the transition relation, while the ELDARICA system is based on predicate abstraction with interpolation-based counterexample-driven refinement. The ELDARICA verifier uses the PRINCESS theorem prover as a sound and complete interpolating prover for Presburger arithmetic. Both systems can solve several examples for which previous approaches failed, and present a useful baseline for verifying integer programs. The infrastructure is a starting point for rigorous benchmarking, competitions, and standardized communication between tools.

[11] Symbolic Execution of Reo Circuits Using Constraint Automata. Science of Computer Programming. 2012. [ bib | http | list ]
Reo is a coordination language that can be used to model different systems. We propose a technique for symbolic execution of Reo circuits using the symbolic representation of data constraints in Constraint Automata. This technique enables us to obtain the relations among the data that pass through different nodes in a circuit from which we infer the coordination patterns of the circuit. Deadlocks, which may involve data values, can also be checked using reachability analysis. As an alternative to constructing the symbolic execution tree, we use regular expressions and their derivatives which we obtain from our deterministic finite automata. We show that there is an upper bound of one for unfolding the cycles in constraint automata which is enough to reveal the relations between symbolic representations of inputs and outputs. In the presence of feedback in a Reo circuit a number of substitutions are necessary to make the relationship between successive input/output values explicit. The number of these substitutions depends on the number of buffers in the Reo circuit, and can be found by static analysis. We illustrate our technique on a set of Reo circuits to show the extracted relations between inputs and outputs. We have implemented a tool to automate our proposed technique.
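The derivative-based alternative to a symbolic execution tree can be sketched with Brzozowski derivatives over ordinary regular expressions. This is a minimal re-implementation of the standard construction, not the paper's tool; the expression (ab)*a below models alternating reads on two hypothetical channel ends:

```python
# Brzozowski derivatives of regular expressions (standard textbook
# construction, used here to illustrate the derivative-based approach).
# Expressions are nested tuples tagged by their top operator.
EMPTY, EPS = ("empty",), ("eps",)
def sym(c):     return ("sym", c)
def alt(r, s):  return ("alt", r, s)
def seq(r, s):  return ("seq", r, s)
def star(r):    return ("star", r)

def nullable(r):
    """Does the language of r contain the empty word?"""
    t = r[0]
    if t == "eps":  return True
    if t in ("empty", "sym"): return False
    if t == "alt":  return nullable(r[1]) or nullable(r[2])
    if t == "seq":  return nullable(r[1]) and nullable(r[2])
    return True     # star

def deriv(r, c):
    """Derivative of r with respect to symbol c."""
    t = r[0]
    if t in ("empty", "eps"): return EMPTY
    if t == "sym":  return EPS if r[1] == c else EMPTY
    if t == "alt":  return alt(deriv(r[1], c), deriv(r[2], c))
    if t == "seq":
        d = seq(deriv(r[1], c), r[2])
        return alt(d, deriv(r[2], c)) if nullable(r[1]) else d
    return seq(deriv(r[1], c), r)   # star

def matches(r, word):
    for c in word:
        r = deriv(r, c)
    return nullable(r)

# (ab)*a : alternating observations on hypothetical channel ends a and b
fifo_like = seq(star(seq(sym("a"), sym("b"))), sym("a"))
print(matches(fifo_like, "aba"))   # True
print(matches(fifo_like, "ab"))    # False
```

Repeatedly taking derivatives plays the role of unfolding the symbolic execution tree: each derivative is the "rest" of the behavior after one observed data flow, which is how the cycle-unfolding bound in the paper avoids building the tree explicitly.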

[10] Formal Analysis of SystemC Designs in Process Algebra. Fundamenta Informaticae. 2011. [ bib | http | list ]
SystemC is an IEEE standard system-level language used in hardware/software co-design and has been widely adopted in the industry. This paper describes a formal approach to verifying SystemC codes by providing a mapping to the process algebra mCRL2. The outstanding advantages of mCRL2 are the support for different data types and a powerful tool-set for model reduction and analysis. A tool is implemented to automatically perform the proposed mapping. This translation enabled us to exploit process algebraic verification techniques to analyze a number of case-studies, including the formal analysis of a single-cycle and a pipelined MIPS processor specified in SystemC.

[9] Automated Analysis of Reo Circuits using Symbolic Execution. 8th International Workshop on the Foundations of Coordination Languages and Software Architectures (FOCLASA). 2009. [ bib | list ]
Reo is a coordination language that can be used to model different systems. We propose a technique for symbolic execution of Reo circuits using Constraint Automata and more specifically exploiting their data constraints. This technique enables us to obtain the relations among the data passing through different nodes in a circuit and also infer coordination patterns. As an alternative to constructing the symbolic execution tree, we propose an algorithm similar to the algorithms used for converting deterministic finite automata to regular expressions. Our technique can be used for analyzing data-dominated systems, whereas the model checking approach is best suited for the study of control-dominated applications. Deadlocks, which may involve data values, can also be checked. We illustrate the technique on a set of data-dominated circuits as well as a non-trivial critical section problem. A tool is implemented to automate the proposed technique.

[8] Application of Process Algebraic Verification and Reduction Techniques to SystemC Designs. Technische Universiteit Eindhoven Technical Report CSR-08-15. 2008. [ bib | list ]
SystemC is an IEEE standard system-level language used in hardware/software co-design and has been widely adopted in the industry. This paper describes a formal approach to verifying SystemC designs by providing a mapping to the process algebra mCRL2. Our mapping formalizes both the simulation semantics as well as exhaustive state-space exploration of SystemC designs. By exploiting the existing reduction techniques of mCRL2 and also its model-checking tools, we efficiently locate the race conditions in a system and resolve them. A tool is implemented to automatically perform the proposed mapping. This mapping and the implemented tool enabled us to exploit process-algebraic verification techniques to analyze a number of case-studies, including the formal analysis of a single-cycle and a pipelined MIPS processor specified in SystemC.

[7] Process Algebraic Verification of SystemC Codes. 8th International Conference on Application of Concurrency to System Design (ACSD). 2008. [ bib | http | list ]
SystemC is an IEEE standard system-level language used in hardware/software co-design and has been widely adopted in the industry. This paper describes a formal approach to verifying SystemC codes by providing a mapping to the process algebra mCRL2. The outstanding advantages of mCRL2 are the support for different data types and a powerful tool-set for model reduction and analysis. A tool is implemented to automatically perform the proposed mapping. This translation enabled us to exploit process algebraic verification techniques to analyze a number of case-studies, including the formal analysis of a single-cycle and a pipelined MIPS processor specified in SystemC.

[6] A Framework for Performance Evaluation and Verification in Stochastic Process Algebras. 22nd ACM Symposium on Applied Computing, Software Verification Track (SV). 2007. [ bib | http | list ]
Despite the field's relatively short history, a wealth of formalisms exists for algebraic specification of stochastic systems. The goal of this paper is to give such formalisms a unifying framework for performance evaluation and functional verification. To this end, we propose an approach enabling a provably sound transformation from some existing stochastic process algebras, e.g., PEPA and MTIPP, to a generic form in the mCRL2 language. This way, we resolve the semantic differences among different stochastic process algebras themselves, on one hand, and between stochastic process algebras and classic ones, such as mCRL2, on the other hand. From the generic form, one can generate a state space and perform various functional and performance-related analyses, as we illustrate in this paper.

[5] Integrating Module Checking and Deduction in a Formal Proof for the Perlman Spanning Tree Protocol (STP). J.UCS Journal of Universal Computer Science. 2007. [ bib | http | list ]
In the IEEE 802.1D standard for the Media Access Control layer (MAC layer) bridges, there is an STP (Spanning Tree Protocol) definition, based on the algorithm that was proposed by Radia Perlman. In this paper, we give a formal proof of correctness of the STP algorithm by showing that eventually a single node is selected as the root of the tree and loops are eliminated correctly. We use formal inductive reasoning to establish these requirements. In order to ensure that the bridges behave correctly regardless of the topology of the surrounding bridges and LANs, the Rebeca modular verification techniques are applied. These techniques are shown to be efficiently applicable in model checking of open systems.

[4] Evaluation of part of speech tagging on Persian Text. 2nd Workshop on Computational Approaches to Arabic Script-based Languages (CAASL2). 2007. [ bib | list ]
One of the fundamental tasks in natural language processing is part of speech (POS) tagging. A POS tagger is a piece of software that reads text in some language and assigns a part of speech tag to each of the words. Our main interest in this research was to see how easy it is to apply methods used in a language such as English to a new and different language such as Persian (Farsi), and what the performance of such approaches would be. This paper presents an evaluation of several part of speech tagging methods on Persian text: a statistical tagging method, a memory-based tagging approach, and two different versions of Maximum Likelihood Estimation (MLE) tagging. The two MLE versions differ in the way they handle unknown words. We also demonstrate the value of simple heuristics and post-processing in improving the accuracy of these methods. These experiments have been conducted on a manually part-of-speech-tagged Persian corpus with over two million tagged words. The results of the experiments are encouraging and comparable with those for other languages such as English, German or Spanish.
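The simplest of the evaluated methods, MLE tagging with a backoff for unknown words, fits in a few lines. The toy training data and tag names below are our illustration, not the Persian corpus used in the paper:

```python
# Minimal MLE (most-frequent-tag) tagger with unknown-word backoff
# (a toy illustration of the technique, not the paper's system).
from collections import Counter, defaultdict

def train(tagged_sents):
    """Learn each word's most frequent tag; return the lexicon and the
    overall most frequent tag, used as backoff for unknown words."""
    by_word = defaultdict(Counter)
    overall = Counter()
    for sent in tagged_sents:
        for word, t in sent:
            by_word[word][t] += 1
            overall[t] += 1
    lex = {w: c.most_common(1)[0][0] for w, c in by_word.items()}
    return lex, overall.most_common(1)[0][0]

def tag(words, lex, default_tag):
    return [(w, lex.get(w, default_tag)) for w in words]

# hypothetical two-sentence training set
train_data = [[("the", "DET"), ("cat", "N"), ("sleeps", "V")],
              [("the", "DET"), ("dog", "N"), ("barks", "V")]]
lex, default = train(train_data)
print(tag(["the", "cat", "meows"], lex, default))
# "meows" is unseen, so it falls back to the overall most frequent tag
```

The two MLE variants in the paper differ precisely in this backoff step; replacing the single default tag with suffix-based heuristics is one of the simple post-processing improvements the paper measures.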

[3] Sarir: A Rebeca to mCRL2 Translator. 7th International Conference on Application of Concurrency to System Design (ACSD). 2007. [ bib | list ]
We describe a translation from Rebeca, an actor-based language, to mCRL2, a process algebra enhanced with data types. The main motivation is to exploit the verification tools and theories developed for mCRL2 in Rebeca. The mapping is applied to several case-studies including the tree identify phase of the IEEE 1394 standard. The results of the experiment show that the minimization tools of mCRL2 can be very effective and the outcome of the present translation outperforms that of the translation to the input language of the Spin model-checker.

[2] Formal verification of the object-based systems using process algebra. Master's Thesis, Tehran University ECE Dep. 2007. [ bib | list ]
The object paradigm has been recognized as one of the most popular approaches to modeling and programming. However, despite the remarkable advancements, the paradigm still seems to lack established formal treatment theories. In this thesis, we aim to provide a basis for formal reasoning about object-based languages. We pick the process algebraic style of formal analysis, which is a powerful approach to the modeling of concurrent systems. By providing a mapping from object-based languages to process algebra, we can utilize the capabilities of process algebra in real case studies coming from industry and commerce. We purposely use mCRL2, which is a process algebra enhanced with data types. The potential of mCRL2 has been proven in numerous earlier experiments, and moreover there is a powerful toolset for analysis and verification of mCRL2 models. In order to materialize our ideas, we select two object-based languages. The first one is Rebeca, an actor-based language for modeling concurrent and distributed systems. The second one, SystemC, comes from industry and is used for system-level design. The main advantage of our approach in comparison with earlier mappings of object-based languages to process algebra is our extensive use of the built-in data types of mCRL2, avoiding the redefinition of our own set of data types. Both mappings have been implemented as compilers, and a number of case studies have been checked in either case. The results of the Rebeca mapping show that dynamic partial order reduction can decrease the size of the state space more than the previously used static partial order methods, although it is slower and more time-consuming. We also found a deficiency in the static version of our partial order reductions, which can be solved by clustering. In the case of the SystemC mapping, we found and fixed a previously undiscovered bug in the implementation of a mini MIPS processor.

[1] Formal Verification of the IEEE 802.1D Spanning Tree Protocol Using Extended Rebeca. 1st IPM International Symposium on Fundamentals of Software Engineering (FSEN). 2006. [ bib | http | list ]
The STP (Spanning Tree Protocol), standardized as IEEE 802.1D, is used in many bridges and switches of networks. The algorithm tries to eliminate loops in bridged networks. In this study, the correctness of the STP algorithm is formally verified using Extended Rebeca. In order not to be confined to a specific case or set of cases, we used a compositional verification approach. This allows us to gain generality in verifying the algorithm. The clarity and convenience of model checking by means of Extended Rebeca suggest that this language can be used for verifying more network protocols in the future.