TLA+ is a tried-and-tested, widely adopted language for specifying systems. However, formal verification research has made huge leaps forward since the TLA+ tooling was devised in the late 1990s. Our research team has been actively working to close this gap between state-of-the-art formal verification and the TLA+ tooling. To that end, we have been developing Apalache, a symbolic model checker for TLA+. Apalache leverages the power of SMT solvers to reason about states and transitions as logical constraints, rather than enumerating individual states and transitions.
For a quick example, imagine that we want to answer this question:
Are there m and n in the range from 2 to 1000000 such that m * n = 999999?
Or, in TLA+ terms:
\E m, n \in 2..1000000: m * n = 999999 /\ answer' = TRUE
Apalache finds an answer in less than a second, e.g., m = 333333 and n = 3. Had we tried it with TLC -- the standard model checker for TLA+ -- we would have had to wait for ages. See factorization.tla for a complete example.
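For reference, a minimal specification along these lines could look as follows. This is only a sketch and does not necessarily match the actual factorization.tla; the operator and invariant names are ours:

------------------------- MODULE factorization -------------------------
EXTENDS Integers

VARIABLES
  \* the candidate factors and a flag that records whether we found them
  m, n, answer

Init ==
  m = 1 /\ n = 1 /\ answer = FALSE

Next ==
  \E i, j \in 2..1000000:
    i * j = 999999 /\ m' = i /\ n' = j /\ answer' = TRUE

\* the invariant to check: a counterexample to it is a factorization
NoSolution == ~answer
=========================================================================

Checking the invariant NoSolution with Apalache (for instance, apalache-mc check --inv=NoSolution factorization.tla; the exact invocation depends on your installation) yields a counterexample that contains the two factors.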
The above example reads like a foreign language to many engineers, and for good reason. Hence, our first goal with the TLA+ tooling is to improve the developer experience. Our second goal is to integrate TLA+ specifications with real code. We are currently pursuing the latter via Model-Based Testing, where complex test cases for real code are generated from small amounts of TLA+.
TLA+ is an untyped language. This suits the TLC model checker well, as it checks values dynamically while exploring states. However, the untyped nature of the language poses challenges for tools that use SMT in the backend. For instance, both Apalache and TLAPS do some form of type inference for TLA+ in the background. So far, both tools have been hiding type inference and types from users. With the new type checker Snowcat (see below), users can use types as an independent analysis tool! We believe that a common infrastructure for typed TLA+ will boost new ideas for TLA+ tooling.
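To see why types matter for an SMT backend, consider a toy operator like the following (our own illustration, not taken from any particular spec):

\* Without type information, f could be a function, a sequence, or a tuple,
\* and each of these needs a different SMT encoding.
Get(f) == f[1]

An annotation such as @type: Seq(Int) => Int; resolves the ambiguity.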
Check the Talk at Interchain Conversations to see how we are using TLA+ and Apalache at Informal Systems.
Much of our focus over the past six months has been on improving the stability and usability of the model checker. After receiving feedback from our internal users at Informal Systems, we found that many users never reached the stage at which they could see the power of the model checker. Instead, they were stumbling over three issues:
How do I write TLA+? The documentation is scattered across many sources, and some of it is significantly outdated.
Why is Apalache complaining about types? The old type checker in Apalache is ad hoc and optimized for the model checker, not for its users.
Why is Apalache complaining about assignments? Again, the model checker was not helping users understand what it was doing.
We focused on these three issues in Q3-Q4.
The learning material on TLA+ is scattered across many resources. In the canonical book Specifying Systems, Leslie Lamport writes about TLA+ Version 1, whereas the tools support TLA+ Version 2, which introduced recursive operators and lambdas. Moreover, the resources on learning TLA+ are too focused on the mathematical principles of the language. This causes a lot of confusion for newcomers and complicates the onboarding process. We have started to write a new TLA+ manual for engineers and to collect TLA+ idioms.
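To illustrate what Version 2 adds, here is a toy example of a recursive operator and a lambda (our own example, not from the book):

\* a recursive operator, introduced in TLA+ Version 2
RECURSIVE Fact(_)
Fact(n) == IF n <= 1 THEN 1 ELSE n * Fact(n - 1)

\* a higher-order operator...
Map(Op(_), S) == { Op(x) : x \in S }
\* ...and a lambda passed as its argument, also a Version 2 feature
SquaresOf(S) == Map(LAMBDA x: x * x, S)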
We have already received positive feedback from our users, and we would be happy to hear from you too!
Having seen our users' irritation with the old type checker, we decided to rewrite it completely. We started by writing RFC001 and collecting feedback from the core developers of TLA+, including Markus Kuppe and Leslie Lamport. The results of this discussion materialized in ADR002, which presents our new approach to type annotations and the underlying type system. Jumping forward to Q1 of 2021, we have recently introduced Java-like annotations in ADR004.
For example, here is a snippet from QueensTyped.tla that demonstrates the new type annotations:
\* @type: Seq(Int) => Bool;
IsSolution(queens) ==
  \A i \in 1 .. Len(queens) - 1:
    \A j \in i + 1 .. Len(queens):
      ~Attacks(queens, i, j)
We presented a prototype of the type checker at the TLA+ community event in the talk Type inference for TLA+ in Apalache. In that talk, we also featured an approach to fully automatic type inference for TLA+ using SMT solvers. However, a fully automatic type inference tool requires much more work from the Apalache development team.
You can try the new type checker Snowcat right now.
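As a quick illustration, state variables are annotated in the same style. The snippet below is a sketch with made-up variable names; see ADR004 for the authoritative syntax:

VARIABLES
  \* @type: Int;
  balance,
  \* @type: Set(Str);
  accounts

Snowcat is invoked via Apalache's typecheck command (for example, apalache-mc typecheck MySpec.tla; the exact invocation may differ depending on your installation).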
In theory, TLA+ does not have assignments: by design, users should focus on logical constraints rather than thinking in terms of memory and statements. In practice, the model checker TLC interprets some expressions, such as x' = 10 and x' \in { 1, 2, 3 }, as assignments to the variable x.
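For example, in a toy action like the one below (our own example), TLC treats the primed expressions as assignments that produce the successor values of x:

VARIABLE x

Next ==
  \/ x' = 10               \* "assign" 10 to x in the next state
  \/ x' \in { 1, 2, 3 }    \* non-deterministically "assign" 1, 2, or 3 to x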
When we started working on Apalache back in 2016, we realized that TLC's practical approach was quite appealing, as users were already running TLC against their specs. However, TLC discovers assignments while exploring the state space, that is, it finds assignments dynamically, as it goes. We had to transfer this idea to static analysis. Our first solution used SMT, and it could even find assignments that went against the expected flow of a TLA+ formula.
Although this approach was a good solution from an academic point of view, we found that users had trouble understanding the error messages produced by this technique. Moreover, the tool was producing flaky results, as SMT solvers are not guaranteed to return the same solution every time.
In Q4 of 2020, we decided that less is more! The new version of Apalache follows pretty much the same approach to finding assignments as TLC. However, Apalache finds assignments statically. In theory, this means that Apalache can reject some specs that TLC would have accepted; in practice, it works just fine. You can check the details in the manual chapter on Assignments in Apalache.
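As a rough illustration (our own sketch; the manual chapter has the precise rules), Apalache decomposes Next into symbolic transitions and expects each of them to assign every variable exactly once, with the assignment occurring before any other use of the primed variable:

VARIABLES x, y

Next ==
  \* two symbolic transitions; each of them assigns both x' and y'
  \/ x' = x + 1 /\ y' = y
  \/ y' = y - 1 /\ x' = x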
In Q1 of 2021, we have been preparing our first alpha release, which we call Alpha Centauri. So far, Apalache has been labelled as experimental: we were breaking lots of things and trying different approaches to model checking TLA+. Now it is time to introduce some stability.
We have switched from a sporadic release schedule to weekly releases (actually, we had no release schedule). This helps us to get user feedback faster. Moreover, should we break a fresh release, users always have access to a whole range of more stable versions.
The final challenge before issuing Alpha Centauri is to integrate the types computed by the new type checker Snowcat into the model checker.
You can track our progress towards the alpha release by checking the issues that are tagged with Alpha Centauri.
In the meantime, take Apalache for a spin and let us know what you think! If you're new to TLA+, be sure to read our TLA+ manual for engineers.