Program synthesis is an appealing notion: I give you a pretty good (but incomplete) description of a program, and you give me a working program back out. Flash Fill—program synthesis by example that ships in Excel—is the standard example that PL people trot out, and it is an incredible success. (Check out Sumit Gulwani’s POPL 2015 keynote (video in supplemental material) if you’ve never heard of it.)
Fiat Cryptography is another great example of program synthesis success. Where Flash Fill is programming by example, Fiat Cryptography is deductive synthesis: here, an expert manually refines a specification of arithmetic operations used in elliptic curve cryptography to achieve fast and verified implementations of the necessary primitives. Fiat Cryptography is used in BoringSSL, a part of Chrome and Android (check out the paper).
These two approaches are quite different: Flash Fill offers nearly instantaneous feedback as part of an existing interface; Fiat Cryptography is effectively a set of Coq lemmas and tactics, only approachable by experts. (You might wonder why they’re both called program synthesis! For that, I’d follow Sam Tobin-Hochstadt’s second-hand quote: “a synthesizer is just a compiler that doesn’t work”. That is, these tools are both synthesis because they take specifications as input and produce valid implementations as output… when they manage to produce output at all.)
Flash Fill and Fiat Cryptography are tech-transfer success stories working on starkly different timescales: Flash Fill is an end-user synthesis that takes examples and produces Excel formulae in a fraction of a second; Fiat Cryptography is a tool for experts to generate extremely specialized code in a more interactive, involved way, possibly taking days to generate something acceptable. Given these remarkable differences, I wonder… when is program synthesis worthwhile?
A theory of effort
Suppose you’re trying to write some program, and you could (a) use program synthesis or (b) write the program by hand. Let’s say it takes E time to come up with an appropriate specification, S time to synthesize a solution, and I time to inspect and approve a solution. Conversely, suppose it takes P time to write the program and T time to test and approve it.
In general, synthesis is only worthwhile when E + S + I < P + T. Considering that most programmers are more familiar with conventional programming tasks and users are often reluctant to install and use a complex new tool, it’s likely that synthesis is only reasonable when E + S + I <<< P + T (we read <<< as “is much less than”). Let’s take this theory for a spin.
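To keep the bookkeeping straight, here’s the model as a minimal OCaml sketch; the type and function names are mine, purely for illustration.

```ocaml
(* All costs in the same unit (seconds, hours, days). *)
type costs = {
  e : float;  (* writing the specification or examples *)
  s : float;  (* running the synthesizer *)
  i : float;  (* inspecting and approving the output *)
  p : float;  (* programming the solution by hand *)
  t : float;  (* testing the handwritten solution *)
}

(* Synthesis wins outright when e + s + i < p + t; how much smaller the
   left side must be before anyone actually switches tools is the <<<
   judgment call. *)
let synthesis_cheaper { e; s; i; p; t } = e +. s +. i < p +. t
```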
Flash Fill
Flash Fill is example based; simple string transformations are the canonical use case. Writing up some example string transformations might take a few seconds, i.e., E ~ 10. Synthesis is very fast—say, S ~ 1. Inspection might take longer—from a few seconds to a minute or so. Let’s be careful and say I ~ 30 (though your typical Excel user may be a little more YOLO about things).
Conversely, programming string manipulations in Excel is fiddly; I’d certainly have to spend a minute to find the right functions, figure out the annoying indexing bits (another minute—or more), and then go through and verify that I got it right. A user with less programming expertise might take longer. Let’s say P ~ 120 and T ~ 20—though many users will give up before figuring it out at all.
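To see the fiddliness concretely, here’s the sort of transformation at stake, sketched in OCaml rather than as an Excel formula (the function is my own toy example, not something Flash Fill emits): turning ‘First Last’ into ‘Last, First’.

```ocaml
(* "Ada Lovelace" -> "Lovelace, Ada". The index arithmetic around the space
   is the fiddly part, and this version doesn't even handle middle names,
   extra spaces, or single-word names. *)
let last_first name =
  let i = String.index name ' ' in
  let first = String.sub name 0 i in
  let last = String.sub name (i + 1) (String.length name - i - 1) in
  last ^ ", " ^ first
```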
Here synthesis is much faster than writing the program by hand: E + S + I comes to 41 against P + T’s 140, i.e., 41 <<< 140. It’s a huge savings in time for an experienced programmer to use Flash Fill, never mind a user less familiar with programming.
Fiat Cryptography
Fiat Cryptography bakes in a set of techniques for deriving very efficient implementations of cryptographic arithmetic primitives. In this case E amounts to choosing an appropriate prime and its representation and encoding it in Coq. I pored over their commit logs, but I couldn’t find an example of new primes being introduced. Let’s say it takes a day, though I suspect it’s much less. (For example, it takes a few days for them to implement new optimizations—a new prime amounts to simply picking a representation.) S is the time it takes for Coq to compile the code—I haven’t tried it, but it takes about 1.5hrs to compile in their CI. Inspection is… free, or rather, the low, low price of trusting their trusted computing base (TCB)! Okay, okay: their validation in CI takes another 1.5hrs. Trust but verify. Verify but test.
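For a flavor of what “picking a representation” involves, consider Curve25519 (my illustration of the general idea; Fiat Cryptography’s actual input is Coq code): its prime is 2^255 - 19, and a standard unsaturated representation on 64-bit machines splits a field element into five 51-bit limbs, so that limb products fit comfortably in 128-bit accumulators.

```latex
p = 2^{255} - 19, \qquad x = \sum_{i=0}^{4} x_i \cdot 2^{51 i}
```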
On the other hand, what does it take to implement and verify operations for a new prime by hand? Months! And at the end of those months, one has less confidence in the handwritten product than in Fiat Cryptography’s verified output; verifying the handwritten code would take months more.
Again, synthesis is substantially faster: a day or two, compared to many months. As a bonus, the output of synthesis is substantially more trustworthy than what you’d get rolling your own.
RLIBM
Another example of well-situated, worthwhile synthesis is RLIBM, a collection of fast and correct elementary floating point functions. RLIBM doesn’t advertise itself as synthesis, but that’s exactly what it is: given a specification (e.g., log2f) and a correctness oracle, they generate appropriate polynomials for approximation. A number of the talks are available online.
RLIBM is less of a ‘drop in’ replacement than Fiat Cryptography, but they’re already seeing plenty of success in getting their code into LLVM. Synthesis is slow—it can take quite some time for them to generate and validate polynomials (i.e., E, S, and I are on the order of days, judging from their interactions with LLVM devs). But the value proposition is huge: their approach yields more efficient code, offers a single polynomial for multiple rounding modes and bits of precision, and is more correct than the state of the art. What’s more, the state of the art is also very expensive: Intel has a whole team of mathematical experts working on this problem (P is very, very high); for float32, I = T, since both cases use exhaustive checking.
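RLIBM’s real contribution is in how it chooses coefficients (it solves for polynomials that round correctly for every input), but the shape of the artifact it synthesizes is easy to picture: a low-degree polynomial evaluated in Horner form. Here’s a generic sketch in OCaml with placeholder coefficients, not RLIBM’s.

```ocaml
(* Evaluate c0 + c1*x + c2*x^2 + ... in Horner form. *)
let horner coeffs x =
  List.fold_left (fun acc c -> acc *. x +. c) 0.0 (List.rev coeffs)

(* Placeholder example: the degree-3 Taylor prefix of exp,
   1 + x + x^2/2 + x^3/6. RLIBM's generated coefficients look nothing
   like a Taylor series; this only shows the shape of the output. *)
let approx_exp x = horner [ 1.0; 1.0; 0.5; 1.0 /. 6.0 ] x
```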
New frontiers
Synthesis is worthwhile when its costs (coming up with a specification/examples E, running the synthesizer S, and inspecting the solution I) are substantially less than the cost of programming by hand (programming the solution P and testing it T). That is, synthesis is worthwhile when E + S + I <<< P + T.
My estimate is conservative: the framing assumes that all we get from synthesis is a replacement for programming, no better than what we’d produce ourselves. But for users without programming experience, Flash Fill doesn’t just speed things up, it makes new things possible. Similarly, Fiat Cryptography doesn’t just produce a fast C implementation of arithmetic primitives… its proofs add confidence that the result is correct! RLIBM is faster and more precise than existing solutions.
Performance changes how users use software. Fabian Giesen is quoted to similar effect in Dan Luu’s post on productivity and velocity; Shriram Krishnamurthi has made similar comments, too (thanks to Sam Tobin-Hochstadt and Dan Luu for the links!). Such a maxim certainly applies here: synthesis opens new doors. Non-programmers get help writing simple programs; programmers save substantial time and effort. Either way, synthesis boosts productivity.
When is synthesis not worthwhile?
We’ve seen three examples of program synthesis that have clearly provided value in the real world: Flash Fill, Fiat Cryptography, and RLIBM. When is synthesis not worthwhile?
Let’s consider the case of general-purpose programming. A variety of program synthesizers can generate code in functional or imperative languages. A common example in these settings is linked list reversal. A quadratic solution in a functional language is fairly simple if you know about append or snoc, but harder without them; efficient, linear versions in functional and imperative languages are a bit tougher.
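For concreteness, here are the two versions in OCaml: the quadratic one built with append, and the linear one whose inner helper threads an accumulator (these are the standard textbook definitions, written out by hand).

```ocaml
(* Quadratic: each append re-walks the already-reversed prefix. *)
let rec rev_slow = function
  | [] -> []
  | x :: xs -> rev_slow xs @ [ x ]

(* Linear: the two-argument inner function threads an accumulator. *)
let rev_fast l =
  let rec go acc = function
    | [] -> acc
    | x :: xs -> go (x :: acc) xs
  in
  go [] l

(* The few-seconds REPL check mentioned below. *)
let () = assert (rev_slow [ 1; 2; 3 ] = [ 3; 2; 1 ] && rev_fast [ 1; 2; 3 ] = [ 3; 2; 1 ])
```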
Many academic tools are example based, in which case it’d be reasonable to say that E ~ T: the examples you write up front double as your tests. For synthesis to be worthwhile, then, we would need S + I <<< P. That is, the time to run the synthesizer and inspect the resulting program must be substantially less than the time to write the program itself.
Writing a linear-time linked list reversal in OCaml takes me about 30s; less in Haskell. Imperative linked list reversal is a bit slower—say, 2.5min. Testing either of these won’t take long—a few seconds suffices in an FP REPL, or as much as a minute to write a JUnit test in Java.
Existing synthesis tools can handily beat my programming time, so… what gives? Why aren’t program synthesis tools everywhere? I think there are a few reasons:
- In fact, E > T. Looking at synthesis benchmarks, the tools that beat my time take many more examples than I would bother with. Tail-recursive reversal takes 14 examples in an older tool I know of… and that’s to generate the two-argument inner function, not the outer function that one might care about. If I already know to ask for the accumulator-passing inner function, why do I need help writing list reversal?
- I is large. I looked for research on how I and P relate, i.e., how long it takes to write a function as compared to understanding and agreeing with someone else’s implementation, but I couldn’t find much concrete information. I suspect that if the standard is ‘believing a program to be correct’, then I is on the same order as P when T is small, i.e., programs that can be easily tested are as hard to check as they are to write. As T increases, then I is on the order of T. (But who cares what I think? It’s a testable hypothesis!)
- S should include the cost of invoking the tool… and maybe the amortized cost of acquiring the tool. Flash Fill just comes with Excel and is quite discoverable in the UI, but existing tools need to be downloaded, compiled, and installed, and they don’t have much in the way of editor integration.
Alternatively, consider GitHub’s Copilot. You write a little prose explanation and a function header, and it does the rest. Copilot seems to set E quite low, but I is comparatively high. What’s worse, E doesn’t double as T when you give Copilot a textual description rather than input/output examples: a prose prompt isn’t a test suite. Copilot addresses HCI issues nicely—it’s easy to install and easy to use—but it produces particularly low-confidence code. For Copilot code, I’d say I = T, i.e., I’d want to thoroughly test anything that came out of Copilot. (And more importantly, for licensing reasons, I wouldn’t want to use anything that came out of Copilot.)
To sum up: when synthesizing code for general-purpose languages, E + S + I is less than P + T in some cases, but not so much less that it seems worthwhile.
My examples suggest two primary ways for synthesis to be worthwhile: either focus on fast turnarounds and excellent usability (very low E, S, and I), or specialize in a domain where the state of the art is excruciatingly hard (P and T are huge and you have a story—verification, validation, etc.—for keeping I low). Flash Fill’s successful tech transfer depended heavily on HCI improvements. Fiat Cryptography’s and RLIBM’s success did not—but they target specialized domains.
Don’t be discouraged
Program synthesis offers a novel relationship between people and computers. I am not at all saying to give up on forms of synthesis that don’t meet my criteria of E + S + I <<< P + T. Some forms of synthesis may not be worthwhile yet, but we don’t know what changes and opportunities the future holds!
Program synthesis is an exciting new area, and there’s lots of PL work on it. It’s unreasonable to expect every academic paper to be immediately applicable, so I don’t expect every proposed synthesizer to meet my criteria. It is important, however, to be realistic about where we stand. If you’re working on synthesis, think about how E, S, and I compare to P and T for you. If your approach doesn’t yet hit my criteria, what needs to change to make that happen?
Postscript
There’s some great discussion on the Twitter thread announcing this post.