During the last Tech Tuesday we talked about the syntax of programming languages as part of the cycle on programming. Syntax is a term you will hear frequently, in no small part because when you get it wrong the computer will complain with a “syntax error.” Today’s topic is much more elusive and less talked about: semantics, or what a program “means.” The “means” is in quotation marks because, to paraphrase Bill Clinton, the answer depends on what the meaning of the word “means” is. Yes, semantics is an area that can make your head explode if you think about it too hard.
Before we dive in, why should you even care about semantics? Tons of code gets written every day without heads exploding as people think about what that code “means” for the computer. Still, I am convinced that spending some time learning and thinking about semantics will ultimately make you a better programmer. While the field is full of very theoretical ideas that you could get lost in (which is fun in and of itself), exposure to it will help you appreciate why some code is better than other code (for some meaning of “better”, one that will hopefully become clearer during this series of posts :).
To prevent heads from exploding, here is a simple crack at understanding semantics. Think of it as a formal description of the steps that a hypothetical computer will perform when it executes a program. This approach is known as operational semantics and is the only one we will cover here (see here for others). Let’s unpack this a bit. The “hypothetical” is there because semantics doesn’t operate at the level of hardware by tracing the steps inside a specific CPU; it concerns itself with a higher level of computation (and that’s OK because those execution details don’t matter for the meaning of the code).
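To make “a formal description of the steps” concrete, here is a minimal sketch in Python of an operational semantics for a toy language consisting only of numbers and additions. The tuple representation and the names step and run are made up for illustration; the point is that the rules encoded in step are the semantics, since they say exactly which single step the hypothetical machine takes next.

```python
# A minimal sketch, assuming expressions are nested tuples such as
# ("+", ("+", 1, 2), 3). The names `step` and `run` are hypothetical.

def step(expr):
    """Perform exactly one reduction step; these rules ARE the semantics."""
    op, left, right = expr
    if isinstance(left, tuple):       # rule 1: reduce the left operand first
        return (op, step(left), right)
    if isinstance(right, tuple):      # rule 2: then reduce the right operand
        return (op, left, step(right))
    return left + right               # rule 3: two numbers reduce to their sum

def run(expr):
    """Step repeatedly until only a value remains, printing each state."""
    while isinstance(expr, tuple):
        print(expr)
        expr = step(expr)
    print(expr)
    return expr

run(("+", ("+", 1, 2), 3))
# ('+', ('+', 1, 2), 3)
# ('+', 3, 3)
# 6
```

Each printed line is one state of the hypothetical machine, and the trace from the starting expression down to 6 is precisely what this program “means” under these rules.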
The “formal” bit is there because semantics is not some vague description of meaning in, say, English but rather a precise set of rules. At this point some alarm bells should start to go off: what language exactly are we using to describe those rules? Ah, welcome to the rabbit hole. There are several possible, closely related and equally intriguing answers. One answer is to use a formal system such as the lambda calculus. Another answer is to use the programming language itself. Come again? Yes: as it turns out, for a whole bunch of programming languages you can describe the rules for what the language means in the language itself. That results in what is known as a meta-circular evaluator and, as we will sketch further below, it is not half as crazy as it sounds.
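Here is an equally minimal sketch of the lambda calculus answer: a single rule of computation, beta reduction, which plugs an argument into a function body. Again the nested-tuple representation and all the names are hypothetical, and the substitution is deliberately naive (it assumes all bound variable names are distinct), so treat this as an illustration rather than an implementation.

```python
# A minimal sketch, assuming lambda terms are represented as
# ("lam", "x", body), ("app", function, argument), or a bare
# variable name as a string. All names here are hypothetical.

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` with `value`."""
    if term == name:
        return value
    if isinstance(term, tuple) and term[0] == "lam":
        tag, param, body = term
        if param == name:              # `name` is shadowed inside this lambda
            return term
        return (tag, param, substitute(body, name, value))
    if isinstance(term, tuple) and term[0] == "app":
        tag, fn, arg = term
        return (tag, substitute(fn, name, value), substitute(arg, name, value))
    return term                        # some other variable: leave it alone

def reduce_once(term):
    """One beta reduction: ((lam x. body) arg) becomes body[x := arg]."""
    if isinstance(term, tuple) and term[0] == "app" and term[1][0] == "lam":
        _, (_, param, body), arg = term
        return substitute(body, param, arg)
    return term

# (lam x. x) applied to y reduces to y
print(reduce_once(("app", ("lam", "x", "x"), "y")))   # prints: y
```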
Most of this was first figured out in the creation of LISP, which in its various incarnations remains the most intriguing programming language. John McCarthy wrote a seminal paper titled “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I” in 1960 (aside: there never was a Part II). The opening paragraph of the paper’s most important section reads:
We shall first define a class of symbolic expressions in terms of ordered pairs and lists. Then we shall define five elementary functions and predicates, and build from them by composition, conditional expressions, and recursive definitions an extensive class of functions of which we shall give a number of examples. We shall then show how these functions themselves can be expressed as symbolic expressions, and we shall define a universal function that allows us to compute from the expression for a given function its value for given arguments.
The basic idea is that McCarthy is formally defining the meaning of the LISP language using only the elements of the language itself (which also happen to draw closely on the lambda calculus).
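To give a flavor of the construction, here is a sketch of such a universal function for a tiny fragment of the language. It is written in Python rather than in LISP, so strictly speaking it is an ordinary evaluator rather than a meta-circular one, and it covers only a few of the forms McCarthy uses (quote, atom, cond, lambda); all the names are made up for illustration.

```python
# A minimal sketch of a universal (evaluating) function, assuming
# S-expressions are represented as nested Python lists and strings,
# and the environment is a plain dict. Names are hypothetical.

def evaluate(expr, env):
    if isinstance(expr, str):                  # a variable: look it up
        return env[expr]
    if not isinstance(expr, list):             # a literal: it is its own value
        return expr
    head = expr[0]
    if head == "quote":                        # ["quote", e] -> e, unevaluated
        return expr[1]
    if head == "atom":                         # true for anything but a list
        return not isinstance(evaluate(expr[1], env), list)
    if head == "cond":                         # first clause whose test is true
        for test, result in expr[1:]:
            if evaluate(test, env):
                return evaluate(result, env)
        return None                            # no clause matched
    if head == "lambda":                       # ["lambda", [params], body]
        params, body = expr[1], expr[2]
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(head, env)                   # application: evaluate, then apply
    return fn(*[evaluate(arg, env) for arg in expr[1:]])

# ((lambda (x) (cond ((atom x) (quote yes)))) (quote foo))  evaluates to "yes"
program = [["lambda", ["x"], ["cond", [["atom", "x"], ["quote", "yes"]]]],
           ["quote", "foo"]]
print(evaluate(program, {}))                   # prints: yes
```

The punchline of McCarthy’s paper is that a function just like evaluate can itself be written as a symbolic expression in the very language it evaluates, and that is exactly what makes the evaluator meta-circular.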
Even if you don’t immediately see the benefits of doing this, there is something wonderfully appealing about having a language describe itself (and of course that is usually what we do when we explain the meaning of an English word to someone else – we use other English words!). As it turns out, this is a very powerful idea, and as we go through more of the concepts behind programming in upcoming posts we will come back to it repeatedly, for instance when looking at what an interpreter does or learning about data types.