Rob Pike: Geek of the Week

Rob Pike's contribution to Information Technology has been profound, both through the famous books he co-authored with Brian Kernighan, and his contribution to distributed systems, window systems and concurrency in Unix. Now at Google, his creative skills are in full flow, particularly, in collaboration with Ken Thompson, in the exciting Go language, a grown-up, but radical, C with concurrency.

Rob Pike joined Google as a Distinguished Engineer, where he works on distributed systems, data mining, programming languages, and software development tools. Before then, Rob was a member of the famous Computing Sciences Research Center at Bell Labs, the lab that developed Unix. Amongst other contributions, he was responsible for the first bitmapped window system for Unix.

At Bell Labs he worked on computer graphics, user interfaces, concurrent programming, and distributed systems. He is the author of several text editors, including sam and acme, and devised the vismon program for displaying images of faces of email authors. He was an architect of the Plan 9 and Inferno operating systems. More recently at Google he was a co-designer of the Go programming language.

He also found time to co-write The Unix Programming Environment and The Practice of Programming with Brian Kernighan. He is the author of numerous papers, including ‘Squeak: A Language for Communicating with Mice’.

He is a keen astronomer; a shuttle mission nearly launched a gamma-ray telescope he designed, and he has written several papers on astronomy, including ‘A Bright Future for the Night Sky’. Rob is a Canadian citizen, married to the American comic book writer and illustrator Renee French. He claims to have never written a program that uses cursor addressing.

RM:
“What was the theory behind developing Go?”
RP:
“I’ve said a lot about this since Go came out. In a nutshell, we wanted a language with the safety and performance of statically compiled languages such as C++ and Java, but the lightness and fun of dynamically typed interpreted languages such as Python.

It was also important that the result be suitable for large systems software development on modern, networked, multicore hardware.

To achieve these goals, we tried to design a language in which the various elements – the type system, the package system, the syntax, concurrency, and so on – were completely “orthogonal”, by which I mean that they never interact in surprising ways, making the language easy to understand but also easy to implement.”

RM:
“What elements have you borrowed from other languages?”
RP:
“The syntax is clearly influenced by the C family, but there’s also a lot of the Pascal family, especially from the Modula and Oberon branches, in the type syntax and package system. Its concurrency is rooted in CSP, but evolved through a series of languages done at Bell Labs in the 1980s and 1990s, such as Newsqueak, Alef, and Limbo.

A lot of Go is new, or at least was designed without direct borrowing from existing languages. The idea of interface types is one example – it’s very different from what “interface” usually means. Go was designed from the ground up to solve our problems, not to be a union of our favorite features from other languages.”
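
To make that concrete, here is a minimal sketch (the names are illustrative, not from the interview): in Go a type satisfies an interface simply by having the right methods, with no “implements” declaration tying the two together.

package main

import "fmt"

// Stringer is satisfied by any type that has a String() string method;
// no "implements" declaration is required anywhere.
type Stringer interface {
    String() string
}

type Point struct{ X, Y int }

// Point satisfies Stringer simply by having this method.
func (p Point) String() string {
    return fmt.Sprintf("(%d, %d)", p.X, p.Y)
}

func describe(s Stringer) {
    fmt.Println(s.String())
}

func main() {
    describe(Point{3, 4}) // prints (3, 4)
}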

RM:
“What is the best program you’ve seen written in Go?”
RP:
“The most surprising one to me was a program Russ Cox wrote that, because of this orthogonality of features I mentioned, was able to attach a method to a function in the implementation of a web server. That’s a fun example to present – it’s a bit of a mindblower. It’s the HandlerFunc adapter described in Effective Go.”
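
For readers who have not seen it, the trick looks roughly like this. The sketch below uses the real net/http package; the greet function is a hypothetical stand-in for an actual handler. Because http.HandlerFunc is a function type with a ServeHTTP method declared on it, converting an ordinary function to that type makes it satisfy the http.Handler interface.

package main

import (
    "fmt"
    "net/http"
)

// greet is an ordinary function, not a type with methods.
func greet(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "hello")
}

func main() {
    // http.HandlerFunc is a function type that has a ServeHTTP method
    // defined on it, so the conversion below turns a plain function
    // into something that satisfies the http.Handler interface.
    var h http.Handler = http.HandlerFunc(greet)
    http.Handle("/", h)
    // http.ListenAndServe(":8080", nil) // uncomment to actually serve requests
}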
RM:
“Have you received much feedback on the language so far? If so, what has it been focused on?”
RP:
“There’s been tons of feedback. It seems that the language is more interesting than it looks at first, because people who only read about it don’t seem to respond well to it, but those who’ve tried it are delighted by its expressiveness and ease of programming. We’ve received a number of comments from people who’ve rewritten programs from other languages into Go and find they get much shorter and sometimes even more general in the process.

We receive a lot of feature requests, of course, and consider them in light of this orthogonality principle. Lots of people in the open source community have written libraries, tools, and packages for the Go project. The entire job of porting to Windows was done by people in the outside community, for instance.”

RM:
“Can we hark back to how you began with computers? What was the first interesting program you remember writing? What was it about programming that drew you in?”
RP:
“The first big (by contemporary standards – it was a box of cards) program I wrote was in Fortran IV. It did statistical analysis of sunspot observations by me and some friends. I started by thinking of programming as a tool for automating some numerical processing, but after a while I became drawn by the power to make new things happen. The time I spent in the Dynamic Graphics Project at the University of Toronto hooked me up to computer graphics early in its development. One of the things I did then, in collaboration with Richard Berry, was to model the growth of light pollution, generating some depressing (and, it turns out, prescient) maps of the decay of the night sky through the end of the twentieth century (see ‘Light Pollution’).

Graphics is still my favourite area of computing, making pictures happen by writing code.”

RM:
“What languages have you used seriously? It must be a long list for you.”
RP:
“I’ve used lots. I wrote my first ‘cool’ programs in Fortran IV, my first ‘aha’ programs in Algol, and my first ‘wow’ programs in LISP. It may be more interesting to name a few languages I haven’t used seriously: COBOL, because doing scientific computing early in my career kept me safe; Perl, because I still believe in one tool for each job, not one tool for all jobs; and JavaScript, an accident of the path I’ve taken lately.”
RM:
“Are there any programming languages you don’t enjoy using?”
RP:
“I’m not comfortable thinking about types before data and functions, so I don’t enjoy languages like Java and C++, but I respect them and can be effective in them. Still, their bureaucracy – the need to tell the computer even the simplest things it should be able to figure out for itself – can wear me down. My OSCON keynote on this topic last year, called Public Static Void, seemed to strike a chord with people.”
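
As one small illustration of the kind of bookkeeping he means – my example, not his – Go’s short variable declarations let the compiler infer what some other languages make you spell out in full:

package main

import "fmt"

func main() {
    // The compiler infers both types here; nothing is stated twice.
    name := "gopher"
    counts := map[string]int{}
    counts[name]++

    fmt.Println(name, counts) // gopher map[gopher:1]
}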
RM:
“Do you think languages are getting better? You’ve designed quite a few so you obviously think it’s a worthwhile pursuit. Is it easier to write software now because of advances that we’ve made?”
RP:
“Programmers are working at a higher level, mostly because of web programming and the associated languages and frameworks. I guess that’s a good thing, but it surprises me how little most programmers today seem to know about the actual machines: sizes, performance, bits, and the true elements of computing. They get away with it most of the time, but when they don’t, trouble can ensue.

There are many interesting languages cropping up these days, Go among them, and the wealth of choices is a welcome change. Also, productivity has surely improved over the last generation. Things are getting better, but most of the code I see today is still written in languages I find unpleasant.”

RM:
“Are the programmers of today up against a less easy environment than, say, 30 years ago – one where they exercise the same amount of ingenuity but in surroundings that are harder to understand, so that we build more elaborate languages to help them deal with that uncertainty?”
RP:
“I think it’s the exact opposite. The environment today is much easier to work in. Computers are much faster, libraries are better and more complete, and lots of low-level stuff is just taken care of. It’s not the languages so much as how the field has developed.”
RM:
“What is your view about the role of the language in making it impossible to make mistakes? Some people, for instance, say: ‘If we lock down this language, it’ll be impossible to write bad code.’ Then others say: ‘That’s a doomed enterprise, so we might as well leave everything wide open and leave it to programmers to be clever.’ How do you find the balance?”
RP:
“A language cannot keep you from making mistakes. It might be able to eliminate some forms of error a priori, but it can never kill all of them. The best it can do is to make it easier to write safe, correct, easy-to-understand programs.

The balance is to do what you can in language design to let good programming shine through. Concurrent programming is one area where it’s clear what can be done.

The best-known approach – memory barriers, locks, semaphores, condition variables, threads – works but requires great skill and understanding and many programmers get in trouble because of the difficulty.

Languages can help a lot by providing easier-to-use, easier-to-understand primitives (which are probably implemented under the covers using locks and so on). Programmers unfamiliar with the traditional kind of multithreaded programming have found it easy to write correct concurrent code using Go’s linguistic support for concurrency, for example. The language doesn’t prevent you writing code with, say, memory races, but it encourages you to think about programming in a style in which they don’t arise as often.”
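
A minimal sketch of that style, with an invented workload: the channel carries both the data and the synchronization, so no explicit lock appears in the program.

package main

import "fmt"

// sum sends its result on the channel; the send is also the
// synchronization point, so no mutex is needed.
func sum(nums []int, out chan<- int) {
    total := 0
    for _, n := range nums {
        total += n
    }
    out <- total
}

func main() {
    nums := []int{7, 2, 8, -9, 4, 0}
    out := make(chan int)

    // Two goroutines each sum half of the slice concurrently.
    go sum(nums[:len(nums)/2], out)
    go sum(nums[len(nums)/2:], out)

    // Each receive blocks until a result is ready.
    a, b := <-out, <-out
    fmt.Println(a + b)
}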

RM:
“You’ve co-authored a couple of books with Brian Kernighan and obviously care about writing. Do you find writing books and writing code to be similar mental exercises?”
RP:
“I am more fluent in prose than code. I’d say it’s hard not to be except that I know some people who can write good code but are unable to put a proper sentence together. Maybe the discipline of a more formal language with tools, such as a compiler, strips away distractions and focuses thought.

The processes of writing code and prose don’t feel very similar, but in both I care a great deal about parsimony and grace – style. And in both I do a great deal of rereading and rewriting. The first draft, even if correct, is rarely ‘right’.

That need to rewrite is important and often neglected when coding. Modern programming languages, especially object-oriented ones, are interface heavy. This means that before you can make anything happen you need to write down a lot of preliminaries, often making important API decisions before the best design has emerged. Once it does emerge, there’s so much written down already that it feels counterproductive to back up and rework the interfaces. These languages create a penalty for rewriting that encourages working around early design mistakes rather than fixing them.

I believe this dependence on interfaces before code is a major reason for programs being so much bigger than they used to be. Also, as Peter Weinberger has observed, interface design is hard but critical, so it used to be done only by the best programmers on your team. Now it’s done by everyone, with predictable results.”

RM:
“Are there language features that make programmers more productive? You’ve designed a number of languages, so you obviously have an opinion on this.”
RP:
“There are the obvious things that remove the need for programmers to attend to details, things like garbage collection and other automated bookkeeping features. But it’s important to understand some facts about features in programming languages.

Programming languages are all about productivity. That’s their purpose: to make people more productive. Jokes aside, and there are some great ones, no one has designed a language to make people less productive. Special-purpose languages are a double hit, since they focus on expression of solutions within a particular problem domain, automating more of the irrelevant and unnecessary and allowing more focus on the important. Features, then, are not what make languages productive; what does is how well the language fits the problem at hand.

Requests for features in programming languages usually fall into one of two camps. The first is “I need this feature because my problem domain needs it;” the second is “I need this feature because I’m used to it from another language.”

A language designer might think, “I need this feature because it will attract users.” These arguments are not compelling (although they can be shrill) and acquiescing to them leads to big messy languages without focus, workable for many problems but rarely just right.

A feature makes a language better either by fitting smoothly into what’s already there or, perhaps paradoxically, by being orthogonal to what’s already there, so it adds power without mixing in complicated ways with the existing design.

In short, a language aids productivity if it allows succinct, precise expression of the solution to a problem by clean combination of its features. A feature doesn’t make the language better on its own, but a feature that works smoothly with the rest of the language, even if it’s hard to show some specific benefit of the feature in isolation, can have a huge effect.

In that light there are a few “features” in Go that aid productivity:
– goroutines, which make it efficient and simple to execute concurrent code
– channels, which combine synchronization and communication to make concurrency easy and safe
– interfaces, which make code extensible, composable, and adaptable after the fact. [The word “interface” in Go refers to a generalized specification of behaviour; interfaces are a true language feature unrelated to the “I” in “API”.]

The talk Russ Cox and I gave at Google I/O last year covers these topics in detail.”
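
To illustrate the third point, here is a small sketch of my own (the countingWriter type is invented for the example): because io.Writer is just a description of behaviour, an existing writer can be wrapped to add byte counting after the fact, and the result still works anywhere a Writer is expected.

package main

import (
    "fmt"
    "io"
    "os"
)

// countingWriter wraps any io.Writer and records how many bytes pass
// through it; the wrapped writer needs no knowledge of the wrapper.
type countingWriter struct {
    w io.Writer
    n int64
}

func (c *countingWriter) Write(p []byte) (int, error) {
    n, err := c.w.Write(p)
    c.n += int64(n)
    return n, err
}

func main() {
    cw := &countingWriter{w: os.Stdout}
    fmt.Fprintln(cw, "hello, world") // cw is usable wherever an io.Writer is
    fmt.Println(cw.n, "bytes written")
}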

RM:
“I’d like to throw a couple of debugging questions at you. What’s the worst bug you’ve had to track down? What are your preferred debugging techniques? Do you use print statements, assertions, formal proofs?”
RP:
“I like finding and fixing bugs; they’re puzzles. People in this profession treat them as some sort of character flaw or major irritation, but they’re just part of the business. Think of them in the right way and they can be fun.

The only bad bug is one you haven’t found yet.

I rarely use debuggers, a few times a year at most. Sometimes they’re useful but I find they lead too often to false avenues of inquiry. I like stack traces on failure, which Go provides automatically (as do many languages, but nowhere near all). Beyond that, a good debugger – and let’s admit that most of them aren’t that good – will answer questions correctly, but the problem is that you need to know what questions to ask, and that’s the hardest part of debugging.

Since they make it so easy to ask a question and have it answered, debuggers encourage a sort of microphenomenological, depth-first approach to debugging that is too divorced from the actual program. And even when they lead to the right answer, debuggers create a mindset where a local microscopic fix is done rather than the right, global fix.

It’s better to debug from the program itself. When there’s a failure, the one true fact is the failure itself. Don’t get distracted by other things, just focus on the fact. What could have caused that? What assumptions or invariants in the code might lead to that or, if broken, enable that? It’s almost always better to think about the program first. It’s a discipline that’s hard to acquire but in my experience leads quickly to a proper understanding of the problem and the right fix.

As for the mechanics of it: good tests are always helpful; logging is essential; detailed debugging code embedded in the program can be a boon; and beyond that it’s print statements. The only time I depend on debuggers is when seemingly impossible things happen, such as when I suspect a bug in the compiler or operating system.”

RM:
“What are you working on now?”
RP:
“Go’s usage is growing both inside Google and in the outside world. I’m doing what I can to support that, while continuing to develop the language, libraries and environment. I hope some big things will be happening soon.”