Perl.com: The advent blogs roundup

Well hello there, and happy holidays to you! Every December Perl bloggers come together and write new posts for the advent. This year is no different, and if you haven’t seen them already, here are three blogs to check out.

PerlAdvent.org

Probably the merriest of all the advent blogs, Mark Fowler's PerlAdvent.org has been running since 2000, and has a collection of general-purpose Perl programming articles, including, this year, How to run Perl in a browser.

Perl 6 Advent

The Perl 6 Advent has been going since 2009, and covers general-purpose Perl 6 programming. Nigel Hamilton’s article about his command line searcher/launcher jmp should be interesting to Perl 5 and Perl 6 programmers.

Mojolicious

Recently celebrating its version 8 release, the Mojolicious team is running an advent calendar on their blog for the second year in a row. Mojolicious isn't simply a web framework, though: Sebastian Riedel and the Mojo team have created Mojo modules which are useful in their own right, such as Mojo::DOM for parsing HTML. There is also Minion, an async job queue.
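As a quick, unofficial illustration of those standalone modules (my own snippet, not taken from the advent calendar posts), a few lines of Mojo::DOM are enough to parse an HTML fragment:

    use Mojo::DOM;

    # Parse a small HTML fragment and pull out some text.
    my $dom = Mojo::DOM->new('<div><p class="greet">Hello, advent!</p></div>');
    print $dom->at('p.greet')->text, "\n";    # prints "Hello, advent!"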

All three blogs will have new articles each day through December 24th. You can also jump to new articles in our convenient community article sidebar.

:: Luca Ferrari ::: PostgreSQL 11 Server Side Programming Quick Start Guide

Last week my book PostgreSQL 11 Server Side Programming Quick Start Guide was released.



The book is a Quick Start Guide, which means it goes straight to the point: in this case, programming on the server side of PostgreSQL.

Why is this related to Perl?
Well, one very cool feature of PostgreSQL is the capability to execute functions, and therefore triggers and procedures, in pretty much any programming language available out there. And this, of course, means Perl!
In fact, Perl is so well supported in PostgreSQL that the DO statement allows you to use PL/Perl code instead of PL/pgSQL.
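As a small illustration (my own sketch, not an example from the book, and assuming the plperl extension is installed in your database), an anonymous DO block can run Perl directly inside the server:

    DO LANGUAGE plperl $$
        # The body of the block is ordinary Perl, executed by the backend.
        elog( NOTICE, 'Hello from PL/Perl' );
    $$;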

The other "foreign" language I show in this book is Java. My idea was to show readers the differences between using a language like Perl and one that requires a full compile-deploy cycle.

I hope the book proves useful.

Dave's Free Press: Journal: Module pre-requisites analyser

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 2

Dave's Free Press: Journal: CPANdeps

Dave's Free Press: Journal: Thanks, Yahoo!

Dave's Free Press: Journal: YAPC::Europe 2007 travel plans

Dave's Free Press: Journal: Perl isn't dieing

Dave's Free Press: Journal: Wikipedia handheld proxy

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 3

Dave's Free Press: Journal: POD includes

Dave's Free Press: Journal: Bryar security hole

Dave's Free Press: Journal: cgit syntax highlighting

Dave's Free Press: Journal: Devel::CheckLib can now check libraries' contents

Ocean of Awareness: A Haskell challenge

The challenge

A recent blog post by Michael Arntzenius ended with a friendly challenge to Marpa. Haskell list comprehensions are something that Haskell's own parser handles only with difficulty. A point of Michael's critique of Haskell's parsing was that Haskell's list comprehension could be even more powerful if not for these syntactic limits.

Michael wondered aloud if Marpa could do better. It can.

The problem syntax occurs with the "guards", a very powerful facility of Haskell's list comprehension. Haskell allows several kinds of "guards". Two of these "guards" can have the same prefix, and these ambiguous prefixes can be of arbitrary length. In other words, parsing Haskell's list comprehension requires either lookahead of arbitrary length, or its equivalent.

To answer Michael's challenge, I extended my Haskell subset parser to deal with list comprehension. That parser, with its test examples, is online.[1] I have run it for examples thousands of tokens long and, more to the point, have checked the Earley sets to ensure that Marpa will stay linear, no matter how long the ambiguous prefix gets.[2]

Earley parsing, which Marpa uses, accomplishes the seemingly impossible here. It does the equivalent of infinite lookahead efficiently, without actually doing any lookahead or backtracking. That Earley's algorithm can do this has been a settled fact in the literature for some time. But today Earley's algorithm is little known even among those well acquainted with parsing, and to many, claiming the equivalent of infinite lookahead without actually doing any lookahead at all sounds like a boast of magical powers.

In the rest of this blog post, I hope to indicate how Earley parsing follows more than one potential parse at a time. I will not describe Earley's algorithm in full.[3] But I will show that no magic is involved, and that in fact the basic ideas behind Earley's method are intuitive and reasonable.

A quick cheat sheet on list comprehension

List comprehension in Haskell is impressive. Haskell allows you to build a list using a series of "guards", which can be of several kinds. The parsing issue arises because two of the guard types -- generators and boolean expressions -- must be treated quite differently, but can look the same over an arbitrarily long prefix.

Generators

Here is one example of a Haskell generator, from the test case for this blog post:


          list = [ x | [x, 1729,
		      -- insert more here
		      99
		   ] <- xss ] [4]

This says to build a list of x's such that the guard [x, 1729, 99 ] <- xss holds. The clue that this guard is a generator is the <- operator. The <- operator will appear in every generator, and means "draw from".

The LHS of the <- operator is a pattern and the RHS is an expression. This generator draws all the elements from xss which match the pattern [x, 1729, 99 ]. In other words, it draws out all the elements of xss, and tests that they are lists of length 3 whose last two subelements are 1729 and 99.

The variable x is set to the 1st subelement. list will be a list of all those x's. In the test suite, we have


    xss = [ [ 42, 1729, 99 ] ] [5]

so that list becomes [42] -- a list of one element whose value is 42.
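For Perl programmers, a rough analogue of that comprehension (my own illustration, not part of the test suite) is a grep filtering on the pattern followed by a map extracting x:

    my @xss  = ( [ 42, 1729, 99 ] );
    # Keep only the 3-element lists ending in 1729, 99; collect their first elements.
    my @list = map  { $_->[0] }
               grep { @$_ == 3 && $_->[1] == 1729 && $_->[2] == 99 } @xss;
    # @list is now (42)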

Boolean guards

Generators can share very long prefixes with Boolean guards.


	list2 = [ x | [x, 1729, 99] <- xss,
               [x, 1729,
                  -- insert more here
                  99
               ] == ys,
             [ 42, 1729, 99 ] <- xss
             ] [6]

The expression defining list2 has 3 comma-separated guards: The first guard is a generator, the same one as in the previous example. The last guard is also a generator.

The middle guard is of a new type: it is a Boolean: [x, 1729, 99 ] == ys. This guard insists that x be such that the triple [x, 1729, 99 ] is equal to ys.

In the test suite, we have


    ys = [ 42, 1729, 99 ] [7]
so that list2 is also [42].

Boolean guards versus generators

From the parser's point of view, Boolean guards and generators start out looking the same -- in the examples above, three of our guards start out the same -- with the string [x, 1729, 99 ], but

  • in one case (the Boolean guard), [x, 1729, 99 ] is the beginning of an expression; and
  • in the other two cases (the generators), [x, 1729, 99 ] is a pattern.
Clearly patterns and expressions can look identical. And they can look identical for an arbitrarily long time -- I tested the Glasgow Haskell Compiler (GHC) with identical expression/pattern prefixes thousands of tokens in length. My virtual memory eventually gives out, but GHC itself never complains.[8] (The comments "insert more here" show the points at which the comma-separated lists of integers can be extended.)

The problem for parsers

So Haskell list comprehension presents a problem for parsers. A parser must determine whether it is parsing an expression or a pattern, but it cannot know this for an arbitrarily long time. A parser must keep track of two possibilities at once -- something traditional parsing has refused to do. As I have pointed out[9], belief that traditional parsing "solves" the parsing problem is belief in human exceptionalism -- that humans have calculating abilities that Turing machines do not. Keeping two possibilities in mind for a long time is trivial for human beings -- in one form we call it worrying, and try to prevent ourselves from doing it obsessively. But it has been the orthodoxy that practical parsing algorithms cannot do this.

Arntzenius has a nice summary of the attempts to parse this construct while only allowing one possibility at a time -- that is, deterministically. Lookahead clearly cannot work -- it would have to be arbitrarily long. Backtracking can work, but can be very costly and is a major obstacle to quality error reporting.

GHC avoids the problems with backtracking by using post-processing. At parsing time, GHC treats an ambiguous guard as a Boolean. Then, if it turns out that it is a generator, it rewrites it in post-processing. This inelegance incurs some real technical debt -- either a pattern must always be a valid expression, or even more trickery must be resorted to.[10]

The Earley solution

Earley parsing deals with this issue by doing what a human would do -- keeping both possibilities in mind at once. Jay Earley's innovation was to discover a way for a computer to track multiple possible parses that is compact, efficient to create, and efficient to read.

Earley's algorithm maintains an "Earley table" which contains "Earley sets", one for each token. Each Earley set contains "Earley items". Here are some Earley items from Earley set 25 in one of our test cases:


	origin = 22; <atomic expression> ::= '[' <expression> '|' . <guards> ']'
	origin = 25; <guards> ::= . <guard>
	origin = 25; <guards> ::= . <guards> ',' <guard>
	origin = 25; <guard>  ::= . <pattern> '<-' <expression>
	origin = 25; <guard>  ::= . <expression> [11]

In the code, these represent the state of the parse just after the pipe symbol ("|") on line 4 of our test code.

Each Earley item describes progress in one rule of the grammar. There is a dot (".") in each rule, which indicates how far the parse has progressed inside the rule. One of the rules has the dot just after the pipe symbol, as you would expect, since we have just seen a pipe symbol.

The other four rules have the dot at the beginning of the RHS. These four rules are "predictions" -- none of their symbols have been parsed yet, but we know that these rules might occur, starting at the location of this Earley set.

Each item also records an "origin": the location in the input where the rule described in the item began. For predictions the origin is always the same as the Earley set. For the first Earley item, the origin is 3 tokens earlier, in Earley set 22.
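To make the bookkeeping concrete, an Earley item can be pictured as a small record. Here is one way such a record might look in Perl (an illustration only -- the field names are mine, and Marpa's actual internals differ):

    my $earley_item = {
        origin => 25,                     # Earley set where this rule's parse began
        lhs    => 'guard',                # the rule's left-hand side
        rhs    => [ 'pattern', q{'<-'}, 'expression' ],   # the rule's right-hand side
        dot    => 0,                      # how many RHS symbols have been recognized so far
    };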

The "secret" of non-determinism

And now we have come to the secret of efficient non-deterministic parsing -- a "secret" which I hope to convince the reader is not magic, or even much of a mystery. Here, again, are two of the items from Earley set 25:


	origin = 25; <guard>  ::= . <pattern> '<-' <expression>
	origin = 25; <guard>  ::= . <expression>  [12]

At this point there are two possibilities going forward -- a generator guard or a Boolean expression guard. And there is an Earley item for each of these possibilities in the Earley set.

That is the basic idea -- that is all there is to it. Going forward in the parse, for as long as both possibilities stay live, Earley items for both will appear in the Earley sets.

From this point of view, it should now be clear why the Earley algorithm can keep track of several possibilities without lookahead or backtracking. No lookahead is needed because all possibilities are in the Earley set, and selection among them will take place as the rest of the input is read. And no backtracking is needed because every possibility was already recorded -- there is nothing new to be found by backtracking.

It may also be clearer why I claim that Marpa is left-eidetic, and how the Ruby Slippers work.[13] Marpa has perfect knowledge of everything in the parse so far, because it is all in the Earley tables. And, given left-eidetic knowledge, Marpa also knows what terminals are expected at the current location, and can "wish" them into existence as necessary.

The code, comments, etc.

A permalink to the full code and a test suite for this prototype, as described in this blog post, is on Github. In particular, the permalink of the test suite file for list comprehension is here. I expect to update this code, and the latest commit can be found here.

To learn more about Marpa, a good first stop is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.

Footnotes

1. If you are interested in my Marpa-driven Haskell subset parser, this blog post may be the best introduction. The code is on Github.

2. The Earley sets for the ambiguous prefix immediately reach a size of 46 items, and then stay at that level. This is experimental evidence that the Earley set sizes stay constant.

And, if the Earley items are examined, and their derivations traced, it can be seen that they must repeat the same Earley item count for as long as the ambiguous prefix continues. The traces I examined are here, and the code which generated them is here, for the reader who wants to convince himself.

The guard prefixes of Haskell are ambiguous, but (modulo mistakes in the standards) the overall Haskell grammar is not. In the literature on Earley's, it has been shown that for an unambiguous grammar, each Earley item has a constant amortized cost in time. Therefore, if a parse produces Earley sets that are all of less than a constant size, it must have linear time complexity.

3. There are many descriptions of Earley's algorithm out there. The Wikipedia page on Earley's algorithm (accessed 27 August 2018) is one good place to start. I did another very simple introduction to Earley's in an earlier blog post, which may be worth looking at. Note that Marpa contains improvements to Earley's algorithm. Particularly, to fulfill Marpa's claim of linear time for all LR-regular grammars, Marpa uses Joop Leo's speed-up. But Joop's improvement is not necessary or useful for parsing Haskell list comprehension, is not used in this example, and will not be described in this post.

4. Permalink to this code, accessed 27 August 2018.

5. Permalink to this code, accessed 27 August 2018.

6. Permalink to this code, accessed 27 August 2018.

7. Permalink to this code, accessed 27 August 2018.

8. Note that if the list is extended, the pattern matches and Boolean tests fail, so that 42 is no longer the answer. From the parsing point of view, this is immaterial.

9. In several places, including this blog post.

10. This account of the state of the art summarizes Arntzenius's recent post, which should be consulted for the details.

11. Adapted from this trace output, accessed 27 August 2018.

12. Adapted from this trace output, accessed 27 August 2018.

13. For more on the Ruby Slippers, see my previous blog post.

Dave's Free Press: Journal: I Love Github

Dave's Free Press: Journal: CPAN Testers' CPAN author FAQ

Ocean of Awareness: Marpa and combinator parsing 2

In a previous post, I outlined a method for using the Marpa algorithm as the basis for better combinator parsing. This post follows up with a trial implementation.

For this trial, I chose the most complex example from the classic 1996 tutorial on combinator parsing by Hutton and Meijer[1]. Their example implements the offside-rule parsing of a functional language -- parsing where whitespace is part of the syntax.[2] The Hutton and Meijer example is for Gofer, a now obsolete implementation of Haskell. To make the example more relevant, I wrote a parser for Haskell layout according to the Haskell 2010 Language Report[3].

For tests, I used the five examples (2 long, 3 short) provided in the 2010 Report[4], and the four examples given in the "Gentle Introduction" to Haskell[5]. I implemented only enough of the Haskell syntax to run these examples, but this was enough to include a substantial subset of Haskell's syntax.

This description of the implementation includes many extracts from the code. For those looking for more detail, the full code and test suite for this example are on Github. While the comments in the code do not amount to a tutorial, they are extensive. Readers who like to "peek ahead" are encouraged to do so.

Layout parsing and the off-side rule

It won't be necessary to know Haskell to follow this post. This section will describe Haskell's layout informally. Briefly, these two code snippets should have the same effect:


       let y   = a*b
	   f x = (x+y)/y
       in f c + f d [6]
     

       let { y   = a*b
	   ; f x = (x+y)/y
	   } [7]
    

In my test suite, both code snippets produce the same AST. The first code display uses Haskell's implicit layout parsing, and the second code display uses explicit layout. In each, the "let" is followed by a block of declarations (symbol <decls>). Each decls contains one or more declarations (symbol <decl>). For the purposes of determining layout, Haskell regards <decls> as a "block", and each <decl> as a block "item". In both displays, there are two items in the block. The first item is y = a*b, and the second <decl> item is f x = (x+y)/y.

In explicit layout, curly braces surround the block, and semicolons separate each item. Implicit layout follows the "offside rule": The first element of the laid out block determines the "block indent". The first non-whitespace character of every subsequent non-empty line determines the line indent. The line indent is compared to the block indent.

  • If the line indent is deeper than the block indent, then the line continues the current block item.
  • If the line indent is equal to the block indent, then the line starts a new block item.
  • If the line indent is less than the block indent (an "outdent"), then the line ends the block. An end of file also ends the block.
Lines containing only whitespace are ignored. Comments count as whitespace.
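A minimal sketch of that indent comparison, with illustrative names (this is not the parser's actual code), might look like this:

    sub offside_action {
        my ( $line_indent, $block_indent ) = @_;
        return 'continue-item' if $line_indent > $block_indent;    # line continues the current item
        return 'new-item'      if $line_indent == $block_indent;   # line starts a new block item
        return 'end-block';    # an outdent (or end of file) closes the block
    }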

Explicit semicolons can be used in implicit layout: If a semicolon occurs in implicit layout, it separates block items. In our test suite, the example


       let y   = a*b;  z = a/b
	   f x = (x+y)/z
       in f c + f d [8]
    
contains three <decl> items.

The examples in the displays above are simple. The two long examples from the 2010 Report are more complicated: 6 blocks of 4 different kinds, with nesting twice reaching a depth of 4. The two long examples in the 2010 Report are the same, except that one uses implicit layout and the other uses explicit layout. In the test of my Haskell subset parser, both examples produce identical ASTs.

There are additional rules, including for tabs, Unicode characters and multi-line comments. These rules are not relevant in the examples I took from the Haskell literature; they present no theoretical challenge to this parsing method; and they are not implemented in this prototype Haskell parser.

The strategy

To tackle Haskell layout parsing, I use a separate combinator for each layout block. Every combinator, therefore, has its own block and item symbols, and its own block indent; and each combinator implements exactly one method of layout -- explicit or implicit.

From the point of view of its parent combinator, a child combinator is a lexeme, and the parse tree it produces is the value of the lexeme. Marpa can automatically produce an AST, and it adds lexeme values to the AST as leaves. The effect is that Marpa automatically assembles a parse tree for us from the tree segments returned by the combinators.

Ruby Slippers semicolons

In outlining this algorithm, I will start by explaining where the "missing" semicolons come from in the implicit layout. Marpa allows various kinds of "events", including on discarded tokens. ("Discards" are tokens thrown away, and not used in the parse. The typical use of token discarding in Marpa is for the handling of whitespace and comments.)

The following code sets an event named 'indent', which happens when Marpa finds a newline followed by zero or more whitespace characters.[9] This does not capture the indent of the first line of a file, but that is not an issue -- the 2010 Report requires that the first indent be treated as a special case anyway.

      :discard ~ indent event => indent=off
      indent ~ newline whitechars [10]
      

Indent events, like others, occur in the main read loop of each combinator. Outdents and EOFs are dealt with by terminating the read loop.[11] Line indents deeper than the current block indent are dealt with by resuming the read loop. [12] Line indents equal to the block indent trigger the reading of a Ruby Slippers semicolon as shown in the following:


	$recce->lexeme_read( 'ruby_semicolon', $indent_start,
	    $indent_length, ';' ) [13]
    
    

Ruby Slippers

In Marpa, a "Ruby Slippers" symbol is one which does not actually occur in the input. Ruby Slippers parsing is new with Marpa, and made possible because Marpa is left-eidetic. By left-eidetic, I mean that Marpa knows, in full detail, about the parse to the left of its current position, and can provide that information to the parsing app. This implies that Marpa also knows which tokens are acceptable to the parser at the current location, and which are not.

Ruby Slippers parsing enables a very important trick which is useful in "liberal" parsing -- parsing where certain elements might be in some sense "missing". With the Ruby Slippers you can design a "liberal" parser with a "fascist" grammar. This is, in fact, how the Haskell 2010 Report's context-free grammar is designed -- the official syntax requires explicit layout, but Haskell programmers are encouraged to omit most of the explicit layout symbols, and Haskell implementations are required to "dummy up" those symbols in some way. Marpa's method for doing this is left-eideticism and Ruby Slippers parsing.

The term "Ruby Slippers" refers to a widely-known scene in the "Wizard of Oz" movie. Dorothy is in the fantasy world of Oz, desperate to return to Kansas. But, particularly after a shocking incident in which orthodox Oz wizardry is exposed as an affable fakery, she is completely at a loss as to how to escape. The "good witch" Glenda appears and tells Dorothy that in fact she's always had what she's been wishing for. The Ruby Slippers, which she had been wearing all through the movie, can return her to Kansas. All Dorothy needs to do is wish.

In Ruby Slippers parsing, the "fascist" grammar "wishes" for lots of things that may not be in the actual input. Procedural logic here plays the part of a "good witch" -- it tells the "fascist" grammar that what it wants has been there all along, and supplies it. To do this, the procedural logic has to have a reliable way of knowing what the parser wants. Marpa's left-eideticism provides this.

Ruby Slippers combinators

This brings us to a question I've postponed -- how do we know which combinator to call when? The answer is Ruby Slippers parsing. First, here are some lexer rules for "unicorn" symbols. We use unicorns when symbols need to appear in Marpa's lexer, but must never be found in actual input.


      :lexeme ~ L0_unicorn
      L0_unicorn ~ unicorn
      unicorn ~ [^\d\D]
      ruby_i_decls ~ unicorn
      ruby_x_decls ~ unicorn [14]
    
    

<unicorn> is defined to match [^\d\D]. This pattern is all the symbols which are not digits and not non-digits -- in other words, it's impossible that this pattern will ever match any character. The rest of the statements declare other unicorn lexemes that we will need. <unicorn> and <L0_unicorn> are separate, because we need to use <unicorn> on the RHS of some lexer rules, and a Marpa lexeme can never occur on the RHS of a lexer rule.[15]

In the Marpa rule below,

  • <decls> is the symbol from the 2010 Report;
  • <ruby_i_decls> is a Ruby Slippers symbol for a block of declarations with implicit layout.
  • <ruby_x_decls> is a Ruby Slippers symbol for a block of declarations with explicit layout.
  • <laidout_decls> is a symbol (not in the 2010 Report) for a block of declarations covering all the possibilities for a block of declarations.

      laidout_decls ::= ('{') ruby_x_decls ('}')
	       | ruby_i_decls
	       | L0_unicorn decls L0_unicorn [16]
    

It is the expectation of a <laidout_decls> symbol that causes child combinators to be invoked. Because <L0_unicorn> will never be found in the input, the <decls> alternative will never match -- it is there for documentation and debugging reasons.[17] Therefore Marpa, when it wants a <laidout_decls>, will look for a <ruby_x_decls> if an open curly brace is read; and a <ruby_i_decls> otherwise. Neither <ruby_x_decls> nor <ruby_i_decls> will ever be found in the input, and Marpa will reject the input, causing a "rejected" event.

Rejected events

In this code, as often, the "good witch" of Ruby Slippers does her work through "rejected" events. These events can be set up to happen when, at some parse location, none of the tokens that Marpa's internal lexer finds are acceptable.

In the "rejected" event handler, we can use Marpa's left eideticism to find out what lexemes Marpa would consider acceptable. Specifically, there is a terminals_expected() method which returns a list of the symbols acceptable at the current location.


            my @expected =
              grep { /^ruby_/xms; } @{ $recce->terminals_expected() }; [18]
    

Once we "grep" out all but the symbols with the "ruby_" prefix, there are only 4 non-overlapping possibilities:

  • Marpa expects a <ruby_i_decls> lexeme;
  • Marpa expects a <ruby_x_decls> lexeme;
  • Marpa expects a <ruby_semicolon> lexeme;
  • Marpa does not expect any of the Ruby Slippers lexemes;

If Marpa does not expect any of the Ruby Slippers lexemes, there was a syntax error in the Haskell code.[19]

If a <ruby_i_decls> or a <ruby_x_decls> lexeme is expected, a child combinator is invoked. The Ruby Slippers symbol determines whether the child combinator looks for implicit or explicit layout. In the case of implicit layout, the location of the rejection determines the block indent.[20]

If a <ruby_semicolon> is expected, then the parser is at the point where a new block item could start, but none was found. Whether the block was implicit or explicit, this indicates we have reached the end of the block, and should return control to the parent combinator.[21]

To explain why <ruby_semicolon> indicates end-of-block, we look at both cases. In the case of an explicit layout combinator, the rejection should have been caused by a closing curly brace, and we return to the parent combinator and retry it. In the parent combinator, the closing curly brace will be acceptable.

If we experience a "rejected" event while expecting a <ruby_semicolon> in an implicit layout combinator, it means we did not find an explicit semicolon; and we also never found the right indent for creating a Ruby semicolon. In other words, the indentation is telling us that we are at the end of the block. We therefore return control to the parent combinator.
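Putting those cases together, the dispatch in the "rejected" handler can be sketched as follows (illustrative stubs and return values, not the code on Github):

    sub dispatch_on_expectations {
        my (@expected) = @_;
        return 'call-implicit-child' if grep { $_ eq 'ruby_i_decls' }   @expected;
        return 'call-explicit-child' if grep { $_ eq 'ruby_x_decls' }   @expected;
        return 'end-of-block'        if grep { $_ eq 'ruby_semicolon' } @expected;
        return 'syntax-error';
    }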

Conclusion

With this, we've covered the major points of this Haskell prototype parser. It produces an AST whose structure and node names are those of the 2010 Report. (The Marpa grammar introduces non-standard node names and rules, but these are pruned from the AST in post-processing.)

In the code, the grammars from the 2010 Report are included for comparison, so a reader can easily determine what syntax we left out. It might be tedious to add the rest, but I believe it would be unproblematic, with one interesting exception: fixity. To deal with fixity, we may haul out the Ruby Slippers again.

The code, comments, etc.

A permalink to the full code and a test suite for this prototype, as described in this blog post, is on Github. I expect to update this code, and the latest commit can be found here. Links for specific lines of code in this post are usually static permalinks to earlier commits.

To learn more about Marpa, a good first stop is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.

Footnotes

1. Graham Hutton and Erik Meijer, Monadic parser combinators, Technical Report NOTTCS-TR-96-4. Department of Computer Science, University of Nottingham, 1996, pp 30-35. http://eprints.nottingham.ac.uk/237/1/monparsing.pdf. Accessed 19 August 2018.

2. I use whitespace-significant parsing as a convenient example for this post, for historical reasons and for reasons of level of complexity. This should not be taken to indicate that I recommend it as a language feature.

3. Simon Marlow, Haskell 2010 Language Report, 2010. Online version accessed 21 August 2018. For layout, see in particular section 2.7 (pp. 12-14) and section 10.3 (pp. 131-134).

4. 2010 Report. The short examples are on p. 13 and p. 134. The long examples are on p. 14.

5. Paul Hudak, John Peterson and Joseph Fasel Gentle Introduction To Haskell, version 98. Revised June, 2000 by Reuben Thomas. Online version accessed 21 August 2018. The examples are in section 4.6, which is on pp. 20-21 of the October 1999 PDF.

6. Github Permalink.

7. Github Permalink.

8. Github Permalink.

9. Single-line comments are dealt with properly by lexing them as a different token and discarding them separately. Handling multi-line comments is not yet implemented -- it is easy in principle but tedious in practice and the examples drawn from the Haskell literature did not provide any test cases.

10. Github Permalink.

11. Github Permalink.

12. Github Permalink.

13. Github Permalink.

14. Github Permalink.

15. The reason for this is that by default a Marpa grammar determines which of its symbols are lexemes using the presence of those symbols on the LHS and RHS of the rules in its lexical and context-free grammars. A typical Marpa grammar requires a minimum of explicit lexeme declarations. (Lexeme declarations are statements with the :lexeme pseudo-symbol on their LHS.) As an aside, the Haskell 2010 Report is not always careful about the lexer/context-free boundary, and adopting its grammar required more use of Marpa's explicit lexeme declarations than usual.

16. Github Permalink.

17. Specifically, the presence of a <decls> alternative silences the usual warnings about symbols inaccessible from the start symbol. These warnings can be silenced in other ways, but at the prototype stage it is convenient to check that all symbols supposed to be accessible through <decls> are in fact accessible. There is a small startup cost to allowing the extra symbols in the grammars, but the runtime cost is probably not measurable.

18. Github Permalink.

19. Currently the handling of these is simplistic. A practical implementation of this method would want better reporting. In fact, Marpa's left eideticism allows some interesting things to be done in this respect.

20. Github Permalink.

21. Github Permalink.

Ocean of Awareness: Measuring language popularity

Language popularity

Github's linguist is seen as the most trustworthy tool for estimating language popularity[1], in large part because it reports its result as the proportion of code in a very large dataset, instead of web hits or searches.[2] It is ironic, in this context, that linguist avoids looking at the code, preferring to use metadata -- file name and the vim and shebang lines. Scanning the actual code is linguist's last resort.[3]

How accurate is this? For files that are mostly in a single programming language, currently the majority of them, linguist's methods are probably very accurate.

But literate programming often requires mixing languages. It is perhaps an extreme example, but much of the code used in this blog post comes from a Markdown file, which contains both C and Lua. This code is "untangled" from the Lua by ad-hoc scripts[4]. In my codebase, linguist identifies this code simply as Markdown.[5] linguist then ignores it, as it does all documentation files.[6]

Currently, this kind of homegrown literate programming may be so rare that it is not worth taking into account. But if literate programming becomes more popular, that trend might well slip under linguist's radar. And even those with a lot of faith in linguist's numbers should be happy to know they could be confirmed by more careful methods.

Token-by-token versus line-by-line

linguist avoids reporting results based on looking at the code, because careful line counting for multiple languages cannot be done with traditional parsing methods.[7] To do careful line counting, a parser must be able to handle ambiguity in several forms -- ambiguous parses, ambiguous tokens, and overlapping variable-length tokens.

The ability to deal with "overlapping variable-length tokens" may sound like a bizarre requirement, but it is not. Line-by-line languages (BASIC, FORTRAN, JSON, .ini files, Markdown) and token-by-token languages (C, Java, Javascript, HTML) are both common, and even today commonly occur in the same file (POD and Perl, Haskell's Bird notation, Knuth's CWeb).

Deterministic parsing can switch back and forth, though at the cost of some very hack-ish code. But for careful line counting, you need to parse line-by-line and token-by-token simultaneously. Consider this example:


    int fn () { /* for later
\begin{code}
   */ int fn2(); int a = fn2();
   int b = 42;
   return  a + b; /* for later
\end{code}
*/ }
    

A reader can imagine that this code is part of a test case using code pulled from a LaTeX file. The programmer wanted to indicate the copied portion of code, and did so by commenting out its original LaTeX delimiters. GCC compiles this code without warnings.

It is not really the case that LaTeX is a line-by-line language. But in literate programming systems[8], it is usually required that the \begin{code} and \end{code} delimiters begin at column 0, and that the code block between them be a set of whole lines so, for our purposes in this post, we can treat LaTeX as line-by-line. For LaTeX, our parser finds


  L1c1-L1c29 LaTeX line: "    int fn () { /* for later"
  L2c1-L2c13 \begin{code}
  L3c1-L5c31 [A CODE BLOCK]
  L6c1-L6c10 \end{code}
  L7c1-L7c5 LaTeX line: "*/ }"[9]

Note that in the LaTeX parse, line alignment is respected perfectly: The first and last are ordinary LaTeX lines, the 2nd and 6th are commands bounding the code, and lines 3 through 5 are a code block.

The C tokenization, on the other hand, shows no respect for lines. Most tokens are a small part of their line, and the two comments start in the middle of a line and end in the middle of one. For example, the first comment starts at column 17 of line 1 and ends at column 5 of line 3.[10]

What language is our example in? Our example is long enough to justify classification, and it compiles as C code. So it seems best to classify this example as C code[11]. Our parses give us enough data for a heuristic to make a decision capturing this intuition.[12]
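As a toy illustration of such a heuristic (my own sketch, not linguist's logic and not the code on Github), one could simply award the file to whichever language accounts for the most lines in the dual parse:

    # %lines_by_language might be ( C => 5, LaTeX => 2 ) for our example.
    sub classify_file {
        my (%lines_by_language) = @_;
        my ($winner) = sort { $lines_by_language{$b} <=> $lines_by_language{$a} }
                       keys %lines_by_language;
        return $winner;
    }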

Earley/Leo parsing and combinators

In a series of previous posts[13], I have been developing a parsing method that integrates Earley/Leo parsing and combinator parsing. Everything in my previous posts is available in Marpa::R2, which was in Debian stable as of jessie.

The final piece, added in this post, is the ability to use variable length subparsing[14], which I have just added to Marpa::R3, Marpa::R2's successor. Releases of Marpa::R3 pass a full test suite, and the documentation is kept up to date, but R3 is alpha, and the usual cautions[15] apply.

Earley/Leo parsing is linear for a superset of the LR-regular grammars, which includes all other grammar classes in practical use, and Earley/Leo allows the equivalent of infinite lookahead.[16] When the power of Earley/Leo gives out, Marpa allows combinators (subparsers) to be invoked. The subparsers can be anything, including other Earley/Leo parsers, and they can be called recursively[17]. Rare will be the grammar of practical interest that cannot be parsed with this combination of methods.

The example

The code that ran this example is available on Github. In previous posts, we gave larger examples[18], and our tools and techniques have scaled. We expect that the variable-length subparsing feature will also scale -- while it was not available in Marpa::R2, it is not in itself new. Variable-length tokens have been available in other Marpa interfaces for years and they were described in Marpa's theory paper.[19]

The grammars used in the example of this post are minimal. Only enough LaTex is implemented to recognize code blocks; and only enough C syntax is implemented to recognize comments.

The code, comments, etc.

To learn more about Marpa, a good first stop is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.

Footnotes

1. The github repo for linguist is https://github.com/github/linguist/.

2. Their methodology is often left vague, but it seems safe to say the careful line-by-line counting discussed in this post goes well beyond the techniques used in the widely-publicized lists of "most popular programming languages".

In fact, it seems likely these measures do not use line counts at all, but instead report the sum of blob sizes. Github's linguist does give a line count but Github does not vouch for its accuracy: "if you really need to know the lines of code of an entire repo, there are much better tools for this than Linguist." (Quoted from the resolution of Github linguist issue #1331.) The Github API's list-languages command reports language sizes in bytes. The API documentation is vague, but it seems the counts are the sum of blob sizes, with each blob classed as one and only one language.

Some tallies seem even more coarsely grained than this -- they are not even blob-by-blob, but assign entire repos to the "primary language". For more, see Jon Evans's Techcrunch article; and Ben Frederickson's project.

3. linguist's methodology is described in its README.md (permalink as of 30 September 2018).

4. This custom literate programming system is not documented or packaged, but those who cannot resist taking a look can find the Markdown file it processes here, and its own code here (permalinks accessed 2 October 2018).

5. For those who care about getting linguist as accurate as possible, there is a workaround: the linguist-language git attribute. This still requires that each blob be reported as containing lines of only one language.

6. For the treatment of Markdown, see linguist README.md (permalink accessed as of 30 September 2018).

7. Another possibility is a multi-scan approach -- one pass per language. But that is likely to be expensive. At last count there were 381 languages in linguist's database. Worse, it won't solve the problem: "liberal" recognition even of a single language requires more power than available from traditional parsers.

8. For example, these line-alignment requirements match those in Section 10.4 of the 2010 Haskell Language Report.

9. Adapted from test code in Github repo, permalink accessed 2 October 2018.

10. See the test file on Github.

11. Some might think the two LaTeX lines should be counted as LaTeX and, using subparsing of comments, that heuristic can be implemented.

12. To be sure, a useful tool would want to include considerably more of C's syntax. It is perhaps not necessary to be sure that a file compiles before concluding it is C. And we might want to class a file as C in spite of a fleeting failure to compile. But we do want to lower the probability of a false positive.

13. Marpa and procedural parsing; Marpa and combinator parsing; and Marpa and combinator parsing 2

14. There is documentation of the interface, but it is not a good starting point for a reader who has just started to look at the Marpa::R3 project. Once a user is familiar with Marpa::R3 standard DSL-based interface, they can start to learn about its alternatives here.

15. Specifically, since Marpa::R3 is alpha, its features are subject to change without notice, even between micro releases, and changes are made without concern for backward compatibility. This makes R3 unsuitable for a production application. Add to this that, while R3 is tested, it has seen much less usage and testing than R2, which has been very stable for some time.

16. Technically, a grammar is LR-regular if it can be parsed deterministically using a regular set as its lookahead. A "regular set" is a set of regular expressions. The regular set itself must be finite, but the regular expressions it contains can match lookaheads of arbitrary length.

17. See Marpa and combinator parsing 2

18. The largest example is in Marpa and combinator parsing 2

19. Kegler, Jeffrey. Marpa, A Practical General Parser: The Recognizer. Online version accessed of 24 April 2018. The link is to the 19 June 2013 revision of the 2012 original.

Dave's Free Press: Journal: Thankyou, Anonymous Benefactor!

Dave's Free Press: Journal: Number::Phone release

Dave's Free Press: Journal: Palm Treo call db module

Dave's Free Press: Journal: Ill

Dave's Free Press: Journal: CPANdeps upgrade

Dave's Free Press: Journal: YAPC::Europe 2006 report: day 3

Ocean of Awareness: Undershoot: Parsing theory in 1965

The difference between theory and practice is that in theory there is no difference between theory and practice, but in practice, there is.[1]

Once it was taken seriously that humans might have the power to, for example, "read" a chessboard in a way that computers could not beat. This kind of "computational mysticism" has taken a beating. But it survives in one last stronghold -- parsing theory.

In a previous post, I asked "Why is parsing considered solved?" If the state of the art of computer parsing is taken as anything close to its ultimate solution, then it is a case of "human exceptionalism" -- the human brain has some power that makes it much better at parsing than computers can be. It is very unlikely that resorting to human exceptionalism as an explanation would be accepted for any other problem in computer science. Why is it accepted for parsing theory?[2]

The question really requires two separate answers:

  • "Why do practitioners accept the current state of the art as the solution?" and
  • "Why do the theoreticians accept the current state of the art as the solution?"

In one sense, the answer to both questions is the same -- because of the consensus created by Knuth's 1965 paper "On the translation of languages from left to right". In a previous post, I looked at Knuth 1965 and I answered the practitioner question in detail. But, for the sake of brevity, I answered the question about the theoreticians in outline. This post expands on that outline.

The Practitioners

To summarize, in 1965, practitioners accepted the parsing problem as solved for the following reasons.

  • In 1965, every practical parser was stack-driven.
  • As of 1965, stacks themselves were quite leading edge. As recently as 1961, a leading edge article[3] could not assume that its readers knew what "pop" and "push" operations were.
  • An algorithm that combined state transitions and stack operations was already a challenge to existing machines. In 1965, any more complicated algorithm was likely to be unusable in practice.
  • Last, but not least, the theoreticians assured the practitioners that LR-parsing was either state-of-the-art or beyond, so making more aggressive use of hardware would be futile.

What about the theorists?

The practitioners of 1965, then, were quite reasonable in feeling that LR-parsing was as good as anything they were likely to be able to implement any time soon. And they were being told by the theorists that, in fact, it never would get any better -- there were theoretical limits on parsers that faster hardware could not overcome.

We now know that the theorists were wrong -- there are non-LR parsers which are better than the LR parsers are at LR grammars. What made the theorists go astray?

How theorists work

As the epigraph for this post reminds us, theorists who hope to guide practitioners have to confront a big problem -- theory is practice only in theory. Theoreticians (or at least the better ones, like Knuth) know this, but they try to make theory as reliable a guide to practice as possible.

One of the most important examples of the theoretician's successes is asymptotic notation, which we owe to Knuth[4]. Asymptotic notation is more commonly referred to as big-O notation. The term "asymptotic notation" emphasizes its most dangerous aspect from a practical point of view: Asymptotic notation assumes that the behavior of most interest is the behavior for arbitrarily large inputs.

Practical inputs can be very large but, by definition, they are never arbitrarily large. Results in asymptotic terms might be what is called "galactic" -- they might have relevance only in situations which cannot possibly occur in practice.

Fortunately for computer science, asymptotic results usually are not "galactic". Most often asymptotic results are not only relevant to practice -- they are extremely relevant. Wikipedia pages for algorithms put the asymptotic complexities in special displays, and these displays are one of the first things that some practitioners look at.

Bracketing

Since coming up with a theoretical model that is equivalent to "practical" is impossible, theoreticians often work like artillerists. Artillerists often deliberately overshoot and undershoot, before they "fire for effect". "Bracketing" their target in this way has disadvantages -- it reduces the element of surprise, and can even allow the enemy to get their counter-fire in first. But, nasty as these consequences could be, the advantage in accuracy is usually held to outweigh them.

The practice of theoretical computer science is less risky, which makes "bracketing" a very attractive approach to tricky problems. Theoreticians often try to "bracket" practice between an "undershoot" and an "overshoot". The undershoots are models simple and efficient enough to be practical, but too weak to capture all the needs of practice. The overshoots are models which capture everything a practitioner needs, but which are too complicated and/or too resource-intensive for practice.

The P vs. NP problem is an active example of a bracketing technique. You will sometimes read that the P/NP boundary is expected to be that between practical and impractical, but this is an extreme simplification. P includes complexities like O(n^1000000), where the complexity for even n == 2 is a number which, in decimal form, fills many pages. Modulo bold advances in quantum computing, I cannot imagine that O(n^1000000) will ever be practical. And you can make the complexities much harder than O(n^1000000) without ever reaching P-hard.

So P-hard is beyond any reasonable definition of "practical" -- it is an "overshoot". But the P vs. NP question is almost certainly very relevant to what is "practical". Resolving the P vs. NP question is likely to be an important or even necessary step. It is a mystery that such a seemingly obvious question has resisted the best efforts of the theoreticians for so long, and the solution of P vs. NP is likely to bring new insights into asymptotic complexity.

Bracketing practical parsing

When Knuth published his 1965, "practical parsing" was already bracketed. On the overshoot side, Irons had already published a parser for context-free grammars. Worst case, this ran in exponential time, and it was, and remains, expected that general context-free parsing was not going to be practical.[5]

On the undershoot side, there were regular expressions and recursive descent. Regular expressions are fast and very practical, but parse a very limited set of grammars. Recursive descent is also fast and, since it parses a larger set of grammars, was the closest undershoot.

Mistake 1: The misdefinition of "language"

To curry respect from the behaviourists, American linguistics for many years banned any reference to meaning. Behaviorists looked down on hypothesized mental states as not worthy of "science", and it's hard to have a theory of meaning without conjectures about mental states. Without mental states, language was just a set of utterances. So in 1926 the linguist Leonard Bloomfield dutifully defined a "language" as a set of "utterances" (for our purposes, "strings"), and through the 30s and 40s most American linguists followed him.

After a brief nod to this tradition, Noam Chomsky restored sanity to linguistics. But it was too late for computer science. Automata theory adopted the semantics-free definition. In 1965, Knuth inherited a lot of prior work, almost all of which ignored, not just meaning or semantics, but even syntax and structure.[6]

Language extension versus language intension

Knuth, of course, wanted to make contact with prior art. The definition he had inherited seemed to work well enough and Knuth's 1965 defines a language as a set of strings. Most subsequent work has refused to breach this tradition.

In most people's idea of what a language is, the utterances/strings mean something -- you cannot take just any set of meaningless strings and call it a language. So the parsing theorists and everybody else had two different definitions of language.

But parsing theory also hoped to produce results relevant to practice, and few people are interested in recognizing meaningless strings -- almost everybody who parses is interested in (at a minimum) finding some kind of structure in what they parse, in order to do something with the result of the parse. Parsing theorists ended up using the word "language" in one sense, but implying that results they found worked for the word "language" in the usual sense.

At this point both senses of the word "language" have gotten entrenched in parsing theory. Instead of making up a new terminology for this blog post, I will borrow a distinction from linguistics and speak of the extension of a language and the intension of a language. The extension of a language is the Bloomfieldian definition -- the set of utterances/strings in the language. The intension of a language, for our purposes here, can be regarded as its BNF grammar. Each language intension will have (if it is well-defined) exactly one extension. But multiple language intensions can have the same extension.

Red Herring 1: The stack machine model as a natural boundary

The temptation to use language extensions as a proxy for LR-grammars must have been overwhelming. It turns out that the language extension of deterministic stack machines is exactly that of the LR grammars. Further, the language extension of the context-free grammars is exactly that of the non-deterministic stack machines. (Non-deterministic stack machines are stack machines which can "fork" new instances of themselves on the fly.)

If you take language extensions as the proxy for grammars, things fall into place very neatly: the LR-parsers are the deterministic subset of the context-free parsers. And "deterministic" seemed like a very good approximation of practical. Certainly non-deterministic parsing is probably not practical. And the best practical parsers in 1965 were deterministic stack parsers.

Viewed this way, LR-parsing looked like the equivalent of practical parsing. It was a "direct hit", or as close to an exact equivalent of practical parsing as theory was going to get.

As we shall see, with this red herring, the reasoning went astray. But disaster was not inevitable. The whole point of bracketing, after all, is that it allows you to correct errors. Another red herring, however, resulted in parsing theory going on a decades-long wrong turn.

Red Herring 2: LR parsers are not good at LR grammars

The second red herring led to the mis-bracketing of practical parsing. Having seemingly established that LR-parsing is a natural boundary in the hierarchy of languages, Knuth discovered that general LR-parsers were very far from practical. LR parsing goes out to LR(k) for arbitrary k, but even LR(1) parsing was impractical in 1965 -- in fact, it is rare in practical use today. As the k in LR(k) grows, the size of the tables grows exponentially, while the value of the additional lookahead rapidly diminishes. It is not likely that LR(2) parsing will ever see much practical use, never mind LR(k) for any k greater than 2.

From this it was concluded that LR-parsing is an overshoot. In reality, as Joop Leo was to show, it is an undershoot, and in practical terms a very large one. If you mistake an undershoot for an overshoot, bracketing no longer works, and you are not likely to hit your target.

The Wrong Turn

Summing up, parsing theorists concluded, based on the results of Knuth 1965, that

  • LR-parsing is a good approximation to practical parsing -- it brackets it closely.
  • LR-parsing is an overshoot.
  • A subset of LR-parsing will be the solution to the parsing problem.

Signs of trouble ignored

There were, in hindsight, clear signs that LR language extensions were not a good proxy for LR grammars. LR grammars form a hierarchy -- for every k≥0, there is an LR grammar which is LR(k+1), but which is not LR(k).

But if you look at extensions instead of grammars, the hierarchy immediately collapses -- every LR(k) language extension is also an LR(1) language extension, as long as k≥1. Only LR(0) remains distinct.

It gets worse. In most practical applications, you can add an end-of-input marker to a grammar. If you do this the LR extension hierarchy collapses totally -- every LR(k) language extension is also an LR(0) language extension.

In short, it seems that, as a proxy for LR grammars, LR language extensions are likely to be completely worthless.

Why didn't Knuth see the problem?

Why didn't Knuth see the problem? Knuth certainly noted the strange behavior of the LR hierarchy in extensional terms -- he discovered it, and devoted several dense pages of his 1965 to laying out the complicated mathematics involved.

So why did Knuth expect to get away with punning intension and extension, even in the face of some very unsettling results? Here, the answer is very simple -- "punning" had always worked before.

Regular expressions are easily turned into parsers[7], so the language extension of a regular grammar is an adequate approximation to its intension. Context-free recognition has the same complexity, and in practice uses the same algorithms, as context-free parsing, so here again, language extension is a good approximation of language intension.

And the LL language extensions follow a strict hierarchy -- for every k≥0, LL(k+1) is a proper superset of LL(k). This fact forces LL grammars to follow the same hierarchy[8]. So, when studying complexity, LL language extensions are an excellent proxy for LL grammars.

Based on past experience, Knuth had reason to believe he could use language extensions as a proxy for grammars, and that the result would be a theory that was a reliable guide to practice.

Aftermath

In my timeline of parsing, I describe what happened next. Briefly, theory focused on finding a useful subset of LR(1). One, LALR, became the favorite and the basis of the yacc and bison tools.

Research into parsing of supersets of LR became rare. The theorists were convinced that LR parsing was the solution. They were so convinced that when, in 1991, Joop Leo discovered a practical way to parse an LR superset, the result went unimplemented for decades.

In 1965, the theoreticians gave a lot of weight to the evidence from the world of practice, but probably not undue weight. Going forward, it was a different story.

Leo had, in essence, disproved the implied conjecture of Knuth 1965. But the question is not an explicit mathematical question, like that of P vs. NP. It is a slipperier one -- capturing practice. Practitioners left it to the theoreticians to keep up with the literature. But theoreticians, as long as LR-superset methods did not come into use in the world of practice, felt no need to revisit their conclusions.

Comments, etc.

I encourage those who want to know more about the story of Parsing Theory to look at my Parsing: a timeline 3.0. To learn about Marpa, my Earley/Leo-based parsing project, there is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.

Footnotes

1. Attributed to Jan L. A. van de Snepscheut and Yogi Berra. See https://en.wikiquote.org/wiki/Jan_L._A._van_de_Snepscheut, accessed 1 July 2018. I quote my preferred form of this -- the one it takes in Doug Rosenberg and Matt Stephens, Use Case Driven Object Modeling with UML: Theory and Practice, 2007, p. xxvii. Rosenberg and Stephens is also the accepted authority for its attribution.

2. As an aside, I am open to the idea that the human mind has abilities that Turing machines cannot improve on or even duplicate. When it comes to survival heuristics tied to the needs of human bodies, for example, it seems very reasonable to at least entertain the conjecture that the human mind might be near-optimal, particularly in big-O terms. But when it comes to ability to solve problems which can be formalized as "puzzles" -- and syntactic analysis is one of these -- I think that resort to human exceptionalism is a sign of desperation.

3. Oettinger, Anthony. "Automatic Syntactic Analysis and the Pushdown Store", Proceedings of Symposia in Applied Mathematics, Volume 12, American Mathematical Society, 1961. Oettinger describes "push" and "pop" stack operations in "Iverson notation" -- what later became APL. See the discussion of Oettinger in my "Why is parsing considered solved?" post.

4. Knuth did not invent asymptotic notation -- it comes from calculus -- but he introduced it to computer science and motivated its use.

5. The best lower bound for context-free parsing is still O(n). So it is even possible that there is a practical linear-time general context-free parser. But its discovery would be a big surprise.

6. In another blog post, I talk about the use of the word "language" in parsing theory in much more detail.

7. For example, regular expressions can be extended with "captures". Captures cannot handle recursion, but neither can regular expressions, so captures are usually sufficient to provide all the structure an application wants.

8. The discussion of the LL(k) hierarchy is in a sense anachronistic -- the LL(k) hierarchy was not studied until after 1965. But Knuth certainly was aware of recursive descent, and it seems reasonable to suppose that, even in 1965, he had a sense of what the LL hierarchy would look like.

Dave's Free Press: Journal: Graphing tool

Dave's Free Press: Journal: Travelling in time: the CP2000AN

Dave's Free Press: Journal: XML::Tiny released

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 1

Ocean of Awareness: Parsing Timeline 3.1

Announcing Timeline 3.1

I have just released version 3.1 of my Parsing Timeline. It is a painless introduction to a fascinating and important story which is scattered among one of the most forbidding literatures in computer science. Previous versions of this timeline have been, by far, the most popular of my writings.

A third of Timeline 3.1 is new, added since the 3.0 version. Much of the new material is adapted from previous blog posts, both old and recent. Other material is completely new. The sections that are not new with 3.1 have been carefully reviewed and heavily revised.

Comments, etc.

My interest in parsing stems from my own approach to it -- a parser in the Earley/Leo lineage named Marpa. To learn more about Marpa, a good first stop is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.

Header image by Tambako the Jaguar. Some rights reserved.