Dave's Free Press: Journal: Module pre-requisites analyser

As a service to module authors, here is a tool to show a module's pre-requisites and the test results from the CPAN testers. So before you rely on something working as a pre-requisite for your code, have a look to see how reliable it and its dependencies are.

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 2

A day of not many talks, but lots of cool stuff. Damian was his usual crazy self, and MJD's talk on building parsers was really good. Although I probably won't use those techniques at work as functional programming seems to scare people.

The conference dinner at a Heuriger on the outskirts of Vienna was great. The orga-punks had hired a small fleet of buses to get us there and back, and one of the sponsors laid on a great buffet. The local wine was pretty damned fine too, and then the evening degenerated into Schnapps, with toasts to Her Majesty, to her splendid navy, and to The Village People.

It wasn't all debauchery in the evening though - on the bus, I had a very useful chat with Philippe about Net::Proxy, and re-designing it to make it easier to create new connectors for it.

Dave's Free Press: Journal: CPANdeps

CPANdeps (http://cpandeps.cantrell.org.uk/) now lets you filter test results by perl version number, and also knows what modules were in core in which versions of perl. Hurrah!

Dave's Free Press: Journal: Thanks, Yahoo!

[originally posted on Apr 3 2008]

I'd like to express my warm thanks to the lovely people at Yahoo and in particular to their bot-herders. Until quite recently, their web-crawling bots had most irritatingly obeyed robot exclusion rules in the robots.txt file that I have on CPANdeps. But in the last couple of weeks they've got rid of that niggling little exclusion so now they're indexing all of the CPAN's dependencies through my site! And for the benefit of their important customers, they're doing it nice and quickly - a request every few seconds instead of the pedestrian once every few minutes that gentler bots use.

Unfortunately, because generating a dependency tree takes more time than they were allowing between requests, they were filling up my process table, and all my memory, and eating all the CPU, and the only way to get back into the machine was by power-cycling it. So it is with the deepest of regrets that I have had to exclude them.


[update] For fuck's sake, they're doing it again from a different netblock!

Dave's Free Press: Journal: YAPC::Europe 2007 travel plans

I'm going to Vienna by train for YAPC::Europe. If you want to join me you'll need to book in advance, and probably quite some way in advance as some of these trains apparently get fully booked.

Waterloo      1740          Fri 24 Aug
Paris Nord    2117
Paris Est     2245
Munich        0859 / 0928   Sat 25 Aug

The first two legs of that are second class, cos first wasn't available on Eurostar (being a Friday evening it's one of the commuter Eurostars and gets booked up months and months in advance) and was way too spendy on the sleeper to Munich. Upgrading to first class from Munich to Vienna is cheap, so I have.

Coming back it's first class all the way cos upgrading was nearly free ...

Vienna        0930          Fri 31 Aug
Zurich        1402          Sun 2 Sep
Paris Est     1834
Paris Nord    2013

Don't even think about trying to book online or over the phone, or at the Eurostar ticket office at Waterloo. Your best bet is to go to the Rail Europe shop on Piccadilly, opposite the Royal Academy and next to Fortnums.

Dave's Free Press: Journal: Perl isn't dieing

Perl isn't dieing, but it tells me that it wishes it was. Last night it went out on the piss with Python and Ruby (PHP was the designated driver) and it did rather too many cocktails. It isn't quite sure what happened, but it woke up in the gutter in a puddle of its own fluids and its head hurts a lot.

It asked me to ask you all to keep the volume down.

Dave's Free Press: Journal: Wikipedia handheld proxy

I got irritated at how hard it was to use Wikipedia on my Treo. There's so much rubbish splattered around their pages that it Just Doesn't Work on such a small screen. Given that no alternatives seemed to be available - at least, Google couldn't find any - I decided to write my own Wikipedia handheld proxy.

It strips away all the useless rubbish that normally surrounds Wikipedia pages, as well as things like the editing functions which are also hard to use on portable devices. Internally, it's implemented using perl, LWP, and mod_perl, and is hosted by Keyweb.de.

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 3

My Lightning Talk on cpandeps went down really well, although as José pointed out, I need to fix it to take account of File::Copy being broken. I also need to talk to Domm after the conference is over to see if I can get dependency information from CPANTS as well as from META.yml files.

There were lots of other good lightning talks. Dmitri Karasik's regexes for doing OCR, Juerd Waalboer's Unicode::Semantics, and Renée Bäcker's Win32::GuiTest were especially noteworthy.

Richard Foley's brief intro to the perl debugger was also useful. Unfortunately Hakim Cassimally's talk was about debugging web applications, which I'd not noticed on the schedule, so I didn't stay for that.

And finally, Mark Fowler's grumble about why perl sucks (and what to do about it) had a few interesting little things in it. I am having vaguely sick ideas about mixing some of that up with an MJD-stylee parser.

At the auction I paid €250 to have the Danish organisers of next year's YAPC::Europe wear the Swedish flag on their foreheads. This, I should point out, was Greg's idea. I would never be so evil on my own.

Dave's Free Press: Journal: POD includes

One of my CPAN distributions is CPAN-FindDependencies. It contains a module CPAN::FindDependencies, and a simple script that wraps around it so you can view dependencies easily from the command line. That script, naturally, has a man page. However, that manpage basically says "if you want to know what arguments this program takes, see the CPAN::FindDependencies docs". This is Bad from a usability point of view, good from a not-duplicating-stuff point of view, and good from a laziness point of view. Which means that it's Bad.

So, the solution.


#include shared/parameters


and some Magic that does the cpp-stylee substitution at make dist time. Note the 'dist' section in my call to WriteMakefile.
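For the curious, here is a minimal sketch of one way to arrange that; the script name, file list and substitution logic here are illustrative inventions, not my actual code. ExtUtils::MakeMaker's dist PREOP hook runs a command just before the distribution directory is rolled into a tarball:

    # In Makefile.PL
    use ExtUtils::MakeMaker;
    WriteMakefile(
        NAME         => 'CPAN::FindDependencies',
        VERSION_FROM => 'lib/CPAN/FindDependencies.pm',
        dist         => {
            # expand the #includes in the copies that will ship
            PREOP => '$(PERL) tools/expand-includes '
                   . '$(DISTVNAME)/lib/CPAN/FindDependencies.pm',
        },
    );

And the expander itself can be a few lines of filter:

    #!/usr/bin/perl
    # tools/expand-includes (hypothetical): rewrite each file named on
    # the command line in place, replacing every "#include FILE" line
    # with the contents of FILE.
    use strict;
    use warnings;

    $^I = '';    # in-place editing via the diamond operator
    while (<>) {
        if (/^#include\s+(\S+)\s*$/) {
            open my $inc, '<', $1 or die "can't open $1: $!\n";
            print <$inc>;
        }
        else {
            print;
        }
    }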

This is, of course, crying out to be made less horribly hacky, but it works for now, so I'm happy.

My original idea was to write some crazy shit that would do the #include at install-time, when the user was installing my code. But that has the disadvantage that tools like search.cpan wouldn't show it properly, as they simply look at the files in the distribution. So this does the #includes at the last moment just before I package up the code and upload to the PAUSE. You lovely people get the right documentation in all the right places, I only have to maintain it in one place so it stays in sync, and (in the interests of Laziness) I don't have to remember to run any extra scripts before releasing, make dist just Does The Right Thing.

Dave's Free Press: Journal: Bryar security hole

Someone on IRC reported a bug in Bryar. Namely that a Naughty Person can exploit the feature that notifies you of blog-spam by email to execute arbitrary code on your machine, as the user you run Bryar under.

A patched release is on the way to the CPAN, and you are strongly urged to upgrade.

Dave's Free Press: Journal: cgit syntax highlighting

NB: since writing this I have migrated all my code to Github and stopped using cgit. Therefore many of the links no longer work

For the last few months I've been using git for my version control system. It's better than CVS because it can handle offline commits. So if I'm using my laptop on a train, I can still use version control without having to have a network connection.

And to give a pretty web front-end to it for other people to read code without having to check it out of the repository, I use cgit, which I mostly chose because it's a dead simple CGI and not a huge fancy application.

One problem with cgit is that by default it doesn't do code highlighting. But it has the ability to run blobs of code through any filter you care to name before displaying them, so to get something nice like this all you need to do is write a highlighter and add a single line to your cgitrc:
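Assuming the highlighter below is installed as /usr/local/bin/cgit-highlight (the path is wherever you choose to put it), that single line points cgit's source-filter option at it:

    source-filter=/usr/local/bin/cgit-highlight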


My highlighter program is this:

    #!/usr/local/bin/perl

    use warnings;
    use strict;

    # cgit invokes the filter with the file's name as its only argument
    # and the file's contents on standard input; whatever the filter
    # writes to standard output is what gets displayed.
    my $file = shift;

    if($file =~ /\.(p[ml]|t)$/i) {
        # Perl code (.pl, .pm, .t): have perltidy render it as HTML
        system "/usr/local/bin/perltidy -html -st -ntoc -npod -pre -nss -nnn";
    } else {
        # anything else: pass it through with line numbers
        system "cat -n";
    }

Ocean of Awareness: Infinite Lookahead and Ruby Slippers

About this post

This post presents a practical, compact example which demonstrates a use case for both infinite lookahead and Ruby Slippers parsing. While the example itself is very simple, this post may not be a good first tutorial -- it focuses on Marpa implementation strategy, instead of basics.

About Urbit

The example described in this post is one part of hoonlint. hoonlint, currently under development, will be a "lint" program for a language called Hoon.

Hoon is part of the Urbit project. Urbit is an effort to return control of the Internet experience to the individual user. (The Urbit community has, generously, been supporting my work on Hoon.)

The original Internet and its predecessors were cosy places. Users controlled their experience. Authority was so light you could forget it was there, but so adequate to its task that you could forget why it was necessary. What we old timers do remember of the early Internet was the feeling of entering into a "brave new world".

The Internet grew beyond our imaginings, and our pure wonder of decades ago now seems ridiculous. But the price has been a shift of power which should be no laughing matter. Control of our Internet experience now resides in servers, run by entities which make no secret of having their own interests. Less overt, but increasingly obvious, is the single-mindedness with which they pursue those interests.

And the stakes have risen. In the early days, we used the Internet as a supplement in our intellectual lives. Today, we depend on it in our financial and social lives. Today, the server-sphere can be a hostile place. Going forward it may well become a theater of war.

We could try to solve this problem by running our own servers. But this is a lot of work, and only leaves us in touch with those willing and able to do that. In practice, this seems to be nobody.

Urbit seeks to solve these problems with hassle-free personal servers, called urbits. Urbits are journaling databases, so they are incorruptible. To make sure they can be run anywhere in the cloud[1], they are based on a tiny virtual machine, called Nock. To keep urbits compact and secure, Urbit takes on code bloat directly -- Urbit is an original design from a clean slate, with a new protocol stack.

About Hoon

Nock's "machine language" takes the form of trees of arbitrary precision integers. The integers can be interpreted as strings, floats, etc., as desired. And the trees can be interpreted as lists, giving Nock a resemblance to a LISP VM. Nock does its own memory management and takes care of its own garbage collection.[2]

Traditionally, there are two ways to enter machine language,

  • Physically, for example, by toggling it into a machine's front panel. Originally, entering it physically was the only way.
  • Indirectly, using assembler or some higher-level language, like C. Once these indirect methods existed, they rapidly took over as the most common way to create machine language.
Like traditional machine language, Nock cannot be written directly. Hoon is Urbit's equivalent of C -- it is Urbit's "close to the metal" higher level language.

Not that Hoon looks much like C, or anything else you've ever seen. This is a Hoon program that takes an integer argument, call it n, and returns the first n counting numbers:

    |=  end=@                                               ::  1
    =/  count=@  1                                          ::  2
    |-                                                      ::  3
    ^-  (list @)                                            ::  4
    ?:  =(end count)                                        ::  5
      ~                                                     ::  6
    :-  count                                               ::  7
    $(count (add 1 count))                                  ::  8

Hoon comments begin with a "::" and run until the next newline. The above Hoon sample uses comments to show line numbers.

The example for this post will be a hoonlint subset: a multi-line comment linter. Multi-line comments are the only Hoon syntax we will talk about. (For those who want to know more about Hoon, there is a tutorial.)

About Hoon comments

In basic Hoon syntax, multi-line comments are free-form. In practice, Hoon authors tend to follow a set of conventions.


In the simplest case, a comment must precede the code it describes, and be at the same indent. These simple cases are called "pre-comments".[3] For example, this code contains a pre-comment:

	  :: pre-comment 1
	  [20 (mug bod)]


Hoon multi-line comments may also contain "inter-comments". The inter-comments are aligned depending on the syntax. In the display below, the inter-comments are aligned with the "rune" of the enclosing sequence. A "rune" is Hoon's rough equivalent of a "keyword". Runes are always digraphs of special ASCII characters. The rune in the following code is :~, and the sequence it introduces includes pre-comments, inter-comments and meta-comments.

      :~  [3 7]
	  :: pre-comment 1
	  [20 (mug bod)]
	  :: pre-comment 2
	  [2 yax]
	  :: pre-comment 3
	  [2 qax]
    ::    :: pre-comment 4
    ::    [4 qax]
	  :: pre-comment 5
	  [5 tay]

When inter-comments are empty, as they are in the above, they are called "breathing comments", because they serve to separate, or allow some "air" between, elements of a sequence. For clarity, the pre-comments in the above are further indicated: all and only pre-comments contain the text "pre-comment".


The above code also contains a third kind of comment -- meta-comments. Meta-comments must occur at the far left margin -- at column 1. These are called meta-comments, because they are allowed to be outside the syntax structure. One common use for meta-comments is "commenting out" other syntax. In the above display, the meta-comments "comment out" the comment labeled "pre-comment 4" and its associated code.

Staircase comments

Finally, there are "staircase comments", which are used to indicate the larger structure of Hoon sequences and other code. For example,

    ::                                                      ::
    ::::  3e: AES encryption  (XX removed)                  ::
      ::                                                    ::
    ::                                                      ::
    ::::  3f: scrambling                                    ::
      ::                                                    ::
      ::    ob                                              ::

Each staircase consists of three parts. In lexical order, these parts are an upper riser, a tread, and a lower riser. The upper riser is a sequence of comments at the same alignment as an inter-comment. The tread is also at the inter-comment alignment, but must be 4 colons ("::::") followed by whitespace. The lower riser is a sequence of comments indented two spaces more than the tread.

Hoon comment conventions

Hoon's basic syntax allows comments to be free-form. In practice, there are strict conventions for these comments, conventions we would like to enforce with hoonlint.

  1. A multi-line comment may contain an "inter-part", a "pre-part", or both.
  2. If both an inter-part and a pre-part are present, the inter-part must precede the pre-part.
  3. The inter-part is a non-empty sequence of inter-comments and staircases.
  4. A pre-part is a non-empty sequence of pre-comments.
  5. Meta-comments may be inserted anywhere in either the pre-part or the inter-part.
  6. Comments which do not obey the above rules are bad comments. A good comment is any comment which is not a bad comment.
  7. A comment is not regarded as a meta-comment if it can be parsed as a structural comment. A structural comment is any good comment which is not a meta-comment.


We will implement these conventions using the BNF of this section. The sections to follow outline the strategy behind the BNF.

    :start ::= gapComments
    gapComments ::= OptExceptions Body
    gapComments ::= OptExceptions
    Body ::= InterPart PrePart
    Body ::= InterPart
    Body ::= PrePart
    InterPart ::= InterComponent
    InterPart ::= InterruptedInterComponents
    InterPart ::= InterruptedInterComponents InterComponent

    InterruptedInterComponents ::= InterruptedInterComponent+
    InterruptedInterComponent ::= InterComponent Exceptions
    InterComponent ::= Staircases
    InterComponent ::= Staircases InterComments
    InterComponent ::= InterComments

    InterComments ::= InterComment+

    Staircases ::= Staircase+
    Staircase ::= UpperRisers Tread LowerRisers
    UpperRisers ::= UpperRiser+
    LowerRisers ::= LowerRiser+

    PrePart ::= ProperPreComponent OptPreComponents
    ProperPreComponent ::= PreComment
    OptPreComponents ::= PreComponent*
    PreComponent ::= ProperPreComponent
    PreComponent ::= Exception

    OptExceptions ::= Exception*
    Exceptions ::= Exception+
    Exception ::= MetaComment
    Exception ::= BadComment
    Exception ::= BlankLine

Technique: Combinator

Our comment linter is implemented as a combinator. The main hoonlint parser invokes this combinator when it encounters a multi-line comment. Because of the main parser, we do not have to worry about confusing comments with Hoon's various string and in-line text syntaxes.

Note that while combinator parsing is useful, it is a technique that can be oversold. Combinators have been much talked about in the functional programming literature[4], but the current flagship functional programming language compiler, the Glasgow Haskell Compiler, does not use combinators to parse its version of Haskell -- instead it uses a parser in the yacc lineage.[5] As a parsing technique on its own, the use of combinators is simply another way of packaging recursive descent with backtracking, and the two techniques share the same power, the same performance, and the same downsides.

Marpa is much more powerful than either LALR (yacc-lineage) parsers or combinators, so we can save combinator parsing for those cases where combinator parsing really is helpful. One such case is lexer mismatch.

Lexer mismatch

The first programming languages, like BASIC and FORTRAN, were line-structured -- designed to be parsed line-by-line.[6] After ALGOL, new languages were usually block-structured. Blocks can start or end in the middle of a line, and can span multiple lines. And blocks are often nested.

A line-structured language requires its lexer to think in terms of lines, but this approach is completely useless for a block-structured language. Combining both line-structured and block-structured logic in the same lexer usually turns the lexer's code into a rat's nest.

Calling a combinator every time a line-structured block is encountered eliminates the problem. The main lexer can assume that the code is block-structured, and all the line-by-line logic can go into combinators.

Technique: Non-determinism

Our grammar is non-deterministic, but unambiguous. It is unambiguous because, for every input, it will produce no more than one parse.

It is non-deterministic because there is a case where it tracks two possible parses at once. The comment linter cannot immediately distinguish between a prefix of the upper riser of a staircase, and a prefix of a sequence of inter-comments. When a tread and lower riser are encountered, the parser knows it has found a staircase, but not until then. And if the parse is of an inter-comment sequence, the comment linter will not be sure of this until the end of the sequence.

Technique: Infinite lookahead

As just pointed out, the comment linter does not know whether it is parsing a staircase or an inter-comment sequence until either

  • it finds a tread and lower riser, in which case it knows the correct parse will be a staircase; or
  • it successfully reaches the end of the inter-comment sequence, in which case it knows the correct parse is an inter-comment sequence.
To determine which of these two choices is the correct parse, the linter needs to read an arbitrarily long sequence of tokens -- in other words, the linter needs to perform infinite lookahead.

Humans deal with infinite lookaheads all the time -- natural languages are full of situations that require them.[7] Modern language designers labor to avoid the need for infinite lookahead, but even so cases where it is desirable pop up.[8]

Fortunately, in 1991, Joop Leo published a method that allows computers to emulate infinite lookahead efficiently. Marpa uses Joop's technique. Joop's algorithm is complex, but the basic idea is to do what humans do in the same circumstance -- keep all the possibilities in mind until the evidence comes in.

Technique: the Ruby Slippers

Recall that, according to our conventions, our parser does not recognize a meta-comment unless no structural comment can be recognized. We could implement this in BNF, but it is much more elegant to use the Ruby Slippers.[9]

As those already familiar with Marpa may recall, the Ruby Slippers are invoked when a Marpa parser finds itself unable to proceed with its current set of input tokens. At this point, the lexer can ask the Marpa parser what token it does want. Once the lexer is told what the "wished-for" token is, it can concoct one, out of nowhere if necessary, and pass it to the Marpa parser, which then proceeds happily. In effect, the lexer acts like Glinda the Good Witch of Oz, while the Marpa parser plays the role of Dorothy.

In our implementation, the Marpa parser, by default, looks only for structural comments. If the Marpa parser of our comment linter finds that the current input line is not a structural comment, the Marpa parser halts and tells the lexer that there is a problem. The lexer then asks the Marpa parser what it is looking for. In this case, the answer will always be the same: the Marpa parser will be looking for a meta-comment. The lexer checks to see if the current line is a comment starting at column 1. If there is a comment starting at column 1, the lexer tells the Marpa parser that its wish has come true -- there is a meta-comment.
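By way of illustration, here is a minimal sketch of that conversation, written against Marpa::R2's Scanless interface. The grammar is a toy stand-in for the comment-linter BNF given earlier, and the symbol names and column tests are inventions for this example, not the hoonlint code:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Marpa::R2;

    # A toy grammar: a sequence of lines, each either a structural
    # comment or a meta-comment. The "unicorn" lexemes can never be
    # matched by internal scanning, so every token must arrive via
    # lexeme_read() -- pure external lexing.
    my $dsl = q{
        :default ::= action => ::array
        lines ::= line+
        line ::= StructuralComment | MetaComment
        StructuralComment ~ unicorn
        MetaComment ~ unicorn
        unicorn ~ [^\d\D]
    };

    my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
    my $recce   = Marpa::R2::Scanless::R->new( { grammar => $grammar } );

    my $input = "  :: a structural comment\n:: a meta-comment\n";
    $recce->read( \$input, 0, 0 );    # zero-length read: lexing is external

    my $pos = 0;
    for my $line ( split /^/, $input ) {
        my $len = length $line;
        if ( $line =~ /^\s+::/ ) {    # properly indented: structural
            $recce->lexeme_read( 'StructuralComment', $pos, $len, $line );
        }
        else {
            # Not a structural comment: ask the parser what it wishes for.
            my @wanted = @{ $recce->terminals_expected() };
            if ( ( grep { $_ eq 'MetaComment' } @wanted ) and $line =~ /^::/ ) {
                # Grant the wish: the comment at column 1 is a meta-comment.
                $recce->lexeme_read( 'MetaComment', $pos, $len, $line );
            }
        }
        $pos += $len;
    }
    my $value_ref = $recce->value();    # the parse ran to completion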

Another way to view the Ruby Slippers is as a kind of exception mechanism for grammars. In this application, we treat inability to read a structural comment as an exception. When the exception occurs, if possible, we read a meta-comment.

Technique: Error Tokens

Error tokens are a specialized use of the Ruby Slippers. The application for this parser is "linting" -- checking that the comments follow conventions. As such, the main product of the parser is not the parse -- it is the list of errors gathered along the way. So stopping the parser at the first error does not make sense.

What is desirable is to treat all inputs as valid, so that the parsing always runs to the end of input, in the process producing a list of the errors. To do this, we want to set up the parser so that it reads special "error tokens" whenever it encounters a reportable error.

This is perfect for the Ruby Slippers. If an "exception" occurs, as above described for meta-comments, but no meta-comment is available, we treat it as a second level exception.

When would no meta-comment be available? There are two cases:

  • The line read is a comment, but it does not start at column 1.
  • The line read is a blank line (all whitespace).

On the second exception level, the current line will be read as either a <BlankLine>, or a <BadComment>. We know that every line must lex as either a <BlankLine> or a <BadComment> because our comment linter is called as a combinator, and the parent Marpa parser guarantees this.
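Sticking with the toy sketch from the Ruby Slippers section, and assuming its grammar also grows <BlankLine> and <BadComment> lexemes (and an @errors array to collect the linter's findings), the per-line logic becomes a fallback chain, so that no input line can stop the parse:

    # Per-line logic with error tokens added. Control falls from the
    # structural reading, to the meta-comment wish, to the two error
    # tokens, so the parse always runs to the end of input.
    if ( $line =~ /^\s+::/ ) {       # structural comment
        $recce->lexeme_read( 'StructuralComment', $pos, $len, $line );
    }
    elsif ( $line =~ /^::/ ) {       # first exception level: meta-comment
        $recce->lexeme_read( 'MetaComment', $pos, $len, $line );
    }
    elsif ( $line =~ /^\s*$/ ) {     # second exception level: blank line
        $recce->lexeme_read( 'BlankLine', $pos, $len, $line );
        push @errors, "blank line at offset $pos";
    }
    else {                           # comment not starting at column 1
        $recce->lexeme_read( 'BadComment', $pos, $len, $line );
        push @errors, "bad comment at offset $pos";
    }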

Technique: Ambiguity

Marpa allows ambiguity, which could have been exploited as a technique. For example, in a simpler BNF than that we used above, it might be ambiguous whether a meta-comment belongs to an <InterPart> which immediately precedes it; or to a <PrePart> which immediately follows it. We could solve the dilemma by noting that it does not matter: All we care about is spotting bad comments and blank lines, so that picking one of two ambiguous parses at random will work fine.

But ambiguity sometimes brings efficiency problems, and unambiguity can be a good way of avoiding them.[10] Also, requiring the grammar to be unambiguous allows an additional check that is useful in the development phase. In our code we test each parse for ambiguity. If we find one, we know that hoonlint has a coding error.

Keeping the parser unambiguous makes the BNF less elegant than it could be. To avoid ambiguity, we introduced extra symbols; introduced extra rules; and restricted the use of ambiguous tokens.

Recall that I am using the term "ambiguous" in the strict technical sense that it has in parsing theory, so that a parser is only ambiguous if it can produce two valid parses for one string. An unambiguous parser can allow non-determinism and can have ambiguous tokens. In fact, our example grammar does both of these things, but is nonetheless unambiguous.

Extra symbols

One example of an extra symbol introduced to make this parser unambiguous is <ProperPreComment>. <ProperPreComment> is used to ensure that a <PrePart> never begins with a meta-comment.[11]

The BNF requires that the first line of a <PrePart> must be a <ProperPreComment>. This means that, if a <MetaComment> is found at the boundary between an <InterPart> and a <PrePart>, it cannot be the first line of the <PrePart> and so must be the last line of the <InterPart>.

Extra rules

In our informal explanation of the comment conventions, we stated that an inter-part is a sequence, each element of which is an inter-comment or a staircase. While BNF that directly implemented this rule would be correct, it would also be highly ambiguous: If an inter-comment occurs before a tread or an upper riser line, it could also be parsed as part of the upper riser.

To eliminate the ambiguity, we stipulate that if a comment can be parsed as part of a staircase, then it must be parsed as part of a staircase. This stipulation still leaves the grammar non-deterministic -- we may not know if our comment could be part of a staircase until many lines later.

With our stipulation we know that, if an <InterComponent> contains a staircase, then that staircase must come before any of the inter-comments. In an <InterComponent> both staircases and inter-comments are optional, so the unambiguous representation of <InterComponent> is

    InterComponent ::= Staircases
    InterComponent ::= Staircases InterComments
    InterComponent ::= InterComments

Notice that, although both staircases and inter-comments are optional, we do not include the case where both are omitted. This is because we insist that an <InterComponent> contain at least one line.

Ambiguous tokens

Our parser is not ambiguous, but it does allow ambiguous tokens. For example, a comment with inter-comment alignment could be either an <InterComment> or an <UpperRiser>; and our lexer returns both. The parser remains unambiguous, however, because only one of these two tokens will wind up in the final parse.

Call the set of tokens returned by our lexer for a single line a "token set". If the token set contains more than one token, the tokenization is ambiguous for that line. If the token set contains only one token, the token set is called a "singleton", and tokenization is unambiguous for that line.

To keep this parser unambiguous, we restrict the ambiguity at the lexer level. For example, our lexer is set up so that a meta-comment is never one of the alternatives in a lexical ambiguity. If a token set contains a <MetaComment>, that token set must be a singleton. The Ruby Slippers are used to enforce this.[12] Similarly, the Ruby Slippers are used to guarantee that any set of tokens containing either a <BadComment> or a <BlankLine> is a singleton.


This post did not walk the reader through the code. Instead, we talked in terms of strategy. The code is available on Github in unit test form. For those who want to see the comment-linter combinator in context, a version of the code embedded in hoonlint is also on Github.[13]

Comments on this blog post, etc.

To learn about Marpa, my Earley/Leo-based parser, there is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.


1. In their present form, urbits run on top of Unix and UDP.

2. Garbage collection and arbitrary precision may seem too high-level for something considered a "machine language", but our concepts evolve. The earliest machine languages required programmers to write their own memory caching logic and to create their own floating point representations, both things we now regard as much too low-level to deal with even at the lowest software level.

3. This post attempts to follow standard Hoon terminology, but for some details of Hoon's whitespace conventions, there is no settled terminology, and I have invented terms as necessary. The term "pre-comment" is one of those inventions.

4. For a brief survey of this literature, see the entries from 1990 to 1996 in my "timeline" of parsing history.

5. This is the LALR grammar for GHC, from GHC's Github mirror.

6. This is simplified. There were provisions for line continuation, etc. But, nonetheless, the lexers for these languages worked in terms of lines, and had no true concept of a "block".

7. An example of a requirement for infinite lookahead is the sentence "The horse raced past the barn fell". Yes, this sentence is not, in fact, infinitely long, but the subclause "raced past the barn" could be anything, and therefore could be arbitrarily long. In isolation, this example sentence may seem unnatural, a contrived "garden path". But if you imagine the sentence as an answer to the question, "Which horse fell?", expectations are set so that the sentence is quite reasonable.

8. See my blog post "A Haskell challenge".

9. To find out more about Ruby Slippers parsing see the Marpa FAQ, questions 122 and 123; my blog series on parsing HTML; my recent blog post "Marpa and combinator parsing 2"; and my much older blog post "Marpa and the Ruby Slippers".

10. This, by the way, is where I believe parsing theory went wrong, beginning in the 1960's. In an understandable search for efficiency, mainstream parsing theory totally excluded not just ambiguity, but non-determinism as well. These draconian restrictions limited the search for practical parsers to a subset of techniques so weak that they cannot even duplicate human parsing capabilities. This had the bizarre effect of committing parsing theory to a form of "human exceptionalism" -- a belief that human beings have a special ability to parse that computers cannot emulate. For more on this story, see my "timeline" of parsing history.

11. This example illustrates the efficiency considerations involved in the decision to tolerate, or to exclude, ambiguity. If n meta-comments occur between an <InterPart> and a <PrePart>, the dividing line is arbitrary, so that there are n+1 parses. This will, in theory, make the processing time quadratic. And, in fact, long sequences of meta-comments might occur between the inter- and pre-comments, so the inefficiency might be real.

12. Inter-comments and comments that are part of upper risers may start at column 1, so that, without special precautions in the lexer, an ambiguity between a structural comment and a meta-comment is entirely possible.

13. For the hoonlint-embedded form, the Marpa grammar is here and the code is here. These are snapshots -- permalinks. The application is under development, and probably will change considerably. Documentation is absent and testing is minimal, so that this pre-alpha embedded form of the code will mainly be useful for those who want to take a quick glance at the comment linter in context.

Dave's Free Press: Journal: Devel::CheckLib can now check libraries' contents

Devel::CheckLib has grown a new feature. As well as checking that libraries and headers exist, it can now also check that particular functions exist in a library and check their return values. This will be particularly useful if you need to check that a particular version of a library is available.
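To give the flavour of it, a Makefile.PL check for a hypothetical libfrobnitz might look like this; the library, header and function names are made up, so check the Devel::CheckLib documentation for the real details:

    use Devel::CheckLib;

    # Bail out of the build, with a friendly message, unless libfrobnitz
    # is present, its header compiles, and frobnitz_version() exists and
    # reports version 2 or later. The "function" argument is the body of
    # a C main(): returning 0 means the check passed.
    check_lib_or_exit(
        lib      => 'frobnitz',
        header   => 'frobnitz.h',
        function => 'return frobnitz_version() >= 2 ? 0 : 1;',
    );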

It works if you have a Unixish toolchain. I need to wait for the CPAN-testers reports to see if I managed not to break anything on Windows. Unfortunately, even though the lovely Mr. Alias has worked hard to make Windows machines available to developers, I found it to be just too hard to use. Even downloading my code to the Windows machine was hard, as Windows seemed to think it knew better and shouldn't download the file I told it to download. Then once I had downloaded it, Windows decided to hide it somewhere that I couldn't get to using the command line. So I gave up.

I might try again once there are some decent tools on the machines: wget, tar, and gzip at minimum, as given those I can quickly bootstrap anything else. Software development isn't just about having compilers available.

Ocean of Awareness: A Haskell challenge

The challenge

A recent blog post by Michael Arntzenius ended with a friendly challenge to Marpa. Haskell list comprehensions are something that Haskell's own parser handles only with difficulty. A point of Michael's critique of Haskell's parsing was that Haskell's list comprehension could be even more powerful if not for these syntactic limits.

Michael wondered aloud if Marpa could do better. It can.

The problem syntax occurs with the "guards", a very powerful facility of Haskell's list comprehension. Haskell allows several kinds of "guards". Two of these "guards" can have the same prefix, and these ambiguous prefixes can be of arbitrary length. In other words, parsing Haskell's list comprehension requires either lookahead of arbitrary length, or its equivalent.

To answer Michael's challenge, I extended my Haskell subset parser to deal with list comprehension. That parser, with its test examples, is online.[1] I have run it for examples thousands of tokens long and, more to the point, have checked the Earley sets to ensure that Marpa will stay linear, no matter how long the ambiguous prefix gets.[2]

Earley parsing, which Marpa uses, accomplishes the seemingly impossible here. It does the equivalent of infinite lookahead efficiently, without actually doing any lookahead or backtracking. That Earley's algorithm can do this has been a settled fact in the literature for some time. But today Earley's algorithm is little known even among those well acquainted with parsing, and to many, claiming the equivalent of infinite lookahead without actually doing any lookahead at all sounds like a boast of magical powers.

In the rest of this blog post, I hope to indicate how Earley parsing follows more than one potential parse at a time. I will not describe Earley's algorithm in full.[3] But I will show that no magic is involved, and that in fact the basic ideas behind Earley's method are intuitive and reasonable.

A quick cheat sheet on list comprehension

List comprehension in Haskell is impressive. Haskell allows you to build a list using a series of "guards", which can be of several kinds. The parsing issue arises because two of the guard types -- generators and boolean expressions -- must be treated quite differently, but can look the same over an arbitrarily long prefix.


Here is one example of a Haskell generator, from the test case for this blog post:

          list = [ x | [x, 1729,
                      -- insert more here
                      99 ] <- xss ] [4]

This says to build a list of x's such that the guard [x, 1729, 99 ] <- xss holds. The clue that this guard is a generator is the <- operator. The <- operator will appear in every generator, and means "draw from".

The LHS of the <- operator is a pattern and the RHS is an expression. This generator draws all the elements from xss which match the pattern [x, 1729, 99 ]. In other words, it draws out all the elements of xss, and tests that they are lists of length 3 whose last two subelements are 1729 and 99.

The variable x is set to the 1st subelement. list will be a list of all those x's. In the test suite, we have

    xss = [ [ 42, 1729, 99 ] ] [5]

so that list becomes [42] -- a list of one element whose value is 42.

Boolean guards

Generators can share very long prefixes with Boolean guards.

	list2 = [ x | [x, 1729, 99] <- xss,
	              [x, 1729,
	                 -- insert more here
	                 99 ] == ys,
	              [ 42, 1729, 99 ] <- xss
	            ] [6]

The expression defining list2 has 3 comma-separated guards: The first guard is a generator, the same one as in the previous example. The last guard is also a generator.

The middle guard is of a new type: it is a Boolean: [x, 1729, 99 ] == ys. This guard insists that x be such that the triple [x, 1729, 99 ] is equal to ys.

In the test suite, we have

    ys = [ 42, 1729, 99 ] [7]
so that list2 is also [42].

Boolean guards versus generators

From the parser's point of view, Boolean guards and generators start out looking the same -- in the examples above, three of our guards start out the same -- with the string [x, 1729, 99 ], but

  • in one case (the Boolean guard), [x, 1729, 99 ] is the beginning of an expression; and
  • in the other two cases (the generators), [x, 1729, 99 ] is a pattern.
Clearly patterns and expressions can look identical. And they can look identical for an arbitrarily long time -- I tested the Glasgow Haskell Compiler (GHC) with identical expression/pattern prefixes thousands of tokens in length. My virtual memory eventually gives out, but GHC itself never complains.[8] (The comments "insert more here" show the points at which the comma-separated lists of integers can be extended.)

The problem for parsers

So Haskell list comprehension presents a problem for parsers. A parser must determine whether it is parsing an expression or a pattern, but it cannot know this for an arbitrarily long time. A parser must keep track of two possibilities at once -- something traditional parsing has refused to do. As I have pointed out[9], belief that traditional parsing "solves" the parsing problem is belief in human exceptionalism -- that humans have calculating abilities that Turing machines do not. Keeping two possibilities in mind for a long time is trivial for human beings -- in one form we call it worrying, and try to prevent ourselves from doing it obsessively. But it has been the orthodoxy that practical parsing algorithms cannot do this.

Arntzenius has a nice summary of the attempts to parse this construct while only allowing one possibility at a time -- that is, deterministically. Lookahead clearly cannot work -- it would have to be arbitrarily long. Backtracking can work, but can be very costly and is a major obstacle to quality error reporting.

GHC avoids the problems with backtracking by using post-processing. At parsing time, GHC treats an ambiguous guard as a Boolean. Then, if it turns out that it is a generator, it rewrites it in post-processing. This inelegance incurs some real technical debt -- either a pattern must always be a valid expression, or even more trickery must be resorted to.[10]

The Earley solution

Earley parsing deals with this issue by doing what a human would do -- keeping both possibilities in mind at once. Jay Earley's innovation was to discover a way for a computer to track multiple possible parses that is compact, efficient to create, and efficient to read.

Earley's algorithm maintains an "Earley table" which contains "Earley sets", one for each token. Each Earley set contains "Earley items". Here are some Earley items from Earley set 25 in one of our test cases:

	origin = 22; <atomic expression> ::=   '[' <expression> '|' . <guards> ']'
	origin = 25; <guards> ::= . <guard>
	origin = 25; <guards> ::= . <guards> ',' <guard>
	origin = 25; <guard>  ::= . <pattern> '<-' <expression>
	origin = 25; <guard>  ::= . <expression> [11]

In the code, these represent the state of the parse just after the pipe symbol ("|") on line 4 of our test code.

Each Earley item describes progress in one rule of the grammar. There is a dot (".") in each rule, which indicates how far the parse has progressed inside the rule. One of the rules has the dot just after the pipe symbol, as you would expect, since we have just seen a pipe symbol.

The other four rules have the dot at the beginning of the RHS. These four rules are "predictions" -- none of their symbols have been parsed yet, but we know that these rules might occur, starting at the location of this Earley set.

Each item also records an "origin": the location in the input where the rule described in the item began. For predictions the origin is always the same as the Earley set. For the first Earley item, the origin is 3 tokens earlier, in Earley set 22.

The "secret" of non-determinism

And now we have come to the secret of efficient non-deterministic parsing -- a "secret" which I hope to convince the reader is not magic, or even much of a mystery. Here, again, are two of the items from Earley set 25:

	origin = 25; <guard>  ::= . <pattern> '<-' <expression>
	origin = 25; <guard>  ::= . <expression>  [12]

At this point there are two possibilities going forward -- a generator guard or a Boolean expression guard. And there is an Earley item for each of these possibilities in the Earley set.

That is the basic idea -- that is all there is to it. Going forward in the parse, for as long as both possibilities stay live, Earley items for both will appear in the Earley sets.

From this point of view, it should now be clear why the Earley algorithm can keep track of several possibilities without lookahead or backtracking. No lookahead is needed because all possibilities are in the Earley set, and selection among them will take place as the rest of the input is read. And no backtracking is needed because every possibility was already recorded -- there is nothing new to be found by backtracking.

It may also be clearer why I claim that Marpa is left-eidetic, and how the Ruby Slippers work.[13] Marpa has perfect knowledge of everything in the parse so far, because it is all in the Earley tables. And, given left-eidetic knowledge, Marpa also knows what terminals are expected at the current location, and can "wish" them into existence as necessary.

The code, comments, etc.

A permalink to the full code and a test suite for this prototype, as described in this blog post, is on Github. In particular, the permalink of the test suite file for list comprehension is here. I expect to update this code, and the latest commit can be found here.

To learn more about Marpa, a good first stop is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.


1. If you are interested in my Marpa-driven Haskell subset parser, this blog post may be the best introduction. The code is on Github.

2. The Earley sets for the ambiguous prefix immediately reach a size of 46 items, and then stay at that level. This is experimental evidence that the Earley set sizes stay constant.

And, if the Earley items are examined, and their derivations traced, it can be seen that they must repeat the same Earley item count for as long as the ambiguous prefix continues. The traces I examined are here, and the code which generated them is here, for the reader who wants to convince himself.

The guard prefixes of Haskell are ambiguous, but (modulo mistakes in the standards) the overall Haskell grammar is not. In the literature on Earley's, it has been shown that for an unambiguous grammar, each Earley item has a constant amortized cost in time. Therefore, if a parse produces Earley sets that are all smaller than a constant size, it must have linear time complexity.

3. There are many descriptions of Earley's algorithm out there. The Wikipedia page on Earley's algorithm (accessed 27 August 2018) is one good place to start. I did another very simple introduction to Earley's in an earlier blog post, which may be worth looking at. Note that Marpa contains improvements to Earley's algorithm. Particularly, to fulfill Marpa's claim of linear time for all LR-regular grammars, Marpa uses Joop Leo's speed-up. But Joop's improvement is not necessary or useful for parsing Haskell list comprehension, is not used in this example, and will not be described in this post.

4. Permalink to this code, accessed 27 August 2018.

5. Permalink to this code, accessed 27 August 2018.

6. Permalink to this code, accessed 27 August 2018.

7. Permalink to this code, accessed 27 August 2018.

8. Note that if the list is extended, the pattern matches and the Boolean tests fail, so that 42 is no longer the answer. From the parsing point of view, this is immaterial.

9. In several places, including this blog post.

10. This account of the state of the art summarizes Arntzenius's recent post, which should be consulted for the details.

11. Adapted from this trace output, accessed 27 August 2018.

12. Adapted from this trace output, accessed 27 August 2018.

13. For more on the Ruby Slippers see my just previous blog post.

Ocean of Awareness: Sherlock Holmes and the Case of the Missing Parsing Solution

Always approach a case with an absolutely blank mind. It is always an advantage. Form no theories, just simply observe and draw inferences from your observations. — Sherlock Holmes, quoted in "The Adventure of the Cardboard Box".
It is a capital mistake to theorize before one has data. — Holmes, in "A Scandal in Bohemia".
I make a point of never having any prejudices, and of following docilely wherever fact may lead me. — Holmes, in "The Reigate Puzzle".
When you have eliminated the impossible, whatever remains, no matter how improbable, must be the truth. — Holmes, in "The Sign of Four".
In imagination there exists the perfect mystery story. Such a story presents the essential clues, and compels us to form our own theory of the case. If we follow the plot carefully, we arrive at the complete solution for ourselves just before the author's disclosure at the end of the book. The solution itself, contrary to those of inferior mysteries, does not disappoint us; moreover, it appears at the very moment we expect it. Can we liken the reader of such a book to the scientists, who throughout successive generations continue to seek solutions of the mysteries in the book of nature? The comparison is false and will have to be abandoned later, but it has a modicum of justification which may be extended and modified to make it more appropriate to the endeavour of science to solve the mystery of the universe. — Albert Einstein and Leopold Infeld. [1]

The Sherlock Holmes approach

My timeline history of parsing theory is my most popular writing, but it is not without its critics. Many of them accuse the timeline of lack of objectivity or of bias.

Einstein assumed his reader's idea of methods of proper investigation, in science as elsewhere, would be similar to those of Conan Doyle's Sherlock Holmes. I will follow Einstein's lead in starting there.

The deductions recorded in the Holmes' canon often involve a lot of theorizing. To make it a matter of significance what the dogs in "Silver Blaze" did in the night, Holmes needs a theory of canine behavior, and Holmes' theory sometimes outpaces its pack of facts by a considerable distance. Is it really true that only dangerous people own dangerous dogs?[2]

Holmes's methods, at least as stated in the Conan Doyle stories, are incapable of solving anything but the fictional problems he encounters. In real life, a "blank mind" can observe nothing. There is no "data" without theory, just white noise. Every "fact" gathered relies on many prejudgements about what is relevant and what is not. And you certainly cannot characterize anything as "impossible", unless you have, in advance, a theory about what is possible.

The Einstein approach

Einstein, in his popular account of the evolution of physics, finds the Doyle stories "admirable"[3]. But to solve real-life mysteries, more is needed. Einstein begins his description of his methods at the start of his Chapter II:

The following pages contain a dull report of some very simple experiments. The account will be boring not only because the description of experiments is uninteresting in comparison with their actual performance, but also because the meaning of the experiments does not become apparent until theory makes it so. Our purpose is to furnish a striking example of the role of theory in physics. [4]

Einstein follows with a series of the kind of experiments that are performed in high school physics classes. One might imagine these experiments allowing an observer to deduce the basics of electromagnetism using materials and techniques available for centuries.

But, and this is Einstein's point, this is not how it happened. The theory came first, and the experiments were devised afterwards.

In the first pages of our book we compared the role of an investigator to that of a detective who, after gathering the requisite facts, finds the right solution by pure thinking. In one essential this comparison must be regarded as highly superficial. Both in life and in detective novels the crime is given. The detective must look for letters, fingerprints, bullets, guns, but at least he knows that a murder has been committed. This is not so for a scientist. It should not be difficult to imagine someone who knows absolutely nothing about electricity, since all the ancients lived happily enough without any knowledge of it. Let this man be given metal, gold foil, bottles, hard-rubber rod, flannel, in short, all the material required for performing our three experiments. He may be a very cultured person, but he will probably put wine into the bottles, use the flannel for cleaning, and never once entertain the idea of doing the things we have described. For the detective the crime is given, the problem formulated: who killed Cock Robin? The scientist must, at least in part, commit his own crime, as well as carry out the investigation. Moreover, his task is not to explain just one case, but all phenomena which have happened or may still happen. — Einstein and Infeld [5]

Committing our own crime

If then, we must commit the crime of theorizing before the facts, where does our theory come from?

Science is not just a collection of laws, a catalogue of unrelated facts. It is a creation of the human mind, with its freely invented ideas and concepts. Physical theories try to form a picture of reality and to establish its connection with the wide world of sense impressions. Thus the only justification for our mental structures is whether and in what way our theories form such a link. — Einstein and Infeld [6]
In the case of planets moving around the sun it is found that the system of mechanics works splendidly. Nevertheless we can well imagine that another system, based on different assumptions, might work just as well.
Physical concepts are free creations of the human mind, and are not, however it may seem, uniquely determined by the external world. In our endeavor to understand reality we are somewhat like a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and he cannot even imagine the possibility or the meaning of such a comparison. But he certainly believes that, as his knowledge increases, his picture of reality will become simpler and simpler and will explain a wider and wider range of his sensuous impressions. He may also believe in the existence of the ideal limit of knowledge and that it is approached by the human mind. He may call this ideal limit the objective truth. -- Einstein and Infeld [7]

It may sound as if Einstein believed that the soundness of our theories is a matter of faith. In fact, Einstein was quite comfortable with putting it exactly that way:

However, it must be admitted that our knowledge of these laws is only imperfect and fragmentary, so that, actually the belief in the existence of basic all-embracing laws in Nature also rests on a sort of faith. All the same this faith has been largely justified so far by the success of scientific research. — Einstein [8]
I believe that every true theorist is a kind of tamed metaphysicist, no matter how pure a "positivist" he may fancy himself. The metaphysicist believes that the logically simple is also the real. The tamed metaphysicist believes that not all that is logically simple is embodied in experienced reality, but that the totality of all sensory experience can be "comprehended" on the basis of a conceptual system built on premises of great simplicity. The skeptic will say this is a "miracle creed." Admittedly so, but it is a miracle creed which has been borne out to an amazing extent by the development of science. — Einstein [9]
The liberty of choice, however, is of a special kind; it is not in any way similar to the liberty of a writer of fiction. Rather, it is similar to that of a man engaged in solving a well-designed puzzle. He may, it is true, propose any word as the solution; but, there is only one word which really solves the puzzle in all its parts. It is a matter of faith that nature — as she is perceptible to our five senses — takes the character of such a well-formulated puzzle. The successes reaped up to now by science do, it is true, give a certain encouragement for this faith. -- Einstein [10]

The puzzle metaphor of the last quote is revealing. Einstein believes there is a single truth, but that we will never know what it is — even its existence can only be taken as a matter of faith. Existence is a crossword puzzle whose answer we will never know. Even the existence of an answer must be taken as a matter of faith.

The very fact that the totality of our sense experience is such that by means of thinking (operations with concepts, and the creation and use of definite functional relations between them, and the coordination of sense experiences to these concepts) it can be put in order, this fact is one which leaves us in awe, but which we shall never understand. One may say that "the eternal mystery of the world is its comprehensibility". — Einstein [11]
In my opinion, nothing can be said a priori concerning the manner in which the concepts are to be formed and connected, and how we are to coordinate them to sense experiences. In guiding us in the creation of such an order of sense experiences, success alone is the determining factor. All that is necessary is to fix a set of rules, since without such rules the acquisition of knowledge in the desired sense would be impossible. One may compare these rules with the rules of a game in which, while the rules themselves are arbitrary, it is their rigidity alone which makes the game possible. However, the fixation will never be final. It will have validity only for a special field of application. — Einstein [12]
There are no eternal theories in science. It always happens that some of the facts predicted by a theory are disproved by experiment. Every theory has its period of gradual development and triumph, after which it may experience a rapid decline. — Einstein and Infeld [13]

In our great mystery story there are no problems wholly solved and settled for all time. — Einstein and Infeld [14]
This great mystery story is still unsolved. We cannot even be sure that it has a final solution. — Einstein and Infeld [15]

Choosing a "highway"

In most of the above, Einstein is focusing on his work in a "hard" science: physics. Are his methods relevant to "softer" fields of study? Einstein thinks so:
The whole of science is nothing more than a refinement of everyday thinking. It is for this reason that the critical thinking of the physicist cannot possibly be restricted to the examination of the concepts of his own specific field. He cannot proceed without considering critically a much more difficult problem, the problem of analyzing the nature of everyday thinking. — Einstein [16]
Einstein's collaboration with Infeld was, like the "Timeline", a description of the evolution of ideas, and in the Einstein–Infeld book they describe their approach:
Through the maze of facts and concepts we had to choose some highway which seemed to us most characteristic and significant. Facts and theories not reached by this road had to be omitted. We were forced, by our general aim, to make a definite choice of facts and ideas. The importance of a problem should not be judged by the number of pages devoted to it. Some essential lines of thought have been left out, not because they seemed to us unimportant, but because they do not lie along the road we have chosen. — Einstein and Infeld [17]

Truth and success

Einstein says that objective truth, while it exists, is not to be attained in the hard sciences, so it is not likely he thought that a historical account could outdo physics in this respect. For Einstein, as quoted above, "success alone is the determining factor".

Success, of course, varies with what the audience for a theory wants. In a very real sense, I consider a theory that can predict the stock market more successful than one which can predict perturbations of planetary orbits invisible to the naked eye. But this is not a reasonable expectation when applied to the theory of general relativity.

Among the expectations reasonable for a timeline of parsing might be these:
  • It helps choose the right parsing algorithm for practical applications.
  • It helps a reader to understand articles in the literature of parsing.
  • It helps guide future research.
  • It predicts the outcome of future research.

When I wrote the first version of Timeline, its goal was none of these. Instead I intended it to explain the sources behind my own research in the Earley/Leo lineage.

With such a criterion of "success", I wondered if Timeline would have an audience much larger than one, and was quite surprised when it started getting thousands of web hits a day. The large audience Timeline 1.0 drew was a sign that there is a large appetite out there for accounts of parsing theory, an appetite so strong that anything resembling a coherent account was quickly devoured.

In response to the unexpectedly large audience, later versions of the Timeline widened their focus. Timeline 3.1 was broadened to give good coverage of mainstream parsing practice, including a lot of new material and original analysis. This brought in a lot of material on topics which had little or no influence on my Earley/Leo work. The parsing of arithmetic expressions, for example, is trivial in the Earley/Leo context, and before my research for Timeline 3.0 I had devoted little attention to approaches that I felt amounted to needlessly doing things the hard way. But arithmetic expressions are at the borderline of power for traditional approaches, and parsing them was a central motivation for the authors of the algorithms that have so far been most influential on mainstream parsing. So in Timeline 3.1 arithmetic expressions became a recurring theme, brought back for detailed examination time and time again.

Is the "Timeline" false?

Is the "Timeline" false? The answer is yes, in three increasingly practical senses.

As Einstein makes clear, every theory about reality will eventually be proved false. The best a theory can hope for is the fate of Newton's physics — to be shown to be a subcase of a larger theory.

In a more specific sense, the truth of any theory of parsing history depends on its degree of success in explaining the facts. This means that the truth of the "Timeline" depends on which facts you require it to explain. If arbitrary choices of facts to be explained are allowed, the "Timeline" will certainly be seen to be false.

But can the "Timeline" be shown to be false for criteria of success which are non-arbitrary? In the next section, I will describe four non-arbitrary criteria of success, all of which are of practical interest, and for all of which the "Timeline" is false.

The Forever Five

"Success" depends a lot on judgement, but my studies have led me to conclude that all but five algorithms are "unsuccessful" in the sense that, for everything that they do, at least one other algorithm does it better in practice. But this means there are five algorithms which do solve some practical problems better than any other algorithm, including each of the other four. I call these the "forever five" because, if I am correct, these algorithms will be of permanent interest.

My "Forever Five" are regular expressions, recursive descent, PEG, Earley/Leo and Sakai's algorithm.[18] Earley/Leo is the focus of my Timeline, so that an effective critique of my "Timeline" could be a parsing historiography centering on any other of the other four.

For example, of the five, regular expressions are the most limited in parsing power. On the other hand, most of the parsing problems you encounter in practice are handled quite nicely by regular expressions.[19] Good implementations of regular expressions are widely available. And, for speed, they are literally unbeatable -- if a parsing problem can be solved by a regular expression, no other algorithm will beat a dedicated regular expression engine at parsing it.

Could a Timeline competitor be written which centered on regular expressions? Certainly. And if immediate usefulness to the average programmer is the criterion (and it is a very good criterion), then the Regular Expressions Timeline would certainly give my timeline a run for its money.

What about a PEG Timeline?

The immediate impetus for this article was a very collegial inquiry from Nicolas Laurent, a researcher whose main interest is PEG. Could a PEG Timeline challenge mine? Again, very certainly.

There are at least some problems for which PEG is superior to everything else, my own Earley/Leo approach included. As one example, PEG could be a more powerful alternative to regular expressions.

That does not mean that I might not come back with a counter-critique. Among the questions that I might ask:

  • Is the PEG algorithm being proposed still a matter for the future, or does it have an implementation?
  • What claims of speed and time complexity are made? Is there a way of determining in advance of runtime how fast your algorithm will run? Or is the expectation of practical speed on an "implement and pray" basis?
  • Does the proposed PEG algorithm match human parsing capabilities? If not, it is a claim for human exceptionalism, of a kind not usually accepted in modern computer science. How is exceptionalism justified in this case?

The search for truth is more precious than its possession. -- Einstein, quoting Lessing [20]

Comments, etc.

The background material for this post is in my Parsing: a timeline 3.0, and this post may be considered a supplement to the "Timeline". To learn about Marpa, my Earley/Leo-based parsing project, there is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.


1. Einstein, Albert and Infeld, Leopold, The Evolution of Physics, Simon and Schuster, 2007, p. 3.

2. "A dog reflects the family life. Whoever saw a frisky dog in a gloomy family, or a sad dog in a happy one? Snarling people have snarling dogs, dangerous people have dangerous ones." From "The Adventure of the Creeping Man".

3. Einstein and Infeld, p. 4.

4. Einstein and Infeld, p. 71.

5. Einstein and Infeld, p. 78.

6. Einstein and Infeld, p. 294.

7. Einstein and Infeld, p. 31. See also Einstein, "On the Method of Theoretical Physics", Ideas and Opinions, Wings Books, New York, no publication date, p. 272.

8. Dukas and Hoffman, Albert Einstein: The Human Side, Princeton University Press, 2013, pp. 32-33.

9. "On the Generalized Theory of Gravitation", in Ideas and Opinions, p 342.

10. "Physics and Reality", in Ideas and Opinions, pp. 294-295.

11. "Physics and Reality", in Ideas and Opinions, p. 292.

12. "Physics and Reality", in Ideas and Opinions, p. 292.

13. Einstein and Infeld, p. 75.

14. Einstein and Infeld, p. 35.

15. Einstein and Infeld, pp. 7-8.

16. "Physics and Reality", Ideas and Opinions, p 290.

17. Einstein and Infeld, p. 78.

18. Three quibbles: Regular expressions do not find structure, so pedantically they are recognizers, not parsers. Recursive descent is a technique for creating a family of algorithms, not an algorithm. And the algorithm first described by Sakai is more commonly called CYK, from the initials of three other researchers who re-discovered it over the years.

19. A lot of this is because programmers learn to formulate problems in ways which avoid complex parsing so that, in practice, the alternatives are using regular expressions or rationalizing away the need for parsing.

20. "The Fundaments of Theoretical Physics", in Ideas and Opinions, p. 335.

Dave's Free Press: Journal: I Love Github

Github makes accepting patches from other people and applying them soooooo easy!

Instead of having to extract the patch from an email onto my workstation and manually apply it, applying this contribution was a simple matter of clicking on one button.

Thanks Mark - and thanks Github as well!

And I was also amused to see that the new release of Net::Random was exactly five years after the previous one. This adds support for fetching your randomness over SSL.

Dave's Free Press: Journal: CPAN Testers' CPAN author FAQ

Barbie recently posted about David Golden's post regarding a comment from Leon Timmermans on IRC. Leon highlighted a problem CPAN authors have when they try to find information about CPAN Testers, and about how they can request testers to do (or not do) something with a distribution they've just uploaded.

The page they are looking for is the CPAN Author FAQ on the CPAN Testers Wiki. Although there is plenty of information for authors, the page doesn't appear prominently in search engine results when someone searches for that kind of information.

As such, David has suggested that people tweet or post about the page - hence this post :-)

Ocean of Awareness: Measuring language popularity

Language popularity

Github's linguist is seen as the most trustworthy tool for estimating language popularity[1], in large part because it reports its result as the proportion of code in a very large dataset, instead of web hits or searches.[2] It is ironic, in this context, that linguist avoids looking at the code, preferring to use metadata -- the file name, vim modelines and shebang lines. Scanning the actual code is linguist's last resort.[3]
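
By way of illustration only -- linguist itself is written in Ruby, and its real rules live in its repo -- a metadata-first classifier in that spirit might look like this toy sketch, whose file-name and modeline rules are all made up:

    use strict;
    use warnings;

    # Toy metadata-first classifier: file name first, then shebang line,
    # then vim modeline; scanning the code itself is the last resort.
    sub guess_language {
        my ($filename, $first_line) = @_;
        return 'Perl'     if $filename =~ /\.p[lm]$/;
        return 'Python'   if $filename =~ /\.py$/;
        return 'Markdown' if $filename =~ /\.(?:md|markdown)$/;
        return 'Perl'     if $first_line =~ /^#!.*\bperl\b/;
        return 'Python'   if $first_line =~ /^#!.*\bpython/;
        # eg "-- vim: filetype=lua" at the top of a file
        return $1         if $first_line =~ /\bvim:.*?(?:ft|filetype)=(\w+)/;
        return undef;     # a real tool would now scan the code
    }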

How accurate is this? For files that are mostly in a single programming language, currently the majority of them, linguist's methods are probably very accurate.

But literate programming often requires mixing languages. It is perhaps an extreme example, but much of the code used in this blog post comes from a Markdown file, which contains both C and Lua. This code is "untangled" from the Lua by ad-hoc scripts[4]. In my codebase, linguist identifies this code simply as Markdown.[5] linguist then ignores it, as it does all documentation files.[6]

Currently, this kind of homegrown literate programming may be so rare that it is not worth taking into account. But if literate programming becomes more popular, that trend might well slip under linguist's radar. And even those with a lot of faith in linguist's numbers should be happy to know they could be confirmed by more careful methods.

Token-by-token versus line-by-line

linguist avoids reporting results based on looking at the code, because careful line counting for multiple languages cannot be done with traditional parsing methods.[7] To do careful line counting, a parser must be able to handle ambiguity in several forms -- ambiguous parses, ambiguous tokens, and overlapping variable-length tokens.

The ability to deal with "overlapping variable-length tokens" may sound like a bizarre requirement, but it is not. Line-by-line languages (BASIC, FORTRAN, JSON, .ini files, Markdown) and token-by-token languages (C, Java, Javascript, HTML) are both common, and even today commonly occur in the same file (POD and Perl, Haskell's Bird notation, Knuth's CWeb).

Deterministic parsing can switch back and forth, though at the cost of some very hack-ish code. But for careful line counting, you need to parse line-by-line and token-by-token simultaneously. Consider this example:

    int fn () { /* for later
\begin{code}
   */ int fn2(); int a = fn2();
   int b = 42;
   return  a + b; /* for later
\end{code}
*/ }

A reader can imagine that this code is part of a test case using code pulled from a LaTeX file. The programmer wanted to indicate the copied portion of code, and did so by commenting out its original LaTeX delimiters. GCC compiles this code without warnings.

It is not really the case that LaTeX is a line-by-line language. But in literate programming systems[8], it is usually required that the \begin{code} and \end{code} delimiters begin at column 0, and that the code block between them be a set of whole lines, so for our purposes in this post we can treat LaTeX as line-by-line. For LaTeX, our parser finds

  L1c1-L1c29 LaTeX line: "    int fn () { /* for later"
  L2c1-L2c13 \begin{code}
  L3c1-L5c31 [A CODE BLOCK]
  L6c1-L6c10 \end{code}
  L7c1-L7c5 LaTeX line: "*/ }"[9]

Note that in the LaTeX parse, line alignment is respected perfectly: The first and last are ordinary LaTeX lines, the 2nd and 6th are commands bounding the code, and lines 3 through 5 are a code block.

The C tokenization, on the other hand, shows no respect for lines. Most tokens are a small part of their line, and the two comments start in the middle of a line and end in the middle of one. For example, the first comment starts at column 17 of line 1 and ends at column 5 of line 3.[10]

What language is our example in? Our example is long enough to justify classification, and it compiles as C code. So it seems best to classify this example as C code[11]. Our parses give us enough data for a heuristic to make a decision capturing this intuition.[12]

Earley/Leo parsing and combinators

In a series of previous posts[13], I have been developing a parsing method that integrates Earley/Leo parsing and combinator parsing. Everything in my previous posts is available in Marpa::R2, which has been in Debian stable since jessie.

The final piece, added in this post, is the ability to use variable-length subparsing[14], which I have just added to Marpa::R3, Marpa::R2's successor. Releases of Marpa::R3 pass a full test suite, and the documentation is kept up to date, but R3 is alpha, and the usual cautions[15] apply.

Earley/Leo parsing is linear for a superset of the LR-regular grammars, which includes all other grammar classes in practical use, and Earley/Leo allows the equivalent of infinite lookahead.[16] When the power of Earley/Leo gives out, Marpa allows combinators (subparsers) to be invoked. The subparsers can be anything, including other Earley/Leo parsers, and they can be called recursively[17]. Rare will be the grammar of practical interest that cannot be parsed with this combination of methods.
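
For readers who have not seen Marpa code, here is a minimal, self-contained sketch of an Earley/Leo parse using Marpa::R2's SLIF DSL. The toy grammar is mine, for illustration only -- it is not the LaTeX/C grammar used in this post's example:

    use strict;
    use warnings;
    use Marpa::R2;

    # A toy SLIF grammar: a brace-delimited list of words.
    my $dsl = <<~'END_OF_DSL';
        :default ::= action => ::first
        :start ::= block
        block ::= ('{') items ('}')
        items ::= item+ action => ::array
        item  ::= word
        word       ~ [\w]+
        :discard   ~ whitespace
        whitespace ~ [\s]+
        END_OF_DSL
    my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
    my $recce   = Marpa::R2::Scanless::R->new( { grammar => $grammar } );
    my $input   = '{ answer is 42 }';
    $recce->read( \$input );
    my $value_ref = $recce->value();
    die "No parse\n" if not defined $value_ref;
    print join( ' ', @{ $$value_ref } ), "\n";    # prints "answer is 42"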

The example

The code that ran this example is available on Github. In previous posts, we gave larger examples[18], and our tools and techniques have scaled. We expect that the variable-length subparsing feature will also scale -- while it was not available in Marpa::R2, it is not in itself new. Variable-length tokens have been available in other Marpa interfaces for years, and they were described in Marpa's theory paper.[19]

The grammars used in the example of this post are minimal. Only enough LaTeX is implemented to recognize code blocks, and only enough C syntax is implemented to recognize comments.

The code, comments, etc.

To learn more about Marpa, a good first stop is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.


1. The Github repo for linguist is https://github.com/github/linguist/.

2. Their methodology is often left vague, but it seems safe to say the careful line-by-line counting discussed in this post goes well beyond the techniques used in the widely-publicized lists of "most popular programming languages".

In fact, it seems likely these measures do not use line counts at all, but instead report the sum of blob sizes. Github's linguist does give a line count but Github does not vouch for its accuracy: "if you really need to know the lines of code of an entire repo, there are much better tools for this than Linguist." (Quoted from the resolution of Github linguist issue #1331.) The Github API's list-languages command reports language sizes in bytes. The API documentation is vague, but it seems the counts are the sum of blob sizes, with each blob classed as one and only one language.

Some tallies seem even more coarsely grained than this -- they are not even blob-by-blob, but assign entire repos to the "primary language". For more, see Jon Evans's Techcrunch article; and Ben Frederickson's project.

3. linguist's methodology is described in its README.md (permalink as of 30 September 2018).

4. This custom literate programming system is not documented or packaged, but those who cannot resist taking a look can find the Markdown file it processes here, and its own code here (permalinks accessed 2 October 2018).

5. For those who care about getting linguist as accurate as possible, there is a workaround: the linguist-language git attribute. This still requires that each blob be reported as containing lines of only one language.

6. For the treatment of Markdown, see linguist README.md (permalink accessed as of 30 September 2018).

7. Another possibility is a multi-scan approach -- one pass per language. But that is likely to be expensive. At last count there were 381 languages in linguist's database. Worse, it won't solve the problem: "liberal" recognition even of a single language requires more power than is available from traditional parsers.

8. For example, these line-alignment requirements match those in Section 10.4 of the 2010 Haskell Language Report.

9. Adapted from test code in the Github repo, permalink accessed 2 October 2018.

10. See the test file on Github.

11. Some might think the two LaTeX lines should be counted as LaTeX and, using subparsing of comments, that heuristic can be implemented.

12. To be sure, a useful tool would want to include considerably more of C's syntax. It is perhaps not necessary to be sure that a file compiles before concluding it is C. And we might want to class a file as C in spite of a fleeting failure to compile. But we do want to lower the probability of a false positive.

13. Marpa and procedural parsing; Marpa and combinator parsing; and Marpa and combinator parsing 2

14. There is documentation of the interface, but it is not a good starting point for a reader who has just started to look at the Marpa::R3 project. Once a user is familiar with Marpa::R3's standard DSL-based interface, they can start to learn about its alternatives here.

15. Specifically, since Marpa::R3 is alpha, its features are subject to change without notice, even between micro releases, and changes are made without concern for backward compatibility. This makes R3 unsuitable for a production application. Add to this that, while R3 is tested, it has seen much less usage and testing than R2, which has been very stable for some time.

16. Technically, a grammar is LR-regular if it can be parsed deterministically using a regular set as its lookahead. A "regular set" is a set of regular expressions. The regular set itself must be finite, but the regular expressions it contains can match lookaheads of arbitrary length.

17. See Marpa and combinator parsing 2

18. The largest example is in Marpa and combinator parsing 2

19. Kegler, Jeffrey. Marpa, A Practical General Parser: The Recognizer. Online version accessed 24 April 2018. The link is to the 19 June 2013 revision of the 2012 original.

Dave's Free Press: Journal: Thankyou, Anonymous Benefactor!

I got home this evening to find an Unexpected Parcel waiting for me, full of books. I have no idea who it's from, but I'm guessing that it's from someone who finds <a href=http://deps.cpantesters.org/>CPANdeps</a> useful. Thank you, Anonymous Benefactor! Your generosity is much appreciated!

Dave's Free Press: Journal: Number::Phone release

There's a new release, <a href=http://www.cantrell.org.uk/david/tech/perl-modules/Number-Phone-1.58.tar.gz>version 1.58</a>, of Number::Phone, my set of perl modules for picking information out of phone numbers. Changes from the previous release are that Mayotte, Reunion and Comoros can't decide which country is which, and there's the usual updates to the database of UK numbers, mostly to support the <a href=http://www.ofcom.org.uk/media/news/2007/02/nr_20070213b>new 03 numbers</a>.
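
By way of illustration, using it goes something like this - a sketch in the style of the module's synopsis (exactly which methods are available varies by country):

    use Number::Phone;

    my $phone = Number::Phone->new('+44 20 8771 2924');
    if ($phone && $phone->is_valid) {
        print "country code: ", $phone->country_code, "\n";   # 44
        print "area name:    ", $phone->areaname, "\n";       # London
    }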

Dave's Free Press: Journal: Palm Treo call db module

To make up for a disappointing gap in Palm's software for the Treo smartphone, I wrote a <a href=http://www.cantrell.org.uk/david/tech/treo/call-dumper/>small perl script</a> to parse the database that stores my call history. I then re-wrote it as <a href=http://search.cpan.org/search?query=Palm%3A%3ATreoPhoneCallDB>a re-useable module</a> which also figgers out whether the call was incoming or outgoing.

Dave's Free Press: Journal: Ill

I am ill. I've been ill since Thursday, with a cold. You're meant to be able to cure a cold with [insert old wives' tale remedy here] in 5 days, or if you don't, it'll clear itself up in just under a week. So hopefully today is the last day.

So what have I done while ill?

On Friday I became old (see previous post), and went to the Byzantium exhibition at the Royal Academy. It was good. You should go.

Saturday was the London Perl Workshop. My talk on closures went down well, and people seemed to understand what I was talking about. Hurrah! I decided that rather than hang around nattering and going to a few talks, I'd rather hide under my duvet for the rest of the day.

I mostly hid on Sunday too, and spent most of the day asleep. In a brief moment of productivity, I got my laptop and my phone to talk to each other using magic interwebnet bluetooth stuff. I'd tried previously without success, but that was with the previous release of OS X. With version X.5 it seems to Just Work, so no Evil Hacks were necessary.

The cold means that I can't taste a damned thing, not even bacon. So now I know what it's like to be Jewish. Being Jewish sucks.

And today, I am still coughing up occasional lumps of lung and making odd bubbling noises in my chest, although my nasal demons seem to be Snotting less than they were, so hopefully I'll be back to normal tomorrow.

Dave's Free Press: Journal: CPANdeps upgrade

While you won't notice any changes, there have been biiiig upgrades at CPANdeps. Here's the diff.

Until now, it's used a SQLite database of test results that I downloaded every day and then mangled a bit to do things like add some necessary indices, figure out which reports are from dev versions of perl, and so on. That worked really well back in the summer of 2007, when there were only half a million reports in the database. I started worrying a bit at the beginning of 2009 when we hit 3 million, but the update happened overnight so I didn't care. But now that we've got over 6 million reports, the update would take anywhere between 8 and 14 hours. Not only is that not sustainable given the current growth rate, it also hurts the other users on that machine, because almost all of that time is spent waiting for disk I/O - which means that they're also waiting for the disk. On top of that, when you have big databases, a SQLite CGI ain't a great idea because indices have to be fetched from disk every time, so reads pound the disk too. Doubleplusungood!

Fun fact: SQLite is great for prototyping, but it doesn't scale :-)

So now it uses MySQL. Having a database daemon running all the time means that there's now some caching, so reads are quicker. In addition, given that I can't simply fiddle with the structure of the database that I download to produce what I want, and instead have to import the data into MySQL, it now only imports new records, so the daily update takes only a few seconds.
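
The incremental import is conceptually nothing fancy - something along these lines, although the table and column names here are invented for the sake of the example:

    use strict;
    use warnings;
    use DBI;

    my $mysql  = DBI->connect('dbi:mysql:database=cpandeps', 'user', 'secret',
                              { RaiseError => 1 });
    my $sqlite = DBI->connect('dbi:SQLite:dbname=cpanstats.db', '', '',
                              { RaiseError => 1 });

    # Only copy across reports we haven't already got
    my ($max_id) = $mysql->selectrow_array(
        'SELECT COALESCE(MAX(id), 0) FROM reports');
    my $fetch = $sqlite->prepare(
        'SELECT id, dist, version, status, perl, osname
           FROM cpanstats WHERE id > ?');
    my $insert = $mysql->prepare(
        'INSERT INTO reports (id, dist, version, status, perl, osname)
         VALUES (?, ?, ?, ?, ?, ?)');
    $fetch->execute($max_id);
    while (my @row = $fetch->fetchrow_array) {
        $insert->execute(@row);
    }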

I also re-jigged the structure of how it caches test results. Instead of being all in one directory with hundreds of thousands of files, they're split into a hierarchy. This probably won't have any significant effect on normal operations, but it will certainly make it faster for me to navigate around and see what's going on when people submit bug reports!
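
The new cache layout is the standard trick of sharding by a prefix of the key - something like this sketch, although the real naming scheme is bound to differ:

    use strict;
    use warnings;
    use File::Path qw(make_path);

    # eg report 6123456 lives in cache/06/12/6123456.html instead of
    # sitting alongside hundreds of thousands of siblings in one directory
    my $id = 6123456;
    my ($top, $sub) = sprintf('%08d', $id) =~ /^(\d{2})(\d{2})/;
    my $dir = "cache/$top/$sub";
    make_path($dir);
    my $file = "$dir/$id.html";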

Dave's Free Press: Journal: YAPC::Europe 2006 report: day 3

There were quite a few interesting talks in the morning, especially Ivor's one on packaging perl applications. Oh, and mine about rsnapshot, of course, in which people laughed at the right places and I judged the length of it just right, finishing with a couple of minutes left for questions.

At the traditional end-of-YAPC auction, I avoided spending my usual stupid amounts of money on stupid things, which was nice. Obviously the hundred quid I put in to buying the hair style of next year's organisers wasn't stupid. Oh no. Definitely not.

An orange mohican will suit Domm beautifully.

Dave's Free Press: Journal: Graphing tool

I made a shiny thing! It can plot arbitrary functions of the form x=f(y) or y=f(x). Under the skin, it just massages its arguments and passes them through to Gnuplot. Here's the source code.
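
Under the hood it's just this sort of thing - a stripped-down sketch, minus all the input-sanitising that anything web-facing needs:

    use strict;
    use warnings;

    my $function = 'sin(x)/x';    # as if from the CGI's parameters
    open my $gnuplot, '|-', 'gnuplot' or die "can't run gnuplot: $!";
    print $gnuplot "set terminal png\n";
    print $gnuplot "set output 'graph.png'\n";
    print $gnuplot "plot $function\n";    # x=f(y) works via parametric mode
    close $gnuplot or die "gnuplot failed";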

Update: now 48.3% even shinier - see on the right

Dave's Free Press: Journal: Travelling in time: the CP2000AN

My mad experiment in CPAN mirrors has grown a couple of new tentacles. Previously it could be a perl-X.Y.Z-specific mirror, such as the CP5.6.2AN, or an OS-specific mirror such as the cpMSWin32an. Now it can combine the two such as in the CP5.8.8-irixAN and all of those can optionally be combined with a date/time to only include stuff that was already on the CPAN as at that time, such as at the CP2000AN.

Why do this? Let's assume that you have a large complex application which uses lots of stuff from the CPAN, and depends on Elk version 1.009 and ListOfDogs version 5.1, and will break with any later version of Elk (or of ListOfDogs). You get a feature request from a user, and think "ah-ha, there's a module for that", and so you go to install Some::Module. Unfortunately, the latest version of Some::Module depends on Some::Other::Module which in turn needs Another::Module which needs Elk 1.234, so your CPAN client merrily upgrades Elk, breaking everything. Doom and Disaster. Having a CPAN "mirror" nailed to the date of the last release of Elk and ListOfDogs that works for you will save you from pain, suffering, and the Dark Side. Either you'll get older versions that Just Work, or you'll get nothing, and nothing is far better than breaking everything!

Dave's Free Press: Journal: XML::Tiny released

I have released my XML::Tiny module. The parser at its core is less than twenty lines of code. Pretty easy to follow code too, I think, and that also includes error handling. One of my aims in writing it was to keep memory usage and code to the absolute minimum, so it doesn't handle all of XML. The documentation says that it supports "a useful subset of XML". Personally, I think it supports the useful subset. It's certainly enough to parse the data I get back from Amazon when I use their web services, and to parse an RSS feed.
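
For the curious, using it looks like this - a sketch based on the documented interface, where the magic '_TINY_XML_STRING_' prefix makes parsefile treat its argument as literal XML rather than a filename:

    use XML::Tiny qw(parsefile);

    my $doc = parsefile(
        '_TINY_XML_STRING_<rss><channel><title>feed</title></channel></rss>'
    );

    # The result is an arrayref of hashrefs: elements have type 'e',
    # a name, an attrib hashref and a content arrayref; text is type 't'.
    print $doc->[0]{name}, "\n";    # prints "rss"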

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 1

As is becoming normal, I used the times between talks to bugfix some of my modules - this time Tie::STDOUT and Data::Transactional. The former was failing on perl 5.6, the latter on 5.9.5. The former was a bug in perl (you can't localise tied filehandles and expect the tieing to go away in 5.6, so it now declares a dependency on 5.8), the latter was a bug in my code.

Philippe Bruhat's talk on Net::Proxy was great - you can tell it's great because I came away with ideas for at least four things that I need to write. First up will be a plugin for it to allow the user to specify minimum and maximum permitted data rates for proxied connections. This will permit enforcing bandwidth limits via the maximum rate, but will also help to defeat IDSes doing traffic analysis if you specify a minimum rate.

This will protect (eg) ssh sessions from being identified based on their very bursty traffic pattern, by "filling in the blanks" with junk data.

In the evening, the CPAN-testers BOF was productive.

Ocean of Awareness: Parsing Timeline 3.1

Announcing Timeline 3.1

I have just released version 3.1 of my Parsing Timeline. It is a painless introduction to a fascinating and important story which is scattered among one of the most forbidding literatures in computer science. Previous versions of this timeline have been, by far, the most popular of my writings.

A third of Timeline 3.1 is new, added since the 3.0 version. Much of the new material is adapted from previous blog posts, both old and recent. Other material is completely new. The sections that are not new with 3.1 have been carefully reviewed and heavily revised.

Comments, etc.

My interest in parsing stems from my own approach to it -- a parser in the Earley/Leo lineage named Marpa. To learn more about Marpa, a good first stop is the semi-official web site, maintained by Ron Savage. The official, but more limited, Marpa website is my personal one. Comments on this post can be made in Marpa's Google group, or on our IRC channel: #marpa at freenode.net.
