PAL-Blog: Culinary Sarcasm

German authorities are famous for making simple things complicated. I had not expected, however, that a perfectly ordinary company could top all of that. The order for Zoe's lunch taught me otherwise.

Peter Sergeant: Install XML::LibXML without root

Get a source tarball for libxml2-dev. On `apt` systems:

apt-get source libxml2-dev

Configure it to install locally, and then install it:

cd libxml2-*/    # apt-get source unpacks a versioned libxml2 source directory
./configure --prefix=/home/peters/perl5/alien/
make && make install

You can ignore all the other options like LIBS, INC, and the rest of what the output kindly recommends; all you care about is showing Makefile.PL where to find xml2-config:

perl Makefile.PL XMLPREFIX=~/perl5/alien/

After that, `make` should run cleanly.

Laufeyjarson writes... » Perl: PBP: 040 Multiline Strings

The PBP suggests breaking your strings on any embedded newlines so they look in code something like they’ll look when output.

It's not a bad idea, and I generally support it. I don't find it as useful as it could be, because things tend to be full of long variable names or object calls that make the string in the program wider than the string that will be displayed. The reverse is sometimes true, too; the variable $address might contain "1600 Pennsylvania Avenue NW, Washington, DC 20500".

Their trivial example is helped a lot:

$usage = "usage: $0 <file> [-full]\n"
       . "(Use the -full option for full dump)\n";


If your code has more work to do, things are less clearly beneficial:

$greeting = "Hello "
          . $user->full_name
          . " and welcome to "
          . $site->name
          . "\n"
          . "Today is $full_date, and the system has been up for $uptime, "
          . "with $highest_caller callers having connected $total_connections times for $average_call_length each.\n";


The next practice helps with this some.  Templates and getting the verbiage out of your program help even more.
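(That next practice is heredocs. A sketch of a similar greeting as a heredoc, with hypothetical scalar variables standing in for the method calls above:)

```perl
use strict;
use warnings;

# Hypothetical values standing in for $user->full_name and $site->name.
my ($full_name, $site_name) = ('Ada Lovelace', 'Example.com');

# A heredoc keeps the output shape visible without a chain of dots.
my $greeting = <<"END_GREETING";
Hello $full_name and welcome to $site_name
END_GREETING

print $greeting;
```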

Perlbuzz: Perlbuzz news roundup for 2014-09-15

These links are collected from the Perlbuzz Twitter feed. If you have suggestions for news bits, please mail me at

Perlbuzz: Template Toolkit's DEFAULT directive does not short-circuit

Template Toolkit's DEFAULT does not do short-circuit evaluation like you might think it would.

If you have

[% DEFAULT foo = user.calculate_foo; %]

Then the results of the method call user.calculate_foo are assigned to foo, unless foo already has a true value.

However, whether or not foo already has a true value, user.calculate_foo is always invoked.

You might expect the code to look like this:

$foo = $user->calculate_foo() unless $foo;

But it's really like this:

$temp = $user->calculate_foo();
$foo = $temp unless $foo;

So, if user.calculate_foo is an expensive function that you don't want to invoke unless you have to, you'll have to resort to an explicit IF block:

IF !foo;
    foo = user.calculate_foo;
END;

It's also worth noting that DEFAULT is not for checking if a variable is set, but for checking if it is true.
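The difference is easy to demonstrate in plain Perl; here a hypothetical counter stands in for the expensive method:

```perl
use strict;
use warnings;

my $calls = 0;
sub calculate_foo { $calls++; return "computed" }

# Eager, DEFAULT-style: the function runs even though $foo is already true.
my $foo  = "already set";
my $temp = calculate_foo();
$foo = $temp unless $foo;

# Short-circuit, IF-style: the function is skipped entirely.
my $bar = "already set";
$bar = calculate_foo() unless $bar;

print "calculate_foo ran $calls time(s)\n";   # only the eager version called it
```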

brian d foy: Revitalizing the Perl Power Tools

I've started revitalizing the Perl Power Tools. I love this project, also known as the Unix Reconstruction Project, but there's been some bit rot and I think we can make it easier to find. Googling "PPT" turns up so many other things, including PowerPoint. I've always had a hard time finding it, and I know it's there! Many of the links to it are dead or buried, even when I talk about it (such as in my Stackoverflow post about paste or my lightning talk at the Swiss Perl Workshop).

Toolbag with power tools

Some of these programs date back to the 1990s and I've discovered weird syntax things that Perl 5.22 won't support. Did you know that you could %hash->{$key} with a normal hash? Not after v5.20 you can't! Also gone is defined( @array ).
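For anyone cleaning up similar code, the modern equivalents are straightforward; a sketch:

```perl
use strict;
use warnings;

my %hash = ( key => 'value' );

# Old code sometimes wrote %hash->{'key'}; the supported form is plain:
my $val = $hash{key};

# defined( @array ) is gone too; ask whether the array has elements instead:
my @array = (1, 2, 3);
my $has_elements = @array ? 1 : 0;

print "$val $has_elements\n";
```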

Dana Jacobson already fixed a bug in the factor program, which now has tests.

Some of the programs needed modules from CPAN, but the distribution declared no dependencies. Where we can, I'd like to see some of those dependencies reduced with something like App::FatPacker.

Some programs shell out to other things. We need to be a bit careful there to ensure the right things happen. I'll also have to decide on a minimum version of Perl, perhaps one that includes the three-argument open.

I'd like to make the docs across all programs consistent.

There is plenty to do. I don't care so much about changing style or reimplementing everything. I want it usable by normal people. It has to install with the least hassle and do what it actually advertises.

Perl Foundation News: September 2014 Grant Proposals

The Grants Committee has received five grant proposals for the September round. Before the Committee members vote, we would like to solicit feedback from the Perl community on the proposals.

Review the proposals below and please comment there. The Committee members will start the voting process on September 26th and the conclusion will be announced by September 30th.

For the proposals from Ingy and David, we also got the following statement from them:

Ingy döt Net and David Oswald have a number of ambitious Perl related projects that we would like to collaborate on. We think that having a pair of programmers fulfilling grants will lead to a better experience for all.

We have thought up about a dozen potential ideas, and we think 4 of them are ready to propose now. The others will come to fruition over time.


Our ideal situation would be to continually propose 3-5 grants every 2 months, but only have 1 of them granted at a time. This is, of course, up to you and based on your perception of the merit of our proposals, but hopefully we can get better and better at providing projects that really move Perl forward.

Perl Foundation News: Grant Proposal: Swim to Pod

We have received the following grant application "Swim to Pod". Please leave feedback in the comments field by September 25th, 2014.

Swim to Pod

  • Name:
    • Ingy döt Net
    • David Oswald
  • Amount Requested

    USD $3000


Provide Swim tools that allow Perl programmers to produce elaborate Pod documentation, painlessly.

Provide all the functionality of Pod in a syntax better than Markdown.

Provide 5 plugins for extending Swim in ways useful to Perl people.

Benefits to the Perl Community

Pod excels in providing a great model for documentation. That means it supports all (or most of) the semantic concepts needed to write great docs. Where Pod falls down is on syntax. The syntax is so verbose that common idioms (like creating lists of lists) are often avoided because they are too hard to write.

Many Perl people use Markdown which offers simpler syntax, but lacks many of the common semantic idioms in Pod. For instance, Markdown has no ability to do 'Data Definition' lists that are so common in CPAN docs (dl/dt/dd HTML tags). Markdown is also weak in nesting semantic blocks.
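For reference, the Pod idiom in question (the definition-list style used all over CPAN docs) looks like this; the item names here are invented for illustration:

```pod
=over 4

=item C<timeout>

Seconds to wait before giving up.

=item C<retries>

Number of attempts before failing.

=back
```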

This summer Ingy created Swim, after thinking about it and discussing it for 8 years. He has converted all the documentation in over 80 of his CPAN (and other-language) distributions. The result is documentation that is incredibly easy to write and maintain, all published to Pod for CPAN and GitHub rendering.


  1. Tested support for full Pod semantics in Swim
  2. Tools to convert Swim to Pod
    • From .swim files
    • From .pm files embedded
  3. Provide 5 new Swim plugin modules
    • Useful Perl/doc things
    • Examples to write more
  4. Module automation plugins for turning Swim to Pod
    • Dist::Zilla
    • Module::Install
  5. Pod-to-Swim conversion tool

Project Details

Here are the things that Swim currently does:

  • Supports all basic markup semantics
  • Converts to Pod, HTML, Markdown, nroff (manpages), Text, DVI, PDF, and more
  • Extensible by plugins; a Travis badge plugin is currently on CPAN
  • Swim is Pegex based
    • Defined by a readable (human friendly) grammar
    • Will have implementations in many languages (including JavaScript)


  1. Write tests for all the semantic capabilities of Pod.
  2. Make basic swim --to=pod pass those tests.
  3. Write a Pod parsing grammar for Pod-to-Swim tool.
  4. Swim plugin module candidates:
    • Unicode escapes <U+2603> <U SNOWMAN>
    • HTML table support
    • Video embedding
    • Side-by-side comparison widget
    • Syntax highlighting code blocks
  5. Dist::Zilla plugin that transforms:
    • from: doc/My/Module.swim
    • or from: lib/My/
    • to: lib/My/Module.pod

Project Schedule

This project will take 2 months and can be started immediately upon acceptance.

Completeness Criteria

Above modules released to CPAN with all functionality and tests in place. Passing all tests.


Ingy döt Net has created markup languages in the past including the Kwiki wiki markup and the Socialtext wiki markup. He is extremely familiar with parsing, syntax and semantics issues in this space.

David Oswald has been a Perl user for over a decade, is an author of several CPAN modules, and maintainer of more. David also runs Salt Lake Perl Mongers.

Ingy and David work well together and have decided to collaborate on a number of projects that will benefit Perl and Software Development.


This document was written entirely in Swim and converted to Pod.

See these links:

Perl Foundation News: Grant Proposal: Pegex Grammar for YAML

We have received the following grant application "Pegex Grammar for YAML". Please leave feedback in the comments field by September 25th, 2014.

Pegex Grammar for YAML

  • Name:
    • Ingy döt Net
    • David Oswald
  • Amount Requested

    USD $3500


Make YAML and YAML::Tiny driven by a common formal grammar.

Benefits to the Perl Community

Perl has four major YAML implementations:

They all have major incompatibilities. In the past year the #yaml IRC channel has gotten all the right people together to resolve this. A great next step would be to make the two pure-Perl implementations, YAML and YAML::Tiny, be grammar driven.

Pegex is a Perl 6 Rules inspired framework that greatly lowers the barriers to writing parsers. The main goal of Pegex is to make grammars for parsing a language or syntax be as human friendly as possible. Pegex is also extremely fast for pure Perl code.

By having the Load functions for YAML and YAML::Tiny be grammar driven, the following benefits would occur:

  1. Both modules would parse the same YAML exactly the same
  2. Bugs could easily be fixed for both modules in the same grammar
  3. YAML::Tiny would be tinier
  4. YAML would become faster


This project will provide:

  • Pegex grammar for YAML
  • YAML and YAML::Tiny parser/loaders based on the grammar
  • Common test suite proving compatibility

Project Details

Pegex is four years old and has several CPAN modules based on it. It makes language-defining grammars crystal clear. It has undergone an optimization phase that makes it very fast.

Recent work was done to get YAML indentation working in Pegex. This was a major hurdle. Now is a good time to make a complete YAML grammar. Since Pegex works in many languages, eventually there will be exactly compatible YAML parsers in Perl, Python, Ruby, JS, etc.


  • Write a grammar for YAML in Pegex
    • Grammar will be well documented
    • Each grammar rule will be tested
  • Convert YAML to use the grammar for its loader
  • Convert YAML::Tiny to use (a subset of) the grammar for its loader
  • Both modules pass a common test suite

Project Schedule

This project will take 2-3 months and can be started immediately upon acceptance.

Completeness Criteria

Both modules released to CPAN, using the new Pegex grammar and passing the same tests. Pegex/YAML grammar published in its own GitHub repo.


Ingy döt Net invented the YAML language, is the author and maintainer of YAML and YAML::XS, and is one of the people currently actively maintaining YAML::Tiny. He is also the creator of the Pegex parsing framework.

David Oswald has been a Perl user for over a decade, is an author of several CPAN modules, and maintainer of more. David also runs Salt Lake Perl Mongers.

Ingy and David work well together and have decided to collaborate on a number of projects that will benefit Perl and Software Development.

Perl Foundation News: Grant Proposal: Inline::C(PP) Module Support

We have received the following grant application "Inline::C(PP) Module Support". Please leave feedback in the comments field by September 25th, 2014.

Inline::C(PP) Module Support

  • Name
    • Ingy döt Net
    • David Oswald
  • Amount Requested

    USD $2,000


Make Inline::C and Inline::CPP the best choice for writing "XS" modules.

Benefits to the Perl Community

In 2000, Inline and Inline::C brought XS from "hard things possible" to "hard things simple". People with basic knowledge of Perl and C could combine the two, without having to learn the entirety of the black art of XS.

Soon after, support was added for C++, Java, and ~25 other languages. Inline was considered rather successful.

The missing piece of all this is that Inline (C, C++) was never really polished to write CPAN modules; CPAN dists that would handle the compilation parts at install/build time, and then become nearly indistinguishable from plain XS modules after installation.

A common pattern has been for module programmers to start with Inline::C and then use it to learn XS, so that they can release the modules to CPAN. Often Inline::C has been used to generate the XS which is then pasted into a module distribution with a few adaptations to fit the distribution's framework. Though very successful, Inline should go further to simplify and remove the need for authors to manually work through tedious and error-prone steps.

In the summer of 2014, a number of core Inline developers came together to get Inline up to modern standards. We've decided the next big move is to polish up Inline so that it is ModuleReady™; so that it supports module authorship and distribution.

This will allow many more people to use C and C++ to make Perl modules, without ever needing to learn XS boilerplate, which is a significant barrier to entry. Those who already know XS will still be pleased to rediscover that Inline is an easy way to create XS, and that these enhancements make it a good choice as a basis for XS module distributions.
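For readers who haven't seen it, the basic Inline::C experience the proposal wants to make distribution-ready looks like this (assumes the Inline::C module and a C compiler are available):

```perl
use strict;
use warnings;
use Inline C => <<'END_C';
int add(int x, int y) {
    return x + y;
}
END_C

# The C function is callable as ordinary Perl; no XS boilerplate in sight.
print add(40, 2), "\n";
```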


  • Allow compilation of inlined C code to happen during a module's build time, and then allow Inline to get out of the way to a greater degree than currently happens.
  • Make `use Inline` not trigger a C compile or a C source check when running an installed module.
  • Provide support integration for:
    • Dist::Zilla
    • Module::Install
    • ExtUtils::MakeMaker
    • Module::Build
  • Make sure that the right things happen at test and build time (vs runtime).
  • Test that all works properly with C++.
  • Provide support for C modules where the C code is not 'inlined'
    • Like YAML::XS
  • Release a few current XS modules using Inline.
    • YAML::XS
    • String::Slice

Project Details

Inline development has resurged in the #inline IRC channel. The primary maintainers of Inline, Inline::C and Inline::CPP (C++) are the ones who want to make this happen.

These three modules above have already undergone considerable refactoring, modernization and bug fixing in the past 3 months, with many releases to show for it.

The goals for this grant are not conceptually hard, but require some focus. This grant will give us the time to focus. There is no reason to expect this work will take more than 2 months.


  1. Identify existing (or create new) modules that use different 'XS' strategies. These will be the driving test cases.

    They should include:

    • Inlined C
    • External C
    • Currently using XS
    • Trying to ship with Inline
    • C++
  2. Adjust Inline to know about development vs build vs installed runtimes.

    There are a few strategies. Likely there will be a dependency on a small module, say Inline::Module, that knows how to do the right things at the right times.

  3. Facilitate making Inline and Inline::C/CPP authoring and build tools for distributions, not just runtime tools.

  4. Make sure that Inline::C and Inline::CPP work right. In particular, upgrade the grammar tests and parsing to handle many C and C++ constructs that have been found to be lacking in current parsing solutions.
  5. Release real modules to CPAN that exercise each of the new development strategies.

    Watch for results on cpantesters.

  6. Write automation helpers:

    • Dist::Zilla plugin
    • Module::Install plugin
    • Documentation for ExtUtils::MakeMaker and Module::Build
  7. Write documentation for how to be an "XS" author without learning XS boilerplate.

    Likely this will include a tutorial.

Project Schedule

David and Ingy see no reason this can't be done in two months. One month for code and modules. One month for documentation and testing. We hope to work on other grants, each about 2 months, so there is good reason to get this one done, done right, and out of the way.

Completeness Criteria

Release of these modules to CPAN (with above support):


Ingy döt Net is the original creator of Inline and Inline::C. His primary focus in Perl is to bring the "hard" things into the hands of beginners.

David Oswald has been maintaining Inline::CPP for four years, and is the only person to have made releases on Inline::CPP since 2003.

Ingy and David work well together and have decided to collaborate on a number of big projects that benefit Perl and Software Development. Inline was the obvious first choice.

Perl Foundation News: Grant Proposal: IO::All Redux

We have received the following grant application "IO::All Redux". Please leave feedback in the comments field by September 25th, 2014.

IO::All Redux

  • Name
    • Ingy döt Net
    • David Oswald
  • Amount Requested

    USD $3,000


Rewrite IO::All to match the needs of 10 years' experience.

Benefits to the Perl Community

IO::All is a Perl module that attempts to make all input/output operations in Perl as simple, normal, and uniform as possible. IO::All has existed since 2004. It is useful and somewhat extensible, but has a number of inconsistencies, flaws and misgivings.
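A taste of the module's signature style; a sketch using hypothetical file names (assumes IO::All is installed):

```perl
use strict;
use warnings;
use IO::All;

io('input.txt')->print("hello, world\n");   # write a file
my $content = io('input.txt')->slurp;       # read it back in one call
io('copy.txt')->print(uc $content);         # transform and write a copy
```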

The proposal is to refactor and/or rewrite the IO::All code to become the perfect way to handle almost all Perl IO needs. In other words:

  • All things are done safely
  • Unicode issues are all handled correctly
  • Options have the proper scope (lexical / package / object)
  • Can be used safely in production
  • Stays simple for 1-liners (small needs)
  • Internals are all plugin based rather than semi-hardcoded
  • Defaults are sane
  • Mo(o(se)) compatible OO
  • Weirder stuff, like operator overloads, must be turned on explicitly


Using the Alt module strategy, we deliver an Alt-IO-All-Redux dist that installs a new IO::All. This is tested in beta until deemed safe to release properly as the new IO-All dist.

Current IO::All plugins are revised to the new strategy.

Project Details



  • Create Alt-IO-All-Redux dist so that new code can be used and tested by the community immediately
  • Write tests for new behaviors
  • Refactor the code to match the new design
  • Port current IO::All extension modules to new design
  • Get people on #io-all IRC channel and beyond to test the rewrite
  • Release as IO-All when deemed ready

Project Schedule

This project will take 2 months and can be started immediately upon acceptance.

Completeness Criteria

Alt-IO-All-Redux passes all tests, is vetted by the community and is released to CPAN as the new IO-All dist.


Ingy döt Net wrote IO::All in 2004. Of his many CPAN modules, it is probably one of the 3 most popular.

David Oswald has been a Perl user for over a decade, is an author of several CPAN modules, and maintainer of more. David also runs Salt Lake Perl Mongers.

Ingy and David work well together and have decided to collaborate on a number of projects that will benefit Perl and Software Development.

Perl Foundation News: Grant Proposal: Nile - Visual Web App Framework Separating Code From Design Multi Lingual And Multi Theme

We have received the following grant application "Nile - Visual Web App Framework Separating Code From Design Multi Lingual And Multi Theme". Please leave feedback in the comments field by September 25th, 2014.

Nile - Visual Web App Framework Separating Code From Design Multi Lingual And Multi Theme

  • Name:

    Ahmed Amin Elsheshtawy

  • Amount Requested:

    USD 10,000


The main idea in this framework is to separate all the HTML design, text, and layout from the programming, with an easy interface like Android's.

The framework uses html templates for the design with special xml tags for inserting the dynamic output into the templates.

All the application text is separated in language files in xml format supporting multi lingual applications for easy translating and modifying all the text. The framework supports SEO links and routing in many ways.

The framework is built on Modern Perl, using Moose for an easy interface.

The entire framework uses an object-oriented style for a unified and clean interface, like working with Java.

Benefits to the Perl Community

This framework will greatly reduce the development time of a powerful, modern web application in Perl: you can build a fully working application (routed URLs, template-driven, multilingual, database-driven) in a few lines of code, much as on Android.


I have been working on this "Nile" framework project for over a year, and I have already released alpha versions on CPAN and GitHub:

Project Details

The main idea of the project has been released as a module on CPAN and GitHub.

Some of the main features of the framework:

  • Separates coding from design
  • Multi-lingual applications
  • Multi-theme applications
  • SEO-friendly URLs
  • Plugin based
  • Full OO style
  • Shared application context
  • Modern Moose based
  • Helper classes for files, pagination, request, response, XML, database, serialization, router, dispatcher, language, and logs
  • Helper classes for shared Config (XML-based files), Settings (database-based), and Vars
  • Views class for easy work with dynamic templates
  • Applications can be built with inline action routing for quick applications, or routed to plugins

I still have lots of ideas to implement, and modifications to make based on developer and community feedback.


  • The first alpha version has been released.
  • Subsequent alpha versions are being released every day.
  • New sub-modules are added regularly.
  • New ideas are added every day.
  • Beta versions will start once the main API is stable.
  • Final versions will be ready within one year of full-time work.
  • Feedback from the Perl community and developers will be considered.
  • Work on developing and maintaining the framework will continue.

Project Schedule

I started this project over a year ago and work on it full time, and I will keep working on it for a long time, as I expect it to be a great support for Perl developers and the Perl community.

I expect the first stable production release within six months of full-time work.

Completeness Criteria

The framework's alpha version is released on CPAN, and work on it will continue indefinitely, driven by feedback and feature requests from developers all over the world. Until version 1.00 the API will be unstable.


My short resume is located here online:

I have been working with Perl since 1998, self-employed in my own online business, developing auction software, classifieds software, pay-per-click/impression software, forums, etc.

I also have RHCSA and RHCE training, and good experience with Linux and *nix systems and server issues.

I have also just started with Java and mobile applications.

Perl Foundation News: Outreach Program for Women: Intern's Summary

Pattawan Kaewduangdee writes:

First of all, I would like to say thank you to The Perl Foundation for giving me such a great opportunity to work on MetaCPAN as an intern in the GNOME Outreach Program for Women. MetaCPAN is a really friendly part of the Perl community where I have learnt a lot from smart and specialized Perl developers.

While I was participating in the program I improved the user experience of MetaCPAN by making existing features more visible, for example by adding feed icons for subscription and making the feed data more descriptive. I fixed bugs such as a missing error page title, upgraded the autocomplete plugin, and made table sort settings persistent. In addition, I enhanced the search result page with keyword suggestions and upgraded infrastructure components like Bootstrap 3, as well as introducing Font Awesome.

As well as learning technical skills I also learnt about the social aspects of the community. I got to know a lot of Perl people as a MetaCPAN intern. I was given support, friendship, and knowledge exchange from people who are specialists in many different areas. This is a great thing that really makes me like the Open Source community. I also believe that taking part in the Outreach Program for Women was a key factor in my success in getting sponsorship from the Enlightened Perl Organization to enable me to participate in YAPC::Asia.

After finishing this program I plan to keep working on MetaCPAN as a contributor. If I have free time during my studies I want to contribute to other Perl projects in the area of search engine technology which is also my study area and is of personal interest.

Also, thank you so much to Randy Stauner, Olaf Alders and Thomas Sibley for being my mentors. Thanks to every MetaCPAN contributor and user for the feedback and suggestions on my work, which really helped me to learn. This project is a memorable experience in my life.

brian d foy: 2014 Swiss Perl Workshop redux

The Swiss Perl Workshop did not disappoint, although as a featured speaker I had plenty to worry about. One of the organizers, Matthias Bloch, was well prepared to take care of the chaotic situation of herding cats and programmers. A trained primary school instructor, he explained that getting a group of first graders to do anything is much harder. I'm dubious, having been to many workshops, but everything went well. The trick is to embrace the fear.

Matthias Bloch

Matthias and Roman Baumer, the main organizers, have set out to discover the Perl community in Switzerland. They run the Swiss Perl Community Meetup and have been doing what they can to find the Swiss companies using Perl. It's a lot of legwork and old-fashioned networking; you can't merely rely on social networking and email for that. We talked a bit about how it's not uncommon in my own work to find Perl programmers who don't know about CPAN or Stackoverflow. If they don't know about those, they probably aren't paying attention to other sources.

Matthias Bloch

Matthias Bloch herding the cats

This is similar to what I spoke about in my keynote, The Whitespace in the Perl Community (which I need to write out). The Perl projects that survive and do well are the ones that have constant human input and have someone who pays attention to them every day. It's not enough to have a webpage or a Twitter account, or to upload some code. I highlighted a few of my own involvements, such as the perlfaq, PAUSE, and Perl mongers--all of which work not because of tech but because of constant attention. It's not something that we can optimize or automate away. We've seen what happens when we do that: no one sticks around. If you want to build community, you need dedicated organizers and agitators who personally interact with people to find out what they think and want. You can't get that from surveys. Someone needs their finger on the zeitgeist.

Roman Baumer

Roman Baumer takes the pulse of the audience

The workshop was truly a workshop, too. I gave my Become a CPAN Author interactive class, which is getting better and better each time I give it, and Paul Johnson gave a Devel::Cover workshop. In each of these, people had the chance to set up their environment with the help of experts.

Many events take on the title "Workshop", but they really mean "Mini Conference". They have talks where most people watch and one person speaks. I like that this event was an actual workshop and people knew that they would be able to go through a complete task with help. I'd like to see the events that call themselves "workshops" do much more of this, even to the point where we have a first-rate idea about which workshops do the most to bring people into the community.

My own workshop goes far beyond uploading your first distribution to CPAN. I now go through these steps, which seem simple once you've done them but are daunting taken all at once:

  • Create a PAUSE account
  • Create a GitHub account
  • Create a distribution with Module::Starter, h2xs, or whatever you like
  • Import the distro into Git, and push to GitHub
  • Make the dist and upload to PAUSE
  • Make another person in the class a comaintainer of the module
  • Fork the module on GitHub
  • Improve another person's module and send a pull request
  • Upload the other person's module
  • Merge a pull request
  • Set up GitHub and Travis CI integration

Several other good things came out of the conference. Paul Johnson mentioned that cpan -T ..., which should turn off testing, doesn't, but he didn't know why. I was able to answer "Because I did it wrong! It's my fault!" I think that is no longer the case, and the shame motivated me to send Andreas pull request 77.

Tobi Oetiker

Tobi Oetiker talking about Callbackery

I think, by accident, there was quite a bit of Mojolicious going on. Some of it comes from Tobi Oetiker (who talked about Callbackery), the author of RRDTool. His company exchanged some money for some consulting from me to help me afford to go to Switzerland. I'm not much for free money and prefer to work for it, usually through classes.


Lee Johnson talking about fun and non-profits

After the conference I moved on to Zürich for a two-day Mastering Perl class at Colab Zürich. I experimented with Kickstarter, using its multiple-rewards system, unlike the campaigns I've run through Crowdtilt. I wanted people to be able to sponsor Swiss students to attend the class, and some did. That expanded the market so more people could participate, and I could meet my goal more easily. Remember, the Swiss Perl community is still developing.

I think Kickstarter worked well, but I don't know if it's that effective for discovering new people to bring into the community. Getting someone to create an account on a service they might not use otherwise is a tough sell, especially if it's a US-centric service. But, as I said before, these things go off with constant human involvement and encouragement.

Perl Foundation News: PPW 2014 Call for Talks

The Pittsburgh Perl Workshop Organizers are happy to announce our 7th annual event this November 7th through 9th in downtown Pittsburgh, PA. Our web site is now live and accepting registrations. We are also accepting talk submissions through September 26th:

Thanks to Mark Jason Dominus and Tom Christiansen for volunteering talks for PPW already. And thanks to Infinity Interactive for covering the costs for John Anderson to come to PPW and teach our famous "From Zero to Perl" class on November 7th at no extra charge for anybody buying a ticket to PPW.

We do have room for one more paid day-long class on the 7th. Talks on the 8th and 9th may be 20 or 50 minutes long. We'll give you a ticket to the event, a thank-you gift, and some world-famous PPW coffee!

You may wonder "what topic is right for PPW?" While you might find an academic talk here and there, PPW tends to prefer talks that focus on practical solutions to real-world problems. Talks do not necessarily need to involve Perl code, they just need to be relevant to Perl developers. Successful subjects in the past have included: Revision control, Tools for developers, Integrating other languages with Perl, Job hunting / recruiting, Personal health topics, Project management, and Perl community related talks.

Ultimately, the litmus test for any talk we select for PPW is: "Will our attendees find value in this talk?"

This is your opportunity to share in the excitement of being a part of PPW. Seize the moment and submit your talk today (please!)

Thank you,

The 2014 PPW Organizers

Perl Foundation News: More Perl::Lint Progress

It's a while since I reported on Taiki Kawakami's grant to work on Perl::Lint, a static analyser for Perl 5.

Taiki continues to work on Perl::Lint actively: you can track his progress in the project's public Github repository. Recently he has implemented more policy checks and added filters to exclude policies.

The impressive Perl::Lint playground lets you experiment with the module from within your Web browser and the preview release on CPAN makes it easy to install an experimental version of the module on your own machine. Also, Taiki has shared the slides from his talk about Perl::Lint at YAPC::Asia recently.

Please play in the playground or install the module yourself and let us know what you think.

Laufeyjarson writes... » Perl: PBP: 039 Long Numbers

Great big numbers full of digits are hard to read and be sure they’re right, so the PBP suggest using the magic underscore in the numbers to spread them out and make them more readable.

And it is true, it helps:

$money = 123000000000000;

$cost = 111111111111111111;

Can you afford it?

$money = 123_000_000_000_000;
$cost  = 111_111_111_111_111_111;

Apparently not.

The book also points out that before Perl 5.8 you could only put an _ every three characters, and only in integers.  Later perls can put them between any two digits, including in floating point and non-decimal values.  Like in hex:

$netmask = 0xFF_FF_FE_00;

$junk = 0xDEAD_BEEF;

I agree with this suggestion, and even occasionally remember to do it.  I usually forget until I wind up dealing with tens of millions.

Peter Sergeant: Install Net::SSLeay without root

Want to install Net::SSLeay on a Debian or Ubuntu system that you don't have root on, and doesn't have `libssl-dev` installed? Let me see if I can save you some time...

You are going to need to keep a compiled version of libssl around, and so you need to decide where that will live: I went for: `~/perl5/alien`...

mkdir ~/perl5/alien-src
mkdir ~/perl5/alien

cd ~/perl5/alien-src

apt-get source lib64z1-dev # Or the 32 bit version
apt-get source libssl-dev

cd zlib- # Or whatever it is
./configure --prefix=~/perl5/alien
make install

cd ..

cd openssl-1.0.1/ # Or whatever it is
# These options took me approximately 15 years to figure out
./config shared -fPIC --prefix=~/perl5/alien

But this doesn't make install cleanly, as it wants to install the `man` pages centrally. I found a way around this by messing around with other flags but ... the `man` pages don't build anyway - ironically enough they throw a POD-related error. So: open up `Makefile`, find the line that looks like this:

install: all install_docs install_sw

And change it to:

install: all install_sw
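If you'd rather not edit the Makefile by hand, the same change can be scripted (a sketch; this assumes GNU sed and that the install target line appears exactly as shown above):

```shell
# Drop install_docs from the openssl install target so that
# `make install` no longer tries to build and install the man pages.
sed -i 's/^install: all install_docs install_sw$/install: all install_sw/' Makefile
```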

Now you can:

make install

Change to wherever you have the Net::SSLeay code:

cd .cpanm/latest-build/Net-SSLeay-1.66
OPENSSL_PREFIX=~/perl5/alien/ perl Makefile.PL
make
make test
make install

Laufeyjarson writes... » Perl: PBP: 038 Leading Zeroes

Best Practice: Don’t pad decimal numbers with leading zeroes.

In my mind, this isn’t a best practice.  It’s an error.  You can’t pad decimal numbers with leading zeroes – they turn into octal numbers and are wrong.

The book suggests people might do this to make columns of assignments line up.  If I liked that, and if I did it by hand it might be tempting, but I don’t.  I use spaces.  Better, I let perltidy via the editor use spaces and stop spending brain power on it.

The book also suggests not using leading zeroes for octal digits, and using the oct() function.  I think this is overkill.  Leading zeroes mean octal, use them as they need to be used.  Don’t add a run-time function call instead of just using the right constant because octal is unusual in this era.

I dislike having to make the assumption that the person reading this program doesn’t actually know the language and will be confused.


Perl Foundation News: Grant Extension Request: Tony Cook

Tony Cook has requested an extension of $20,000 for his Maintaining Perl 5 grant. This grant has been running successfully since July 2013. The requested extension would allow Tony to devote another 400 hours to the project. The funds for this extension would come from the Perl 5 Core Maintenance Fund.

As well as posting reports on the p5p mailing list Tony provides detailed monthly reports, the most recent of which can be found in the following blog posts:

August 2014
July 2014
June 2014

Before we make a decision on this extension we would like to have a period of community consultation that will last for seven days. Please leave feedback in the comments or, if you prefer, email your comments to karen at

Perl Foundation News: Maintaining Perl 5: Grant Report for Month 13

Tony Cook writes:

Approximately 22 tickets were reviewed or otherwise worked on, and 3 patches were applied.

With this month I've exceeded the hours in my grant.

 8.36  #121159 code, tests
       #121159 debugging, coding (needs some re-work)
       #121159 refactor, testing
 0.50  #121404 fix, retest, apply to blead and comment
 1.12  #122005 review is_common(), ptr_table_* code, produce a crash, comment
 0.10  #122024 review and close
 0.57  #122107 re-test, apply to blead, comment
 1.12  #122112 look into a solution
 2.43  #122143 testing, debugging, comment
 0.22  #122373 comment
 0.33  #122405 comment
 0.47  #122424 review, comment and close - Unicode::Collate and taint
 2.13  #122445 debug, backtrace, try possible fixes - segfault with bignum and debugger
 9.30  #122455 review
       #122455 try to make a test
       #122455 produce a test, comment
       #122455 try to add set/get magic for debug vars, research $DB::signal
       #122455 debugging, testing, polish, comment
 4.50  #122476 write a patch, some testing
       #122476 testing, TODO test, produce patch, comment
       #122476 re-test, apply to blead and close
 0.98  #122490 (security) research, comment
 0.30  #122517 research and comment
 1.28  cpan #97773 - supply a hang prevention fix for Socket
 1.12  create report for ptr_table is_common() bug
 0.17  defined(@) and defined(%) - comment
 1.67  Fedora 20 and debugger - debug, fix y command, comment
 0.22  global method cache reset - comment
 1.12  p5p catch-up
       p5p catch-up
 0.43  smartmatch: check I didn't break when BLOCK

38.44 Hours Total

Dave's Free Press: Journal: Devel::CheckLib can now check libraries' contents Rakudo's Abstract Syntax Tree

After or while a compiler parses a program, the compiler usually translates the source code into a tree format called an Abstract Syntax Tree, or AST for short.

The optimizer works on this program representation, and then the code generation stage turns it into a format that the platform underneath it can understand. Actually I wanted to write about the optimizer, but noticed that understanding the AST is crucial to understanding the optimizer, so let's talk about the AST first.

The Rakudo Perl 6 Compiler uses an AST format called QAST. QAST nodes derive from the common superclass QAST::Node, which sets up the basic structure of all QAST classes. Each QAST node has a list of child nodes, possibly a hash map for unstructured annotations, an attribute (confusingly) named node for storing the lower-level parse tree (which is used to extract line numbers and context), and a bit of extra infrastructure.

The most important node classes are the following:

QAST::Stmts
A list of statements. Each child of the node is considered a separate statement.
QAST::Op
A single operation that usually maps to a primitive operation of the underlying platform, like adding two integers, or calling a routine.
QAST::IVal, QAST::NVal, QAST::SVal
Those hold integer, float ("numeric") and string constants respectively.
QAST::WVal
Holds a reference to a more complex object (for example a class) which is serialized separately.
QAST::Block
A list of statements that introduces a separate lexical scope.
QAST::Var
A variable.
QAST::Want
A node that can evaluate to different child nodes, depending on the context in which it is compiled.

To give you a bit of a feel of how those node types interact, I want to give a few examples of Perl 6 code, and the ASTs they could produce. (It turns out that Perl 6 is quite a complex language under the hood, and usually produces a more complicated AST than the obvious one; I'll ignore that for now, in order to introduce you to the basics.)

Ops and Constants

The expression 23 + 42 could, in the simplest case, produce this AST:

    :op('add'), 23 ), 42 ),
    )

Here a QAST::Op node encodes a primitive operation, an addition of two numbers. The :op argument specifies which operation to use. The child nodes are two constants, both of type QAST::IVal, which hold the operands of the low-level operation add.

Now the low-level add operation is not polymorphic: it always adds two floating-point values, and the result is a floating-point value again. Since the arguments are integers and not floating point values, they are automatically converted to float first. That's not the desired semantics for Perl 6; actually the operator + is implemented as a subroutine of name &infix:<+>, so the real generated code is closer to

        :op('call'),
        :name('&infix:<+>'),    # name of the subroutine to call 23 ), 42 ),
    )

Variables and Blocks

Using a variable is as simple as writing'name-of-the-variable') ), but it must be declared first. This is done with'name-of-the-variable'), :decl('var'), :scope('lexical') ).

But there is a slight caveat: in Perl 6 a variable is always scoped to a block. So while you can't ordinarily mention a variable prior to its declaration, there are indirect ways to achieve that (lookup by name, and eval(), to name just two).

So in Rakudo there is a convention to create QAST::Block nodes with two QAST::Stmts children. The first holds all the declarations, and the second all the actual code. That way all the declarations always come before the rest of the code.

So my $x = 42; say $x compiles to roughly this:

                :name('$x'), :decl('var'), :scope('lexical') ),
                :op('bind'),
          '$x'), :scope('lexical') ),
          42 ),
                :op('call'),
                    :name('&say'),
          '$x'), :scope('lexical') ),

Polymorphism and QAST::Want

Perl 6 distinguishes between native types and reference types. Native types are closer to the machine, and their type name is always lower case in Perl 6.

Integer literals are polymorphic in that they can be either a native int or a "boxed" reference type Int.

To model this in the AST, QAST::Want nodes can contain multiple child nodes. The compile-time context decides which of those is actually used.

So the integer literal 42 actually produces not just a simple QAST::IVal node but rather this:

        42 ),
        'Ii', 42 ),
    )

(Note that is just a nice notation to indicate a boxed integer object; it doesn't quite work like this in the code that translates Perl 6 source code into ASTs).

The first child of a QAST::Want node is the one used by default, if no other alternative matches. Then comes a list where the elements at odd indexes are format specifications (here Ii for integers) and the elements at even indexes are the ASTs to use in that case.

An interesting format specification is 'v' for void context, which is always chosen when the return value from the current expression isn't used at all. In Perl 6 this is used to eagerly evaluate lazy lists that are used in void context, and for several optimizations.

Dave's Free Press: Journal: I Love Github

Dave's Free Press: Journal: Palm Treo call db module

Ocean of Awareness: Evolvable languages

Ideally, if a syntax is useful and clear, and a programmer can easily read it at a glance, you should be able to add it to an existing language. In this post, I will describe a modest incremental change to the Perl syntax.

It's one I like, but that's beside the point, for two reasons. First, it's simply intended as an example of language evolution. Second, regardless of its merits, it is unlikely to happen, because of the way that Perl 5 is parsed. In this post I will demonstrate a way of writing a parser so that this change, or others, can be made in a straightforward way, and without designing your language into a corner.

When initializing a hash, Perl 5 allows you to use not just commas, but also the so-called "wide comma" (=>). The wide comma is suggestive visually, and it also has some smarts about what a hash key is: The hash key is always converted into a string, so that wide comma knows that in a key-value pair like this:

    key1 => 711,

that key1 is intended as a string.

But what about something like this?

   company name => 'Kamamaya Technology',
   employee 1 => first name => 'Jane',
   employee 1 => last name => 'Doe',
   employee 1 => title => 'President',
   employee 2 => first name => 'John',
   employee 2 => last name => 'Smith',
   employee 3 => first name => 'Clarence',
   employee 3 => last name => 'Darrow',

Here I think the intent is obvious -- to create an employee database in the form of a hash of hashes, allowing spaces in the keys. In Data::Dumper format, the result would look like:

              'employee 2' => {
                                'last name' => '\'Smith\'',
                                'first name' => '\'John\''
              'company name' => '\'Kamamaya Technology\'',
              'employee 3' => {
                                'last name' => '\'Darrow\'',
                                'first name' => '\'Clarence\''
              'employee 1' => {
                                'title' => '\'President\'',
                                'last name' => '\'Doe\'',
                                'first name' => '\'Jane\''

And in fact, that is the output of the script in this Github gist, which parses the previous "extended Perl 5" snippet using a Marpa grammar before passing it on to Perl.

Perl 5 does not allow a syntax like this, and looking at its parsing code will tell you why -- it's already a maintenance nightmare. The extension I've described above could, in theory, be added to Perl 5, but doing so would aggravate an already desperate maintenance situation.

Now, depending on taste, you may be just as happy that you'll never see the extensions I have just outlined in Perl 5. But I don't think it is as easy to be happy about a parsing technology that quickly paints the languages which use it into a corner.

How it works

The code is in a Github gist. For the purposes of the example, I've implemented a toy subset of Perl. But this approach has been shown to scale. There are full Marpa-powered parsers of C, ECMAScript, XPath, and liberal HTML.

Marpa is a general BNF parser, which means that anything you can write in BNF, Marpa can parse. For practical parsing, what matters are those grammars that can be parsed in linear time, and with Marpa that class is vast, including all the classes of grammar currently in practical use. To describe the class of grammars that Marpa parses in linear time, assume that you have either a left or right parser, with infinite lookahead, that uses regular expressions. (A parser like this is called LR-regular.) Assume that this LR-regular parser parses your grammar. In that case, you can be sure that Marpa will parse that grammar in linear time, and without doing the lookahead. (Instead Marpa tracks possibilities in a highly-optimized table.) Marpa also parses many grammars that are not LR-regular in linear time, but just LR-regular is very likely to include any class of grammar that you will be interested in parsing. The LR-regular grammars easily include all those that can be parsed using yacc, recursive descent or regular expressions.

Marpa excels at those special hacks so necessary in recursive descent and other techniques. Marpa allows you to define events that will stop it at symbols or rules, both before and after. While stopped, you can hand processing over to your own custom code. Your custom code can feed your own tokens to the parse for as long as you like. In doing so, it can consult Marpa to determine exactly what symbols and rules have been recognized and which ones are expected. Once finished with custom processing, you can then ask Marpa to pick up again at any point you wish.

The craps game is over

The bottom line is that if you can describe your language extension in BNF, or in BNF plus some hacks, you can rely on Marpa parsing it in reasonable time. Language design has been like shooting crap in a casino that sets you up to win a lot of the first rolls before the laws of probability grind you down. Marpa changes the game.

To learn more

Marpa::R2 is available on CPAN. A list of my Marpa tutorials can be found here. There are new tutorials by Peter Stuifzand and amon. The Ocean of Awareness blog focuses on Marpa, and it has an annotated guide. Marpa has a web page that I maintain and Ron Savage maintains another. For questions, support and discussion, there is the "marpa parser" Google Group.


Comments on this post can be made in Marpa's Google group. and p6doc


Earlier this year I tried to assess the readiness of the Perl 6 language, compilers, modules, documentation and so on. While I never got around to publish my findings, one thing was painfully obvious: there is a huge gap in the area of documentation.

There are quite a few resources, but none of them comprehensive (the most comprehensive are the synopses, but they are not meant for the end user), and no single location we can point people to.


So, in the spirit of xkcd, I present yet another incomplete documentation project: and p6doc.

The idea is to take the same approach as perldoc for Perl 5: create user-level documentation in Pod format (here the Perl 6 Pod), and make it available both on a website and via a command line tool. The source (documentation, command line tool, HTML generator) lives at The website is

Oh, and the last Rakudo Star release (2012.06) already shipped p6doc.

Status and Plans

Documentation, website and command line tool are all in very early stages of development.

In the future, I want both p6doc SOMETHING and to either document or link to documentation of SOMETHING, be it a built-in variable, an operator, a type name, routine name, phaser, constant or... all the other possible constructs that occur in Perl 6. URLs and command line arguments specific to each type of construct will also be available (/type/SOMETHING URLs already work).

Finally I want some way to get a "full" view of a type, ie providing all methods from superclasses and roles too.

Help Wanted

All of that is going to be a lot of work, though the most work will be to write the documentation. You too can help! You can write new documentation, gather and incorporate already existing documentation with compatible licenses (for example synopsis, perl 6 advent calendar, examples from rosettacode), add more examples, proof-read the documentation or improve the HTML generation or the command line tool.

If you have any questions about contributing, feel free to ask in #perl6. Of course you can also create pull requests right away :-).

Ocean of Awareness: Language design: Exploiting ambiguity

Currently, in designing languages, we don't allow ambiguities -- not even potential ones. We insist that it must not be even possible to write an ambiguous program. This is unnecessarily restrictive.

This post is written in English, which is full of ambiguities. Natural languages are always ambiguous, because human beings find that that's the best way for versatile, rapid, easy communication. Human beings arrange things so that every sentence is unambiguous in context. Mistakes happen, and ambiguous sentences occur, but in practice the problem is manageable. In a conversation, for example, we would just ask for clarification.

If we allow our computer languages to take their most natural forms, they will often have the potential for ambiguity. This is even less of a problem on a computer than it is in conversation -- a computer can always spot an actual ambiguity immediately. When actual ambiguities occur, we can deal with them in exactly the same way that we deal with any other syntax problem: The computer catches it and reports it, and we fix it.

An example

To illustrate, I'll use a DSL-writing DSL language. It'll be tiny -- just lexeme declarations and BNF rules. Newlines will not be significant. Statements can end with a semicolon, but that's optional. (The code for this post is in a Github gist.)

Here is a toy calculator written in our tiny DSL-writing language:

  Number matches '\d+'
  E ::= T '*' F
  E ::= T
  T ::= F '+' Number
  T ::= Number

Trying an improvement

With a grammar this small, just about anything is readable. But let's assume we want to improve it, and that we decide that the lexeme declaration of Number really belongs after the rules which use it. (If our grammar was longer, this could make a real difference.) So we move the lexeme declaration to the end:

  E ::= T '*' F
  E ::= T
  T ::= F '+' Number
  T ::= Number
  Number matches '\d+'

But there's an issue

It turns out the grammar for our toy DSL-writer is ambiguous. When a lexeme declaration follows a BNF rule, there's no way to tell whether it is actually a lexeme declaration, or part of the preceding BNF rule. Our parser catches that:

Parse of BNF/Scanless source is ambiguous
Length of symbol "Statement" at line 4, column 1 is ambiguous
  Choices start with: T ::= Number
  Choice 1, length=12, ends at line 4, column 12
  Choice 1: T ::= Number
  Choice 2, length=33, ends at line 5, column 20
  Choice 2: T ::= Number\nNumber matches '\\d

Here Marpa tells you why it thinks your script is ambiguous. Two different statements can start at line 4. Both of them are BNF rules, but one is longer than the other.

Just another syntax error

Instead of having to design a language where ambiguity was not even possible, we designed one where ambiguities can happen. This allows us to design a much more flexible language, like the ones we choose when we humans communicate with each other. The downside is that actual ambiguities will occur, but they can be reported, and fixed, just like any other syntax error.

In this case, we recall we allowed semi-colons to terminate a rule, and our fix is easy:

  E ::= T '*' F
  E ::= T
  T ::= F '+' Number
  T ::= Number ;
  Number matches '\d+'

To learn more

The code for this post is a gist on Github. It was written using Marpa::R2, which is available on CPAN. A list of my Marpa tutorials can be found here. There are new tutorials by Peter Stuifzand and amon. The Ocean of Awareness blog focuses on Marpa, and it has an annotated guide. Marpa has a web page that I maintain and Ron Savage maintains another. For questions, support and discussion, there is a "marpa parser" Google Group and an IRC channel: #marpa at


Comments on this post can be made in Marpa's Google group.

Dave's Free Press: Journal: Graphing tool

Dave's Free Press: Journal: XML::Tiny released Pattern Matching and Unpacking

When talking about pattern matching in the context of Perl 6, people usually think about regex or grammars. Those are indeed very powerful tools for pattern matching, but not the only one.

Another powerful tool for pattern matching and for unpacking data structures uses signatures.

Signatures are "just" argument lists:

sub repeat(Str $s, Int $count) {
    #     ^^^^^^^^^^^^^^^^^^^^  the signature
    # $s and $count are the parameters
    return $s x $count
}

Nearly all modern programming languages have signatures, so you might say: nothing special, move along. But there are two features that make them more useful than signatures in other languages.

The first is multi dispatch, which allows you to write several routines with the same name, but with different signatures. While extremely powerful and helpful, I don't want to dwell on them. Look at Chapter 6 of the "Using Perl 6" book for more details.

The second feature is sub-signatures. It allows you to write a signature for a single parameter.

Which sounds pretty boring at first, but for example it allows you to do declarative validation of data structures. Perl 6 has no built-in type for an array where each slot must be of a specific but different type. But you can still check for that in a sub-signature:

sub f(@array [Int, Str]) {
    say @array.join: ', ';
}
f [42, 'str'];      # 42, str
f [42, 23];         # Nominal type check failed for parameter '';
                    # expected Str but got Int instead in sub-signature
                    # of parameter @array

Here we have a parameter called @array, and it is followed by square brackets, which introduce a sub-signature for an array. When calling the function, the array is checked against the signature (Int, Str), and so if the array doesn't consist of exactly one Int and one Str in this order, a type error is thrown.

The same mechanism can be used not only for validation, but also for unpacking, which means extracting some parts of the data structure. This simply works by using variables in the inner signature:

sub head(*@ [$head, *@]) { $head }
sub tail(*@ [$, *@tail]) { @tail }
say head <a b c>;      # a
say tail <a b c>;      # b c

Here the outer parameter is anonymous (the @), though it's entirely possible to use variables for both the inner and the outer parameter.

The anonymous parameter can even be omitted, and you can write sub tail( [$, *@tail] ) directly.

Sub-signatures are not limited to arrays. For working on arbitrary objects, you surround them with parenthesis instead of brackets, and use named parameters inside:

multi key-type ($ (Numeric :$key, *%)) { "Number" }
multi key-type ($ (Str     :$key, *%)) { "String" }
for (42 => 'a', 'b' => 42) -> $pair {
    say key-type $pair;
}
# Output:
# Number
# String

This works because the => constructs a Pair, which has a key and a value attribute. The named parameter :$key in the sub-signature extracts the attribute key.

You can build quite impressive things with this feature, for example red-black tree balancing based on multi dispatch and signature unpacking. (More verbose explanation of the code.) Most use cases aren't this impressive, but still it is very useful to have occasionally. Like for this small evaluator.

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 2 YAPC Europe 2013 Day 3

The second day of YAPC Europe climaxed in the river boat cruise, Kiev's version of the traditional conference dinner. It was a largish boat traveling on the Dnipro river, with food, drinks and lots of Perl folks. Not having fixed tables, and having to get up to fetch food and drinks, led to a lot of circulation, and thus to meeting many more people than at traditional dinners. I loved it.

Day 3 started with a video message from next year's YAPC Europe organizers, advertising for the upcoming conference and talking a bit about the opportunities that Sofia offers. Tempting :-).

Monitoring with Perl and Unix::Statgrab was more about the metrics that are available for monitoring, and less about doing stuff with Perl. I was a bit disappointed.

The "Future Perl Versioning" Discussion was a very civilized discussion, with solid arguments. Whether anybody changed their minds remains to be seen.

Carl Mäsak gave two great talks: one on reactive programming, and one on regular expressions. I learned quite a bit in the first one, and simply enjoyed the second one.

After the lunch (tasty again), I attended Jonathan Worthington's third talk, MoarVM: a metamodel-focused runtime for NQP and Rakudo. Again this was a great talk, based on great work done by Jonathan and others during the last 12 months or so. MoarVM is a virtual machine designed for Perl 6's needs, as we understand them now (as opposed to parrot, which was designed towards Perl 6 as it was understood around 2003 or so, which is considerably different).

How to speak manager was both amusing and offered a nice perspective on interactions between managers and programmers. Some of this advice assumed a non-tech-savvy manager, and thus didn't quite apply to my current work situation, but was still interesting.

I must confess I don't remember too much of the rest of the talks that evening. I blame five days of traveling, hackathon and conference taking their toll on me.

The third session of lightning talks was again an interesting mix, containing interesting technical tidbits, the usual "we are hiring" slogans, some touching and thoughtful moments, and finally a song by Piers Cawley. He had written the lyrics in the previous 18 hours (including sleep), to (afaict) a traditional Irish song. Standing up in front of ~300 people and singing a song that you haven't really had time to practise takes a huge amount of courage, and I admire Piers both for his courage and his great performance. I hope it was recorded, and makes its way to the public soon.

Finally the organizers spoke some closing words, and received their well-deserved share of applause.

As you might have guessed from this and the previous blog posts, I enjoyed this year's YAPC Europe very much, and found it well worth attending, and well organized. I'd like to give my heartfelt thanks to everybody who helped to make it happen, and to my employer for sending me there.

This being only my second YAPC, I can't make any far-reaching comparisons, but compared to YAPC::EU 2010 in Pisa I had an easier time making acquaintances. I cannot tell what the big difference was, but the buffet-style dinners at the pre-conference meeting and the river boat cruise certainly helped to increase the circulation and thus the number of people I talked to.

Dave's Free Press: Journal: YAPC::Europe 2007 travel plans A small regex optimization for NQP and Rakudo

Recently I read the course material of the Rakudo and NQP Internals Workshop, and had an idea for a small optimization for the regex engine. Yesterday night I implemented it, and I'd like to walk you through the process.

As a bit of background, the regex engine that Rakudo uses is actually implemented in NQP, and used by NQP too. The code I am about to discuss all lives in the NQP repository, but Rakudo profits from it too.

In addition one should note that the regex engine is mostly used for parsing grammar, a process which involves nearly no scanning. Scanning is the process where the regex engine first tries to match the regex at the start of the string, and if it fails there, moves to the second character in the string, tries again etc. until it succeeds.

But regexes that users write often involve scanning, and so my idea was to speed up regexes that scan, and where the first thing in the regex is a literal. In this case it makes sense to find possible start positions with a fast string search algorithm, for example the Boyer-Moore algorithm. The virtual machine backends for NQP already implement that as the index opcode, which can be invoked as start = index haystack, needle, startpos, where the string haystack is searched for the substring needle, starting from position startpos.

From reading the course material I knew I had to search for a regex type called scan, so that's what I did:

$ git grep --word scan
3rdparty/libtommath/bn_error.c:   /* scan the lookup table for the given message
3rdparty/libtommath/bn_mp_cnt_lsb.c:   /* scan lower digits until non-zero */
3rdparty/libtommath/bn_mp_cnt_lsb.c:   /* now scan this digit until a 1 is found
3rdparty/libtommath/bn_mp_prime_next_prime.c:                   /* scan upwards 
3rdparty/libtommath/changes.txt:       -- Started the Depends framework, wrote d
src/QRegex/P5Regex/Actions.nqp:            :rxtype<sca
src/QRegex/P6Regex/Actions.nqp:            :rxtype<sca
src/vm/jvm/QAST/Compiler.nqp:    method scan($node) {
src/vm/moar/QAST/QASTRegexCompilerMAST.nqp:    method scan($node) {
Binary file src/vm/moar/stage0/NQPP6QRegexMoar.moarvm matches
Binary file src/vm/moar/stage0/QASTMoar.moarvm matches
src/vm/parrot/QAST/Compiler.nqp:    method scan($node) {
src/vm/parrot/stage0/P6QRegex-s0.pir:    $P5025 = $P5024."new"("scan" :named("rx
src/vm/parrot/stage0/QAST-s0.pir:.sub "scan" :subid("cuid_135_1381944260.6802") 
src/vm/parrot/stage0/QAST-s0.pir:    push $P5004, "scan"

The binary files and .pir files are generated code included just for bootstrapping, and not interesting for us. The files in 3rdparty/libtommath are there for bigint handling, thus not interesting for us either. The rest are good matches: src/QRegex/P6Regex/Actions.nqp is responsible for compiling Perl 6 regexes to an abstract syntax tree (AST), and src/vm/parrot/QAST/Compiler.nqp compiles that AST down to PIR, the assembly language that the Parrot Virtual Machine understands.

So, looking at src/QRegex/P6Regex/Actions.nqp the place that mentions scan looked like this:

    $block<orig_qast> := $qast;
    $qast := QAST::Regex.new( :rxtype<concat>,
                 QAST::Regex.new( :rxtype<scan> ),
                 $qast,
                 ( ...
                      ?? QAST::Regex.new( :rxtype<pass> )
                      !! (nqp::substr(%*RX<name>, 0, 12) ne '!!LATENAME!!'
                            ?? QAST::Regex.new( :rxtype<pass>, :name(%*RX<name>) )
                            !! QAST::Regex.new( :rxtype<pass>,
                                       :name(nqp::substr(%*RX<name>, 12)), ... ))));

So to make the regex scan, the AST (in $qast) is wrapped in QAST::Regex.new( :rxtype<concat>, QAST::Regex.new( :rxtype<scan> ), $qast, ...), plus some stuff I don't care about.

To make the optimization work, the scan node needs to know what to scan for, if the first thing in the regex is indeed a constant string, aka literal. If it is, $qast is either directly of rxtype literal, or a concat node where the first child is a literal. As a patch, it looks like this:

--- a/src/QRegex/P6Regex/Actions.nqp
+++ b/src/QRegex/P6Regex/Actions.nqp
@@ -667,9 +667,21 @@ class QRegex::P6Regex::Actions is HLL::Actions {
     self.store_regex_nfa($code_obj, $block, QAST::Node.new($qast));
     self.alt_nfas($code_obj, $block, $qast);
+    my $scan := QAST::Regex.new( :rxtype<scan> );
+    {
+        my $q := $qast;
+        if $q.rxtype eq 'concat' && $q[0] {
+            $q := $q[0]
+        }
+        if $q.rxtype eq 'literal' {
+            nqp::push($scan, $q[0]);
+            $scan.subtype($q.subtype);
+        }
+    }
     $block<orig_qast> := $qast;
     $qast := QAST::Regex.new( :rxtype<concat>,
-                 QAST::Regex.new( :rxtype<scan> ),
+                 $scan,

Since scan nodes have always been empty so far, the code generators don't look at their child nodes, and adding one with nqp::push($scan, $q[0]); won't break anything on backends that don't support this optimization yet (which, after just this patch, was all of them). Running make test confirmed that.

My original patch did not contain the line $scan.subtype($q.subtype);, and later on some unit tests started to fail, because regex matches can be case insensitive, but the index op only works case-sensitively. For case insensitive matches, the $q.subtype of the literal regex node is ignorecase, so that information needs to be carried on to the code generation backend.

Once that part was in place, and some debug nqp::say() statements confirmed that it indeed worked, it was time to look at the code generation. For the parrot backend, it looked like this:

    method scan($node) {
        my $ops := self.post_new('Ops', :result(%*REG<cur>));
        my $prefix := self.unique('rxscan');
        my $looplabel := self.post_new('Label', :name($prefix ~ '_loop'));
        my $scanlabel := self.post_new('Label', :name($prefix ~ '_scan'));
        my $donelabel := self.post_new('Label', :name($prefix ~ '_done'));
        $ops.push_pirop('repr_get_attr_int', '$I11', 'self', %*REG<curclass>, '"$!from"');
        $ops.push_pirop('ne', '$I11', -1, $donelabel);
        $ops.push_pirop('goto', $scanlabel);
        $ops.push_pirop('inc', %*REG<pos>);
        $ops.push_pirop('gt', %*REG<pos>, %*REG<eos>, %*REG<fail>);
        $ops.push_pirop('repr_bind_attr_int', %*REG<cur>, %*REG<curclass>, '"$!from"', %*REG<pos>);
        self.regex_mark($ops, $looplabel, %*REG<pos>, 0);

While a bit intimidating at first, staring at it for a while quickly made clear what kind of code it emits. First, three labels are generated, to which the code can jump with goto $label: one as a jump target for the loop that increments the cursor position ($looplabel), one for doing the regex match at that position ($scanlabel), and $donelabel to jump to when the whole thing has finished.

Inside the loop there is an increment (inc) of the register that holds the current position (%*REG<pos>); that position is compared to the end-of-string position (%*REG<eos>), and if it is larger, the cursor is marked as failed.

So the idea is to advance the position by one, and then instead of doing the regex match immediately, call the index op to find the next position where the regex might succeed:

--- a/src/vm/parrot/QAST/Compiler.nqp
+++ b/src/vm/parrot/QAST/Compiler.nqp
@@ -1564,7 +1564,13 @@ class QAST::Compiler is HLL::Compiler {
         $ops.push_pirop('goto', $scanlabel);
         $ops.push_pirop('inc', %*REG<pos>);
-        $ops.push_pirop('gt', %*REG<pos>, %*REG<eos>, %*REG<fail>);
+        if nqp::elems($node.list) && $node.subtype ne 'ignorecase' {
+            $ops.push_pirop('index', %*REG<pos>, %*REG<tgt>, self.rxescape($node[0]), %*REG<pos>);
+            $ops.push_pirop('eq', %*REG<pos>, -1, %*REG<fail>);
+        }
+        else {
+            $ops.push_pirop('gt', %*REG<pos>, %*REG<eos>, %*REG<fail>);
+        }
         $ops.push_pirop('repr_bind_attr_int', %*REG<cur>, %*REG<curclass>, '"$!from"', %*REG<pos>);
         self.regex_mark($ops, $looplabel, %*REG<pos>, 0);

The index op returns -1 on failure, so the condition for a cursor fail is slightly different than before.

And as mentioned earlier, the optimization can only be safely done for matches that don't ignore case. Maybe with some additional effort that could be remedied, but it's not as simple as case-folding the target string, because some case folding operations change the string length (for example ß becomes SS when uppercased).
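That length change is easy to verify; a quick demonstration in Python (used here just for illustration):

```python
# Uppercasing the German sharp s changes the string length, so a
# case-insensitive literal cannot simply be case-folded and handed
# to a byte-for-byte substring search.
word = 'straße'
assert word.upper() == 'STRASSE'
assert len(word) == 6
assert len(word.upper()) == 7
```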

After successfully testing the patch, I came up with a small, artificial benchmark designed to show a difference in performance for this particular case. And indeed, it sped it up from 647 ± 28 µs to 161 ± 18 µs, which is roughly a factor of four.

You can see the whole thing as two commits on github.

What remains to do is implementing the same optimization on the JVM and MoarVM backends, and of course other optimizations. For example the Perl 5 regex engine keeps track of minimal and maximal string lengths for each subregex, and can anchor a regex like /a?b?longliteral/ to 0..2 characters before a match of longliteral, and generally use that meta information to fail faster.

But for now I am mostly encouraged that doing a worthwhile optimization was possible in a single evening without any black magic, or too intimate knowledge of the code generation.

Update: the code generation for MoarVM now also uses the index op. The logic is the same as for the Parrot backend; the only difference is that the literal needs to be loaded into a register (whose name fresh_s returns) before index_s can use it.

Quo Vadis Perl?

The last two days we had a gathering in a town named Perl (yes, a place with that name exists). It's a lovely little town close to the borders with France and Luxembourg, and our meeting was titled "Perl Reunification Summit".

Sadly I only managed to arrive in Perl on Friday late in the night, so I missed the first day. Still it was totally worth it.

We tried to answer the question of how to make the Perl 5 and the Perl 6 community converge on a social level. While we haven't found the one true answer to that, we did find that discussing the future together, both on a technical and on a social level, already brought us closer together.

It was quite a touching moment when Merijn "Tux" Brand explained that he was skeptical of Perl 6 before the summit, and now sees it as the future.

We also concluded that copying API design is a good way to converge on a technical level. For example Perl 6's IO subsystem is in desperate need of a cohesive design. However, neither the Perl 6 specification nor the Rakudo development team has much experience in that area, and copying from successful Perl 5 modules is a viable approach here. Path::Class and IO::All (excluding the crazy parts) were mentioned as targets worth looking at.

There is now also an IRC channel to continue our discussions -- join #p6p5 if you are interested.

We also discussed ways to bring parallel programming to both perls. I missed most of the discussion, but did hear that one approach is to make it easier to send serialized objects to other processes, and thus distribute work among several cores.

Patrick Michaud gave a short ad-hoc presentation on implicit parallelism in Perl 6. There are several constructs where the language allows parallel execution, for example hyper operators, junctions, and feeds (think of feeds as UNIX pipes, but ones that allow passing objects and not just strings). Rakudo doesn't implement any of them in parallel right now, because the Parrot Virtual Machine does not yet provide the necessary primitives.

Besides the "official" program, everybody used the time in meat space to discuss their favorite projects with everybody else. For example I took some time to discuss the future of with Patrick and Gabor Szabgab, and the relation to perl6maven with the latter. The Rakudo team (which was nearly completely present) also discussed several topics, and I was happy to talk about the relation between Rakudo and Parrot with Reini Urban.

Prior to the summit my expectations were quite vague. That's why it's hard for me to tell if we achieved what we and the organizers wanted. Time will tell, and we want to summarize the result in six to nine months. But I am certain that many participants have changed some of their views in positive ways, and left the summit with a warm, fuzzy feeling.

I am very grateful to have been invited to such a meeting, and enjoyed it greatly. Our host and organizers, Liz and Wendy, took care of all of our needs -- travel, food, drinks, space, wifi, accommodation, more food, entertainment, food for thought, you name it. Thank you very much!

Update: Follow the #p6p5 hash tag on twitter if you want to read more, I'm sure other participants will blog too.

Other blog posts on this topic: PRS2012 – Perl5-Perl6 Reunification Summit by mdk and post-yapc by theorbtwo

Dave's Free Press: Journal: Wikipedia handheld proxy

Dave's Free Press: Journal: Bryar security hole

Dave's Free Press: Journal: Thankyou, Anonymous Benefactor!

Dave's Free Press: Journal: Number::Phone release

Dave's Free Press: Journal: Ill

Dave's Free Press: Journal: CPANdeps upgrade

iPod nano 5g on linux -- works!

For Christmas I got an iPod nano (5th generation). Since I use only Linux on my home computers, I searched the Internet for how well it is supported by Linux-based tools. The results looked bleak, but they were mostly from 2009.

Now (December 2012) on my Debian/Wheezy system, it just worked.

The iPod nano 5g presents itself as an ordinary USB storage device, which you can mount without problems. However, simply copying files onto it won't make the iPod show those files in its playlists, because there is some metadata stored on the device that must be updated too.

There are several user-space programs that let you import music to and export it from the iPod, updating that metadata as necessary. The first one I tried, gtkpod 2.1.2, worked fine.

Other user-space programs reputed to work with the iPod are rhythmbox and amarok (which both not only organize but also play music).

Although I don't think anything really depends on some particular versions here (except that you need a new enough version of gtkpod), here is what I used:

  • Architecture: amd64
  • Linux: 3.2.0-4-amd64 #1 SMP Debian 3.2.35-2
  • Userland: Debian GNU/Linux "Wheezy" (currently "testing")
  • gtkpod: 2.1.2-1

Dave's Free Press: Journal: CPANdeps

Dave's Free Press: Journal: Module pre-requisites analyser

Dave's Free Press: Journal: Perl isn't dieing

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 3

The Fun of Running a Public Web Service, and Session Storage

One of my websites, Sudokugarden, recently surged in traffic, from about 30k visitors per month to more than 100k visitors per month. Here's the tale of what that meant for the server side.

As a bit of background, I built the website in 2007, when I knew a lot less about the web and programming. It runs on a host that I share with a few friends; I don't have root access on that machine, though when the admin is available, I can generally ask him to install stuff for me.

Most parts of the website are built as static HTML files, with Server Side Includes. Parts of those SSIs are Perl CGI scripts. The most popular part though, which allows you to solve Sudokus in the browser and keeps highscores, is written as a collection of Perl scripts, backed by a MySQL database.

When at peak times the site had more than 10k visitors a day, lots of visitors would get a nasty mysql: Cannot connect: Too many open connections error. The admin wasn't available for bumping the connection limit, so I looked for other solutions.

My first action was to check the logs for spammers and crawlers that might have hammered the page, and I found and banned some; but the bulk of the traffic looked completely legitimate, and the problem persisted.

Looking at the seven year old code, I realized that most pages didn't actually need a database connection, if only I could remove the session storage from the database. And, in fact, I could. I used CGI::Session, which has pluggable backends. Switching to a file-based session backend was just a matter of changing the connection string and adding a directory for session storage. Luckily the code was clean enough that this only affected a single subroutine. Everything was fine.

For a while.

Then, about a month later, the host ran out of free disk space. Since it is used for other stuff too (like email, and web hosting for other users) it took me a while to make the connection to the file-based session storage. What happened was 3 million session files on an ext3 file system with a block size of 4 kilobytes. A session is only about 400 bytes, but since a file uses up a multiple of the block size, the session storage amounted to 12 gigabytes of used-up disk space, which was all that was left on that machine.
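The arithmetic behind those numbers, as a quick sanity check (Python, purely for convenience):

```python
sessions   = 3_000_000
block_size = 4096              # ext3 block size: 4 KiB
payload    = 400               # approximate bytes per session

used = sessions * block_size   # disk space actually consumed
raw  = sessions * payload      # session data actually stored

assert used == 12_288_000_000  # roughly 12 gigabytes
assert raw  == 1_200_000_000   # only ~1.2 GB of real data
```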

Deleting those sessions turned out to be a problem; I could only log in as my own user, which doesn't have write access to the session files (which are owned by www-data, the Apache user). The solution was to upload a CGI script that deleted the session, but of course that wasn't possible at first, because the disk was full. In the end I had to delete several gigabyte of data from my home directory before I could upload anything again. (Processes running as root were still writing to reserved-to-root portions of the file system, which is why I had to delete so much data before I was able to write again).

Even when I was able to upload the deletion script, it took quite some time to actually delete the session files; mostly because the directory was too large, and deleting files on ext3 is slow. When the files were gone, the empty session directory still used up 200MB of disk space, because the directory index doesn't shrink on file deletion.

Clearly a better solution to session storage was needed. But first I investigated where all those sessions came from, and banned a few spamming IPs. I also changed the code to only create sessions when somebody logs in, instead of giving every visitor a session from the start.

My next attempt was to write the sessions to an SQLite database. It uses about 400 bytes per session (plus a fixed overhead for the db file itself), so it uses only a tenth of the storage space that the file-based storage used. The SQLite database has no connection limit, though the oldish version that was installed on the server doesn't seem to have very fine-grained locking either; within a few days I got errors that the session database was locked.

So I added another layer of workaround: creating a separate session database per leading IP octet. So now there are up to 255 separate session databases (plus a 256th for all IPv6 addresses; a decision that will have to be revised when IPv6 usage rises). After a few days of operation, it seems that this setup works well enough. But suspicious as I am, I'll continue monitoring both disk usage and errors from Apache.
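A minimal sketch of that sharding scheme (in Python purely for illustration -- the site itself is Perl, and the file layout here is made up):

```python
def session_db_path(remote_addr: str) -> str:
    # One SQLite database file per leading IPv4 octet; all IPv6
    # clients share one extra database.
    if ':' in remote_addr:                  # IPv6 address
        shard = 'ipv6'
    else:
        shard = remote_addr.split('.')[0]   # leading octet, '0'..'255'
    return f'sessions/sessions-{shard}.db'

assert session_db_path('203.0.113.7') == 'sessions/sessions-203.db'
assert session_db_path('2001:db8::1') == 'sessions/sessions-ipv6.db'
```

Spreading the sessions over many database files keeps each file's lock contention (and size) low without needing a database server at all.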

So, what happens if this solution fails to work out? I can see basically two approaches: move the site to a server that's fully under my control, and use redis or memcached for session storage; or implement sessions with signed cookies that are stored purely on the client side.

YAPC Europe 2013 Day 2

The second day of YAPC Europe was enjoyable and informative.

I learned about ZeroMQ, which is a bit like sockets on steroids. Interesting stuff. Sadly Design decisions on p2 didn't quite qualify as interesting.

Matt's PSGI archive is a project to rewrite Matt's infamous script archive in modern Perl. Very promising, and a bit entertaining too.

Lunch was very tasty, more so than the usual mass catering. Kudos to the organizers!

After lunch, jnthn talked about concurrency, parallelism and asynchrony in Perl 6. It was a great talk, backed by great work on the compiler and runtime. Jonathan's talks are always to be recommended.

I think I didn't screw up my own talk too badly, at least the timing worked fine. I just forgot to show the last slide. No real harm done.

I also enjoyed mst's State of the Velociraptor, which was a summary of what went on in the Perl world in the last year. (Much better than the YAPC::EU 2010 talk with the same title).

The Lightning talks were as enjoyable as those from the previous day. So all fine!

Next up is the river cruise; I hope to blog about that later on.

Stop The Rewrites!

What follows is a rant. If you're not in the mood to read a rant right now, please stop and come back in an hour or two.

The Internet is full of people who know better than you how to manage your open source project, even if they only know some bits and pieces about it. News at 11.

But there is one particular instance of that advice that I hear often applied to Rakudo Perl 6: Stop the rewrites.

To be honest, I can fully understand the sentiment behind that advice. People see that it has taken us several years to get where we are now, and in their opinion, that's too long. And now we shouldn't waste our time with rewrites, but get the darn thing running already!

But software development simply doesn't work that way, especially not if your target is moving, as Perl 6 is. (Ok, Perl 6 isn't moving that much anymore, but there are still areas we don't understand very well, so our current understanding of Perl 6 is a moving target.)

At some point or another, you realize that with your current design, you can only pile workaround on top of workaround, and hope that the whole thing never collapses.

[Image: a Jenga tower. Courtesy of sermoa.]

Those people who spread the good advice to never do any major rewrites again, they never address what you should do when you face such a situation. Build the tower of workarounds even higher, and pray to Cthulhu that you can build it robust enough to support a whole stack of third-party modules?

Curiously this piece of advice occasionally comes from people who otherwise know a thing or two about software development methodology.

I should also add that since the famous "nom" switchover, which admittedly caused lots of fallout, we have had three major rewrites of subsystems (longest-token matching of alternations, bounded serialization and qbootstrap), all three of which caused no new test failures, and two of which caused no fallout from the module ecosystem at all. In return, we have much faster startup (by a factor of 3 to 4) and a much more correct regex engine.

The REPL trick

A recent discussion on IRC prompted me to share a small but neat trick with you.

If there are things you want to do quite often in the Rakudo REPL (the interactive "Read-Evaluate-Print Loop"), it makes sense to create a shortcut for them. And creating shortcuts for often-used stuff is what programming languages excel at, so you do it right in a Perl 6 module:

use v6;
module REPLHelper;

sub p(Mu \x) is export {
    x.^mro.map: *.^name;
}

I have placed mine in $HOME/.perl6/repl.

And then you make sure it's loaded automatically:

$ alias p6repl="perl6 -I$HOME/.perl6/repl/ -MREPLHelper"
$ p6repl
> p Int
Int Cool Any Mu

Now you have a neat one-letter function which tells you the parents of an object or a type, in method resolution order. And a way to add more shortcuts when you need them.

News in the Rakudo 2012.06 release

Rakudo development continues to progress nicely, and so there are a few changes in this month's release worth explaining.

Longest Token Matching, List Iteration

The largest chunk of development effort went into Longest-Token Matching for alternations in Regexes, about which Jonathan already blogged. Another significant piece was Patrick's refactor of list iteration. You probably won't notice much of that, except that for-loops are now a bit faster (maybe 10%), and laziness works more reliably in a couple of cases.

String to Number Conversion

String to number conversion is now stricter than before. Previously an expression like +"foo" would simply return 0. Now it fails, ie returns an unthrown exception. If you treat that unthrown exception like a normal value, it blows up with a helpful error message, saying that the conversion to a number has failed. If that's not what you want, you can still write +$str // 0.

require With Argument Lists

require now supports argument lists, and that needs a bit more explaining. In Perl 6 routines are by default only looked up in lexical scopes, and lexical scopes are immutable at run time. So, when loading a module at run time, how do you make functions available to the code that loads the module? Well, you determine at compile time which symbols you want to import, and then do the actual importing at run time:

use v6;
require Test <&plan &ok &is>;
#            ^^^^^^^^^^^^^^^ evaluated at compile time,
#                            declares symbols &plan, &ok and &is
#       ^^^                  loaded at run time

Module Load Debugging

Rakudo had some trouble when a module was precompiled, but its dependencies were not. This happens more often than it sounds, because Rakudo checks timestamps of the involved files, and loads the source version if it is newer than the compiled file. Since many file operations (including simple copying) change the time stamp, that could happen very easily.
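The check that triggers this behaviour boils down to a timestamp comparison; here is a model of the logic in Python (illustrative only, not Rakudo's actual code):

```python
import os
import tempfile

def should_load_source(source: str, compiled: str) -> bool:
    # Use the source if no compiled file exists, or if the source's
    # mtime is newer -- which even a plain copy can make it appear to be.
    if not os.path.exists(compiled):
        return True
    return os.path.getmtime(source) > os.path.getmtime(compiled)

# Demonstration: touching the source after compilation marks it stale.
d = tempfile.mkdtemp()
src = os.path.join(d, 'Module.pm')
pbc = os.path.join(d, 'Module.pbc')
for path in (src, pbc):
    open(path, 'w').close()
os.utime(src, (1000, 1000))
os.utime(pbc, (2000, 2000))
fresh = should_load_source(src, pbc)   # compiled file is newer
os.utime(src, (3000, 3000))            # e.g. the source was copied
stale = should_load_source(src, pbc)
assert (fresh, stale) == (False, True)
```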

To make debugging of such errors easier, you can set the RAKUDO_MODULE_DEBUG environment variable to 1 (or any positive number; currently there is only one debugging level, in the future higher numbers might lead to more output).

$ RAKUDO_MODULE_DEBUG=1 ./perl6 -Ilib t/spec/S11-modules/require.t
MODULE_DEBUG: loading blib/Perl6/BOOTSTRAP.pbc
MODULE_DEBUG: done loading blib/Perl6/BOOTSTRAP.pbc
MODULE_DEBUG: loading lib/Test.pir
MODULE_DEBUG: done loading lib/Test.pir
MODULE_DEBUG: loading t/spec/packages/Fancy/
MODULE_DEBUG: done loading t/spec/packages/Fancy/
ok 1 - can load Fancy::Utilities at run time
ok 2 - can call our-sub from required module
MODULE_DEBUG: loading t/spec/packages/
MODULE_DEBUG: loading t/spec/packages/
MODULE_DEBUG: loading t/spec/packages/B/
MODULE_DEBUG: done loading t/spec/packages/B/
MODULE_DEBUG: done loading t/spec/packages/
MODULE_DEBUG: done loading t/spec/packages/
ok 3 - can require with variable name
ok 4 - can call subroutines in a module by name
ok 5 - require with import list

Module Loading Traces in Compile-Time Errors

If module myA loads module myB, and myB dies during compilation, you now get a backtrace which indicates through which path the erroneous module was loaded:

$ ./perl6 -Ilib -e 'use myA'
Placeholder variable $^x may not be used here because the surrounding block
takes no signature
at lib/
  from module myA (lib/
  from -e:1

Improved autovivification

Perl allows you to treat not-yet-existing array and hash elements as arrays or hashes, and automatically creates those elements for you. This is called autovivification.

my %h;
%h<x>.push: 1, 2, 3; # worked in the previous release too
push %h<y>, 4, 5, 6; # newly works in the 2012.06
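For readers coming from other languages: the closest Python analogue is collections.defaultdict, which likewise conjures up missing entries on first use (a rough analogy only -- Perl autovivifies plain hashes and arrays without any special container type):

```python
from collections import defaultdict

h = defaultdict(list)       # missing keys spring into existence as lists
h['x'].extend([1, 2, 3])    # no need to create h['x'] first
h['y'].append(4)
assert h['x'] == [1, 2, 3]
assert h['y'] == [4]
```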

Dave's Free Press: Journal: Travelling in time: the CP2000AN

Localization for Exception Messages

Ok, my previous blog post wasn't quite as final as I thought. My exceptions grant said that the design should make it easy to enable localization and internationalization hooks. I want to discuss some possible approaches, and thereby demonstrate that the design is flexible enough as it is.

At this point I'd like to mention that much of the flexibility comes either from Perl 6 itself, or from the separation of stringifying an exception and generating the actual error message.

Mixins: the sledgehammer

One can always override a method in an object by mixing in a role which contains the method in question. When the user requests error messages in a different language, one can replace method Str or method message with one that generates the error message in that language.

Where should that happen? The code that throws exceptions is fairly scattered over the code base, but there is a central piece of code in Rakudo that turns Parrot-level exceptions into Perl 6 level exceptions. That would be an obvious place to muck with exceptions, but it would mean that exceptions that are created but not thrown don't get the localization. I suspect that's a fairly small problem in the real world, but it still carries code smell. As does the whole idea of overriding methods.

Another sledgehammer: alternative setting

Perl 6 provides built-in types and routines in an outer lexical scope known as a "setting". The default setting is called CORE. Due to the lexical nature of almost all lookups in Perl 6, one can "override" almost anything by providing a symbol of the same name in a lexical scope.

One way to use that for localization is to add another setting between the user's code and CORE. For example a file DE.setting:

my class X::Signature::Placeholder does X::Comp {
    method message() {
        'Platzhaltervariablen können keine bestehenden Signaturen überschreiben';
    }
}

After compiling, we can load the setting:

$ ./perl6 --target=pir --output=DE.setting.pir DE.setting
$ ./install/bin/parrot -o DE.setting.pbc DE.setting.pir
$ ./perl6 --setting=DE -e 'sub f() { $^x }'
Platzhaltervariablen können keine bestehenden Signaturen überschreiben
at -e:1

That works beautifully for exceptions that the compiler throws, because they look up exception types in the scope where the error occurs. Exceptions from within the setting are a different beast, they'd need special lookup rules (though the setting throws far fewer exceptions than the compiler, so that's probably manageable).

But while this looks quite simple, it comes with a problem: if a module is precompiled without the custom setting, and it contains a reference to an exception type, and then the l10n setting redefines it, other programs will contain references to a different class with the same name. Which means that our precompiled module might only catch the English version of X::Signature::Placeholder, and lets our localized exception pass through. Oops.

Tailored solutions

A better approach is probably to simply hack up the string conversion in type Exception to consider a translator routine if present, and pass the invocant to that routine. The translator routine can look up the error message keyed by the type of the exception, and has access to all data carried in the exception. In untested Perl 6 code, this might look like this:

# required change in CORE
my class Exception {
    multi method Str(Exception:D:) {
        return self.message unless defined $*LANG;
        if %*TRANSLATIONS{$*LANG}{self.^name} -> $translator {
            return $translator(self);
        }
        return self.message; # fallback
    }
}

# that's what a translator could write:

%*TRANSLATIONS<de><X::TypeCheck::Assignment> = {
        "Typenfehler bei Zuweisung zu '$_.symbol()': "
        ~ "'{$_.expected.^name}' erwartet, aber '{$_.got.^name}' bekommen"
}

And setting the dynamic language $*LANG to 'de' would give a German error message for type check failures in assignment.

Another approach is to augment existing error classes and add methods that generate the error message in different languages, for example method message-fr for French, and check their existence in Exception.Str if a different language is requested.


In conclusion there are many bad and enough good approaches; we will decide which one to take when the need arises (ie when people actually start to translate error messages).

Dave's Free Press: Journal: YAPC::Europe 2007 report: day 1

Ocean of Awareness: Significant newlines? Or semicolons?

Should statements have explicit terminators, like the semicolon of Perl and the C language? Or should they avoid the clutter, and separate statements by giving whitespace syntactic significance and a real effect on the semantics, as is done in Python and Javascript?

Actually we don't have to go either way. As an example, let's look at some BNF-ish DSL. It defines a small calculator. At first glance, it looks as if this language has taken the significant-whitespace route -- there certainly are no explicit statement terminators.

:default ::= action => ::first
:start ::= Expression
Expression ::= Term
Term ::= Factor
    | Term '+' Term action => do_add
Factor ::= Number
    | Factor '*' Factor action => do_multiply
Number ~ digits
digits ~ [\d]+
:discard ~ whitespace
whitespace ~ [\s]+

The rule is that there isn't one

If we don't happen to like the layout of the above DSL, and rearrange it in various ways, we'll find that everything we try works. If we become curious about what exactly the rules for newlines are, and look at the documentation, we won't find any. That's because there aren't any.

We can see this by thoroughly messing up the line structure:

:default ::= action => ::first :start ::= Expression Expression ::= Term
Term ::= Factor | Term '+' Term action => do_add Factor ::= Number |
Factor '*' Factor action => do_multiply Number ~ digits digits ~
[\d]+ :discard ~ whitespace whitespace ~ [\s]+

The script will continue to run just fine.

How does it work?

How does it work? Actually, pose the question this way: Can a human reader tell where the statements end? If the reader is not used to reading BNF, he might have trouble with this particular example but, for a language that he knows, the answer is simple: Yes, of course he can. So really the question is, why do we expect the parser to be so stupid that it cannot?

The only trick is that this is done without trickery. Marpa's DSL is written in itself, and Marpa's self-grammar describes exactly what a statement is and what it is not. The Marpa parser is powerful enough to simply take this self-describing DSL and act on it, finding where statements begin and end, much as a human reader is able to.

To learn more

This example was produced with the Marpa parser. Marpa::R2 is available on CPAN. The code for this example is based on that in the synopsis for its top-level document, but it is isolated conveniently in a Github gist.

A list of my Marpa tutorials can be found here. There are new tutorials by Peter Stuifzand and amon. The Ocean of Awareness blog focuses on Marpa, and it has an annotated guide. Marpa has a web page that I maintain and Ron Savage maintains another. For questions, support and discussion, there is the "marpa parser" Google Group. Comments on this post can be made there.

Dave's Free Press: Journal: Thanks, Yahoo!

Ocean of Awareness: Marpa has a new web page

Marpa has a new official public website, which Ron Savage has generously agreed to manage. For those who have not heard of it, Marpa is a parsing algorithm. It is new, but very much based on earlier work by Jay Earley, Joop Leo, John Aycock and R. Nigel Horspool. Marpa is intended to replace, and to go well beyond, recursive descent and the yacc family of parsers.

  • Marpa is fast. It parses in linear time:
    • all the grammar classes that recursive descent parses;
    • the grammar class that the yacc family parses;
    • in fact, all unambiguous grammars, as long as they are free of unmarked middle recursions; and
    • all ambiguous grammars that are unions of a finite set of any of the above grammars.
  • Marpa is powerful. Marpa will parse anything that can be written in BNF. This includes any mixture of left, right and middle recursions.
  • Marpa is convenient. Unlike recursive descent, you do not have to write a parser -- Marpa generates one from BNF. Unlike PEG or yacc, parser generation is unrestricted and exact. Marpa converts any grammar which can be written as BNF into a parser which recognizes everything in the language described by that BNF, and which rejects everything that is not in that language. The programmer is not forced to make arbitrary choices while parsing. If a rule has several alternatives, all of the alternatives are considered for as long as they might yield a valid parse.
  • Marpa is flexible. Like recursive descent, Marpa allows you to stop and do your own custom processing. Unlike recursive descent, Marpa makes available to you detailed information about the parse so far -- which rules and symbols have been recognized, with their locations, and which rules and symbols are expected next.
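The claim that all alternatives stay alive can be seen with a deliberately ambiguous grammar. The sketch below assumes Marpa::R2's Scanless interface; the rule names and action subs are invented for the example. Repeated calls to `value()` walk the alternative parse trees.

```perl
use strict;
use warnings;
use Marpa::R2;

# No precedence is declared, so '2 + 3 * 4' has two readings.
my $dsl = <<'END_OF_DSL';
:default ::= action => ::first
E ::= E '+' E action => do_add
    | E '*' E action => do_mul
    | digits
digits ~ [\d]+
:discard ~ [\s]+
END_OF_DSL

sub do_add { return $_[1] + $_[3] }
sub do_mul { return $_[1] * $_[3] }

my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
my $recce   = Marpa::R2::Scanless::R->new(
    { grammar => $grammar, semantics_package => 'main' } );
$recce->read( \'2 + 3 * 4' );

# Each call to value() yields the next parse tree's value;
# both readings -- 2+(3*4) and (2+3)*4 -- are reported.
while ( defined( my $value_ref = $recce->value() ) ) {
    print ${$value_ref}, "\n";
}
```

Neither alternative was discarded while parsing; disambiguation (by precedence, ranking, or application logic) is a separate, later decision.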


Comments on this post can be made in Marpa's Google group.

Dave's Free Press: Journal: POD includes

Dave's Free Press: Journal: cgit syntax highlighting

First day at YAPC::Europe 2013 in Kiev

Today was the first "real" day of YAPC Europe 2013 in Kiev. In the same sense that it was the first real day, we had quite a nice "unreal" conference day yesterday, with a day-long Perl 6 hackathon, and in the evening a pre-conference meeting at a Soviet-style restaurant with tasty food and beverages.

The talks started with a few words of welcome, and then the announcement that the YAPC Europe next year will be in Sofia, Bulgaria, with the small side note that there were actually three cities competing for that honour. Congratulations to Sofia!

Larry's traditional keynote was quite emotional, and he had to fight tears a few times. Having had cancer and related surgeries in the past year, he still does his perceived duty to the Perl community, which I greatly appreciate.

Afterwards Dave Cross talked about 25 years of Perl in 25 minutes, which was a nice walk through some significant developments in the Perl world, though a bit hasty. Maybe picking fewer events and spending a bit more time on the selected few would give a smoother experience.

Another excellent talk that ran out of time was on Redis. Having experimented a wee bit with Redis in the past month, this was a real eye-opener on the wealth of features we might have used for a project at work, but in the end we didn't. Maybe we will eventually revise that decision.

Ribasushi talked about how hard benchmarking really is, and while I was (in principle) aware of the fact that it's hard to get right, there were still several significant factors that I overlooked (like the CPU's tendency to scale frequency in response to thermal and power-management considerations). I also learned that I should use Dumbbench instead of the core module. Sadly it didn't install for me (Capture::Tiny tests failing on Mac OS X).

The Perl 6 is dead, long live Perl 5 talk was much less inflammatory than the title would suggest (maybe due to Larry touching on the subject briefly during the keynote). It was mostly about how Perl 5 is used in the presenter's company, which was mildly interesting.

After tasty free lunch I attended jnthn's talk on Rakudo on the JVM, which was (as is typical for jnthn's talk) both entertaining and taught me something, even though I had followed the project quite a bit.

Thomas Klausner's Bread::Board by example made me want to refactor the OTRS internals very badly, because it is full of the anti-patterns that Bread::Board can solve in a much better way. I think that the OTRS code base is big enough to warrant the usage of Bread::Board.

I enjoyed Denis' talk on Method::Signatures, and was delighted to see that most syntax is directly copied from Perl 6 signature syntax. Talk about Perl 6 sucking creativity out of Perl 5 development.

The conference ended with a session of lightning talks, something which I always enjoy. Many lightning talks had a slightly funny tone or undertone, while still talking about interesting stuff.

Finally there was the "kick-off party", beverages and snacks sponsored by There (and really the whole day, and yesterday too) I not only had conversations with my "old" Perl 6 friends, but also talked with many interesting people I never met before, or only met online before.

So all in all it was a nice experience, both from the social side, and from quality and contents of the talks. Venue and food are good, and the wifi too, except when it stops working for a few minutes.

I'm looking forward to two more days of conference!

(Updated: Fixed Thomas' last name)

Dave's Free Press: Journal: CPAN Testers' CPAN author FAQ

Correctness in Computer Programs and Mathematical Proofs

While reading On Proof and Progress in Mathematics by Fields Medal winner Bill Thurston (recently deceased, I was sorry to hear), I came across this gem:

The standard of correctness and completeness necessary to get a computer program to work at all is a couple of orders of magnitude higher than the mathematical community’s standard of valid proofs. Nonetheless, large computer programs, even when they have been very carefully written and very carefully tested, always seem to have bugs.

I noticed that mathematicians are often sloppy about the scope of their symbols. Sometimes they use the same symbol for two different meanings, and you have to guess from context which one is meant.

This kind of sloppiness generally doesn't have an impact on the validity of the ideas that are communicated, as long as it's still understandable to the reader.

I guess one reason is that most mathematical publications still stick to one-letter symbol names, and there aren't that many letters in the alphabets that are generally accepted for usage (Latin, Greek, a few letters from Hebrew). And in the programming world we snort derisively at FORTRAN 77, which limited variable names to a length of 6 characters.

Ocean of Awareness: Parsing: a timeline

1960: The ALGOL 60 spec comes out. It specifies, for the first time, a block structured language. The ALGOL committee is well aware that nobody knows how to parse such a language. But they believe that, if they specify a block-structured language, a parser for it will be invented. Risky as this approach is, it pays off ...

1961: Ned Irons publishes his ALGOL parser. In fact, the Irons parser is the first parser of any kind to be described in print. Ned's algorithm is a left parser -- a form of recursive descent. Unlike modern recursive descent, the Irons algorithm is general and syntax-driven. "General" means it can parse anything written in BNF. "Syntax-driven" (aka declarative) means that the parser is actually created from the BNF -- the parser does not need to be hand-written.

1961: Almost simultaneously, hand-coded approaches to left parsing appear. These we would now recognize as recursive descent. Over the following years, hand-coded approaches will become more popular for left parsers than syntax-driven algorithms. Three factors are at work:

  • In the 1960's, memory and CPU are both extremely limited. Hand-coding pays off, even when the gains are small.
  • Pure left parsing is a very weak parsing technique. Hand-coding is often necessary to overcome its limits. This is as true today as it was in 1961.
  • Left parsing works well in combination with hand-coding -- they are a very good fit.

1965: Don Knuth invents LR parsing. Knuth is primarily interested in the mathematics. He describes a parsing algorithm, but it is not thought practical.

1968: Jay Earley invents the algorithm named after him. Like the Irons algorithm, Earley's algorithm is syntax-driven and fully general. Unlike the Irons algorithm, it does not backtrack. Earley's core idea is to track everything about the parse in tables. Earley's algorithm is enticing, but it has three major issues:

  • First, there is a bug in the handling of zero-length rules.
  • Second, it is quadratic for right recursions.
  • Third, the bookkeeping required to set up the tables is, by the standards of 1968 hardware, daunting.
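Earley's core idea -- track everything about the parse in tables of items -- fits in a short pure-Perl sketch. This toy recognizer is not Marpa; the grammar encoding and sub names are invented for illustration. Note that its naive completer is exactly where the zero-length rule bug lives; the example grammar has no nullable rules, so it is safe here.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy grammar, encoded as [ lhs, [ rhs symbols ] ]:
#   S -> S '+' S | 'n'
my @rules = (
    [ 'S', [ 'S', '+', 'S' ] ],
    [ 'S', [ 'n' ] ],
);
my $start = 'S';

sub is_nonterminal {
    my ($sym) = @_;
    return grep { $_->[0] eq $sym } @rules;
}

sub earley_recognize {
    my @input = @_;

    # One item table ("Earley set") per position; an item is
    # [ rule number, dot position, origin ].
    my @chart = map { [] } 0 .. scalar @input;
    my %seen;
    my $add = sub {
        my ( $i, $item ) = @_;
        my $key = "$i:@{$item}";
        push @{ $chart[$i] }, $item unless $seen{$key}++;
    };

    # Seed table 0 with the start rules.
    for my $r ( 0 .. $#rules ) {
        $add->( 0, [ $r, 0, 0 ] ) if $rules[$r][0] eq $start;
    }

    for my $i ( 0 .. scalar @input ) {
        # The table may grow while we process it, so index explicitly.
        for ( my $j = 0 ; $j < @{ $chart[$i] } ; $j++ ) {
            my ( $r, $dot, $origin ) = @{ $chart[$i][$j] };
            my $rhs = $rules[$r][1];
            if ( $dot < @{$rhs} ) {
                my $next = $rhs->[$dot];
                if ( is_nonterminal($next) ) {
                    # PREDICT: expect $next starting here.
                    for my $r2 ( 0 .. $#rules ) {
                        $add->( $i, [ $r2, 0, $i ] )
                            if $rules[$r2][0] eq $next;
                    }
                }
                elsif ( $i < @input && $input[$i] eq $next ) {
                    # SCAN: the next token matches; move the dot.
                    $add->( $i + 1, [ $r, $dot + 1, $origin ] );
                }
            }
            else {
                # COMPLETE: a rule has finished; advance every item
                # in its origin table that was waiting for its LHS.
                # (Snapshot the table: with nullable rules, items
                # added later would be missed -- the zero-length bug.)
                my $lhs     = $rules[$r][0];
                my @waiting = @{ $chart[$origin] };
                for my $item (@waiting) {
                    my ( $r2, $dot2, $o2 ) = @{$item};
                    my $rhs2 = $rules[$r2][1];
                    $add->( $i, [ $r2, $dot2 + 1, $o2 ] )
                        if $dot2 < @{$rhs2} && $rhs2->[$dot2] eq $lhs;
                }
            }
        }
    }

    # Accept if a completed start rule spans the whole input.
    return grep {
        $rules[ $_->[0] ][0] eq $start
            && $_->[1] == @{ $rules[ $_->[0] ][1] }
            && $_->[2] == 0;
    } @{ $chart[-1] };
}

print earley_recognize(qw(n + n)) ? "accepted\n" : "rejected\n";
```

Because the items carry their origins, the final tables hold everything about every possible parse -- which is also why the bookkeeping looked daunting on 1968 hardware.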

1969: Frank DeRemer describes a new variant of Knuth's LR parsing. DeRemer's LALR algorithm requires only a stack and a state table of quite manageable size.

1972: Aho and Ullman describe a straightforward fix to the zero-length rule bug in Earley's original algorithm. Unfortunately, this fix involves adding even more bookkeeping to Earley's.

1975: Bell Labs converts its C compiler from hand-written recursive descent to DeRemer's LALR algorithm.

1977: The first "Dragon book" comes out. This soon-to-be classic textbook is nicknamed after the drawing on the front cover, in which a knight takes on a dragon. Emblazoned on the knight's lance are the letters "LALR". From here on out, to speak lightly of LALR will be to besmirch the escutcheon of parsing theory.

1987: Larry Wall introduces Perl 1. Perl embraces complexity like no previous language. Larry uses LALR very aggressively -- to my knowledge more aggressively than anyone before or since.

1991: Joop Leo discovers a way of speeding up right recursions in Earley's algorithm. Leo's algorithm is linear for just about every unambiguous grammar of practical interest, and for many ambiguous ones as well. By 1991, hardware is six orders of magnitude faster than 1968 hardware, so the issue of bookkeeping overhead has receded in importance. This is a major discovery. When it comes to speed, the game has changed in favor of the Earley algorithm. But Earley parsing is almost forgotten. It will be 20 years before anyone writes a practical implementation of Leo's algorithm.

1990's: Earley's is forgotten. So everyone in LALR-land is content, right? Wrong. Far from it, in fact. Users of LALR are making unpleasant discoveries. While LALR automatically generates their parsers, debugging them is so hard they could just as easily write the parser by hand. Once debugged, their LALR parsers are fast for correct inputs. But almost all they tell the users about incorrect inputs is that they are incorrect. In Larry's words, LALR is "fast but stupid".

2000: Larry Wall decides on a radical reimplementation of Perl -- Perl 6. Larry does not even consider using LALR again.

2002: Aycock & Horspool publish their attempt at a fast, practical Earley's parser. Missing from it is Joop Leo's improvement -- they seem not to be aware of it. Their own speedup is limited in what it achieves and the complications it introduces can be counter-productive at evaluation time. But buried in their paper is a solution to the zero-length rule bug. And this time the solution requires no additional bookkeeping.

2006: GNU announces that the GCC compiler's parser has been rewritten. For three decades, the industry's flagship C compilers have used LALR as their parser -- proof of the claim that LALR and serious parsing are equivalent. Now, GNU replaces LALR with the technology that it replaced a quarter century earlier: recursive descent.

2000 to today: With the retreat from LALR comes a collapse in the prestige of parsing theory. After a half century, we seem to be back where we started. If you took Ned Irons's original 1961 algorithm, changed the names and dates, and translated the code from its mix of assembler and ALGOL into Haskell, you could easily republish it today, and bill it as revolutionary and new.


Over the years, I had come back to Earley's algorithm again and again. Around 2010, I realized that the original, long-abandoned vision -- an efficient, practical, general and syntax-driven parser -- was now, in fact, quite possible. The necessary pieces had fallen into place.

Aycock & Horspool had solved the zero-length rule bug. Joop Leo had found the speedup for right recursion. And the issue of bookkeeping overhead had pretty much evaporated on its own. Machine operations are now a billion times faster than in 1968, and probably no longer relevant in any case -- cache misses are now the bottleneck.

But while the original issues with Earley's disappeared, a new issue emerged. With a parsing algorithm as powerful as Earley's behind it, a syntax-driven approach can do much more than it can with a left parser. But with the experience with LALR in their collective consciousness, few modern programmers are prepared to trust a purely declarative parser. As Lincoln said, "Once a cat's been burned, he won't even sit on a cold stove."

To be accepted, Marpa needed to allow procedural parsing, not just declarative parsing. So Marpa allows the user to specify events -- occurrences of symbols and rules -- at which declarative parsing pauses. While paused, the application can call procedural logic and single-step forward token by token. The procedural logic can hand control back over to syntax-driven parsing at any point it likes. The Earley tables can provide the procedural logic with full knowledge of the state of the parse so far: all rules recognized in all possible parses so far, and all symbols expected. Earley's algorithm is now an even better companion for hand-written procedural logic than recursive descent.
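The pause-and-resume flow described above looks roughly like this in Marpa::R2's Scanless interface. The grammar, the lexeme and event names, and the use of `terminals_expected()` are illustrative assumptions; see the SLIF documentation for the exact event interface.

```perl
use strict;
use warnings;
use Marpa::R2;

my $dsl = <<'END_OF_DSL';
:default ::= action => ::first
Stmt ::= name '=' name
name ~ [\w]+
:discard ~ [\s]+

# Declarative parsing pauses before every 'name' lexeme.
:lexeme ~ name pause => before event => 'before name'
END_OF_DSL

my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
my $recce   = Marpa::R2::Scanless::R->new( { grammar => $grammar } );

my $input = 'left = right';
my $pos   = $recce->read( \$input );
while ( $pos < length $input ) {
    for my $event ( @{ $recce->events() } ) {
        my ($event_name) = @{$event};
        # While paused, procedural logic can inspect the parse so
        # far, e.g. ask which terminals the parser expects next ...
        my @expected = @{ $recce->terminals_expected() };
        print "paused at $pos ($event_name), expecting: @expected\n";
    }
    # ... and then hand control back to syntax-driven parsing.
    $pos = $recce->resume();
}
```

The procedural section could equally feed tokens in by hand before resuming; the point is that the declarative parser and the hand-written logic share one view of the parse state.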

For more

For more about Marpa, there's the official web site maintained by Ron Savage. I also have a Marpa web site. Comments on this post can be made in Marpa's Google group.

Dave's Free Press: Journal: YAPC::Europe 2006 report: day 3

Header image by Tambako the Jaguar. Some rights reserved.