LWP PUT of a Large File Upload Without Form Data

Perl questions on StackOverflow

Published by Timothy R. Butler on Saturday 01 June 2024 05:38

I'm trying to implement Google's YouTube video uploading API in Perl, which, alas, has no library for it. I've run into an issue with the actual PUT to upload the video file. LWP::UserAgent has a very helpful feature to avoid having to stream the upload of a large file yourself, by simply referencing the file's name in an arrayref like so:

my $ua = LWP::UserAgent->new;
my $upload_url = "https://myapi";
my $localPath = "myBigVideoFile.mp4";

my $upload_response = $ua->put( $upload_url,  'Content_Type' => 'form-data', 'Content' => [ 'file' => [ $localPath ] ] );

However, that isn't the format Google's API expects. It wants the Content-Type to be 'video/*' and it wants the entire body of the request to be the file, not to have it tucked away as the "file" field in a form. But changing the code to match Google's expectations disables LWP's handy file loading feature. For example:

my $upload_response = $ua->put( $upload_url,  'Content_Type' => 'video/*', 'Content' => [ $localPath ] );

In that case, the LWP request object shows just the file name as the content, rather than streaming out the file's contents.

Is there any way to activate LWP's file loading magic, or easily simulate it, so that I can achieve Google's required format without preloading the entire file into memory (obviously not a good idea)?

Here's the HTTP format Google needs:

PUT API_ADDRESS_GOES_HERE HTTP/1.1
Authorization: Bearer AUTH_TOKEN_GOES_HERE
Content-Length: CONTENT_LENGTH_GOES_HERE
Content-Type: video/*

BINARY_FILE_DATA

(In the actual code, I'm using LWP::Authen::OAuth2 on top of LWP::UserAgent, but everything I outlined above happens when I send the data to my own endpoint using just LWP::UserAgent.)
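One possible approach, sketched below rather than taken from the question: build the HTTP::Request by hand and give it a code reference as the body. LWP's protocol layer calls such a code reference repeatedly for chunks until it returns an empty string (the same mechanism HTTP::Request::Common's $DYNAMIC_FILE_UPLOAD uses), so the file is streamed rather than slurped. The URL and file name are the placeholders from the question:

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $ua         = LWP::UserAgent->new;
my $upload_url = "https://myapi";
my $localPath  = "myBigVideoFile.mp4";

open my $fh, '<:raw', $localPath or die "Can't open $localPath: $!";
my $size = -s $localPath;

my $req = HTTP::Request->new( PUT => $upload_url );
$req->header( 'Content-Type'   => 'video/*' );
$req->header( 'Content-Length' => $size );

# A code reference as the request content makes LWP stream the body
# in chunks instead of holding the whole file in memory.
$req->content( sub {
    my $n = read $fh, my $chunk, 64 * 1024;
    die "read error: $!" unless defined $n;
    return $n ? $chunk : '';    # empty string signals the end of the body
} );

my $upload_response = $ua->request($req);
print $upload_response->status_line, "\n";

If LWP::Authen::OAuth2 exposes a request-style passthrough (it wraps LWP::UserAgent), the same hand-built request should be usable there too.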

List of new CPAN distributions – May 2024

r/perl

Published by /u/perlancar on Saturday 01 June 2024 01:39

List of new CPAN distributions – May 2024

Perlancar

Published by perlancar on Saturday 01 June 2024 01:37

dist author abstract date
Alien-NLopt DJERIUS Build and Install the NLopt library 2024-05-01T05:00:12
Alien-cue PLICEASE Find or download the cue configuration language tool 2024-05-07T11:34:32
Alien-libversion GDT Alien wrapper for libversion 2024-05-01T20:53:07
Alien-poetry OLIVER Download and install poetry 2024-05-11T11:33:08
Amon2-Plugin-Web-Flash YOSHIMASA Ruby on Rails flash for Amon2 2024-05-26T03:25:23
App-Codit HANJE IDE for and in Perl 2024-05-20T08:51:39
App-NutrientUtils PERLANCAR Utilities related to nutrients 2024-05-26T00:06:02
App-htidx GBROWN generate static HTML directory listings. 2024-05-29T11:03:56
App-rdapper GBROWN a simple console-based RDAP client. 2024-05-29T23:00:49
Archive-SCS NAUTOFON SCS archive controller 2024-05-21T18:27:52
Authorization-AccessControl TYRRMINAL Hybrid RBAC/ABAC access control 2024-05-16T03:53:26
Bencher-ScenarioBundle-Accessors PERLANCAR Scenarios to benchmark class accessors 2024-05-13T00:05:21
Bencher-ScenarioBundle-Algorithm-Diff PERLANCAR Scenarios to benchmark Algorithm::Diff 2024-05-11T00:06:17
Bencher-ScenarioBundle-Graphics-ColorNames PERLANCAR Scenarios to benchmark Graphics::ColorNames and related modules 2024-05-12T00:05:13
Bencher-ScenarioBundle-Log-Any PERLANCAR Scenarios for benchmarking Log::Any 2024-05-20T00:06:19
Bencher-ScenarioBundle-Log-ger PERLANCAR Scenarios for benchmarking Log::ger 2024-05-21T00:05:46
Complete-Nutrient PERLANCAR Completion routines related to nutrients 2024-05-31T00:05:25
Couch-DB MARKOV thick CouchDB interface 2024-05-29T16:37:08
Data-HTML-Footer SKIM Data object for HTML footer. 2024-05-31T09:32:54
Data-Message-Board SKIM Data objects for message board. 2024-05-27T18:30:24
Data-Person SKIM Data objects for person. 2024-05-27T08:46:48
Dist-Zilla-Plugin-Sorter PERLANCAR Plugin to use when building Sorter::* distribution 2024-05-07T00:05:19
Dist-Zilla-Stash-OnePasswordLogin RJBS get login credentials from 1Password 2024-05-25T16:30:09
Feed-Data-AlJazeera LNATION The great new Feed::Data::AlJazeera! 2024-05-09T05:50:01
Feed-Data-BBC LNATION Waiting for comedians to present the news 2024-05-02T08:38:34
Feed-Data-CNN LNATION The rest of the world will follow. 2024-05-02T10:11:29
FreeDesktop-Icons HANJE Use icon libraries quick & easy 2024-05-31T17:26:09
Game-Cribbage LNATION The great new Game::Cribbage! 2024-05-15T00:15:17
Graphics-ColorNamesCMYK PERLANCAR Define CMYK values for common color names 2024-05-10T00:05:12
Graphics-ColorNamesCMYK-BannersCom PERLANCAR Basic CMYK colors from banners.com 2024-05-17T00:05:09
Graphics-ColorNamesCMYK-JohnDecemberCom PERLANCAR CMYK color names from johndecember.com 2024-05-19T00:05:21
Graphics-ColorNamesCMYK-Pantone PERLANCAR Pantone colors 2024-05-14T00:05:54
Graphics-ColorNamesCMYK-ToutesLesCouleursCom PERLANCAR CMYK colors from http://toutes-les-couleurs.com/ (red) 2024-05-16T00:06:24
Graphics-ColorNamesLite PERLANCAR Define RGB values for common color names (lite version) 2024-05-09T00:05:39
Hades-Realm-Rope LNATION Hades realm for Moose 2024-05-20T13:39:42
HashData-Color-CMYK-JohnDecemberCom PERLANCAR CMYK color names (from johndecember.com) 2024-05-18T00:05:57
HashData-Color-CMYK-ToutesLesCouleursCom PERLANCAR CMYK color names (from ToutesLesCouleursCom) 2024-05-15T00:06:01
HashData-Color-PantoneToCMYK PERLANCAR Mapping of Pantone color names to CMYK values 2024-05-08T00:05:41
HashData-ColorCode-CMYK-JohnDecemberCom PERLANCAR CMYK color names (from johndecember.com) 2024-05-22T00:06:06
HashData-ColorCode-CMYK-Pantone PERLANCAR Mapping of Pantone color names to CMYK values 2024-05-23T00:05:32
HashData-ColorCode-CMYK-ToutesLesCouleursCom PERLANCAR CMYK color names (from ToutesLesCouleursCom) 2024-05-24T00:06:00
Linux-Landlock MBALLARIN An interface to the Landlock sandboxing facility of Linux 2024-05-09T20:12:52
Locale-Unicode JDEGUEST Unicode Locale Identifier compliant with BCP47 and CLDR 2024-05-17T08:05:23
Log-Log4perl-Config-YamlConfigurator SVW Reads Log4perl YAML configurations 2024-05-29T07:51:19
Math-NLopt DJERIUS Math::NLopt – Perl interface to the NLopt optimization library 2024-05-01T07:53:48
Mojolicious-Plugin-Authorization-AccessControl TYRRMINAL Integrate Authorization::AccessControl into Mojolicious 2024-05-16T23:23:55
Mojolicious-Plugin-Config-Structured-Bootstrap TYRRMINAL Autoconfigure Mojolicious application and plugins 2024-05-20T13:39:53
Mojolicious-Plugin-Data-Transfigure TYRRMINAL Mojolicious adapter for Data::Transfigure 2024-05-18T03:42:37
Net-EPP-MITMProxy GBROWN A generic EPP proxy server framework. 2024-05-02T11:56:41
Ogma LNATION Command Line Applications via Rope 2024-05-07T08:27:11
OpenSearch LHRST It's new $module 2024-05-15T15:22:31
Password-OnePassword-OPCLI RJBS get items out of 1Password with the "op" CLI 2024-05-25T15:24:06
PerlIO-win32console TONYC Win32 console output layer 2024-05-26T13:03:53
Plack-Middleware-Zstandard PLICEASE Compress response body with Zstandard 2024-05-10T18:08:23
QRCode-Any PERLANCAR Common interface to QRCode functions 2024-05-06T00:06:19
RT-Extension-Import-CSV BPS RT-Extension-Import-CSV Extension 2024-05-15T18:16:51
Sah-SchemaBundle-Business-ID-NIK PERLANCAR Sah schemas related to Indonesian citizenship registration numbers (NIK) 2024-05-01T00:05:13
Sah-SchemaBundle-Business-ID-NKK PERLANCAR Sah schemas related to Indonesian family card number (NKK) 2024-05-02T00:05:52
Sah-SchemaBundle-Business-ID-NOPPBB PERLANCAR Sah schemas related to Indonesian property tax numbers (NOP PBB) 2024-05-03T00:05:32
Sah-SchemaBundle-Business-ID-NPWP PERLANCAR Sah schemas related to Indonesian taxpayer registration number (NPWP) 2024-05-04T00:05:59
Sah-SchemaBundle-Business-ID-SIM PERLANCAR Sah schemas related to Indonesian driving license number (nomor SIM) 2024-05-05T00:06:14
Salus LNATION The great new Salus! 2024-05-09T20:06:13
SortKey-Num-similarity_jaccard PERLANCAR Jaccard coefficient of a string to a reference string, as sort key 2024-05-30T00:05:29
Super-Powers LNATION The hiddden truth 2024-05-02T04:21:41
TableData-Business-ID-BPOM-NutritionLabelRef PERLANCAR Nutrients 2024-05-27T00:05:19
TableData-Health-Nutrient PERLANCAR Nutrients 2024-05-25T00:05:57
TableDataRole-Source-DBI PERLANCAR Role to access table data from DBI 2024-05-28T00:05:38
TableDataRole-Source-SQLite PERLANCAR Role to access table data from SQLite database table/query 2024-05-29T00:05:31
Tags-HTML-DefinitionList SKIM Tags helper for definition list. 2024-05-17T16:54:28
Tags-HTML-Navigation-Grid SKIM Tags helper for navigation grid. 2024-05-10T18:53:42
Tags-HTML-Tree SKIM Tags helper for Tree. 2024-05-01T16:50:02
Tk-DynaMouseWheelBind HANJE Wheel scroll panes filled with widgets 2024-05-20T18:53:11
URI-Shorten TEODESIAN Shorten URIs so that you don't have to rely on external services 2024-05-07T08:50:48
URI-Shortener TEODESIAN Shorten URIs so that you don't have to rely on external services 2024-05-07T15:53:48
Version-libversion-XS GDT Perl binding for libversion 2024-05-01T20:09:03
XDR-Parse EHUELS Parse XDR (eXternal Data Representation) definitions into an AST (Abstract Syntax Tree) 2024-05-17T13:01:54
e TIMKA The great new e! 2024-05-08T15:09:45
optional EXODIST Pragma to optionally load a module (or pick from a list of modules) and provide a constant and some tools for taking action depending on if it loaded or not. 2024-05-14T21:33:17
perl-libssh QGARNIER Support for the SSH protocol via libssh. 2024-05-28T08:35:27

Stats

Number of new CPAN distributions this period: 79

Number of authors releasing new CPAN distributions this period: 25

Authors by number of new CPAN distributions this period:

No Author Distributions
1 PERLANCAR 31
2 LNATION 8
3 SKIM 6
4 TYRRMINAL 4
5 HANJE 3
6 GBROWN 3
7 TEODESIAN 2
8 DJERIUS 2
9 RJBS 2
10 PLICEASE 2
11 GDT 2
12 BPS 1
13 YOSHIMASA 1
14 NAUTOFON 1
15 OLIVER 1
16 MARKOV 1
17 TONYC 1
18 LHRST 1
19 MBALLARIN 1
20 TIMKA 1
21 SVW 1
22 EXODIST 1
23 QGARNIER 1
24 JDEGUEST 1
25 EHUELS 1

Problem getting Perl command line arguments

Perl questions on StackOverflow

Published by Anonymous on Saturday 01 June 2024 00:01

I just started learning Perl. I attempted to make a simple calculator that takes input from the command line. Input: 5 * 10. Output: 50. But instead, it just prints 5. Here's the code:

#!/usr/bin/perl
use strict;
use warnings;
my $op = $ARGV[1];
my $outpt = eval("return $ARGV[0]"."$op"."$ARGV[2]");
print "$outpt"."\n";

Any advice would be appreciated.

I tried to input it all as one string instead, but that resulted in the terminal output "no matches found". How should I fix the error?
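The likely culprit is the shell, not Perl: an unquoted * is glob-expanded before the script ever sees it (and "no matches found" is typically zsh refusing to expand an unmatched glob). Here is a small sketch of my own (the calc.pl name is just an example) that takes the operator quoted or the whole expression as one string, and validates it before the eval:

#!/usr/bin/perl
# Run as either:  perl calc.pl 5 '*' 10   or   perl calc.pl '5 * 10'
# Quoting stops the shell from globbing '*' into filenames.
use strict;
use warnings;

my $expr = join ' ', @ARGV;

# Only allow a simple "number operator number" expression before eval-ing it.
die "Usage: $0 NUMBER OPERATOR NUMBER\n"
    unless $expr =~ m{^\s*-?\d+(?:\.\d+)?\s*[-+*/]\s*-?\d+(?:\.\d+)?\s*$};

my $result = eval $expr;
die $@ if $@;
print "$result\n";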

Paella. An interactive calendar application for the terminal

r/perl

Published by /u/saiftynet on Friday 31 May 2024 20:16

Displaying an image to the user

r/perl

Published by /u/sue_d_nymme on Friday 31 May 2024 19:55

Is it possible to display an image to the user, without loading all the trappings of a whole widget / event-loop environment like Prima, Tk, Wx, Win32::GUI, etc?

Specifically, I want something simple that I can execute in a BEGIN block to display a splash image to the user while the rest of the application compiles and initializes, which takes about 5-10 seconds. The program in question is a Perl Wx application running under MS Windows.


I am trying to use Perl's unpack logic on Unix and on Linux, but I see a huge difference when I iterate over the input binary string. I am not sure how to solve the issue, or whether it is platform related.

$binaryString = "\x00\x00\x00\x00";

So my code on Unix is:

$intval = unpack("i*", $binaryString);

This returns 0 as expected.

When I run the same on Linux, it gives a large value. To work around this, I read only the last byte:

unpack("i*", substr($binaryString,3,1))

This returns 0 as expected.

But if the Binary String contains the value

$binaryString = "\x00\x00\x89\x9d";

Unix returns 35229 but Linux returns only 157, since I am reading just the last character. But I want the same value, 35229, on Linux.
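A guess at the cause, with a small sketch of my own: "i*" decodes native-endian integers, so a big-endian Unix box and a little-endian Linux box read the same four bytes differently. Asking unpack for an explicitly big-endian value should give 35229 on both platforms:

use strict;
use warnings;

my $binaryString = "\x00\x00\x89\x9d";

my $native     = unpack "i*", $binaryString;   # depends on the machine's byte order
my $big_endian = unpack "N",  $binaryString;   # 35229 everywhere (unsigned 32-bit, big-endian)
my $signed_be  = unpack "l>", $binaryString;   # signed 32-bit, big-endian (Perl 5.10+)

print "native: $native  big-endian: $big_endian  signed: $signed_be\n";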

I have a bunch of old Perl scripts that use Net::SSH::Expect. When they get to the $ssh->login() stage they occasionally appear to fail on the password prompt. It will indicate a login failed and prompt for the password again.

I haven't been able to figure out why this started happening, but am wondering if there's a workaround. I'm not able to change to a different SSH module (like OpenSSH, etc.) at this time due to the large number of scripts that use Net::SSH::Expect.

If it is sending the first password attempt too quickly, is there a way to make sure it waits for the full prompt to appear?

If that 2nd password prompt appears, is there a way to get it to send the password again?

this is the relevant code:

my $ssh = Net::SSH::Expect->new (
            host => $deviceIpAddr,
            timeout => 3,
            log_stdout => 1,
            password=> $passwd,
            user => $username,
            ssh_option => '-o UserKnownHostsFile=/dev/null',
);
$login_output = $ssh->login();
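Without access to the devices, this is only a rough sketch of a possible workaround, using Net::SSH::Expect's send() and read_all() methods: raise the constructor timeout so a slow device has time to print the first prompt, and if login() still ends on another password prompt, send the password once more. The password-prompt regex is an assumption to adapt to your devices.

my $ssh = Net::SSH::Expect->new(
    host       => $deviceIpAddr,
    timeout    => 10,    # give slow devices longer than 3 seconds to prompt
    log_stdout => 1,
    password   => $passwd,
    user       => $username,
    ssh_option => '-o UserKnownHostsFile=/dev/null',
);

my $login_output = $ssh->login();

# If the output still ends with a password prompt, the first attempt was
# probably sent too early: resend the password and collect more output.
if ($login_output =~ /password\s*:\s*$/i) {
    $ssh->send($passwd);
    $login_output .= $ssh->read_all(5);
}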
perldelta: fix 'perlguts for PERL_RC_STACK' entry

The original entry was copied from 5.39.something's perldelta, and just
mentioned some more rpp_ functions being added to perlguts. But *all*
rpp_ functions are new to 5.40.0, not just those extra ones. So make the
entry in perldelta more generic.

I have a file (file.txt). Data gets appended to it every minute, so the file size keeps growing. What I want is that whenever the file size crosses 800 MB, lines get deleted from the top. How can this be done in Perl?
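There is no code attached to this question, so here is a rough sketch of one way to do it, under my own assumptions: keep roughly the newest 400 MB of complete lines, and assume the writer can tolerate the file being swapped out (add flock on both sides, or switch to log rotation, if it cannot):

#!/usr/bin/perl
# When file.txt grows past 800 MB, copy the newest ~400 MB of complete
# lines to a temp file and rename it over the original. The 400 MB "keep"
# size is my own choice; adjust to taste.
use strict;
use warnings;

my $file  = 'file.txt';
my $limit = 800 * 1024 * 1024;
my $keep  = 400 * 1024 * 1024;

if ( -s $file > $limit ) {
    open my $in,  '<', $file       or die "open $file: $!";
    open my $out, '>', "$file.tmp" or die "open $file.tmp: $!";

    seek $in, -$keep, 2 or die "seek: $!";   # 2 = SEEK_END
    <$in>;                                   # drop the partial first line
    print {$out} $_ while <$in>;             # copy the remaining lines

    close $in;
    close $out or die "close: $!";
    rename "$file.tmp", $file or die "rename: $!";
}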

document use importing builtin version bundles in perldelta

document builtin module as stable in perldelta

Perl commits on GitHub

Published by haarg on Friday 31 May 2024 05:40

document builtin module as stable in perldelta
document try/catch and multi-var for as stable in perldelta

normalize indentation in perldelta

Perl commits on GitHub

Published by haarg on Friday 31 May 2024 05:40

normalize indentation in perldelta

Get CPU usage of PID

r/perl

Published by /u/DemosaiDelacroix on Friday 31 May 2024 03:28

I am making a script with multiple threads, and one of the threads I wish to make is a "cpu usage monitoring thread" that checks both the overall CPU usage and the current script's CPU usage, or an external PID's CPU usage.

Then I can decide whether any of my threads need to sleep for the moment while the CPU is recovering from something (maybe it is executing something heavy), so my Perl script can adjust.

I want it to be as EFFICIENT and ACCURATE as possible. I don't want the Perl script itself to have high CPU usage if other apps are still doing heavy jobs. Cross-platform would be nice, but if a Windows solution is more efficient, please share it.

For now I can't find any good solution. :(

So I need:

  • Accurate and Efficient way to capture overall cpu usage (and maybe memory)
    • It should be cross-platform if possible; otherwise I need a Windows solution
  • Accurate and Efficient way to capture cpu usage of a PID

Here is a subroutine named thread_proc_monitor that inefficiently just checks the overall CPU usage (a per-PID sampling sketch follows after the code):

# https://perldoc.perl.org/threads::shared
use strict;
use warnings;
use threads;
use threads::shared;
use Time::HiRes qw(time);

# Multi-Threaded Sync of 3 functions: Output should be in order 1 -> 2 -> 3

# Shared global variables
our $global_counter     :shared = 0;
our $max_global_counter :shared = 50000;
our $global_lock        :shared = "UNLOCKED";
our $global_order       :shared = 1;
our $global_prev_order  :shared = $global_order + 1;
our $cpu_usage          :shared = 0;
our $sleep_time         :shared = 0;

# Thread subroutine
sub thread_function {
    my $subroutine_name = (caller(0))[3];
    my $order = shift;
    while ($global_counter < $max_global_counter) {
        thread_termination();
        if ($global_lock eq "UNLOCKED" && $global_order == $order) {
            $global_lock = "LOCKED";
            $global_counter++;
            if ($global_order != $global_prev_order) {
                print "GOOD-> CUR:$global_order PREV:$global_prev_order ";
            }
            else {
                die;
            }
            print "Thread $order ", threads->self->tid,
                ": Global counter = $global_counter\n";
            if ($global_order > 2) {
                $global_order = 1;
            }
            else {
                $global_prev_order = $global_order;
                $global_order++;
            }
            # Keep looping
            # if ($global_counter > 900) { $global_counter = 0; }
            $global_lock = "UNLOCKED";
        }
        my $actual_sleep_time = $sleep_time;
        my $start_time = time();
        # sleep $global_counter;
        sleep $sleep_time;
        my $end_time = time();
        my $duration = $end_time - $start_time;
        # print "sleep:[$actual_sleep_time] $duration seconds has passed...\n";
    }
    $global_lock = "RELEASED";
}

sub thread_proc_monitor {
    # Monitor overall CPU process usage, adjust accordingly
    while () {
        thread_termination();
        $cpu_usage = `wmic cpu get loadpercentage /format:value`;
        $cpu_usage =~ s/\n//g;
        (my $na, $cpu_usage) = split '=', $cpu_usage;
        sleep 1;
    }
}

sub thread_sleep_time {
    while () {
        thread_termination();
        if    ($cpu_usage < 10) { $sleep_time = 0.0; }
        elsif ($cpu_usage < 20) { $sleep_time = 0.5; }
        elsif ($cpu_usage < 30) { $sleep_time = 1.0; }
        elsif ($cpu_usage < 40) { $sleep_time = 2.5; }
        elsif ($cpu_usage < 50) { $sleep_time = 4; }
        else                    { $sleep_time = 5; }
        if ($cpu_usage >= 20) {
            print "Slowing down by " . $sleep_time . " seconds...\n";
        }
        sleep(1);
    }
}

sub thread_termination {
    if ($global_lock eq "RELEASED") {
        threads->exit(0);
    }
}

# Create three threads
my $thread1 = threads->create(\&thread_function, 1);
my $thread2 = threads->create(\&thread_function, 2);
my $thread3 = threads->create(\&thread_function, 3);
my $thread4 = threads->create(\&thread_proc_monitor, 4);
my $thread5 = threads->create(\&thread_sleep_time, 5);

# Wait for the threads to complete
$thread1->join();
$thread2->join();
$thread3->join();
$thread4->join();
$thread5->join();

# other notes:
# threads->exit();
# my $errno :shared = dualvar($!,$!);
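As a sketch of the per-PID half of the question, and only the Linux half (my own example, since the script above is Windows-centric; on Windows the same sampling idea could be driven by typeperf or a module such as Win32::Process::Info), the accumulated CPU ticks in /proc/<pid>/stat can be sampled twice and the delta turned into a percentage:

use strict;
use warnings;
use POSIX qw(sysconf _SC_CLK_TCK);
use Time::HiRes qw(sleep time);

# Read a process's accumulated CPU time (user + system) in clock ticks.
sub proc_cpu_ticks {
    my ($pid) = @_;
    open my $fh, '<', "/proc/$pid/stat" or die "no /proc entry for $pid: $!";
    my $line = <$fh>;
    $line =~ s/^.*\)\s*//;              # strip "pid (comm) "; comm may contain spaces
    my @field = split ' ', $line;
    return $field[11] + $field[12];     # utime + stime
}

# Sample twice and convert the tick delta into a CPU percentage.
sub pid_cpu_percent {
    my ($pid, $interval) = @_;
    $interval //= 1;
    my $hz     = sysconf(_SC_CLK_TCK);
    my $before = proc_cpu_ticks($pid);
    my $t0     = time;
    sleep $interval;
    my $delta  = proc_cpu_ticks($pid) - $before;
    return 100 * $delta / $hz / (time - $t0);
}

my $pid = shift(@ARGV) // $$;           # PID to watch; defaults to this script
printf "PID %d used %.1f%% CPU over the last second\n", $pid, pid_cpu_percent($pid);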

This week in PSC (149) | 2024-05-30

blogs.perl.org

Published by Perl Steering Council on Thursday 30 May 2024 21:22

This week it was just Paul and Philippe; we discussed the final changes for the upcoming RC2 and stable release, and marked some issues/PR as release blockers.

Graham expects to release 5.40-RC2 before the week-end.

Getting started with PERL

r/perl

Published by /u/SquareRaspberry3808 on Thursday 30 May 2024 16:51

Hey, I have a chance at getting an interview for a position (an internship) through a connection, and the position I was referred to said the job would mainly focus on PERL. How could I get ready for this interview? On my resume, I want to add a small section where I say I'm developing my PERL skills. I saw some basic tutorials for making simple calculators and whatnot. What could I do to get ready and impress my interviewers? Also, should I add these simple projects, like a calculator, to my GitHub just to show I have at least a little experience? If not, what other projects could I work on to develop my skills with PERL? I'd love any advice I could get, thanks!

Some background: I've only done Python and Java through my university and did a bit of webdev in my free time.


Maintaining Perl (Tony Cook) March 2024

Perl Foundation News

Published by alh on Thursday 30 May 2024 08:35


Tony writes:

[Hours]  [Activity]

2024/03/04 Monday
  0.10  #22061 review and approve
  0.15  #22054 review and approve
  0.70  #22059 review, testing and comments
  0.70  #22057 review changes, testing against App::Ack and Module::Info
  0.35  #21877 debugging
  1.18  #21877 debugging
  -----
  3.18

2024/03/05 Tuesday
  0.55  #22063 review, briefly comment
  -----
  0.55

2024/03/06 Wednesday
  0.10  #22063 review and approve
  0.55  #21261 research and comment
  0.08  #22052 review and briefly comment
  0.10  #21925 recheck, apply to blead
  0.57  #21686 review, work up a new patch and push for CI
  1.27  #21925 work on fixes for leftover warnings, research into strange gcc behaviour
  -----
  2.67

2024/03/07 Thursday
  0.15  #21686 review CI results, make PR 22065
  0.18  #22052 research, comment and approve
  1.55  #21925 commit a fix for the cast function warning, fix the wcscpy() restrict warning, testing and push for CI
  0.20  #21925 review CI results, minor commit message fix, make PR 22066
  1.47  #21881 testing, review code, comment, try to work out a fix
  -----
  3.55

2024/03/11 Monday
  2.20  #21261 try to reproduce i386 failure
  0.08  #21261 comment
  0.27  #22070 research and comment
  0.55  #22071 research, review and approve
  0.25  #22067 review code, research
  -----
  3.35

2024/03/12 Tuesday
  0.40  #22070 read discussion, comment
  0.80  #22068 work on fix (skip) for this and 22067
  0.17  #22073 review and approve
  0.15  #22075 review and approve
  0.30  #22068 testing, make PR 22076
  1.37  #21877 debugging
  -----
  3.19

2024/03/13 Wednesday
  0.15  #22024 review updates and approve
  0.75  #22076 fixes and testing
  1.27  #21877 debugging
  -----
  2.17

2024/03/14 Thursday
  0.22  #22062 comment
  0.08  #22078 review and approve
  1.48  #21877 debugging
  0.50  #21877 debugging
  -----
  2.28

2024/03/18 Monday
  2.27  #22082 read discussion, work on workaround for ASAN issue, testing, make PR 22084, some updates, review change and approve
  1.15  #22083 try to make blead fail based on the fix here
  0.47  #22084 squash, check CI runs to a minimal point, apply to blead, research
  1.33  #22083 make a crash this doesn't fix
  -----
  5.22

2024/03/19 Tuesday
  0.43  #22086 review and comment
  0.85  #22077 review and approve
  0.68  #22081 review, testing
  0.27  #22081 comment
  -----
  2.23

2024/03/20 Wednesday
  1.02  #22086 review latest changes and comment
  1.37  #22079 research, comment
  0.53  #22079 TODO tests on AFS that expect ENOTTY
  -----
  2.92

2024/03/21 Thursday
  2.43  #21981 find where the desync between PL_comppad and PL_curpad happens, look for a simpler test case, work up a fix and push for CI
  0.65  #21611 try profiling, some issues, doesn't look like it's threading
  2.03  #21611 get it profiling, review results, long comment
  -----
  5.11

2024/03/22 Friday
  0.10  rebase ASLR workaround revert and push for CI
  -----
  0.10

2024/03/25 Monday
  0.52  #22090 apply to blead, perldelta
  0.32  #22097 review, research and approve
  1.13  #22088 review, testing and comment
  1.92  #21877 debugging
  -----
  3.89

2024/03/26 Tuesday
  1.33  #21877 debugging
  1.35  #21877 debugging
  -----
  2.68

2024/03/27 Wednesday
  2.48  look over cygwin CI failures, try local test, reproduce, bisect down to the perl dll name change, research
  1.47  cygwin failure: more research, testing, create issue #22104
  -----
  3.95

2024/03/28 Thursday
  1.78  #22104 research, work on a workaround, testing, perldelta notes for the issue, push for CI
  0.18  #22104 look over CI failure and fix
  1.40  #22104 debug another issue (default static_ext being ignored), figure it out, testing, push again for CI
  -----
  3.36

2024/03/29 Friday
  0.55  mailing list, experimental features
  -----
  0.55

Which I calculate is 50.95 hours.

Approximately 34 tickets were reviewed or worked on, and 3 patches were applied.

Perl Weekly Challenge 271: Sort by 1 Bits

blogs.perl.org

Published by laurent_r on Tuesday 28 May 2024 22:23

These are some answers to the Week 271, Task 2, of the Perl Weekly Challenge organized by Mohammad S. Anwar.

Spoiler Alert: This weekly challenge deadline is due in a few days from now (on June 2, 2024 at 23:59). This blog post provides some solutions to this challenge. Please don’t read on if you intend to complete the challenge on your own.

Task 2: Sort by 1 Bits

You are given an array of integers, @ints.

Write a script to sort the integers in ascending order by the number of 1 bits in their binary representation. In case more than one integers have the same number of 1 bits then sort them in ascending order.

Example 1

Input: @ints = (0, 1, 2, 3, 4, 5, 6, 7, 8)
Output: (0, 1, 2, 4, 8, 3, 5, 6, 7)

0 = 0 one bits
1 = 1 one bits
2 = 1 one bits
4 = 1 one bits
8 = 1 one bits
3 = 2 one bits
5 = 2 one bits
6 = 2 one bits
7 = 3 one bits

Example 2

Input: @ints = (1024, 512, 256, 128, 64)
Output: (64, 128, 256, 512, 1024)

All integers in the given array have one 1-bits, so just sort them in ascending order.

Sort by 1 Bits in Raku

We first build an auxiliary bit weight subroutine (bit-w), which returns the number of 1's in the binary representation of the input integer. This is done by converting the input integer into its binary representation, using the base routine, splitting this binary representation into individual digits, and computing the sum of these digits.

We then simply sort the input array by bit weight or by value when the bit weights are equal.

sub bit-w($in) {
    # bit weight function: returns number of 1s in the
    # binary representation of the input integer
    return [+] $in.base(2).comb;
}
sub bit-sort (@test) {
    sort { bit-w($^a) cmp bit-w($^b) or $^a cmp $^b }, @test;
}

my @tests = (0, 1, 2, 3, 4, 5, 6, 7, 8), 
            (1024, 512, 256, 128, 64),
            (7, 23, 512, 256, 128, 64);
for @tests -> @test {
    printf "%-20s => ", "@test[]";
    say bit-sort @test;
}

This program displays the following output:

$ raku ./sort-1-bit.raku
0 1 2 3 4 5 6 7 8    => (0 1 2 4 8 3 5 6 7)
1024 512 256 128 64  => (64 128 256 512 1024)
7 23 512 256 128 64  => (64 128 256 512 7 23)

Note that the two subroutines each have only one code line. In fact, the implementation is so simple that we could compact it into a Raku one-liner (shown here over three lines for blog post formatting reasons):

$ raku -e 'my @in = say sort { [+] $^a.Int.base(2).comb
    cmp [+] $^b.Int.base(2).comb or $^a cmp $^b }, 
    @*ARGS'  0 1 2 3 4 5 6 7 8
(0 1 8 4 2 3 5 6 7)

But I would think that the original version with two subroutines is probably clearer.

Sort by 1 Bits in Perl

This is a port to Perl of the above Raku program. The only significant change is the use of a loop to compute the sum of the digits of the binary representation of the input integer.

use strict;
use warnings;
use feature 'say';

sub bit_w {
    # bit weight function: returns number of 1s in the
    # binary representation of the input integer
    my $out = 0;
    $out += $_ for split //, sprintf "%b", shift;
    return $out;
}
sub bit_sort {
    sort { bit_w($a) <=> bit_w($b) or $a <=> $b } @_;
}

my @tests = ( [0, 1, 2, 3, 4, 5, 6, 7, 8], 
              [1024, 512, 256, 128, 64],
              [7, 23, 512, 256, 128, 64] );
for my $test (@tests) {
    printf "%-20s => ", "@$test";
    say join " ", bit_sort @$test;
}

This program displays the following output:

$ perl ./sort-1-bit.pl
0 1 2 3 4 5 6 7 8    => 0 1 2 4 8 3 5 6 7
1024 512 256 128 64  => 64 128 256 512 1024
7 23 512 256 128 64  => 64 128 256 512 7 23
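A small aside of my own, not part of Laurent's post: Perl can also count the 1 bits without an explicit loop, because tr/// in scalar context returns how many characters it matched:

sub bit_w {
    my $bits = sprintf "%b", shift;
    return $bits =~ tr/1//;    # tr/// returns the count of matched characters
}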

Wrapping up

The next Perl Weekly Challenge will start soon. If you want to participate in this challenge, please check https://perlweeklychallenge.org/ and make sure you answer the challenge before 23:59 BST (British summer time) on June 9, 2024. And, please, also spread the word about the Perl Weekly Challenge if you can.

Perl Weekly Challenge 271: Maximum Ones

blogs.perl.org

Published by laurent_r on Tuesday 28 May 2024 22:11

These are some answers to the Week 271, Task 1, of the Perl Weekly Challenge organized by Mohammad S. Anwar.

Spoiler Alert: This weekly challenge deadline is due in a few days from now (on June 2, 2024 at 23:59). This blog post provides some solutions to this challenge. Please don’t read on if you intend to complete the challenge on your own.

Task 1: Maximum Ones

You are given a m x n binary matrix.

Write a script to return the row number containing maximum ones, in case of more than one rows then return smallest row number.

Example 1

Input: $matrix = [ [0, 1],
                   [1, 0],
                 ]
Output: 1

Row 1 and Row 2 have the same number of ones, so return row 1.

Example 2

Input: $matrix = [ [0, 0, 0],
                   [1, 0, 1],
                 ]
Output: 2

Row 2 has the maximum ones, so return row 2.

Example 3

Input: $matrix = [ [0, 0],
                   [1, 1],
                   [0, 0],
                 ]
Output: 2

Row 2 have the maximum ones, so return row 2.

Note that, in Perl, Raku, and most programming languages, array subscripts start at 0, so that the first row of a matrix would have index 0. Here, the task specification uses common sense row ranks rather than traditional array subscripts. So we will have to add one to the index found to return a common sense row rank.

Maximum Ones in Raku

Since input is a binary matrix, i.e. populated only with 0 and 1, to find the number of ones in a row, we can simply add the items of the row, which we can do with the sum method. We just need to iterate over the matrix rows and keep track of the index of the row with the largest sum.

sub maximum-ones (@mat) {
    my $max = 0; 
    my $max-i;
    for 0..@mat.end -> $i {
        my $sum = @mat[$i].sum;
        if $sum > $max {
            $max = $sum;
            $max-i = $i;
        }
    }
    return $max-i + 1;
}

my @tests = [ [0, 1], [1, 0] ],
            [ [0, 0, 0], [1, 0, 1] ],
            [ [0, 0], [1, 1], [0, 0] ];
for @tests -> @test {
    printf "%-20s => ", @test.gist;
    say maximum-ones @test;
}

This program displays the following output:

$ raku ./maximum-ones.raku
[[0 1] [1 0]]        => 1
[[0 0 0] [1 0 1]]    => 2
[[0 0] [1 1] [0 0]]  => 2

Maximum Ones in Perl

This is a port to Perl of the above Raku program. We iterate over the matrix rows, compute the sum of the row items, and keep track of the index of the row with the largest sum.

use strict;
use warnings;
use feature 'say';

sub maximum_ones {
    my @mat = @_;
    my $max = 0; 
    my $max_i;
    for my $i (0..$#mat) {
        my $sum = 0;
        $sum += $_ for @{$mat[$i]};
        if ($sum > $max) {
            $max = $sum;
            $max_i = $i;
        }
    }
    return $max_i + 1;
}

my @tests = ( [ [0, 1], [1, 0] ],
              [ [0, 0, 0], [1, 0, 1] ],
              [ [0, 0], [1, 1], [0, 0] ] );
for my $test (@tests) {
    printf "%-8s, %-8s, ... => ", 
        "[@{$test->[0]}]", "[@{$test->[1]}]";
    say maximum_ones @$test;
}

This program displays the following output:

$ perl ./maximum-ones.pl
[0 1]   , [1 0]   , ... => 1
[0 0 0] , [1 0 1] , ... => 2
[0 0]   , [1 1]   , ... => 2

Note that we display only the first two rows of each input test matrix.

Wrapping up

The next Perl Weekly Challenge will start soon. If you want to participate in this challenge, please check https://perlweeklychallenge.org/ and make sure you answer the challenge before 23:59 BST (British summer time) on June 9, 2024. And, please, also spread the word about the Perl Weekly Challenge if you can.

MariaDB 10 and SQL::Translator::Producer

blogs.perl.org

Published by russbrewer on Tuesday 28 May 2024 20:55

Following up on my previous post (MariaDB 10 and Perl DBIx::Class::Schema::Loader), I wanted to try the 'deploy' feature to create database tables from Schema/Result classes.

I was surprised that I could not create a table in the database when a timestamp field had a default of current_timestamp(). The problem was that the generated CREATE TABLE statement placed quotes around 'current_timestamp()', causing an error and a rejected entry.

As mentioned in a previous post, I had created the file SQL/Translator/Producer/MariaDB.pm as part of the effort to get MariaDB 10 clients to work correctly with DBIx::Class::Schema::Loader. Initially it was a clone of the MySQL.pm file with name substitutions. To correct the current_timestamp problem I added a search/replace in the existing create_field subroutine in the MariaDB.pm file to remove the quotes.

# current_timestamp (possibly as a default entry for a
# new record field) must not be quoted in the CREATE TABLE command
# provided to the database. Convert 'current_timestamp()'
# to current_timestamp() (no quotes) to prevent CREATE TABLE failure
if ( $field_def =~ /'current_timestamp\(\)'/ ) {
    $field_def =~ s/'current_timestamp\(\)'/current_timestamp\(\)/;
}

This entry is made just before the subroutine returns $field_def. Now $schema->deploy(); works correctly to create the entire database.

The code shown below was tested satisfactorily to generate CREATE TABLE output (on a per table or multi-table basis) suitable for exporting (using tables Task and User as example table names):

my $schema = db_connect();

my $trans  = SQL::Translator->new (
     parser      => 'SQL::Translator::Parser::DBIx::Class',
     quote_identifiers => 1,
     parser_args => {
         dbic_schema => $schema,
         add_fk_index => 1,
         sources => [qw/
           Task User
         /],
     },
     producer    => 'MariaDB',
    ) or die SQL::Translator->error;

my $out = $trans->translate() or die $trans->error;

I believe the SQL/Translator/Producer/MySQL.pm file would benefit from this same code addition but I have not tested using a MySQL database and DBD::mysql.

Perl Weekly #670 - Conference Season ...

dev.to #perl

Published by Gabor Szabo on Monday 27 May 2024 05:29

Originally published at Perl Weekly 670

Hi there,

Are you a regular at Perl conferences?

If so, then you have two upcoming conferences: The Perl and Raku Conference in Las Vegas and the London Perl and Raku Conference. Depending on your availability and convenience, I would highly recommend you register your interest in your choice(s) of conference. And if you are attending, do take the plunge and give your first talk if you have not done so before. It doesn't have to be a long talk; you can try a quick 5-minute lightning talk to begin with.

How about becoming a sponsor of a conference?

Believe it or not, it is vital that we provide financial support in the form of sponsorship. So if you know someone who is in a position to support these events, please do share the TPRC 2024 Sponsors and LPW 2024 Sponsors pages with them. It would be a big help in organising such events.

Keynote speakers for TPRC 2024...

I came across this post by Curtis Poe where it is announced that Curtis is going to be a keynote speaker at the event. There is a bonus for everyone attending the event: Damian Conway will be giving a keynote remotely. I am sure it is going to be a memorable moment to celebrate the 25th anniversary. Similarly, the London Perl Workshop will be celebrating its 20th anniversary this year. I wanted to attend TPRC 2024 in Las Vegas but for personal reasons I am unable to. What a shame, but at least I am definitely going to be part of LPW 2024, as it is local to me. No need to book a travel ticket or reserve a hotel room.

How many of you know about Pull Request Club?

The Pull Request Club is run by Kivanc Yazan. It started in January 2019, and I had the pleasure of being associated with it from the beginning. I never missed an assignment until my last contribution in January 2022; unfortunately, I have faced too many distractions and missed the fun ever since. I found this annual report by the creator himself. If you like contributing to open-source projects, you should join the club and have fun.

For all cricket fans in India, did you watch the final of IPL 2024? I did, and I was happy to see my favourite team, Kolkata Knight Riders, lifting the trophy. SRH, the losing team, was a favourite of mine too, but they didn't play to their capability. I am now looking forward to the T20 World Cup. How about you?

Today is a Bank Holiday in England, so a relaxing day for me. Enjoy the rest of the newsletter. Last but not least, please do look after yourself and your loved ones.

--
Your editor: Mohammad Sajid Anwar.

Sponsors

Getting started with Docker for Perl developers (Free Virtual Workshop)

In this virtual workshop you will learn why and how to use Docker for development and deployment of applications written in Perl. The workshop is free of charge thanks to my supporters via Patreon and GitHub

Announcements

Being a Keynote Speaker

The TPRC 2024 keynote speaker has been announced. I am jealous of those able to attend the event.

Articles

Pull Request Club 2021-2023 Report

Finally we have the long-awaited annual report of the Pull Request Club. Happy to see it is growing so fast. Congratulations to all contributors.

Deploying Dancer Apps

Being a fan of the Dancer2 framework, I found this blog post very informative, with plenty of handy tricks.

Perl Toolchain Summit 2024 in Lisbon

It is always a pleasure to read a success story from PTS 2024. Here we have another such report, from Kenichi. Thanks for sharing it with us. It proves the point that Perl is in safe hands.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 271

Welcome to a new week with a couple of fun tasks: "Maximum Ones" and "Sort by 1 Bits". If you are new to the weekly challenge, why not join us and have fun every week? For more information, please read the FAQ.

RECAP - The Weekly Challenge - 270

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Special Positions" and "Equalize Array" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

Distribute Positions

Don't you love a pictorial representation of an algorithm? It makes it so much fun to follow the discussion. Highly recommended.

When A Decision Must Be Made

Labelled loops are not very popular among Perl fans, but in certain situations they can be very handy. Check out the reasoning in the post.

Special Levels

Classic use case of PDL, very impressive. Thanks for sharing the knowledge.

Perl Weekly Challenge 270: Special Positions

As always, we get to see a touch of Raku-style implementation in Perl. This is the beauty of this post every week; you don't want to skip it.

no passion this week!

Compact solutions using the power of Raku are on show. Keep up the great work.

Perl Weekly Challenge 270

I am not sure I have seen Luis use PDL before; I may be wrong. For me, it is encouraging to see the wide use of PDL. Keep up the great work.

Hidden loops. Or no loops at all.

This is truly incredible work: no loops at all. I would suggest you take a closer look. Thanks for sharing.

Lonely ones and equalities

Well-documented and well-crafted solutions in Perl, and on top of that you get to play with them. Well done, and keep up the great work.

The Weekly Challenge - 270: Special Positions

Clever use of the CPAN module Math::Matrix. I always encourage the use of CPAN. Well done.

The Weekly Challenge - 270: Equalize Array

An interesting way of tackling the use cases. It is fun getting into the finer details. Thanks for sharing.

The Weekly Challenge #270

Just one solution this week, with the typical one-line analysis. Keep up the great work.

Special Distribtions Position the Elements

The discussion of the solution in Crystal is the highlight for me. It looks easy and readable even though I know nothing about the Crystal language. Highly recommended.

Equalizing positions

For Python fans, the post is always dedicated to Python only, but we do receive Perl solutions too. I really enjoyed the compact solution in Python, especially the return list type, which I never knew about before. Thanks for sharing.

Rakudo

2024.21 Curry Primed

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
StackOverflow Perl report.

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

The Weekly Challenge - Perl & Raku

The Weekly Challenge

Published on Monday 27 May 2024 02:11

The page you are looking for was moved, removed, renamed or might never have existed.

Perl

The Weekly Challenge

Published on Monday 27 May 2024 02:11

TABLE OF CONTENTS 01. HEADLINES 02. STAR CONTRIBUTORS 03. CONTRIBUTION STATS 04. GUESTS 05. LANGUAGES 06. CENTURION CLUB 07. DAMIAN CONWAY’s CORNER 08. ANDREW SHITOV’s CORNER 09. PERL SOLUTIONS 10. RAKU SOLUTIONS 11. PERL & RAKU SOLUTIONS HEADLINES Thank you Team PWC for your continuous support and encouragement. STAR CONTRIBUTORS Following members shared solutions to both tasks in Perl and Raku as well as blogged about it.

Blog

The Weekly Challenge

Published on Monday 27 May 2024 02:11

TABLE OF CONTENTS 01. HEADLINES 02. STAR CONTRIBUTORS 03. CONTRIBUTION STATS 04. GUESTS 05. LANGUAGES 06. CENTURION CLUB 07. DAMIAN CONWAY’s CORNER 08. ANDREW SHITOV’s CORNER 09. PERL SOLUTIONS 10. RAKU SOLUTIONS 11. PERL & RAKU SOLUTIONS HEADLINES Thank you Team PWC for your continuous support and encouragement. STAR CONTRIBUTORS Following members shared solutions to both tasks in Perl and Raku as well as blogged about it.

Prolog

The Weekly Challenge

Published on Monday 27 May 2024 02:11

As you know, The Weekly Challenge, primarily focus on Perl and Raku. During the Week #018, we received solutions to The Weekly Challenge - 018 by Orestis Zekai in Python. It was pleasant surprise to receive solutions in something other than Perl and Raku. Ever since regular team members also started contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.

Ruby

The Weekly Challenge

Published on Monday 27 May 2024 02:11

As you know, The Weekly Challenge, primarily focus on Perl and Raku. During the Week #018, we received solutions to The Weekly Challenge - 018 by Orestis Zekai in Python. It was pleasant surprise to receive solutions in something other than Perl and Raku. Ever since regular team members also started contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.

Perl Toolchain Summit 2024 in Lisbon

blogs.perl.org

Published by Kenichi Ishigaki on Sunday 26 May 2024 18:14

Last year at the Perl Toolchain Summit (PTS) in Lyon, I left three draft pull requests: one about the class declaration introduced in Perl 5.37, one about the PAUSE on docker, and one about multifactor authentication. I wanted to brush them up and ask Andreas König to merge some, but which should I prioritize this year?

I focused on the web UI in the past because other people tended to deal with the PAUSE backend, especially its indexer. But this year, when I was able to start thinking about my plan, Ricardo Signes and Matthew Horsfall had already expressed their plan about migrating the PAUSE to a new server. I was unsure if they would use my docker stuff, but I could safely guess I didn't need to touch it. I also thought we wouldn't have time to finish the multifactor authentication because it would need to change the PAUSE itself and the uploader clients, and Ricardo maintained the most favorite uploader module. The change for the new class detection was simple, but that didn't mean the result would also be predictable. I decided to investigate how the 02packages index would change first.

I needed to find a way to rebuild the index from scratch to see the differences. I wrote a script to gather author information from a CPAN mirror and filled the PAUSE's user-related tables with dummy data. I wrote another script to register my distributions in the mirror to my local PAUSE. The PAUSE would complain if I registered an older distribution after a newer one, so I had to gather all the information about my distributions and sort them by creation time. It seemed fine now, but it soon started hanging up when I increased the number of the distributions to register. The PAUSE daemon spawned too many child indexer processes and ate up all the memory I allocated to a virtual machine. After several trials and errors, I limited the number of child processes with Parallel::Runner, which I used for the CPANTS for years. Even if it weren't acceptable to Andreas for some reason, it should be easy to ask for the author's help because he (Chad Granum) would be at the PTS. I also had to fix a deadlock in the database due to the lack of proper indices. Matthew had already made a pull request last year, but I misread it and fixed the issue in a different (and inefficient) way.

Now that the script ran without hanging, I compared the generated 02packages index with the one in the mirror. I found more than four thousand lines of difference. I modified my local PAUSE clone to see why that happened. It looked like most of them were removed due to historical changes in the indexing policy, but instead of digging into it further, I decided to use what I got as a reference point and started changing the indexer. After several comparisons, I modified my local indexer to take care of the byte order mark and let it look for class declarations only when a few "use" statements were found. I applied the same changes to my Parse::PMFile module and made two releases before the PTS.

Day 1 of the PTS in Lisbon started with a discussion of the PAUSE migration. While the migration team was preparing, I asked Andreas to merge some of the existing small pull requests. The first one was to replace Travis CI with GitHub Actions by Ricardo. Unfortunately, it turned out that Test::mysqld and App::yath didn't work well in the GitHub Actions environment. I asked Chad for advice, but we couldn't make it work, so I tweaked the workflow file to use the good old "prove" command. The second was to improve password generation using Crypt::URandom by Leon Timmermans. I made another pull request to add it to the cpanfile for GitHub Actions. It might be better to modify our Makefile.PL to use ExtUtils::MakeMaker::CPANfile so that we wouldn't need to maintain both cpanfile and Makefile.PL. Maybe next time.

After dealing with a few more issues and pull requests, we moved on to class detection. As a starter, I asked Andreas to merge a years-old pull request by Ricardo to make the package detection stricter and then a pull request about the BOM I made. We discussed whether we could ignore class declarations by older modules such as MooseX::Declare. With Andreas' nod, I made another pull request and asked Ricardo and Matthew to review it.

I started day two by adding tests about the class detection with Module::Faker. I made another pull request to create a new 08pumpking index per Graham Knop's request, which MetaCPAN would eventually use. After merging them and a few more pull requests, I recreated a draft pull request on the multifactor authentication with pieces I couldn't implement last year (such as recovery codes). We also discussed the deadlock issue. In the end Andreas chose my pull request plus a commit from the one by Matthew. I was sorry we encountered a disk shortage while adding indices. Robert Spier helped us and optimized the database. By the end of the day, we had a few more pull requests merged, including the one for Parallel::Runner, with the help of Chad.

Day 3 was Deployment day. The migration team was busy, and there was no room for other stuff. I walked through the open issues, replied to some, and made a few small pull requests, hoping to revisit them in the future.

On day 4, I spent some time trying to figure out why uploading a large file to the new server didn't work, but in vain. I also attended a discussion about future PAUSE development. It would be nice to see the development continue after the offline event.

Many thanks to Breno Oliveira, Philippe Bruhat, and Laurent Boivin for organizing this event again and to our generous sponsors.

Monetary sponsors: Booking.com, The Perl and Raku Foundation, Deriv, cPanel, Inc., Japan Perl Association, Perl-Services, Simplelists Ltd, Ctrl O Ltd, Findus Internet-OPAC, Harald Joerg, Steven Schubiger.

In-kind sponsors: Fastmail, Grant Street Group, Deft, Procura, Healex GmbH, SUSE, Zoopla.

Equalizing positions

dev.to #perl

Published by Simon Green on Sunday 26 May 2024 12:29

Weekly Challenge 270

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Special Positions

Task

You are given a m x n binary matrix.

Write a script to return the number of special positions in the given binary matrix.

A position (i, j) is called special if $matrix[i][j] == 1 and all other elements in the row i and column j are 0.

My solution

For the input from the command line, I take a JSON string and convert that into a list of lists of integers.

This is a break down of the steps I take to complete the task.

  1. Set the special_position value to 0.
  2. Set rows and cols to the number of rows and columns in the matrix
  3. Create two lists (arrays in Perl) called row_count and col_count with zeros for the number of rows and columns respectively.
  4. Loop through each row and each column in the matrix. If the value is 1, increment the row_count for the row and col_count for the column by one. I also check that the number of items in this row is the same as the number of items in the first row.
  5. Loop through each row and each column in the matrix. If the value at that position is 1 and the row_count for the row is 1 (this would indicate that the other elements in the row are 0) and the col_count is 1, add one to the special_position variable.
  6. Return the special_position value.
def special_positions(matrix: list) -> int:
    rows = len(matrix)
    cols = len(matrix[0])
    special_position = 0

    row_count = [0] * rows
    col_count = [0] * cols

    for row in range(rows):
        if len(matrix[row]) != cols:
            raise ValueError("Row %s has the wrong number of columns", row)

        for col in range(cols):
            if matrix[row][col]:
                row_count[row] += 1
                col_count[col] += 1

    for row in range(rows):
        for col in range(cols):
            if matrix[row][col] and row_count[row] == 1 and col_count[col] == 1:
                special_position += 1

    return special_position

Examples

$ ./ch-1.py "[[1, 0, 0],[0, 0, 1],[1, 0, 0]]"
1

$ ./ch-1.py "[[1, 0, 0],[0, 1, 0],[0, 0, 1]]"
3
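The post mentions that the solutions are converted to Perl but shows only the Python, so here is a rough Perl rendering of the same counting approach (my own conversion, not Simon's original code):

use strict;
use warnings;

sub special_positions {
    my @matrix = @_;
    my $rows   = @matrix;
    my $cols   = @{ $matrix[0] };
    my @row_count = (0) x $rows;
    my @col_count = (0) x $cols;

    for my $r ( 0 .. $rows - 1 ) {
        die "Row $r has the wrong number of columns\n"
            unless @{ $matrix[$r] } == $cols;
        for my $c ( 0 .. $cols - 1 ) {
            next unless $matrix[$r][$c];
            $row_count[$r]++;
            $col_count[$c]++;
        }
    }

    my $special = 0;
    for my $r ( 0 .. $rows - 1 ) {
        for my $c ( 0 .. $cols - 1 ) {
            $special++
                if $matrix[$r][$c]
                && $row_count[$r] == 1
                && $col_count[$c] == 1;
        }
    }
    return $special;
}

print special_positions( [ 1, 0, 0 ], [ 0, 0, 1 ], [ 1, 0, 0 ] ), "\n";    # 1
print special_positions( [ 1, 0, 0 ], [ 0, 1, 0 ], [ 0, 0, 1 ] ), "\n";    # 3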

Task 2: Equalize Array

Task

You are give an array of integers, @ints and two integers, $x and $y.

Write a script to execute one of the two options:

  • Level 1: Pick an index i of the given array and do $ints[i] += 1.
  • Level 2: Pick two different indices i,j and do $ints[i] +=1 and $ints[j] += 1.

You are allowed to perform as many levels as you want to make every elements in the given array equal. There is cost attach for each level, for Level 1, the cost is $x and $y for Level 2.

In the end return the minimum cost to get the work done.

Known issue

Before I write about my solution, it will return the expected results for the two examples, but will not always give the minimum score.

For the array (4, 4, 2) with $x of 10 and $y of 1, it will return 20 (perform level 1 on the third value twice). However if you perform level 2 on the first and third value (5, 4, 3), and then on the second and third value (5, 5, 4), and finally level 1 on the last value (5, 5, 5), you'd get a score of 12.

File a bug in Bugzilla, Jira or Github, and we'll fix it later :P

My solution

For input from the command line, I take the last two values to be x and y, and the rest of the input to be ints.

The first step I take is to flip the array to be the number needed to reach the target value (maximum of the values).

def equalize_array(ints: list, x: int, y: int) -> str:
    score = 0
    # Calculate the needed values
    max_value = max(ints)
    needed = [max_value - i for i in ints]

I then perform level two only if y is less than twice the value of x. If it isn't, then I will always get the same or a lower score by performing level one on each value.

For level two, I sort the indexes (not values) of the needed list by their value, with the highest value first. If the second highest value is 0, it means there are no more level-two operations to perform, and I exit the loop. Otherwise I take one off the top two values in the needed array and continue until the second highest value is 0. For each iteration, I add y to the score value.

    if len(ints) > 1 and y < x * 2:
        while True:
            sorted_index = sorted(
                range(len(ints)),
                key=lambda index: needed[index],
                reverse=True
            )

            if needed[sorted_index[1]] == 0:
                break

            needed[sorted_index[0]] -= 1
            needed[sorted_index[1]] -= 1
            score += y

Finally, my code performs the Level One operation. As level one takes one off each needed number, I simply multiply the sum of the remaining needed values by the x value and add it to score. I then return the value of the score variable.

    score += sum(needed) * x
    return score

Examples

$ ./ch-2.py 4 1 3 2
9

$ ./ch-2.py 2 3 3 3 5 2 1
6

(cdxcvii) 8 great CPAN modules released last week

Niceperl

Published by Unknown on Saturday 25 May 2024 22:05

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
    • Version: 2.413 on 2024-05-23, with 14 votes
    • Previous CPAN version: 2.410 was 19 days before
    • Author: KUERBIS
  2. App::Netdisco - An open source web-based network management tool.
    • Version: 2.076005 on 2024-05-20, with 16 votes
    • Previous CPAN version: 2.076004 was 17 days before
    • Author: OLIVER
  3. Devel::CheckOS - a script to package Devel::AssertOS modules with your code.
    • Version: 2.04 on 2024-05-22, with 17 votes
    • Previous CPAN version: 2.02 was 7 days before
    • Author: DCANTRELL
  4. Dist::Zilla - distribution builder; installer not included!
    • Version: 6.032 on 2024-05-25, with 184 votes
    • Previous CPAN version: 6.031 was 6 months, 4 days before
    • Author: RJBS
  5. MCE - Many-Core Engine for Perl providing parallel processing capabilities
    • Version: 1.890 on 2024-05-24, with 103 votes
    • Previous CPAN version: 1.889 was 8 months, 11 days before
    • Author: MARIOROY
  6. MCE::Shared - MCE extension for sharing data supporting threads and processes
    • Version: 1.887 on 2024-05-24, with 15 votes
    • Previous CPAN version: 1.886 was 8 months, 11 days before
    • Author: MARIOROY
  7. Minion::Backend::mysql - MySQL backend
    • Version: 1.006 on 2024-05-22, with 13 votes
    • Previous CPAN version: 1.005 was 16 days before
    • Author: PREACTION
  8. Object::Remote - Call methods on objects in other processes or on other hosts
    • Version: 0.004004 on 2024-05-23, with 20 votes
    • Previous CPAN version: 0.004001 was 4 years, 5 months, 26 days before
    • Author: HAARG

I wrote some code to use the 1Password CLI

rjbs forgot what he was saying

Published by Ricardo Signes on Saturday 25 May 2024 12:00

Every time I store an API token in a plaintext file or an environment variable, it creates a lingering annoyance that follows me around wherever I go. Every year or two, another one of these lands on the pile. I am finally working on purging them all. I’m doing it with the 1Password CLI, and so far so good.

op

1Password’s CLI is op, which lets you do many, many different things. I was only concerned with two of them: it lets you read single fields from the vault, and it lets you read entire items. For example, take this login:

a screenshot of my Pobox login

You can see there are a bunch of fields, like username and password and website. You can fetch all of them or just one. It’s a little weird, but it’s much easier to get a locator for one field than for the whole item. If you click the “Copy Secret Reference” option, you’ll get something like this on your clipboard:

"op://rjbs/Pobox/password"

You can pass that URL to op read and it will print out the value of the field. Here, that’s the password. Getting one field at a time can be useful if you only need to retrieve a password or TOTP secret or API token. Often, though, you’ll want to get the whole login at once. It would mean you could just store the item’s id rather than a cleartext username and a reference to the password field. Or worse, a reference to the password field and another one to the TOTP field. Also, since each field needs to be retrieved separately with op read, it means more external processes and more possibility of weird errors.
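
As a rough illustration (not code from this post), reading a single field from Perl is just a matter of shelling out to op read, one process per field, which is exactly the cost described above. This assumes the op CLI is installed and you’re signed in:

use strict;
use warnings;

# Hypothetical helper: run `op read` for one secret reference and
# return the printed value. One external process per field.
sub read_op_field {
    my ($reference) = @_;    # e.g. "op://rjbs/Pobox/password"
    open my $fh, '-|', 'op', 'read', $reference
        or die "can't run op: $!";
    my $value = <$fh>;
    close $fh or die "op read failed for $reference";
    chomp $value if defined $value;
    return $value;
}

my $password = read_op_field('op://rjbs/Pobox/password');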

The op item get command can fetch an entire item with all its fields. It can spit the whole item out as JSON. Here’s a limited subset of such a document:

{
  "fields": [
    {
      "id": "password",
      "type": "CONCEALED",
      "purpose": "PASSWORD",
      "label": "password",
      "value": "eatmorescrapple",
      "reference": "op://rjbs/Pobox/password",
      "password_details": {
        "strength": "DELICIOUS"
      }
    }
  ]
}
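
If you wanted to script against that JSON directly, rather than use the module described below, a minimal sketch might look like this. It assumes a recent op CLI that accepts --format json, and uses the example item id from this post:

use strict;
use warnings;
use JSON::PP qw(decode_json);

# Fetch an item as JSON and pull out the field whose id is "password".
my $item_id = '7wdr3xyzzym2xgorp4zx22zq3h';
my $json    = qx{op item get $item_id --format json};
die "op item get failed\n" if $? != 0;

my $item = decode_json($json);
my ($pw_field) = grep { ($_->{id} // '') eq 'password' } @{ $item->{fields} || [] };
print $pw_field->{value}, "\n" if $pw_field;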

Unfortunately, 1Password doesn’t make it trivial to get the argument you need to pass to op item get, but it’s not really hard. You can use “Copy Private Link”, which will get you a URL something like this (line breaks introduced by me):

https://start.1password.com/open/i?a=XB4AE5Q2ESODUTKETZB3BQGCM4
    &v=flk3x357inyiw22qpoiubhsgin
    &i=7wdr3xyzzym2xgorp4zx22zq3h
    &h=example.1password.com

The i= parameter is the item’s id. You can use that as the argument to op item get. Alternatively, given a URL like op://rjbs/Pobox/password, you can extract the vault name (“rjbs”) and the item name (“Pobox”) and pass those as separate parameters that will be used to search for the item.
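
For completeness, here is one way you might pull that i= parameter out of a private link with the URI module (a small sketch, using the example URL above):

use strict;
use warnings;
use URI;

# The "Copy Private Link" URL from above; only the i= parameter matters here.
my $link = 'https://start.1password.com/open/i?a=XB4AE5Q2ESODUTKETZB3BQGCM4'
         . '&v=flk3x357inyiw22qpoiubhsgin'
         . '&i=7wdr3xyzzym2xgorp4zx22zq3h'
         . '&h=example.1password.com';

my %query   = URI->new($link)->query_form;   # key/value pairs from the query string
my $item_id = $query{i};                     # pass this to `op item get`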

But why do either? You can just use Password::OnePassword::OPCLI!

Password::OnePassword::OPCLI

Here are two tiny examples of its use:

my $one_pw = Password::OnePassword::OPCLI->new;

# Get the string found in one field in your 1Password storage:
my $string = $one_pw->get_field("op://rjbs/Pobox/password");

# Get the complete document for an item, as a hashref:
my $pw_item = $one_pw->get_item("7wdr3xyzzym2xgorp4zx22zq3h");

Hopefully by now you can imagine what this is all doing. get_item returns the data structure that you’d get from op item get. You can look at its fields entry and find what you need. It does have one other trick worth mentioning. Because it’s a bit annoying to get the unique identifier for an item, you can pass one of those op:// URLs, dropping off the field name, like this:

# Get the complete document for an item, as a hashref:
my $pw_item = $one_pw->get_item("op://rjbs/Pobox");

I’m currently imagining a world where I stick those URLs in place of API tokens and make my software smart enough to know that if it’s given an API token string that starts with op://, it should treat it as a 1Password reference. I haven’t implemented everything I need for that, but I did write something to use this with Dist::Zilla.
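
The check itself could be tiny. A minimal sketch of that idea (purely illustrative, not anything shipped; the environment variable name is made up):

use strict;
use warnings;
use Password::OnePassword::OPCLI;

# If a configured secret looks like a 1Password reference, resolve it;
# otherwise use the literal value.
sub resolve_secret {
    my ($value) = @_;
    return $value unless defined $value && $value =~ m{\Aop://};
    return Password::OnePassword::OPCLI->new->get_field($value);
}

my $api_token = resolve_secret($ENV{MY_SERVICE_TOKEN} // '');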

Dist::Zilla and 1Password

The first thing I wanted to use all this for was my PAUSE password. Unfortunately for me, this was sort of complicated. Or, if not complicated, just tedious. I made a few false starts, but I’m just going to describe the one that I’m running with.

Dist::Zilla is the tool I use (and wrote) for making releases of my CPAN distributions. It’s usually configured with an INI file, like this one:

name    = Test-BinaryData
author  = Ricardo Signes <cpan@semiotic.systems>
license = Perl_5
copyright_holder = Ricardo Signes
copyright_year   = 2010

[@RJBS]
perl-window = long-term

Each section (the things in [...]) is a plugin of some sort. If the name starts with an @ it’s a bundle of plugins instead. But there’s another less commonly seen sigil for plugins: %. A percent sign means that the thing being loaded isn’t a plugin but a stash, which holds data for other plugins to use. These will more often be in ~/.dzil/config.ini than in each project.

The UploadToCPAN plugin, which actually uploads tarballs to the CPAN, looks in a few places for your credentials:

  • the %PAUSE stash (or another stash of your choosing)
  • ~/.pause, where CPAN::Uploader usually puts these credentials
  • user input when prompted

The %PAUSE stash was slightly overspecified in the code. It had to be a bit of configuration with the username and password given as text. What I did was relax that so that any stash implementing the (long-existing) Login role could be used. Then I wrote a new implementation of that role, Dist::Zilla::Stash::OnePasswordLogin. In that version of the stash, you only need to provide an item locator, and it will look up the username and password just in time. So, I have something like this in my global config now:

[%OnePasswordLogin / %PAUSE]
item = op://rjbs/PAUSE

Who cares if somebody steals this URL? They can’t read the credential unless I authenticate with 1Password at the time of reading. Putting other login credentials into your configuration for other plugins is similarly safe. Now, when I run dzil release, at the end I’m prompted to touch the fingerprint scanner to finish releasing. Not only is it more secure, but it feels very slightly like I’m in some kind of futuristic hacker movie.

What more could I want from my life as a computer programmer?

From Huh to Hero: Demystifying Perl in Two Easy Lessons (Part 2)

Perl on Medium

Published by Chaitanya Agrawal on Friday 24 May 2024 19:32

You’ve conquered the Perl basics in Part 1, but the adventure continues! In Part 2, we’ll delve deeper into the world of Perl, equipping…

Deploying Dancer Apps

perl.com

Published on Friday 24 May 2024 18:25

This article was originally published at Perl Hacks.


Over the last week or so, as a background task, I’ve been moving domains from an old server to a newer and rather cheaper server. As part of this work, I’ve been standardising the way I deploy web apps on the new server and I thought it might be interesting to share the approach I’m using and talking about a couple of CPAN modules that are making my life easier.

As an example, let’s take my Klortho app. It dispenses useful (but random) programming advice. It’s a Dancer2 app that I wrote many years ago and have been lightly poking at occasionally since then. The code is on GitHub and it’s currently running at klortho.perlhacks.com. It’s a simple app that doesn’t need a database, a cache or anything other than the Perl code.

Dancer apps are all built on PSGI, so they have all of the deployment flexibility you get with any PSGI app. You can take exactly the same code and run it as a CGI program, a mod_perl handler, a FastCGI program or as a stand-alone service running behind a proxy server. That last option is my favourite, so that’s what I’ll be talking about here.

Starting a service daemon for a PSGI app is simple enough – just running “plackup app.psgi” is all you really need. But you probably won’t get a particularly useful service daemon out of that. For example, you’ll probably get a non-forking server that will only respond to a single request at a time. It’ll be good enough for testing, but you’ll want something more robust for production. So you’ll want to tell “plackup” to use Starman or something like that.  And you’ll want other options to tell the service which port to run on. You’ll end up with a quite complex start-up command line to start the server. So, if you’re anything like me, you’ll put that all in a script which gets added to the code repo.
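
As an illustration only (this is not the script from the Klortho repo; the worker count and port are example values), such a hand-rolled wrapper might be little more than an exec of Starman:

#!/usr/bin/env perl
use strict;
use warnings;

# Example values only; a real script would read these from configuration.
exec 'starman', '--workers', '10', '--listen', ':9999', 'app.psgi'
    or die "couldn't exec starman: $!";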

But it’s still all a bit amateur. Linux has a flexible and sophisticated framework for starting and stopping service daemons. We should probably look into using that instead. And that’s where my first module recommendation comes into play – Daemon::Control. Daemon::Control makes it easy to create service daemon control scripts that fit in with the standard Linux way of doing things. For example, my Klortho repo contains a file called klortho_service which looks like this:

#!/usr/bin/env perl

use warnings;
use strict;
use Daemon::Control;

use ENV::Util -load_dotenv;

use Cwd qw(abs_path);
use File::Basename;

Daemon::Control->new({
  name      => ucfirst lc $ENV{KLORTHO_APP_NAME},
  lsb_start => '$syslog $remote_fs',
  lsb_stop  => '$syslog',
  lsb_sdesc => 'Advice from Klortho',
  lsb_desc  => 'Klortho knows programming. Listen to Klortho',
  path      => abs_path($0),

  program      => '/usr/bin/starman',
  program_args => [ '--workers', 10, '-l', ":$ENV{KLORTHO_APP_PORT}",
                    dirname(abs_path($0)) . '/app.psgi' ],

  user  => $ENV{KLORTHO_OWNER},
  group => $ENV{KLORTHO_GROUP},

  pid_file    => "/var/run/$ENV{KLORTHO_APP_NAME}.pid",
  stderr_file => "$ENV{KLORTHO_LOG_DIR}/error.log",
  stdout_file => "$ENV{KLORTHO_LOG_DIR}/output.log",

  fork => 2,
})->run;

This code takes my hacked-together service start script and raises it to another level. We now have a program that works the same way as other daemon control programs like “apachectl” that you might have used. It takes command line arguments, so you can start and stop the service (with “klortho_service start”, “klortho_service stop” and “klortho_service restart”) and query whether or not the service is running with “klortho_service status”. There are several other options, which you can see with “klortho_service status”. Notice that it also writes the daemon’s output (including errors) to files under the standard Linux logs directory. Redirecting those to a more modern logging system is a task for another day.

Actually, thinking about it, this is all like the old “System V” service management system. I should see if there’s a replacement that works with “systemd” instead.

And if you look at line 7 in the code above, you’ll see the other CPAN module that’s currently making my life a lot easier – ENV::Util. This is a module that makes it easy to work with “dotenv” files. If you haven’t come across “dotenv” files, here’s a brief explanation – they’re files that are tied to your deployment environments (development, staging, production, etc.) and they contain definitions of environment variables which are used to control how your software acts in the different environments. For example, you’ll almost certainly want to connect to a different database instance in your different environments, so you would have a different “dotenv” file in each environment which defines the connection parameters for the appropriate database in that environment. As you need different values in different environments (and, also, because you’ll probably want sensitive information like passwords in the file) you don’t want to store your “dotenv” files in your source code control. But it’s common to add a file (called something like “.env.sample”) which contains a list of the required environment variables along with sample values.

My Klortho program doesn’t have a database. But it does need a few environment variables. Here’s its “.env.sample” file:

export KLORTHO_APP_NAME=klortho
export KLORTHO_OWNER=someone
export KLORTHO_GROUP=somegroup
export KLORTHO_LOG_DIR=/var/log/$KLORTHO_APP_NAME
export KLORTHO_APP_PORT=9999

And near the top of my service daemon control program, you’ll see the line:

use ENV::Util -load_dotenv;

That looks to see if there’s a “.env” file in the current directory and, if it finds one, it is loaded and the contents are inserted in the “%ENV” hash – from where they can be accessed by the rest of the code.

There’s one piece of the process missing. It’s nothing clever. I just need to generate a configuration file so the proxy server (I use “nginx”) reroutes requests to klortho.perlhacks.com so that they’re processed by the daemon running on whatever port is configured in “KLORTHO_APP_PORT”. But “nginx” configuration is pretty well-understood and I’ll leave that as an exercise for the reader (but feel free to get in touch if you need any help).

So that’s how it works. I have about half a dozen Dancer2 apps running on my new server using this layout. And knowing that I have standardised service daemon control scripts and “dotenv” files makes looking after them all far easier.

And before anyone mentions it, yes, I should rewrite them so they’re all Docker images. That’s a work in progress. And I should run them on some serverless system. I know my systems aren’t completely up to date. But we’re getting there.

If you have any suggestions for improvement, please let me know.

From Huh to Hero: Demystifying Perl in Two Easy Lessons (Part 1)

Perl on Medium

Published by Chaitanya Agrawal on Thursday 23 May 2024 01:32

Have you ever stumbled across the word “Perl” and thought, “Huh? What’s that?” Or maybe you’ve heard whispers of its cryptic symbols and…

Creating new Perl composite actions from a repository template

dev.to #perl

Published by Juan Julián Merelo Guervós on Wednesday 22 May 2024 12:06

So you want to create a quick-and-dirty GitHub action that does only one thing and does it well, or that glues together several actions, or you simply want to show off a bit at your next job interview. Here's how you can do it.
Let me introduce you to composite GitHub actions, one of the three existing types (the others are JavaScript GHAs and container-based GHAs) and maybe the least widely known. However, they have several things going for them. First, they have low latency: there is no need to download a container or to set up a JS environment. Second, they are relatively easy to set up: they can be self-contained, with everything needed running directly from the description of the GitHub action. Third, you can leverage all the tools installed on the runner, like bash, compilers, build tools... or Perl, which can be that and much more.
Even though that is already easy, it is easier still if you have a bit of boilerplate you can use directly or adapt to your own purposes. This is what guided the creation of the template for a composite GitHub action based on Perl. It is quite minimalistic, but let me walk you through what it has so that you can use it more easily.

First, this action.yml describes what the action does and how it does it:

name: 'Hello Perl'
description: 'Perl Github Action Template'
inputs:
  template-input:  # Change this
    description: 'What it is about'
    required: false # or not
    default: 'World'
runs:
  using: "composite"
  steps:
    - uses: actions/checkout@v4
    - run: print %ENV;
      shell: perl {0}
    - run: ${GITHUB_ACTION_PATH}/action.pl
      shell: bash

You will have to customize inputs as well as outputs here (and, of course, name and description), but the steps are already baked in. It even includes the correct path to the (downloaded) GitHub action: when you're acting on a repository, the place where a GHA lives is contained in an environment variable, GITHUB_ACTION_PATH. You can access it that way.

In general, that script might need external libraries, even your own, which you might have moved out of the script for testing purposes (everything must be tested). That is why the action also contains App::FatPacker as a dependency; that's a bundler that will put the source (action.src.pl), your library (lib/Action.pm) and every other module you might have used into a single file, the action.pl referenced above.
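
As a purely hypothetical sketch of that layout (Action::greet is a made-up function), action.src.pl can stay trivial while the testable logic lives in lib/Action.pm, and fatpack then bundles the two into action.pl:

# lib/Action.pm -- hypothetical module holding the logic so it can be tested
package Action;
use strict;
use warnings;

sub greet {
    my ($who) = @_;
    return "Hello, $who!";
}

1;

# action.src.pl -- thin wrapper that fatpack turns into action.pl
use strict;
use warnings;
use lib 'lib';
use Action;

print Action::greet('World'), "\n";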

A Makefile is also provided, so that, after installing fatpack, typing make will process the source and generate the script.

And that's essentially it. Use the template and create your new (composite) action in just a few minutes!

The London Perl & Raku Workshop 2024

perl.com

Published on Wednesday 22 May 2024 11:45

LPW is Back

We’re happy to confirm the return of The London Perl & Raku Workshop after a five year break:

  • When: Saturday 26th October 2024
  • Where: The Trampery, 239 Old Street, London EC1V 9EY

This year’s workshop will be held at The Trampery, Old Street, a dedicated modern event space in central London. We have hired both The Ballroom and The Library, allowing us to run a main track for up to 160 attendees and a second, smaller track for up to 35 attendees.

The Trampery is located a two minute walk from the Northern Line’s Old Street tube station in central London. The Northern Line has stops at most of the major train stations in London, or trivial links to others, so we recommend taking the tube to get to the venue.

Sign Up & Submit Talks

If you haven’t already, please sign up and submit talks using the official workshop site.

We welcome proposals relating to Perl 5, Raku, other languages, and supporting technologies. We may even have space for a couple of entirely tangential talks, as we will have two tracks.

Talks may be long (40 mins), short (20 mins), or very short (aka lightning, 5 mins) but we would prefer talks to be on the shorter side and will likely prioritise 20 minute talks. We would also be pleased to accept proposals for tutorials and discussions. The deadline for submissions is 30th September.

We would really like to have more first time speakers. If you would like help with a talk proposal, and/or the talk itself, let us know - we’ve got people happy to be your talk buddy!

Thanks to this year’s sponsors, without whom LPW would not happen.

If you would like to sponsor LPW then please have a look at the options here

GitHub Sponsors 💰 and Perl 🐫

dev.to #perl

Published by Gabor Szabo on Tuesday 21 May 2024 18:50

I was hoping to be able to write something more interesting about the GitHub Sponsors of various Perl developers, but so far I have only found a few people, and none of them (well, except myself, if I can still count myself in the group) has any income via GitHub Sponsors.

Is it because they don't promote it enough?

Is it because the Explore GitHub Sponsors page does not support the Perl/CPAN ecosystem?

Anyway, it would be really nice to see a few people starting to sponsor these people. That would be an encouragement to them and maybe also to others to support them.

trapd00r (magnus woldrich) · GitHub

linux, perl and inline skating enthusiast 🔌. trapd00r has 260 repositories available. Follow their code on GitHub.

davorg (Dave Cross) · GitHub

Making things with software since 1984. davorg has 200 repositories available. Follow their code on GitHub.

giterlizzi (Giuseppe Di Terlizzi) · GitHub

IT Senior Security Consultant & Full Stack Developer - giterlizzi

nigelhorne (Nigel Horne) · GitHub

nigelhorne has 108 repositories available. Follow their code on GitHub.

michal-josef-spacek (Michal Josef Špaček) · GitHub

michal-josef-spacek has 529 repositories available. Follow their code on GitHub.

szabgab (Gábor Szabó) · GitHub

Teaching Rust, Python, Git, GitHub, Docker, test automation. - szabgab

Follow me / Sponsor me

If you'd like to read more such posts, don't forget to upvote this one, to follow me here on DEV.to and to sponsor me via GitHub Sponsors.

Installing CPAN modules from git

dev.to #perl

Published by Tib on Tuesday 21 May 2024 18:18

(picture from elevate)

For various reasons, you might want to install CPAN modules from a git repository.

It can be because a git repository is somehow ahead of CPAN:

  • A fix was merged in the official git repository but never released to CPAN
  • A branch or a fork contains some valuable changes (this very-little-but-absolutely-needed fix)

Or it can be because the modules are actually not on CPAN: they are not public and not in an alternative/private CPAN (see Addendum), or they are simply "experiments".

But this post is not meant to discuss the "why"; instead it mainly shares, technically, the "how" you could do that 😀

I tested various syntaxes and installers and will now share some working examples.

☝️ Before we continue, be sure to upgrade your installers (App::cpm and App::cpanminus) to their latest versions.

Installing from command line with cpm

Installing with cpm is straightforward:

$ cpm install https://github.com/plack/Plack.git --verbose
33257 DONE fetch     (0.971sec) https://github.com/plack/Plack.git
33257 DONE configure (0.033sec) https://github.com/plack/Plack.git
33257 DONE resolve   (0.031sec) Clone -> Clone-0.46 (from MetaDB)
...
33257 DONE install   (0.364sec) URI-5.28
33257 DONE install   (0.046sec) https://github.com/plack/Plack.git
31 distributions installed.

It also works the same way over SSH with git@github.com:plack/Plack.git:

$ cpm install git@github.com:plack/Plack.git --verbose
64383 DONE fetch     (2.498sec) git@github.com:plack/Plack.git
64383 DONE configure (0.039sec) git@github.com:plack/Plack.git
...
64383 DONE install   (0.045sec) git@github.com:plack/Plack.git
31 distributions installed.

Installing from command line with cpanminus

Installing with cpanm is no harder:

$ cpanm https://github.com/plack/Plack.git
Cloning https://github.com/plack/Plack.git ... OK
--> Working on https://github.com/plack/Plack.git
...
Building and testing Plack-1.0051 ... OK
Successfully installed Plack-1.0051
45 distributions installed

Installing from cpanfile

The correct syntax is the following (thank you @haarg):

requires 'Plack', git => 'https://github.com/plack/Plack.git', ref => 'master';

(ref => 'master' is optional)

And it would just work later with cpm:

$ cpm install --verbose
Loading requirements from cpanfile...
33257 DONE fetch     (0.971sec) https://github.com/plack/Plack.git
33257 DONE configure (0.033sec) https://github.com/plack/Plack.git
33257 DONE resolve   (0.031sec) Clone -> Clone-0.46 (from MetaDB)
...
33257 DONE install   (0.364sec) URI-5.28
33257 DONE install   (0.046sec) https://github.com/plack/Plack.git
31 distributions installed.

⚠️ Despite being a cpanfile, please note the use of cpm

Installing from cpmfile

Let's write our first cpmfile and save it as cpm.yml:

prereqs:
  runtime:
    requires:
      Plack:
        git: https://github.com/plack/Plack.git
        ref: master

And then it would just work with cpm:

$ cpm install --verbose
Loading requirements from cpm.yml...
66419 DONE resolve   (0.000sec) Plack -> https://github.com/plack/Plack.git@master (from Custom)
66419 DONE fetch     (1.695sec) https://github.com/plack/Plack.git
66419 DONE configure (0.034sec) https://github.com/plack/Plack.git
...
66419 DONE install   (0.023sec) https://github.com/plack/Plack.git
31 distributions installed.

Beware of "incomplete" repositories

Releases on CPAN are standardized and generally contain what is needed for installers, but distributions living in git repositories are more for development and very often not in a "ready to install" state.

(thank you @karenetheridge)

There are some limitations that you can encounter:

  • cpm would refuse to install if no META file is found (but cpanm would be OK with that)
  • cpm would refuse to install if no Makefile.PL nor Build.PL is found, except if x_static_install: 1 is declared in META (cpanm would still refuse)

Should I mention the repositories with only a dist.ini? (used by authors to generate everything else)

And you would run into similar trouble with distributions that use Module::Install but have not committed its bundled copy (the inc/ directory) to the repository.

Conclusion

You should probably not rely too much on the "install from git" method, but it can still provide a handy way to install modules to test fixes or experiments.

And now with this post you should have good examples of “how” you can achieve that.

Addendum

For an alternative/private CPAN, several tools can come to your rescue.

Deploying Dancer Apps

Perl Hacks

Published by Dave Cross on Sunday 19 May 2024 17:39

Over the last week or so, as a background task, I’ve been moving domains from an old server to a newer and rather cheaper server. As part of this work, I’ve been standardising the way I deploy web apps on the new server and I thought it might be interesting to share the approach I’m using and talking about a couple of CPAN modules that are making my life easier.

As an example, let’s take my Klortho app. It dispenses useful (but random) programming advice. It’s a Dancer2 app that I wrote many years ago and have been lightly poking at occasionally since then. The code is on GitHub and it’s currently running at klortho.perlhacks.com. It’s a simple app that doesn’t need a database, a cache or anything other than the Perl code.

Dancer apps are all built on PSGI, so they have all of the deployment flexibility you get with any PSGI app. You can take exactly the same code and run it as a CGI program, a mod_perl handler, a FastCGI program or as a stand-alone service running behind a proxy server. That last option is my favourite, so that’s what I’ll be talking about here.

Starting a service daemon for a PSGI app is simple enough – just running “plackup app.psgi” is all you really need. But you probably won’t get a particularly useful service daemon out of that. For example, you’ll probably get a non-forking server that will only respond to a single request at a time. It’ll be good enough for testing, but you’ll want something more robust for production. So you’ll want to tell “plackup” to use Starman or something like that.  And you’ll want other options to tell the service which port to run on. You’ll end up with a quite complex start-up command line to start the server. So, if you’re anything like me, you’ll put that all in a script which gets added to the code repo.

But it’s still all a bit amateur. Linux has a flexible and sophisticated framework for starting and stopping service daemons. We should probably look into using that instead. And that’s where my first module recommendation comes into play – Daemon::Control. Daemon::Control makes it easy to create service daemon control scripts that fit in with the standard Linux way of doing things. For example, my Klortho repo contains a file called klortho_service which looks like this:

#!/usr/bin/env perl

use warnings;
use strict;
use Daemon::Control;

use ENV::Util -load_dotenv;
 
use Cwd qw(abs_path);
use File::Basename;
 
Daemon::Control->new({
  name      => ucfirst lc $ENV{KLORTHO_APP_NAME},
  lsb_start => '$syslog $remote_fs',
  lsb_stop  => '$syslog',
  lsb_sdesc => 'Advice from Klortho',
  lsb_desc  => 'Klortho knows programming. Listen to Klortho',
  path      => abs_path($0),
 
  program      => '/usr/bin/starman',
  program_args => [ '--workers', 10, '-l', ":$ENV{KLORTHO_APP_PORT}",
                    dirname(abs_path($0)) . '/app.psgi' ],
 
  user  => $ENV{KLORTHO_OWNER},
  group => $ENV{KLORTHO_GROUP},
 
  pid_file    => "/var/run/$ENV{KLORTHO_APP_NAME}.pid",
  stderr_file => "$ENV{KLORTHO_LOG_DIR}/error.log",
  stdout_file => "$ENV{KLORTHO_LOG_DIR}/output.log",
 
  fork => 2,
})->run;

This code takes my hacked-together service start script and raises it to another level. We now have a program that works the same way as other daemon control programs like “apachectl” that you might have used. It takes command line arguments, so you can start and stop the service (with “klortho_service start”, “klortho_service stop” and “klortho_service restart”) and query whether or not the service is running with “klortho_service status”. There are several other options, which you can see with “klortho_service status”. Notice that it also writes the daemon’s output (including errors) to files under the standard Linux logs directory. Redirecting those to a more modern logging system is a task for another day.

Actually, thinking about it, this is all like the old “System V” service management system. I should see if there’s a replacement that works with “systemd” instead.

And if you look at line 7 in the code above, you’ll see the other CPAN module that’s currently making my life a lot easier – ENV::Util. This is a module that makes it easy to work with “dotenv” files. If you haven’t come across “dotenv” files, here’s a brief explanation – they’re files that are tied to your deployment environments (development, staging, production, etc.) and they contain definitions of environment variables which are used to control how your software acts in the different environments. For example, you’ll almost certainly want to connect to a different database instance in your different environments, so you would have a different “dotenv” file in each environment which defines the connection parameters for the appropriate database in that environment. As you need different values in different environments (and, also, because you’ll probably want sensitive information like passwords in the file) you don’t want to store your “dotenv” files in your source code control. But it’s common to add a file (called something like “.env.sample”) which contains a list of the required environment variables along with sample values.

My Klortho program doesn’t have a database. But it does need a few environment variables. Here’s its “.env.sample” file:

export KLORTHO_APP_NAME=klortho
export KLORTHO_OWNER=someone
export KLORTHO_GROUP=somegroup
export KLORTHO_LOG_DIR=/var/log/$KLORTHO_APP_NAME
export KLORTHO_APP_PORT=9999

And near the top of my service daemon control program, you’ll see the line:

use ENV::Util -load_dotenv;

That looks to see if there’s a “.env” file in the current directory and, if it finds one, it is loaded and the contents are inserted in the “%ENV” hash – from where they can be accessed by the rest of the code.

There’s one piece of the process missing. It’s nothing clever. I just need to generate a configuration file so the proxy server (I use “nginx”) reroutes requests to klortho.perlhacks.com so that they’re processed by the daemon running on whatever port is configured in “KLORTHO_APP_PORT”. But “nginx” configuration is pretty well-understood and I’ll leave that as an exercise for the reader (but feel free to get in touch if you need any help).

So that’s how it works. I have about half a dozen Dancer2 apps running on my new server using this layout. And knowing that I have standardised service daemon control scripts and “dotenv” files makes looking after them all far easier.

And before anyone mentions it, yes, I should rewrite them so they’re all Docker images. That’s a work in progress. And I should run them on some serverless system. I know my systems aren’t completely up to date. But we’re getting there.

If you have any suggestions for improvement, please let me know.

The post Deploying Dancer Apps first appeared on Perl Hacks.

World Uncovered is cool

rjbs forgot what he was saying

Published by Ricardo Signes on Sunday 19 May 2024 12:00

Years ago, I found an iOS app called World Uncovered. I used it for a while, then forgot about it, then started using it again. It’s pretty cool, and I keep telling people about it, so I thought I’d write a post about it.

It’s like this: you let it track your movements the same way that a fitness app like RunKeeper would, and instead of telling you how many steps you’re getting in, it tells you where you’ve ever walked, ever. Keeping in mind that I sometimes have it turned off, and forgot about it for years, check out my map for Philadelphia:

Philly Uncovered

Sometimes I joke with my coworkers that I have never been to West Philadelphia. It’s not true, but… I mean, you can see my usual stomping grounds. You can also see the little excursions up toward Bethlehem, off to Camden, and down… well, I don’t know what I was doing down in South Philly, but that little diagonal below the Italian Market is definitely representative of a few trips to Milk Jawn.

Like I said, I’d forgotten all about the app until February, when I was in Vienna for a few days. For some reason I was flipping through my phone’s screens and saw it and thought, “I should log this trip!” It was great fun, and it made me realize all the trips I had failed to log for the past few years: Norway, Brussels, England, Australia, and others. I like to see new places, and these little marked-up maps are a fun memento. Here’s Vienna:

Vienna Uncovered

A few months later, I was in Lisbon, which you’ll know if you’ve been keeping up with my posts here. Lisbon was fun because I took a train off to the west coast, so I got a map with two hubs of activity with a long straight train ride between them:

Lisbon Uncovered

Lisbon was also where I finally got a “landmark” achievement. The app has its own (somewhat eccentric, I think) collection of landmarks, and by visiting them you can tick it off your list. If you visit enough, you get an achievement. Is this a good way to plan your travel? No. Is it fun if it happens anyway? Well, for me it was. I got Belém Tower.

Weirdly, there are no World Uncovered landmarks in Philly or in Melbourne. Still, I’ll keep an eye out for landmarks on future trips. I also wouldn’t mind getting a few of the “passport” achievements for visiting new countries. That’ll take some time, though!

As for the app itself, it feels sort of dated. It hasn’t had an update in three or four years, and the UI is kind of clunky. The backup feature constantly needs to be reauthenticated with Dropbox, and unless you knew to turn on GPX trip mapping from day one, most of your backup will be in an encrypted zip file. So, I live in modest fear of losing all this data someday, and might look at some better way to do this. (Maybe log all my GPX data in some other app and then import it here later? I don’t know.)

I did have some email interaction with the developer recently, who told me that the app isn’t dead, just done. I can respect that, given all the software of mine that I feel is just done. I’ll keep enjoying it and not worry about its future.

Oh, and as for how I’m enjoying it, I should talk about one more thing: shorties.

Philadelphia is a grid. I like this, because it makes the city much easier for me to navigate and think about. The weird layout of Boston was sort of charming, but also kind of a total pain. I like the grid. In Center City, the grid runs from 30th Street at the west to Front Street (which is basically 1st Street) at the east. Right in the middle, Broad Street (which is basically 14th) divides the city in half. The other main north/south streets are numbered 2-29. Then the major east/west streets run from the Schuylkill River at the west to the Delaware River at the east. They’re mostly named after trees, and there are few enough that it’s pretty easy to name them all.

The thing is, there are lots of other streets in the grid. For example, between 11th and 12th is Marvine Street, running north/south. It’s not always there, though, just sometimes. It runs from Catharine north to Bainbridge, but then stops. It shows up again way further north at Race, running just one block to Vine, and then showing up again later. These streets have been dubbed “shorties” by my excellent colleague Lacey. They’re great, with lots of character and not much traffic. A hundred years ago, each length of shorty might have its own name, but they were rationalized in the 20th century. The street halfway between 8th and 9th is Darien. If it’s between 8th and 9th, but west of center, it’s Schell. If it’s east of center, it’s Mildred. Some blocks have one, two, or all three between them. (Marvine is a funny one. Between Lombard and Walnut, it’s Quince instead. I bet there’s history.)

Anyway, I would like to walk down every shorty in center city. This is not hard, it’s just going to take a lot of time, and a lot of consultation with the map. World Uncovered makes that easy. I think my next plan for tackling these is to start picking one block between home and work and on the way in, hit all of its shorties. Then another one on the way home. That will only get me about a quarter of the city, tops, but it’s a start.

The real treat is when I stop at a street that’s really rich in shorties, especially interior second-order ones, like this gem:

10th and Locust

Honestly, look at that tiny length of Irving Street, which often connects two numbered streets east to west. Here, it’s a tiny little alley inside the block, only reachable by another shorty. What a city!

(cdxcvi) 6 great CPAN modules released last week

Niceperl

Published by Unknown on Sunday 19 May 2024 09:06

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Devel::CheckOS - a script to package Devel::AssertOS modules with your code.
    • Version: 2.02 on 2024-05-15, with 17 votes
    • Previous CPAN version: 2.01 was 13 days before
    • Author: DCANTRELL
  2. Log::Contextual - Simple logging interface with a contextual log
    • Version: 0.009000 on 2024-05-15, with 13 votes
    • Previous CPAN version: 0.008001 was 6 years, 3 months, 27 days before
    • Author: HAARG
  3. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.032000 on 2024-05-15, with 25 votes
    • Previous CPAN version: 2.031001 was 2 months, 4 days before
    • Author: MICKEY
  4. Mojolicious - Real-time web framework
    • Version: 9.37 on 2024-05-13, with 497 votes
    • Previous CPAN version: 9.36 was 2 months, 5 days before
    • Author: SRI
  5. PDF::API2 - Create, modify, and examine PDF files
    • Version: 2.047 on 2024-05-18, with 30 votes
    • Previous CPAN version: 2.045 was 7 months, 23 days before
    • Author: SSIMMS
  6. Sub::Override - Perl extension for easily overriding subroutines
    • Version: 0.11 on 2024-05-14, with 15 votes
    • Previous CPAN version: 0.10 was 5 months, 9 days before
    • Author: MVSJES

The Perl and Raku Conference (now in its 26th year) would not exist without sponsors. Above, you’ll see a screen shot from Curtis Poe’s Perl’s new object-oriented syntax is coming, but what’s next? talk at last year’s conference in Toronto. You may be wondering how you can add your organization’s logo to this year’s list. In the spirit of transparency, we are making our sponsorship prospectus public. Please share this article freely with friends, colleagues and decision makers so that we can reach as many new sponsors as possible.

The Perl and Raku Conference 2024 Prospectus

This year the Perl and Raku Conference will be held in Las Vegas, Nevada on June 24-28, 2024. Conferences such as this provide tangible benefits to organizations which use Perl. Aside from the transfer of knowledge which attendees bring back to their place of work, on-site hackathons also contribute to the growth and maintenance of the software stack which so many companies have come to rely on. In 2022, for example, the hackathon focused on modernizing and improving support for Perl across various editors, allowing Perl developers to be even more productive than they have been in the past.

There are still opportunities to support this important, grassroots Open Source Software event. Events like these largely depend on sponsors in order to thrive.

This year, we are looking for corporate donations to offset the costs of feeding conference attendees. Each of these sponsorship opportunities comes with the following:

  • your logo, should you provide one, will appear on the banners which are displayed behind speakers and will subsequently appear in speaker videos
  • your logo, a short blurb and a long blurb about your company will appear on the event website
  • you will be listed as a sponsor in the materials handed out at the event
  • we will thank you publicly at the event
  • if you are able to provide some swag, we will gladly distribute it at the conference via our swag bags

Breakfast Sponsor (3 available)

Sponsor a catered breakfast during one of the conference days

Sponsorship commitment: $3,500

Snack Breaks (2 available)

Sponsor a catered snack break during one of the conference days.

Sponsorship commitment: $3,000

Coffee Break Sponsor (2 available)

Sponsor a coffee break during one of the conference days.

Sponsorship commitment: $2,500

Please do let me know at what level you might be interested in contributing and we can begin the process of getting you involved in this very special event.

Deadline

In order to get your logo on the “step and repeat” banner we would need to have finalized sponsorship and received logo assets by June 1st, so we’d love to help you start the process as soon as you’re ready.

Contact

For any questions or to begin the sponsorship process, please contact me via olaf@wundersolutions.com. I’ll be happy to answer any questions and walk you through the process. If you’d like to discuss sponsorship options which are greater or smaller than the offerings listed, I can also work with you on that. If you’re not ready to sponsor this event but would like to be included in future mailings, please reach out to me via email as well. I look forward to hearing from you!

Spaces are Limited

In 2024 we expect to host over 100 attendees, but there is a hard cap of 150. If you’re thinking of attending, it’s best to secure your ticket soon.

About Perl Programming

Perl on Medium

Published by Vcanhelpsu on Wednesday 15 May 2024 07:40

Data::Fake::CPAN (a PTS 2024 supplement)

rjbs forgot what he was saying

Published by Ricardo Signes on Sunday 12 May 2024 12:00

One of the things I wrote at the first PTS (back when it was called the Perl QA Hackathon) was Module::Faker. I wrote about it back then (way back in 2008), and again eleven years later. It’s a library that, given a description of a (pretend) CPAN distribution, produces that distribution as an actual file on disk with all the files the dist should have.

Every year or two I’ve made it a bit more useful as a testing tool, mostly for PAUSE. Here’s a pretty simple sample of how those tests use it:

$pause->upload_author_fake(PERSON => 'Not-Very-Meta-1.234.tar.gz', {
  omitted_files => [ qw( META.yml META.json ) ],
});

This writes out Not-Very-Meta-1.234.tar.gz with a Makefile.PL, a manifest, and other stuff. The package and version (and a magic true value) also appear in lib/Not/Very/Meta.pm. Normally, you’d also get metafiles, but here we’ve told Module::Faker to omit them, so we can test what happens without them. When we were talking about testing the new PAUSE server in Lisbon, we knew we’d have to upload distributions and see if they got indexed. Here, we wouldn’t want to just make the same test distribution over and over, but to quickly get new ones that wouldn’t conflict with the old ones.

This sounded like a job for Module::Faker and a random number generator, so I hot glued the two things together. Before I get into explaining what I did, I should note that this work wasn’t very important, and we really only barely used it, because we didn’t really need that much testing. On the other hand, it was fun. I had fun writing it and seeing what it would generate, and I have plans to have more fun with it. After a long day of carefully reviewing cron job logs, this work was a nice goofy thing to do before dinner.

Data::Fake::CPAN

Data::Fake is a really cool library written by David Golden. It’s really simple, but that simplicity makes it powerful. The ideas are like this:

  1. it’s useful to have a function that, when called, returns random data
  2. to configure that generator, it’s useful to have a function that returns the kind of function discussed in #1
  3. these kinds of functions are useful to compose

So, for example, here’s some sample code from the library’s documentation:

my $hero_generator = fake_hash(
    {
        name      => fake_name(),
        battlecry => fake_sentences(1),
        birthday  => fake_past_datetime("%Y-%m-%d"),
        friends   => fake_array( fake_int(2,4), fake_name() ),
        gender    => fake_pick(qw/Male Female Other/),
    }
);

Each of those fake... subroutine calls returns another subroutine. So, in the end you have $hero_generator as a code reference that, when called, will return a reference to a five-key hash. Each value in the hash will be the result of calling the generators given as values in the fake_hash call.
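
Continuing that documentation example, using the generator is just a matter of calling it; each call should produce a fresh random hashref (a small usage sketch):

my $hero = $hero_generator->();
print "$hero->{name} shouts: $hero->{battlecry}\n";

my $another_hero = $hero_generator->();   # different random data on every call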

It takes a little while to get used to working with the code generators this way, but once you do, it becomes very easy to snap together generators of random data structures. (When you’re done here, why not check out David Golden’s talk about using higher-order functions to create Data::Fake?) Helpfully, as you can see above, Data::Fake comes with generators for a bunch of data types.

What I did was write a Data::Fake plugin, Data::Fake::CPAN, that provides generators for version strings, package names, CPAN author identities, license types, prereq structures and, putting those all together, entire CPAN distributions. So, this code works:

use Data::Fake qw(CPAN);

my $dist = fake_cpan_distribution()->();

my $archive = $dist->make_archive({ dir => '.' });

When run, this writes out an archive file to disk. For example, I just got this:

$ ./nonsense/module-blaster
Produced archive as ./Variation-20010919.556.tar.gz (cpan author: MDUNN)
- Variation
- Variation::Colorless
- Variation::Conventional
- Variation::Dizzy

There are a few different kinds of date formats that it might pick. This time, it picked YYYYMMDD.xxx. That username, MDUNN, is short for Mayson Dunn. I found out by extracting the archive and reading the metadata. Here’s a sample of the prereqs:

{
  "prereqs" : {
    "build" : {
       "requires" : {
          "Impression::Studio" : "19721210.298"
       }
    },
    "runtime" : {
       "conflicts" : {
          "Writer::Cigarette" : "19830107.752"
       },
       "recommends" : {
          "Error::Membership" : "v5.16.17",
          "Marriage" : "v1.19.6"
       },
       "requires" : {
          "Alcohol" : "v12.16.0",
          "Competition::Economics" : "v19.1.7",
          "People" : "20100228.011",
          "Republic" : "20040805.896",
          "Transportation::Discussion" : "6.069"
       }
    }
  }
}

You’ll see that when I generated this, I ran ./nonsense/module-blaster. That program is in the Module-Faker repo, for your enjoyment. I hope to play with it more in the future, changing the magic true values, maybe adding real code, and just more variation — but probably centered around things that will have real impact on how PAUSE indexes things.

Probably very few people have much use for Module::Faker, let alone Data::Fake::CPAN. I get that! But Data::Fake is pretty great, and pretty useful for lots of testing. Also, generating fun, sort of plausible data makes testing more enjoyable. I don’t know why, but I always like watching my test suite fail more when it’s spitting out fun made-up names at the same time. Try it yourself!

(cdxcv) 8 great CPAN modules released last week

Niceperl

Published by Unknown on Sunday 12 May 2024 09:43

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. DBD::Oracle - Oracle database driver for the DBI module
    • Version: 1.90 on 2024-05-07, with 31 votes
    • Previous CPAN version: 1.83 was 2 years, 3 months, 21 days before
    • Author: ZARQUON
  2. Firefox::Marionette - Automate the Firefox browser with the Marionette protocol
    • Version: 1.57 on 2024-05-06, with 16 votes
    • Previous CPAN version: 0.77 was 4 years, 9 months, 30 days before
    • Author: DDICK
  3. Minion::Backend::mysql - MySQL backend
    • Version: 1.005 on 2024-05-06, with 13 votes
    • Previous CPAN version: 1.004 was 6 months, 6 days before
    • Author: PREACTION
  4. Path::Tiny - File path utility
    • Version: 0.146 on 2024-05-08, with 188 votes
    • Previous CPAN version: 0.144 was 1 year, 5 months, 7 days before
    • Author: DAGOLDEN
  5. PDL - Perl Data Language
    • Version: 2.089 on 2024-05-11, with 52 votes
    • Previous CPAN version: 2.088 was 20 days before
    • Author: ETJ
  6. Perl::Tidy - indent and reformat perl scripts
    • Version: 20240511 on 2024-05-10, with 140 votes
    • Previous CPAN version: 20240202 was 3 months, 9 days before
    • Author: SHANCOCK
  7. Prima - a Perl graphic toolkit
    • Version: 1.73 on 2024-05-09, with 43 votes
    • Previous CPAN version: 1.72 was 3 months, 9 days before
    • Author: KARASIK
  8. SPVM - The SPVM Language
    • Version: 0.990006 on 2024-05-09, with 31 votes
    • Previous CPAN version: 0.990003 was 8 days before
    • Author: KIMOTO

Outreachy Internship 2024 Updates

Perl Foundation News

Published by Makoto Nozaki on Thursday 09 May 2024 19:21

TL;DR We just finished intern selection for this year’s Outreachy program. We got more projects and more applicants than in previous years, which made the selection hard in a good way.

Continuing our annual tradition, The Perl and Raku foundation is involved in the Outreachy program which provides internships to people subject to systemic bias and impacted by underrepresentation.

We have just finished the intern selection process, which turned out to be harder compared to the previous years. I’ll explain the reasons below.

It was harder because we got multiple high quality project proposals

Each year, we call for project ideas from the Perl/Raku community. The project proposer is required to commit to mentoring an intern from May to August. Given the significant commitment involved, it’s not always easy for us to find suitable projects.

Fortunately, this year, we got two promising project proposals. The Foundation’s financial situation did not allow us to sponsor both projects, so we had to make the tough decision to support only one project.

After careful consideration, the Board has elected to sponsor Open Food Facts’ Perl project, titled “Extend Open Food Facts to enable food manufacturers to open data and improve food product quality.”

It was harder because more people showed up

Having more projects means we were able to attract more intern candidates. Across the two projects, more than 50 people showed interest and initiated contributions. Among them, 21 individuals actually created pull requests before the selection process.

Needless to say, it's hard work for the mentors to help dozens of candidates. They taught these intern candidates how to code and guided them through creating pull requests. On the applicants’ side, I am amazed that they worked hard to learn Perl and became proficient enough to create pull requests and make real improvements to the systems.

And the final selection was harder because we had more applicants

After the contribution process, we got applications from 14 people. It was obviously hard for the mentors to select one from so many good applicants. In the next post, Stéphane Gigandet will introduce our new intern to the community.

I wish all the best to the mentors, Stéphane and Alex, and our new intern.

Voice from the applicants

"In the journey to understand Perl better, I wanted to know what are its most wide applications, one of them being a web scraper. It's because Perl's strong support for regular expressions and built-in text manipulation functions make it well-suited for tasks like web scraping, where parsing and transforming text are essential. I took inspiration from various web scraping projects available on the internet to gain insights into the process and developed a lyrics scraper."

"I'm currently diving into Perl, and I see this as a fantastic chance to enrich my coding skills. I've thoroughly enjoyed immersing myself in it and have had the opportunity to explore various technologies like Docker and more."

"I have had the opportunity to experience Perl firsthand and have come to appreciate its significance in web development, on which I have worked. During my second year, I was searching for popular languages in backend development and found out about Perl, whose syntax was somewhat like C and Python. I didn't have any previous experience working with Perl, but now I have gained a deep understanding of its importance and impact on backend development and data processing."

"In this pull request, I made a significant stride in improving the quality and maintainability of our Perl codebase by integrating Perl::Critic, a powerful static code analysis tool."

"I've learned a whole lot about Perl and some of its frameworks such as Dancer2 (a surprisingly simple framework I've come to fall in love with)."

What's new on CPAN - April 2024

perl.com

Published on Thursday 09 May 2024 19:00

Welcome to “What’s new on CPAN”, a curated look at last month’s new CPAN uploads for your reading and programming pleasure. Enjoy!

APIs & Apps

Config & Devops

Data

Development & Version Control

Science & Mathematics

Web

Other

Maintaining Perl (Tony Cook) February 2024

Perl Foundation News

Published by alh on Monday 06 May 2024 19:42


Tony writes:

```
[Hours] [Activity]

2024/02/01 Thursday
 2.50  #21873 fix, testing on both gcc and MSVC, push for CI

 2.50

2024/02/02 Friday
 0.72  #21915 review, testing, comments
 0.25  #21883 review recent updates, apply to blead

 0.97

2024/02/05 Monday
 0.25  github notifications
 0.08  #21885 review updates and approve
 0.57  #21920 review and comment
 0.08  #21921 review and approve
 0.12  #21923 review and approve
 0.08  #21924 review and approve
 0.08  #21926 review and approve
 0.67  #21925 review and comments
 2.00  #21877 code review, testing

 3.93

2024/02/06 Tuesday
 0.23  #21925 comment
 0.52  review coverity scan report, reply to email from jkeenan
 0.27  #21927 review and comment
 0.08  #21928 review and approve
 0.08  #21922 review and approve

 1.18

2024/02/07 Wednesday
 0.25  github notifications
 0.52  #21935 review, existing comments need addressing
 2.12  #21877 work on fix, push for CI most of a fix

 2.89

2024/02/08 Thursday
 0.40  #21927 review and approve
 0.23  #21935 review, check each comment has been addressed, approve
 0.45  #21937 review and approve
 0.15  #21938 review and comment
 0.10  #21939 review and approve
 0.13  #21941 review and approve
 0.10  #21942 review and approve
 0.08  #21943 review and approve
 0.07  #21945 review and approve
 0.17  #21877 look into CI failures, think I found problem, push probable fix
 0.18  #21927 make a change to improve pad_add_name_pvn() docs, testing, push for CI
 2.20  #21877 performance test on cygwin, try to work up a regression test

 4.26

2024/02/12 Monday
 0.60  #18606 fix minor issue pointed out by mauke, testing
 0.40  github notifications
 0.08  #21872 review latest changes and approve
 0.08  #21920 review latest changes and approve
 1.48  #21877 debugging test
 0.30  #21524 comment on downstream ticket
 0.27  #21724 update title to match reality and comment

 3.21

2024/02/13 Tuesday
 0.35  #21915 review, brief comment
 0.25  #21983 review and approve
 0.03  #21233 close
 0.28  #21878 comment
 0.08  #21927 check CI results and make PR 21984
 0.63  #21877 debug failing CI
 0.27  #21984 follow-up
 0.58  #21982 review, testing, comments
 0.32  #21979 review and approve

 2.79

2024/02/14 Wednesday
 1.83  #21958 testing, finally reproduce, debugging and comment
 0.08  #21987 review discussion and briefly comment
 0.08  #21984 apply to blead
 0.22  #21977 review and approve
 0.12  #21988 review and approve
 0.15  #21990 review and approve
 0.82  #21550 probable fix, build tests
 0.38  coverity scan follow-up
 1.27  #21829/#21558 (related to 21550) debugging
 0.65  #21829/#21558 more debugging, testing, comment

 5.60

2024/02/15 Thursday
 0.15  github notifications
 0.08  #21915 review updates and approve
 2.17  #21958 debugging, research, long comment
 0.58  #21958 testing, follow-up
 0.12  #21991 review and approve

 3.10

2024/02/19 Monday
 0.88  #21161 review comment and reply, minor change, testing, force push
 0.23  #22001 review and comment
 0.30  #22002 review and comment
 0.12  #22004 review and comment
 0.28  #22005 review and approve
 0.32  #21993 testing, review changes
 1.95  #21661 review comments on PR and fixes, review code and history for possible refactor of vFAIL*() macros

 4.08

2024/02/20 Tuesday
 0.35  github notifications
 0.08  #22010 review and approve
 0.08  #22007 review and approve with comment
 0.60  #22006 review, research and approve with comment
 0.08  #21989 review and approve
 0.58  #21996 review, testing, comment
 0.22  #22009 review and approve
 0.50  #21925 review latest updates and approve
 1.05  #18606 apply to blead, work on a perldelta, make PR 22011

 3.54

2024/02/21 Wednesday
 0.18  #22011 fixes
 0.80  #21683 refactoring
 1.80  #21683 more refactor

 2.78

2024/02/22 Thursday
 0.38  #22007 review and comment
 0.70  #21161 apply to blead, perldelta as PR22017
 1.75  smoke report checks: testing win32 gcc failures
 0.27  #22007 review updates and approve
 1.15  #21661 re-check, research and push for smoke/ci

 4.25

2024/02/26 Monday
 2.10  look over smoke reports, debug PERLIO=stdio failure on mac
 1.38  more debug PERLIO=stdio

 3.48

2024/02/27 Tuesday
 0.08  #22029 review and apply to blead
 0.27  #22024 review and approve
 0.33  #22026 review and approve
 0.08  #22027 review and approve
 0.10  #22028 review and approve
 0.08  #22030 review and comment, conditionally approve
 0.25  #22033 review, comments and approve
 0.08  #22034 review and approve
 0.17  #22035 review and comment
 0.78  #21877 debugging

 2.22

2024/02/28 Wednesday
 0.38  github notifications
 0.52  #22040 review discussion, research and comment
 0.13  #22043 review and approve
 0.12  #22044 review and approve
 0.72  #22045 review, research, comment and approve
 0.13  #22046 review, research and approve
 1.55  #21877 more debugging (unexpected leak)

 3.55

2024/02/29 Thursday
 0.15  #21966 review update and approve
 1.18  #21877 debugging
 0.13  fix $DynaLoader::VERSION

 1.46

Which I calculate is 55.79 hours.

Approximately 70 tickets were reviewed or worked on, and 5 patches were applied.
```

TPRF sponsors Perl Toolchain Summit

Perl Foundation News

Published by Makoto Nozaki on Friday 03 May 2024 19:49

I am pleased to announce that The Perl and Raku Foundation sponsored the Perl Toolchain Summit 2024 as a Platinum Sponsor.

The Perl Toolchain Summit (PTS) is an annual event that brings together the volunteers who work on the tools and modules at the heart of Perl and the CPAN ecosystem. The PTS gives them four days to work together on these systems, with all of their fellow volunteers to hand.

The event successfully concluded in Lisbon, Portugal at the end of April 2024.

If you or your company would like to support future PTS events, you can get in touch with the PTS team. Alternatively, you can make a donation to The Perl and Raku Foundation, which is a 501(c)(3) organization.

PTS 2024: Lisbon

rjbs forgot what he was saying

Published by Ricardo Signes on Friday 03 May 2024 15:19

Almost exactly a year since the last Perl Toolchain Summit, it was time for the next one, this time in Lisbon. Last year, I wrote:

In 2019, I wasn’t sure whether I would go. This time, I was sure that I would. It had been too long since I saw everyone, and there were some useful discussions to be had. I think that overall the summit was a success, and I’m happy with the outcomes. We left with a few loose threads, but I’m feeling hopeful that they can, mostly, get tied up.

Months later, I did not feel hopeful. Those loose threads were left dangling, and I felt like some of the best work I did was not providing any value. I was grouchy about it, and figured I was done. Then, though, I started thinking that there was one last project I'd like to do for PAUSE: upgrading the server. It's the thing I said I wanted to do last year, but barely even started. This year, I said that if we could get buy-in to do it, I'd go. Since I'm writing this blog post, you know I went, and I'm going to tell you about it.

PAUSE Bootstrap

Last year, Matthew and I wanted to make it possible to quickly spin up a working PAUSE environment, so we could replace the long-suffering “pause2” server. We were excited by the idea of starting from work that Kenichi Ishigaki had done to create a Docker container running a test instance. We only ended up doing a little work on that, partly because we thought we’d be starting from scratch and didn’t know enough Docker to be useful.

This year, we decided it'd be our whole mission. We also said that we were not going to start with Docker. Docker made sense and was probably a great way to do it, but Matthew and I still aren't Docker users. We wanted results, and we felt the way to get them was to stick to what we know: automated installation and configuration of an actual VM. We pitched this plan to Robert Spier, one of the operators of the Perl NOC, and he was on board. I leaned on him pretty hard to actually come to Lisbon and help, and he agreed. (He also said that a sufficiently straightforward installer would be a good starting point for turning things into Docker containers later, which was reassuring.)

At Fastmail, where Matthew and I work, we can take every other Friday for experimental or out-of-band work, and we decided we’d get started early. If the installer was done by the time we arrived, we’d be in a great position to actually ship. This was a great choice. Matthew and I, with help from another Fastmail colleague, Marcus, wrote a program. It started off life as unpause, but is now in the repo as bootstrap/mkpause. You can read the PAUSE Bootstrap README if you want to skip to “how do I use this?”.

The idea is that there’s a program to run on a fresh Debian 12 box. That installs all the needed apt packages, configures services, sets up Let’s Encrypt, creates unix users, builds a new perl, installs systemd services, and gets everything running. There’s another program that can create that fresh Debian 12 box for you, using the DigitalOcean API. (PAUSE doesn’t run in DigitalOcean, but Fastmail has an account that made it easy to use for development.)
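To make the shape of that concrete, here is a heavily simplified, hypothetical sketch of the kind of steps such an installer performs: install packages, create a service user, drop in a systemd unit, and start the service. None of the package names, users, or unit files below come from the real bootstrap/mkpause; treat them purely as illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch only -- not the real bootstrap/mkpause.
sub run {
    my @cmd = @_;
    print "==> @cmd\n";
    system(@cmd) == 0 or die "command failed (@cmd): exit $?";
}

# 1. Install OS packages (names illustrative).
run qw(apt-get update);
run qw(apt-get install -y build-essential git rsync certbot mariadb-server);

# 2. Create a service user if it doesn't exist yet.
run qw(adduser --system --group --home /opt/pause pause)
    unless getpwnam 'pause';

# 3. Drop in a systemd unit and start the service.
open my $unit, '>', '/etc/systemd/system/pause-web.service'
    or die "can't write unit file: $!";
print {$unit} <<'UNIT';
[Unit]
Description=PAUSE web frontend (illustrative example)

[Service]
User=pause
ExecStart=/opt/pause/bin/start-web
Restart=on-failure

[Install]
WantedBy=multi-user.target
UNIT
close $unit or die $!;

run qw(systemctl daemon-reload);
run qw(systemctl enable --now pause-web.service);
```

The real installer does far more (Let's Encrypt, building a perl, cron jobs, data import), but the pattern is the same: one script you can point at a fresh box.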

I think Matthew and I worked well together on this. We found different rabbit holes interesting. He fixed hard problems I was (barely) content to suffer with. (There was some interesting nonsense with the state of apt locking and journald behavior shortly after VM “cloud init”.) I slogged through testing exactly whether each cron job ran correctly and got a pre-built perl environment ready for quick download, to avoid running plenv and cpanm during install.

Before we even arrived, we could go from zero to a fully running private PAUSE server in about two and a half minutes! Quick builds meant we could iterate much faster. We also had a script to import all of PAUSE’s data from the live PAUSE. It took about ten minutes to run, but we had it down to one minute by day two.

When we arrived, I took my todo and threw it up on the wall in the form of a sticky note kanban board.

PTS Stickies: Day 1

We spent day one re-testing cron jobs, improving import speed, and (especially) asking Andreas König all kinds of questions about things we’d skipped out of confusion. More on those below, but without Andreas, we could easily have broken or ignored critical bits of the system.

By the end of day two, we were confident that we could deploy the next day. I’d hoped we could deploy on day two, but there were just too many bits that were not quite ready. Robert had spent a bunch of time running the installer on the VM where he intended to run the new production PAUSE service, known at the event as “pause3”. There were networking things to tweak, and especially storage volume management. This required the rejiggering of a bunch of paths, exposing fun bugs or incorrect assumptions.

The first thing we did on day three was start reviewing our list of pre-deploy acceptance tests. Did everything on the list work? We thought so. We took down pause2 for maintenance at 10:00, resynchronized everything, watched a lot of logs, and did some uploads. We got some other attendees to upload things to pause3. Everything looked good, so we cut pause.perl.org over to pause3. It worked! We were done! Sort of.

We had some more snags to work through, but it was just the usual nonsense. A service was logging to the wrong place. The new MySQL was stricter about data validation than the old one. An accidental button-push took down networking on the VM. Everything got worked out in the end. I’ll include some “weird stuff that happened” below, but the short version is: it went really smoothly, for this kind of work.

On day four, we got to work on fit and finish. We cleaned up logging noise, we applied some small merge requests that we’d piled up while trying to ship. We improved the installer to move more configuration into files, instead of being inlined in the installer. Also, we prepared pull requests to delete about 20,000 lines of totally unused files. This is huge. When trying to learn how an older codebase works, it can be really useful to just grep the code for likely variable names or known subroutines. When tens of thousands of lines in the code base are unused, part of the job becomes separating live code out from dead code, instead of solving a problem.

We also overhauled a lot of documentation. It was exciting to replace the long and bit-rotted “how to install a private PAUSE” with something that basically said “run this program”. It doesn’t just say that, though, and now it’s accurate and written from the last successful execution of the process. You can read how to install PAUSE yourself.

Matthew, Robert, and I celebrated a successful PTS by heading off to Belém Tower to see the sights and eat pastéis.

I should make clear, finally, that the PAUSE team was five people. Andreas König and Kenichi Ishigaki were largely working on other goals not listed here. It was great to have them there for help on our work, but they got other things done, not documented in this post!

Here’s our kanban board from the end of day four:

PTS Stickies: Day 4

Specifics of Note

run_mirrors.sh

This was one of the two mirroring-related scripts we had to look into. It was bananas. Turns out that PAUSE had a list of users who ran their own FTP servers. It would, four times a day, connect to those servers and retrieve files from them directly into the users’ home directories on PAUSE.

Beyond the general bananas-ness of this, the underlying non-PAUSE program in /usr/bin/mirror no longer runs, as it uses $*, eliminated back in v5.30. Rather than fix it and keep something barely used and super weird around, we eliminated this feature. (I say “barely used”, but I found no evidence it was used at all.)
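For anyone who hasn't met it, `$*` was an ancient global switch that made `^` and `$` match at embedded newlines in every pattern; it was deprecated for decades before being removed, so code that still sets it simply dies on a modern perl. A minimal sketch of the old style and its modern replacement, the `/m` modifier:

```perl
use strict;
use warnings;

my $text = "From: someone\nSubject: hello\n";

# Old style (removed; fatal on modern perls): a global switch that
# turned on multi-line matching for every pattern in the program.
#   $* = 1;
#   print "match\n" if $text =~ /^Subject:/;

# Modern equivalent: ask for multi-line ^/$ semantics per pattern
# with the /m modifier.
print "match\n" if $text =~ /^Subject:/m;
```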

make-mirror-yaml.pl

The other mirror program! This one updated the YAML file that exposes the CPAN mirror list. Years ago, the mirror list was eliminated, and a single name now points to a CDN. Still, we were diligently updating the mirror list every hour. No longer.

rrrsync

You can rsync from CPAN, but it’s even better to use rrr. With rrr, the PAUSE server is meant to maintain a few lists of “files that changed in a given time window”. Other machines can then synchronize only files that have changed since they last checked, with occasional full-scan reindexes.
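To illustrate the idea (not the actual File::Rsync::Mirror::Recent API), here is a toy consumer loop under a deliberately simplified assumption: the change index is a JSON file whose `recent` array holds `{path, epoch, type}` entries, and we remember the last epoch we processed between runs. Every file name, helper, and rsync target here is hypothetical.

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);   # core module

# Hypothetical helper: read a whole file into a string.
sub slurp { my ($f) = @_; open my $fh, '<', $f or die "$f: $!"; local $/; <$fh> }

my $state_file = 'last_epoch.txt';
my $last_epoch = -e $state_file ? 0 + slurp($state_file) : 0;

# A local copy of the "recent changes" index, assumed to look like
# { recent => [ { path => ..., epoch => ..., type => 'new' or 'delete' }, ... ] }
my $index = decode_json( slurp('RECENT-1h.json') );

for my $entry ( sort { $a->{epoch} <=> $b->{epoch} } @{ $index->{recent} } ) {
    next if $entry->{epoch} <= $last_epoch;          # already processed last run
    if ( ( $entry->{type} // '' ) eq 'delete' ) {
        unlink "mirror/$entry->{path}";
    }
    else {
        # Fetch just this one changed file (rsync here; could be HTTP).
        system 'rsync', '-a', "upstream::authors/$entry->{path}",
                              "mirror/$entry->{path}";
    }
    $last_epoch = $entry->{epoch};
}

open my $out, '>', $state_file or die "$state_file: $!";
print {$out} $last_epoch;
close $out;
```

The real tooling layers locking, several time-window index files, and periodic full re-scans on top of this, but the core trick is the same: publish a small list of recent changes so mirrors rarely have to walk the whole tree.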

We got this working pretty quickly, but it seemed to break at the last minute. What had happened? We couldn’t tell: everything looked great, and there were no errors. Eventually, I found myself using strace against perl. It turned out that during our reorganization of the filesystem, we’d moved where the locks live. We put in a symlink for the old name, and that’s what rrr was using… but it didn’t follow symlinks when locking. Once we updated the configuration to use the canonical name and not the link, everything worked.

Matthew said, “You solved a problem with strace!” I said, “I know!” Then we high fived and got back to work.

I was never happy with the symlinks we introduced during the filesystem reorganization, but I was happy when I eliminated the last one during day four cleanup!

the root partition filled up

We did all this work to keep the data partition capable of growth, and then / filled up. Ugh.

It turned out it was logs. This wasn’t too much of a surprise, but it was annoying. It was especially annoying because we decided early on that we’d just accept using journald for all our logging, and that should’ve kept us from going over quota.

It turned out that on the VM, something had installed a service I’d never heard of. Its job was to notice when something wanted to use the named syslog socket, and then start rsyslogd. Once that happened, we were double-logging a ton of stuff, and there was no log rotation configured. We killed it off.

We did other tuning to make sure we’d keep enough logs without running out of space, but this was the interesting part.

Future Plans

We have some. If nothing else, I’m dying to see my pull request 405 merged. (It’s the thing I wrote last year.) I have a bunch of half-done work that will be easier to finish after that. But the problem was: would this wait another year?

We finished our day — just before heading off to Belém — by talking about development between now and then. I said, “Look, I feel really demotivated and uninterested if I can’t continuously ship and review real improvements.” Andreas said, “I don’t want to see things change out from under me without understanding what happened.”

The five of us agreed to create a private PAUSE operations mailing list where we’d announce (or propose) changes and problems. We all joined, along with Neil Bowers, who is an important part of the PAUSE team but couldn’t attend Lisbon. With that, we felt good about keeping improvements flowing through the year. Robert has been shipping fixes to log noise. I’ve got a significant improvement to email handling in the wings. It’s looking like an exciting year ahead for PAUSE! (That said, it’s still PAUSE. Don’t expect miracles, okay?)

Thanks to our sponsors and organizers

The Perl Toolchain Summit is one of the most important events in the year for Perl. A lot of key projects have folks get together to get things done. Some of them are working all year, and use this time for deep dives or big lifts. Others (like PAUSE) are often pretty quiet throughout the year, and use this time to do everything they need to do for the year.

Those of us doing stuff need a place to work, and we need a way to get there and sleep, and we’re also pretty keen on having a nice meal or two together. Our sponsors and organizers make that possible. Our sponsors provide much-needed money to the organizers, and the organizers turn that money into concrete things like “meeting rooms” and “plane tickets”.

I offer my sincere thanks to our organizers: Laurent Boivin, Philippe Bruhat, and Breno de Oliveira, and also to our sponsors. This year, the organizers have divided sponsors into those who handed over cash and those who provided in-kind donations, like people’s time or paying attendees’ airfare and hotel bills directly. All these organizations and people are helping to keep Perl’s toolchain operational and improving. Here’s the breakdown:

Monetary sponsors: Booking.com, The Perl and Raku Foundation, Deriv, cPanel, Inc., Japan Perl Association, Perl-Services, Simplelists Ltd, Ctrl O Ltd, Findus Internet-OPAC, Harald Joerg, Steven Schubiger.

In kind sponsors: Fastmail, Grant Street Group, Deft, Procura, Healex GmbH, SUSE, Zoopla.

Breno especially should get called out for organizing this from five thousand miles away. You never could’ve guessed, and it ran exceptionally smoothly. Also, it meant I got to see Lisbon, which was a terrific city that I probably would not have visited any time soon otherwise. Thanks, Breno!