Sunday, March 26, 2017

How to prevent Wine from auto-running .exe files in Fedora

After installing Wine on Fedora 25 (for a reason that isn't terribly important here) I've noticed that I can run Windows executables straight from the bash command line:

$ file -b ~/.wine/drive_c/Program\ Files/Internet\ Explorer/iexplore.exe
PE32+ executable (GUI) x86-64, for MS Windows
$ !$

That iexplore.exe file is clearly not an ELF, yet the kernel successfully executes it.

Since the Windows ecosystem is known for its total absence of malware & ransomware, & Wine, in turn, is known for a military-grade, bulletproof sandbox (i.e., it provides no protection whatsoever), I find the notion of auto-running very difficult to reconcile w/ common sense.

How does it work

If the kernel was compiled w/ the CONFIG_BINFMT_MISC option, execve(2) (I'm simplifying here a little) gains the ability to delegate the execution of unknown binaries to external processes. The subsystem that manages the format → interpreter association table is called binfmt_misc.

binfmt_misc maintains its ephemeral database in the procfs. At boot time you mount the /proc/sys/fs/binfmt_misc directory & feed the /proc/sys/fs/binfmt_misc/register file w/ specially crafted text lines to create association entries (or rules as the systemd docs like to call them).
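Per the kernel docs, each line fed to the register file is a colon-separated record:

:name:type:offset:magic:mask:interpreter:flags

where type is either M (match the magic bytes at the given offset) or E (match a file extension); mask & flags are optional & may be left empty, as in the examples below.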

E.g., in the good old days before systemd, we would have put

echo :win:M:0:MZ::/usr/bin/wine: > /proc/sys/fs/binfmt_misc/register

somewhere in /etc/rc.local.

Such a command creates /proc/sys/fs/binfmt_misc/win file:

enabled
interpreter /usr/bin/wine
flags:
offset 0
magic 4d5a

where the offset & the magic 4d5a (MZ) refer to the 1st 2 bytes of a typical Windows executable:

$ hexdump -C -n10 iexplore.exe
00000000  4d 5a 40 00 01 00 00 00  06 00                    |MZ@.......|
0000000a

We can be even fancier & make an extension → interpreter association, e.g.:

# echo :xv:E::gif::/usr/bin/xv: > /proc/sys/fs/binfmt_misc/register
$ chmod +x slave-ship-daily-schedules.gif
$ ./!$

creates a .gif → xv entry & starts /usr/bin/xv slave-ship-daily-schedules.gif under the hood when we try to "run" the .gif image.

To delete all the entries, we write -1 to the /proc/sys/fs/binfmt_misc/status file, or, if we want to delete only a particular entry:

# echo -1 > /proc/sys/fs/binfmt_misc/xv
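An entry can also be disabled temporarily w/o deleting it, by writing 0 to its procfs file (1 enables it back):

# echo 0 > /proc/sys/fs/binfmt_misc/xv
# echo 1 > /proc/sys/fs/binfmt_misc/xv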

The systemd way

systemd doesn't allow us to mingle w/ the binfmt_misc subsystem directly. We ought to write the text lines in the same format binfmt_misc understands but put them in special .conf files, where a separate program, systemd-binfmt, expects to find them.

If we provide an rpm for our app, we should put a my-app.conf file into the /usr/lib/binfmt.d/ directory during the installation & run

/usr/lib/systemd/systemd-binfmt my-app.conf

in the post-install hook.
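In .spec terms, that's something like this sketch (my-app.conf being the hypothetical name from above):

%post
/usr/lib/systemd/systemd-binfmt my-app.conf || :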

The algo systemd-binfmt uses is fairly straightforward. When run w/o any command line args, it deletes all the existing entries & recreates the ones specified in the .conf files. If provided w/ the name of a .conf file (w/o a directory prefix!), it scans the file to add new entries.

An excerpt from src/binfmt/binfmt.c (commit 1539a65):

if (argc > optind) {
  int i;

  for (i = optind; i < argc; i++) {
    k = apply_file(argv[i], false);
    if (k < 0 && r == 0)
      r = k;
  }
} else {
  _cleanup_strv_free_ char **files = NULL;
  char **f;

  r = conf_files_list_nulstr(&files, ".conf", NULL, conf_file_dirs);
  if (r < 0) {
    log_error_errno(r, "Failed to enumerate binfmt.d files: %m");
    goto finish;
  }

  /* Flush out all rules */
  write_string_file("/proc/sys/fs/binfmt_misc/status", "-1", 0);

  STRV_FOREACH(f, files) {
    k = apply_file(*f, true);
    if (k < 0 && r == 0)
      r = k;
  }
}

It sounds fine, until we remember that a typical user (a) doesn't know much about the binfmt_misc kernel module, (b) doesn't know anything about the standalone systemd-binfmt program, for he uses the systemd-binfmt.service unit, as in

# systemctl restart systemd-binfmt

That unit (/usr/lib/systemd/system/systemd-binfmt.service) incorporates a handful of preconditions. The relevant ones:

ConditionDirectoryNotEmpty=|/lib/binfmt.d
ConditionDirectoryNotEmpty=|/usr/lib/binfmt.d
ConditionDirectoryNotEmpty=|/usr/local/lib/binfmt.d
ConditionDirectoryNotEmpty=|/etc/binfmt.d
ConditionDirectoryNotEmpty=|/run/binfmt.d

When we install Wine, it makes 2 entries in the systemd-binfmt DB. If we (a) remove the offending package wine-systemd that contains the evil /usr/lib/binfmt.d/wine.conf file & (b) dutifully restart the systemd-binfmt service--no binfmt_misc association entries get removed!

# rpm -qf /lib/binfmt.d/wine.conf
wine-systemd-2.3-1.fc25.noarch
# rpm --nodeps -e wine-systemd
# systemctl restart systemd-binfmt
# ls -l /proc/sys/fs/binfmt_misc/
total 0K
--w------- 1 root root 0 Mar 24 20:49 register
-rw-r--r-- 1 root root 0 Mar 25 20:48 status
-rw-r--r-- 1 root root 0 Mar 25 20:58 windows
-rw-r--r-- 1 root root 0 Mar 25 20:58 windowsPE

because systemd forbids systemd-binfmt to run at all, as all the conf directories are empty. The remedy is to run systemd-binfmt by hand:

# /usr/lib/systemd/systemd-binfmt
# ls -l /proc/sys/fs/binfmt_misc/
total 0K
--w------- 1 root root 0 Mar 24 20:49 register
-rw-r--r-- 1 root root 0 Mar 25 20:48 status

You should not manually delete the rpm w/ the evil .conf file, for dnf will reinstall the wine-systemd package during the next Wine update. The recommended systemd-style solution is to create a symlink to /dev/null named wine.conf in /etc/binfmt.d/ & then restart the systemd-binfmt service:

# ln -s /dev/null /etc/binfmt.d/wine.conf
# systemctl restart systemd-binfmt
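This time the unit starts (the symlink keeps /etc/binfmt.d non-empty), systemd-binfmt flushes all the rules anew & the masked wine.conf contributes nothing:

# ls /proc/sys/fs/binfmt_misc/
register  status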

Saturday, March 4, 2017

Bloatware comes when nobody's lookin'

If you write a Chrome extension for distribution outside of Google Webstore, you will inevitably need to pack the extension into a .crx file. It can be done either using the Chrome UI (the "Pack extension..." button) or "manually" via creating a .zip file & prepending a specially crafted header to it.

Obviously, the 2nd variant can be easily automated. If you have several extensions to maintain, a part of your build system that's responsible for the .crx generation could be extracted to some kind of a "plugin" that can be shared between all the extensions you maintain. In my case, the "plugin" consists of 2 files: a small sh script & a tiny makefile (that can be safely included into another makefile).

I was thinking about uploading those 2 files to npm (& hesitating a little, for if a package doesn't contain any JS code whatsoever, should it be on npm?) but then decided to check the registry first.

The npm registry indeed contains multiple packages for dealing w/ .crx files. One of the most popular is called "crx" (at the time of writing it had 192 ★ on Github). Glancing through its code I failed not to notice the unfortunate difference between the classic Unix approach to such a problem & the JavaScript one. Perhaps the latter is something you can only do ironically, unless you're a grand, hardcore troll.

Before I explain myself further, let's digress for a bit to explain what .crx files are & how to create them.

CRX Package Format

In the ideal world, Alice would package her extension (a set of .js & .json files) into a .zip file, would upload the file to her http://cs.a-nice-university.edu/~alice/ page & would tell her dear friend Bob about it.

In reality, the extension must be protected from tampering. If Eve, the evil sysadmin of the cs.a-nice-university.edu zone, entertains the idea of mingling w/ Alice's extension via adding a hot Bitcoin miner to it, Bob should be able to detect that before the extension gets installed.

One way of doing it is to generate a pair of public & private keys. You sign the extension w/ your private key & users check the downloaded .zip w/ your public key. The trouble is there is no standard way to "sign" a .zip file, for neither its metadata supports such a thing as an embedded signature, nor any of the "archive managers" would know what to do w/ such upgraded metadata.

This is where .crx files come in: they contain a copy of an RSA public key + a signature (an encrypted SHA1) of the .zip archive in question. This metadata simply gets prepended to the original .zip file.

http://gromnitsky.users.sourceforge.net/articles/crx.svg

E.g., Alice, after testing her Chrome extension, zips it to a file:

$ touch manifest.json
$ zip foo.zip !$

Next, she generates a private 1024-bit RSA key:

$ openssl genrsa 1024 > private.pem

Then she prepends the aforementioned block to foo.zip. As it's a multi-step process, she writes it down in a zip2crx script (this is a modified version of the script from developer.chrome.com):

#!/bin/sh

[ $# -ne 2 ] && {
    echo "Usage: `basename $0` file.zip private_key" 1>&2
    exit 1
}

zip=$1
key=$2

name=${zip%.zip}
crx="$name.crx"
pub="$name.pub"
sig="$name.sig"
zip="$name.zip"
trap 'rm -f "$pub" "$sig"' EXIT

# signature
openssl sha1 -sha1 -binary -sign "$key" < "$zip" > "$sig"

# public key
openssl rsa -pubout -outform DER < "$key" > "$pub" 2>/dev/null

# Take "abcdefgh" and return it as "ghefcdab"
byte_swap() {
    echo "${1:6:2}${1:4:2}${1:2:2}${1:0:2}"
}

crmagic_hex="4372 3234" # Cr24
version_hex="0200 0000" # 2
pub_len_hex=$(byte_swap $(printf '%08x\n' $(stat -c %s "$pub")))
sig_len_hex=$(byte_swap $(printf '%08x\n' $(stat -c %s "$sig")))

(
    echo "$crmagic_hex $version_hex $pub_len_hex $sig_len_hex" | xxd -r -p
    cat "$pub" "$sig" "$zip"
) > "$crx"

Her script signs the .zip, automatically derives a public key from private.pem, combines together all the necessary data for a .crx file consumer (like the lengths of the public key & the signature, &c) & concatenates the obtained header w/ the .zip:

$ ./zip2crx foo.zip private.pem
$ file foo.crx
foo.crx: Google Chrome extension, version 2

The resulting .crx is ready to be used in Chrome. She uploads it to her web page & sends to Bob (via email) her public key (possibly encrypted w/ his GPG public key, but we won't get into that).

Bob, having obtained Alice's public key, downloads foo.crx, extracts from it the embedded public key, the signature & foo.zip & checks (w/ Alice's public key) the validity of foo.zip.

(The whole verification process is a little too convoluted to present it here, but if you're interested, download this script & run it against the provided key:

$ ./crx2zip foo.crx alice.pub
RSA key                 1024-bit
Total header size:      306 bytes
Public key:             alice.pub
Signature status:       Verified OK

where alice.pub would be a public key in the DER-format.)
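Still, here is a rough sketch of what the verification boils down to. This is not the linked script, just an illustration; it assumes the CRX2 layout described above & a little-endian machine, for od(1) prints integers in host byte order:

#!/bin/sh
# usage: ./crx-verify foo.crx alice.pub
crx=$1; pub=$2

# the 16-byte header: "Cr24", version, public key length, signature length
pub_len=$(od -An -tu4 -j8  -N4 "$crx" | tr -d ' ')
sig_len=$(od -An -tu4 -j12 -N4 "$crx" | tr -d ' ')

# slice out the embedded public key, the signature & the original .zip
dd if="$crx" of=embedded.pub bs=1 skip=16 count=$pub_len 2>/dev/null
dd if="$crx" of=embedded.sig bs=1 skip=$((16+pub_len)) count=$sig_len 2>/dev/null
dd if="$crx" of=embedded.zip bs=1 skip=$((16+pub_len+sig_len)) 2>/dev/null

# the embedded key must be the one Alice gave us
cmp -s embedded.pub "$pub" || { echo 'public key mismatch' 1>&2; exit 1; }

# check the signature w/ the DER key converted to PEM
openssl rsa -pubin -inform DER -in "$pub" -outform PEM > pub.pem 2>/dev/null
openssl dgst -sha1 -verify pub.pem -signature embedded.sig embedded.zip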

If Eve, the evil sysadmin, did indeed modify foo.crx in any way, Bob is able to detect that, for despite that Eve can do all the same operations as Bob did, she doesn't have Alice's private key & thus is unable to re-sign the tampered foo.zip properly in an undetectable way. All she can hope for is that Bob, being a lazybones, won't bother to do the necessary checks before installing foo.crx into his browser.

The example is somewhat contrived, for if Alice decides to upload foo.zip to Google Webstore instead, by that very act she exempts herself from managing the crypto keys. The Webstore makes the key pair for her, does all the checks on all the subsequent updates of foo.zip & generates the correct .crx. The final extension is delivered to Bob via HTTPS, thus leaving Eve, the evil sysadmin, out of luck.

The JavaScript way

Back to our findings from the npm registry.

The crx package has a dual nature: it's a library & a CLI util. The distinguishing feature of this program is that "It is written purely in JavaScript and does not require OpenSSL!".

The author states it as if the mere act of depending on OpenSSL code is somehow inconvenient or morally repugnant. I must say he's not alone in his view. I may sympathise w/ the overly cautious approach in the case of maintaining a big farm of cloud services like Amazon does, but I cannot reconcile w/ this stance in the case of a local developer machine.

A user of a .crx file doesn't have to interfere w/ OpenSSL; it's solely the developer's job to create the proper .crx. To find a developer machine that doesn't have OpenSSL installed already is a challenge only a few of us could meet. (Certainly, we may be so bold; we may think of some poor souls who have to use old versions of Windows, but even they can always download Cygwin.)

Then there is the size. Our zip2crx example is 37 lines long. Anybody can read it (even indolent Bob) & in a minute understand what the script is doing. The crx package, on the other hand,

$ find node_modules/crx/{bin,src} -type f | xargs wc -l
  150 node_modules/crx/bin/crx.js
  294 node_modules/crx/src/crx.js
   47 node_modules/crx/src/resolver.js
  491 total

is ~13 times bigger.

The package also provides some kind of an artisan build-system surrogate: it "packs" the source code into different places depending on its command line options. It generates RSA keys.

I know that it's quite fashionable in the JavaScript world to write a replacement for Make every year, but it still amazes me why anyone would do the job in the courageous crx package way, instead of reusing the available, well-tested tools on their machine.

The Unix way

Everything except zip2crx is already here. Suppose you have foobar extension:

foobar/
├── Makefile
├── src/
│   ├── hello.js
│   └── manifest.json
└── zip2crx*

The src directory contains the source code for the extension. Makefile is a primitive, 18-line-long set of shell instructions:

pkg.name := foobar
out := _build
src := $(shell find src -type f)

mkdir = @mkdir -p $(dir $@)     # a canned recipe

.PHONY: crx
crx: $(out)/$(pkg.name).crx

$(out)/$(pkg.name).zip: $(src)
        $(mkdir)
        cd $(dir $<) && zip -qr $(CURDIR)/$@ *

%.crx: %.zip private.pem
        ./zip2crx $< private.pem

private.pem:
        openssl genrsa 2048 > $@

If you type make, it'll generate foobar.zip & foobar.crx in the _build/ directory. If you don't have an RSA key, Make will generate it for you too.

Why would you need JavaScript for that?

I'll conclude by quoting Doug McIlroy:

"A first engineering question to ask is: how often is one likely to have to do this exact task? Not at all often, I contend. It is plausible, though, that similar, but not identical, problems might arise. A wise engineering solution would produce--or better, exploit--reusable parts."

Wednesday, February 1, 2017

A JavaScript client for dreamwidth.org

Following the recent exodus from a hostile LiveJournal, I've noticed that I'm unable to perform 2 things from the command line in the good old DreamWidth:

  • Posting in markdown
  • Uploading an image

Turns out, DW still uses a subset of ancient LJ-like xmlrpc APIs. Instead of reusing some of the available LJ clients, I've decided to hack my own.

The API has a weird auth scheme. The 1st step is to obtain a "challenge", then md5 it w/ a password (yes, you've read it correctly: it's md5, folks! Sonja Henie's tutu.) & send it over w/ all the subsequent requests. The weird part comes when you realise that such a token lives only 60 sec, & for a long-lived session you need to obtain another generated token that doesn't actually work w/ the xmlrpc API & is useful only for manual scraping/posting purposes.
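The challenge-response computation itself appears to be the classic LJ scheme, md5hex(challenge + md5hex(password)); in sh terms (a sketch w/ coreutils, assuming $challenge came from an LJ.XMLRPC.getchallenge call):

$ pw=$(printf %s "$password" | md5sum | cut -d' ' -f1)
$ printf %s "$challenge$pw" | md5sum | cut -d' ' -f1

The resulting hex string is what goes into the auth_response parameter of the subsequent requests.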

The CLI client hides everything under 1 user-visible command dreamwidth-js. To upload an image to the DW cloud, type:

  $ dreamwidth-js img-upload < cat.jpg  

To make a new post:

$ dreamwidth-js entry-post-md
---
subject: On today's proceedings
tags: meeting, an exciting waste of time
security: friends
---

## Agenda for Mon. budget meeting

It was a dark and stormy night; the rain fell in torrents
^D

Wednesday, December 14, 2016

A MITM attack in the reign of Elizabeth I

This is what you end up w/ when you have encryption but no message authentication code:

"Babington and his associates, having laid such a plan [of the assassination of Elizabeth], as, they thought, promised infallible success, were impatient to communicate the design to the queen of Scots [...]

For this service, they employed Gifford, who immediately applied to Walsingham [Sir Francis, the Secretary of State], that the interest of that minister might forward his secret correspondence with Mary. Walsingham proposed the matter to Paulet [...] The letters, by Paulet's connivance, were thrust through a chink in the wall; and answers were returned by the same conveyance.

Babington informed Mary of the design laid for a foreign invasion, the plan of an insurrection at home, the scheme for her deliverance, and the conspiracy for assassinating the usurper, by 6 noble gentlemen [...] Mary replied, that she approved highly of the design; [...]

These letters [...] were carried by Gifford to secretary Walsingham; were decyphered by the art of Philips, his clerk; and copies taken of them.

Walsingham employed another artifice, in order to obtain full insight into the plot: He subjoined to a letter of Mary's a postscript in the same cipher; in which he made her desire Babington to inform her of the names of the conspirators. The indiscretion of Babington furnished Walsingham with still another means of detection, as well as of defence. That gentleman had caused a picture to be drawn, where he himself was represented standing amidst the six assassins; and a motto was subjoined, expressing that their common perils were the band of their confederacy."

(From The History of England by David Hume.)

At her trial, Mary denied the charges of the insurrection & the assassination, stating that she personally did not write those letters in such a form, for all her correspondence was controlled by 2 secretaries, who did the tedious process of (de|en)cryption on her behalf.

Saturday, October 29, 2016

BOM & exec

Abstract

Recently, I’ve stumbled upon a post about an accidental BOM in a shell script file. tl;dr for those who don’t read Ukrainian:

  1. A guy had a typical shell script that got corrupted by some Windows editor by prefixing the first line of the file (the shebang line) with the BOM.
  2. The shell was trying to execute the script.
  3. Everybody got upset.

I got curious why bash tries to run scripts w/ BOM in the first place. I’ve looked into the latest bash-4.3 & tcsh-6.19.00 on Fedora 24. Everywhere in the text below we draw the BOM w/ the replacement character (codepoint U+FFFD): �.

Some findings:

  • I was wrong about the bloody shebang lines for I thought that no shell ever reads them.
  • bash & tcsh don’t use libc properly & both invent their own rigmarole instead of using the provided routine.
  • bash is a mess! (Which is hardly a discovery.)

With shebang

If a file contains a valid shebang line, everything is easy: when you pass the file name to any of the execv, execve, execlp, etc. functions, the kernel steps in, reads the shebang line and executes the interpreter mentioned in the shebang, with the file in question as its argument.

This picture falls to pieces when the file contains the petty BOM, for the kernel fails to recognize that �#!/omg/lol should be (in our naïve mind) an equivalent of #!/omg/lol.

Both tcsh & bash have a backup plan for systems w/o shebang support in the kernel. Besides the obvious win32 candidate, tcsh lists 2 other systems: os390 & bs2000 (I wonder who on earth still has them). bash uses autoconf & therefore doesn’t have a hard-coded build configuration set. Unfortunately, I believe the autoconf test for shebang line support is bogus:

$ cat ac_sys_interpreter
#! /bin/cat
exit 69

Presumably, the thinking was: if you run it on any modern system, the kernel will run /bin/cat ac_sys_interpreter, which will just print the file, but on prehistoric time-sharing machines a simple-minded /bin/sh will execute it as a shell script & then you can test if the exit code == 69. (For why it would do so–read the next section.) The trouble is that an old system may very well have a /bin/sh that does its own shebang processing in case the kernel doesn’t, alas rendering the test useless & hence compiling bash w/o shebang support.

Without shebang

As long as the kernel flops at the invalid first line, the whole commotion becomes the case of a file w/o the shebang.

This is how we were all taught about interpreter files back in the day:

“the shell reads the command and tries to execlp the filename. Because the shell script is an executable file but isn’t a machine executable, an error is returned and execlp assumes that the file is a shell script (which it is). Then /bin/sh is executed with the pathname of the shell script as its argument.”

(from APUE, the 3rd ed)

E.g. suppose we have

$ cat demo2.sh
echo Діти, це їжачок!
ps -p $$                # print the shell the script is running under

If we run it, the shell

  1. checks if the script has executable bits (suppose it has)
  2. tries to exec the file
  3. which fails with ENOEXEC, for it’s not an ELF
  4. [a tcsh/bash dance]
  5. exec again but this time it’s /bin/sh with demo2.sh as an argument

The last item is important & may not be quite apparent, for if you have a csh-script

$ cp demo2.sh demo2.csh

you may expect that tcsh will not run it as an sh one:

$ tcsh -f
> ./demo2.csh
Діти, це їжачок!
   PID TTY          TIME CMD
102213 pts/21   00:00:00 sh

which is false, for tcsh follows the standards here.

Expectations vs. reality

APUE says a shell ought to use execlp, which in turn is supposed to do all the dirty work for us. As it happens, execlp does exactly that, at least in Linux glibc. Of course, both bash & tcsh ignore the advice & use their own schemes.

tcsh does a plain execv then, after failure, peeks into the first 2 bytes to see (w/ the help of iswprint(3)) if they are “printable”. Here, if tcsh (a) finds the file “acceptable” & (b) tries to run the script with the shebang line in it on a system w/o kernel support for such a line, it processes that line by itself.

If we poison our script with the BOM:

$ uconv --add-signature demo2.sh > demo2.bom.sh
$ chmod +x !$
$ head -c 37 !$ | hexdump -c
0000000 357 273 277   e   c   h   o     320 224 321 226 321 202 320 270
0000010   ,     321 206 320 265     321 227 320 266 320 260 321 207 320
0000020 276 320 272   !  \n                                            
0000025

tcsh doesn’t try to re-execv & aborts:

> ./demo2.bom.sh
./demo2.bom.sh: Exec format error. Wrong Architecture.

bash, on the other hand, tries to be more clever, failing spectacularly. After execve it goes on a journey of figuring out why the exec has failed. It:

  1. opens the file & analyses the shebang line! In the example above we didn’t have one, but if we did, bash would have produced a message:

    $ cat demo3.invalid.awk
    #!/usr/bin/awwwwwwwk -f
    BEGIN { print "this is awk" }
    
    $ ./demo3.invalid.awk
    sh: ./demo3.invalid.awk: /usr/bin/awwwwwwwk: bad interpreter: No such file or directory
    

    tcsh won’t do anything like that & will simply print ./demo3.invalid.awk: Command not found.

  2. checks if the file has an ELF header & tries to find out what is wrong w/ it;

  3. reports the “success” of the execution, if the file has the length of 0.

  4. checks if the file is “binary”. I use quotes here, for this is an example of how the good intentions don’t always turn into reality. Instead of a simple 2 bytes check, like it’s done in tcsh, bash reads 80 bytes & calls a certain check_binary_file() function that is a good example of why you should not blindly trust the comments in the code:

    /* Return non-zero if the characters from SAMPLE are not all valid
       characters to be found in the first line of a shell script.  We
       check up to the first newline, or SAMPLE_LEN, whichever comes first.
       All of the characters must be printable or whitespace. */
       
    int
    check_binary_file (sample, sample_len)
         char *sample;
         int sample_len;
    {
      register int i;
      unsigned char c;
       
      for (i = 0; i < sample_len; i++)
        {
          c = sample[i];
          if (c == '\n')
            return (0);
          if (c == '\0')
            return (1);
        }
       
      return (0);
    }
    

    Despite the comment's resolution that all of the characters must be printable or whitespace, the function returns 1 only when sample contains the NULL character. Our BOM-example doesn’t have one, thus the script runs, albeit with a somewhat cryptic error if you have no idea about the existence of the BOM in the file:

    $ ./demo2.bom.sh
    ./demo2.bom.sh: line 1: �echo: command not found
       PID TTY          TIME CMD
    115569 pts/26   00:00:00 sh
    

    What if we do have the NULL character?

    $ hexdump -c demo4.null.sh
    0000000   e   c   h   o      \0  \n   e   c   h   o     320 224 321 226
    0000010 321 202 320 270   ,     321 206 320 265     321 227 320 266 320
    0000020 260 321 207 320 276 320 272   !  \n   p   s       -   p       $
    0000030   $  \n                                                        
    0000032
    

    Here NULL is an argument to the echo command, which should be totally legal, but not w/ bash!

    $ ./demo4.null.sh
    sh: ./demo4.null.sh: cannot execute binary file: Exec format error
    

    Which of course wouldn’t be an issue had the file had the shebang line.

  5. If bash finds the file “acceptable” on a system w/o kernel support for the shebang line when the file indeed contains one, it does the same thing tcsh does: tries to process it by itself.

Conclusion

The most popular shells are too bloated, bizarre & have many undocumented features.

Some hints:

  • The shebang line isn’t necessary if you target /bin/sh, but the shell does less work if you provide it.
  • To view BOMs, use less(1) or hexdump(1).
  • To test for the BOM, use file(1), as shown below.
  • To remove the BOM manually, use M-x find-file-literally in Emacs.
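E.g., the file(1) check & a non-Emacs way to strip the BOM (a sketch; the exact wording of file’s output varies between versions, & the BOM is the 3 bytes ef bb bf, hence tail -c +4):

$ file demo2.bom.sh
demo2.bom.sh: UTF-8 Unicode (with BOM) text
$ tail -c +4 demo2.bom.sh > demo2.fixed.sh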

Sunday, October 2, 2016

How GNU Make's patsubst Function Really Works

Abstract

$(patsubst) is a GNU Make internal function that deals with text processing such as file name transformations. Despite having a very simple idea behind it, the peculiar way of its implementation leads to confusion & uncertainty for novice Make users. The function doesn’t return any errors or signal any warnings. It uses its own wildcard mechanism that bears no resemblance to the usual glob or regexp patterns.

For example, why doesn’t this transformation work?

$(patsubst src/%.js, build/%.js, ./src/foo.js)

We expect ./src/foo.js to be converted to build/foo.js, but patsubst leaves the file name untouched.

Extract method

Before we begin, we need a quick way of inspecting the results of patsubst evaluations. GNU Make doesn’t have a REPL. There are primitive hacks around it like ims:

$ rlwrap -S 'ims> ' ./ims
ims> . $(words will cost ten thousand lives this day)
7

that allow you to play with Make functions interactively, but they won’t help you to examine Make’s internals, for there is no way to view the source code of a particular function like you do in irb + the method_source gem, for example.

I’ve extracted patsubst function from Make 4.2.90 into a separate command gmake-patsubst. After you compile it, just run it from the shell as:

$ ./gmake-patsubst src/%.js build/%.js ./src/foo.js
./src/foo.js

providing exactly 3 arguments as you would do in makefiles, only using the shell quoting/splitting rules instead of Make’s (i.e., use a space as an argument separator instead of a comma).

(A side note about the extract: it’s ≈ 520 lines of imperative code! This is what you get when you program in C.)

If you want to read the algo itself, start from patsubst_expand_pat().

patsubst explained

Let’s recap first what patsubst does.

$(patsubst PATTERN, REPLACEMENT, TEXT)

The majority of its uses is to transform a list of file names. It operates like a map() over an iterable in JavaScript:

TEXT
    .split(/\s+/)
    .map( (file) => magic_transform(PATTERN, REPLACEMENT, file) )
    .join(' ')

It’s a pure function that returns a new result, leaving its arguments untouched. It works with supplied file names in TEXT as strings–it doesn’t do any IO.

The first thing to remember is that it splits TEXT into chunks before doing any substantial work further. All transforming is being done by individually applying PATTERN to each chunk.

For example, say we have a list of .jsx files that we want to transform into a list of .js files. You may think that the simplest way of doing it with patsubst would look like this:

$ ./gmake-patsubst .jsx .js "foo.jsx bar.jsx"
foo.jsx bar.jsx

Well, that didn’t work!

The problem here is that in this case patsubst checks if each chunk matches PATTERN exactly, as a full word, byte-to-byte. In regex terms this would look like ^\.jsx$. To prove this, we modify our pattern to be exactly foo.jsx:

$ ./gmake-patsubst foo.jsx .js "foo.jsx bar.jsx"
.js bar.jsx

Which works as we described but isn’t much of a help in real makefiles.

Thus patsubst has wildcard support. It is similar to the character % in Make pattern rules, which matches any non-empty string. For example, % in the %.jsx pattern could match foo against the foo.jsx text. The substring that % matches (foo in the example) is called a stem [1].

There could be only one % in a pattern. If you have several of them, only the first one would be the wildcard, all others would be treated as regular characters.

To return to our example with .jsx files, using % in both the PATTERN & REPLACEMENT arguments yields the desired result:

$ ./gmake-patsubst %.jsx %.js "foo.jsx bar.jsx"
foo.js bar.js

When REPLACEMENT contains a % character, it is replaced by the stem that matched the % in PATTERN.

Using the character % only in patterns is rarely useful, unless you want to replicate Make’s $(filter-out) function:

$ ./gmake-patsubst %.jsx "" "foo.jsx bar.js"
bar.js

Which is the equivalent of

$(filter-out %.jsx, foo.jsx bar.js)

If there is no % in PATTERN but there is % in REPLACEMENT, patsubst resorts to the case of a simple, exact substitution that we saw before.

$ ./gmake-patsubst foo.jsx % "foo.jsx bar.jsx"
% bar.jsx

Now, to return to our first example from Abstract:

$(patsubst src/%.js, build/%.js, ./src/foo.js)

Why didn’t it work out?

Putting together all we’ve learned so far, here is the high-level algorithm of what patsubst does:

  1. It searches for the % in PATTERN & REPLACEMENT. If found, it cuts off everything before %. Let’s call such cut-out parts the pattern-prefix (src/) & the replacement-prefix (build/). We are left with .js & (again) .js correspondingly. Let’s call those parts the pattern-suffix & the replacement-suffix.

  2. Splits TEXT into chunks. In our case there is nothing to split, for we have only 1 file name (a string w/o spaces): ./src/foo.js.

  3. If there is no % in PATTERN it does a simple substitution for each chunk & returns the result.

  4. If there indeed was % in PATTERN, it (for each chunk):

    4.1. (a) Makes sure that pattern-prefix is a substring of the chunk. In JavaScript it would look like:

     CHUNK.slice(0, PATTERN_PREFIX.length) === PATTERN_PREFIX
    

    It’s false in our example, for src/ != ./src/.

    (b) Makes sure that pattern-suffix is a substring of the chunk. In JavaScript it would look like:

     CHUNK.slice(-PATTERN_SUFFIX.length) === PATTERN_SUFFIX
    

    It’s true in our example, for .js == .js.

    4.2. If the subitem #4.1 is false (our case!) it just returns an unmodified copy of the original chunk.

    4.3. Iff [2] both (a) & (b) in the subitem #4.1 were indeed true, it cuts out the pattern-prefix & the pattern-suffix from the chunk, transforming it into a stem.

    4.4. Concatenates replacement-prefix + stem + replacement-suffix.

  5. Joins all the chunks (modified or unmodified) with a space & returns the result.

As you see, the algo is simple enough, but probably not exactly what you may have imagined after reading the Make documentation.

In conclusion, hopefully now you can explain the result of patsubst evaluation below (why only src/baz.js was transformed correctly):

$ ./gmake-patsubst src/%.js build/%.js "./src/foo.js src/bar.jsx src/baz.js"
./src/foo.js src/bar.jsx build/baz.js
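By the way, a possible fix for the leading ./ is a preliminary patsubst pass that strips it:

$ ./gmake-patsubst ./% % "./src/foo.js src/bar.jsx src/baz.js"
src/foo.js src/bar.jsx src/baz.js
$ ./gmake-patsubst src/%.js build/%.js "src/foo.js src/bar.jsx src/baz.js"
build/foo.js src/bar.jsx build/baz.js

In a makefile that would be a nested call: $(patsubst src/%.js,build/%.js,$(patsubst ./%,%,$(files))), where $(files) is a hypothetical variable holding the list.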

The nodejs version of the patsubst can be found here. Note that it’s a simple example & it must not be held as a reference.

PS. Here is an alternate version of this post that can be more readable on your phone.

  1. (For non-English speakers like yours truly.) The noun stem means several things: 1) (in linguistics) a form of a word after all affixes are removed; 2) (in botany) a slender structure that supports a plant.

  2. A quote from the Emacs manual: “‘Iff’ means ‘if and only if’. […] Try to avoid using this term in documentation, since many are unfamiliar with it and mistake it for a typo.”

Tuesday, September 20, 2016

Emacs 25.1 & hunspell

In Emacs 25 you don't ever need to modify the ispell-dictionary-alist variable explicitly. The Ispell package reads, during its initialization, hunspell's .aff files & automatically fills the variable w/ parsed values.

If you have at least hunspell + the hunspell-en-US dictionary installed, the minimum configuration that works regardless of the underlying OS is:

(setenv "LANG" "en_US.UTF-8")
(setq ispell-program-name "hunspell")
(setq ispell-dictionary "en_US")

hunspell cuts out the en_US part from the LANG env variable and uses it as a default dictionary. To check it outside of Emacs, run:

$ hunspell -D

If it says

LOADED DICTIONARY:
/usr/share/myspell/en_US.aff
/usr/share/myspell/en_US.dic

& waits for the user input from the stdin, then perhaps hunspell was configured properly.

ispell-dictionary, on the other hand, is an Ispell-only setting: Ispell uses it to start a hunspell session.

Iff the Ispell package has been initialized correctly, ispell-hunspell-dict-paths-alist variable should contain pairs like

("american" "/usr/share/myspell/en_US.aff")
("british" "/usr/share/myspell/en_GB.aff")

& ispell-dictionary-alist--the parsed values from the corresponding .aff files.

If ispell-hunspell-dict-paths-alist is nil, it means Ispell has either failed to parse the output of a `hunspell -D` invocation or failed to read the .aff files. The latter could occur if you use a native Windows version of Emacs w/ hunspell from Cygwin. If that is the case, you can always set the pairs manually:

(setq ispell-program-name "c:/cygwin64/bin/hunspell.exe")
(setq ispell-hunspell-dict-paths-alist
      '(("en_US" "C:/cygwin64/usr/share/myspell/en_US.aff")
        ("ru_RU" "C:/cygwin64/usr/share/myspell/ru_RU.aff")
        ("uk_UA" "C:/cygwin64/usr/share/myspell/uk_UA.aff")
        ("en_GB" "C:/cygwin64/usr/share/myspell/en_GB.aff")))

You'll need to restart Emacs after that.

The Apostrophe

If you can, try to update the hunspell dictionaries alongside the spell checker itself. The old versions lack the proper WORDCHARS setting inside the .aff files, which results in wrong results (haha) for words that contain the ' sign. For example, if your dictionaries are up to date, the word isn't must not confuse the spell checker:

$ echo isn\'t | hunspell
Hunspell 1.3.3
*

If you get this instead:

$ echo isn\'t | hunspell
Hunspell 1.3.3
& isn 9 0: sin, ins, ism, is, in, inn, ion, isl, is n
*

the dictionaries are no good & no Emacs will fix that.

Switching Dictionaries On The Fly

If you find yourself switching dictionaries depending on the Emacs input mode, use the Mule hooks to set the right dictionary automatically:

(setq ispell-dictionary "en_GB")

(defun my-hunspell-hook()
  "Set a local hunspell dictionary based on the current input method."
  (setq ispell-local-dictionary
        (cond
         ((null current-input-method)
          ispell-dictionary)
         ((string-match-p "ukrainian" current-input-method)
          "uk_UA")
         ((string-match-p "russian" current-input-method)
          "ru_RU")
         (t
          (user-error "input method %s is not supported"
                      current-input-method)))
        ))

(defun my-hunspell-hook-reset()
  (setq ispell-local-dictionary ispell-dictionary))

(add-hook 'input-method-activate-hook 'my-hunspell-hook)
(add-hook 'input-method-deactivate-hook 'my-hunspell-hook-reset)

Saturday, September 3, 2016

inn 2.6.0, Injection-Info & .POSTED

I usually refrain from writing anything in the style of "How to setup foo", for it's quite silly, but my recent adventures in gmane forced me to break the rule, for there is practically 0 info about the inn+newsstar "stack".

The setup is:

  1. Fedora 24.
  2. We create newsgroup gmane.test in an INN installed on a localhost (this INN is lonely & isn't connected to any other NNTP servers out there).
  3. We use newsstar (grab the .spec file from here) to connect to news.gmane.org machine, download articles from remote gmane.test group & post them to local gmane.test newsgroup.
  4. We use our favourite newsreader (Mutt + nntp patch) to read local gmane.test newsgroup.
  5. We post our message to local gmane.test newsgroup. Then we run newsstar again & it uses INN's spool of messages prepared to be sent away. newsstar grabs our message, connects to news.gmane.org machine & posts it.

In reality, the last step could be the hardest one, for after running newsstar we get the response that our article was rejected by news.gmane.org w/ the cryptic reason:

441 Can't set system Injection-Info: header

Let's begin w/ the

Step 1, creating the local newsgroup

After dnf install inn, run:

# systemctl enable innd
# systemctl start !$
# /usr/libexec/news/ctlinnd newgroup gmane.test

Step 2, configure newsstar

If you have built an rpm from the .spec above, copy a sample config

# mkdir /etc/newsstar
# cp /usr/share/doc/newsstar/sample_config/main.cf.sample !$/main.cf

& uncomment the lines corresponding to the default INN paths:

spool_dir       /var/spool/news
active_file     /var/lib/news/active
outgoing_dir    /var/spool/news/outgoing
articles_dir    /var/spool/news/articles

Next, create /var/lib/newsstar/newsrc.news.gmane.org file, add the desired remote newsgroup name & set the proper file ownership:

# echo 'gmane.test -1' > /var/lib/newsstar/newsrc.news.gmane.org
# chown news:news !$

Run newsstar under news user:

$ sudo -u news newsstar

It should download the last article from remote gmane.test group & post it to local gmane.test newsgroup.

In case of errors, look into journalctl & run newsstar w/ -vv CL options.

Steps 3-4, posting

Open /etc/news/newsfeeds & add the following lines to it:

news.gmane.org\
      :gmane.*,!junk,!control*\
      :Tf,Wnm:

Restart INN (systemctl restart innd).

Open your newsreader & post an article to gmane.test. It immediately appears in the local INN installation. Now we need to push it to the remote gmane server.

$ sudo -u news newsstar

& boom--newsstar says it moved the "bad" article to its graveyard, for gmane didn't like it. You may open the buried article & examine its contents. There are 2 things in it that forbid us from pushing it upstream:

  1. Injection-Info header.
  2. Path header that has something like .POSTED.localhost in the middle of it.

To make life more enjoyable, INN doesn't provide any obvious way either to not set the Injection-Info header or to edit Path properly. The only way I found is to use INN perl filters (yes, it's that bad). It's a little more challenging to do under Fedora, for the maintainer of the inn package has simultaneously decided (a) to compile INN w/ perl support & (b) to turn it off by not including a sample (filter_innd.pl) filter in the package (this is why INN cries in the logs that perl filters are disabled).

Grab the INN tarball, extract filter_innd.pl file to /usr/libexec/news/filter/ (just creating an empty file won't do) & mark it executable. Then open filter_nnrpd.pl (be careful, this is not the same file we've extracted from the tarball) & add to filter_post subroutine:

$hdr{'Injection-Info'} = undef; # stop nnrpd from adding the header
$modify_headers = 1;            # tell nnrpd that %hdr has been altered

return $rval;

Restart INN. At this point, this change to the article generation is enough for such servers as news.eternal-september.org, but if you post another article to gmane.test & run newsstar, the reply from gmane still fails to comfort:

441 Path: header shows a previous injection of the article

To satisfy gmane, we need to change Path header from:

Path: my-machine.example.com!.POSTED.localhost!not-for-mail

to:

Path: my-machine.example.com!not-for-mail

There is a setting for /etc/news/inn.conf, called addinjectionpostinghost, that reduces .POSTED.localhost to .POSTED, but it's still not enough. Again, edit the filter_post subroutine in /usr/libexec/news/filter/filter_nnrpd.pl to add:

$hdr{'Path'} = 'not-for-mail';

Restart INN, repost the message, rerun newsstar & go jogging in the park, because, congratulations, dude! you've wasted an hour of your life for nothing.

Sunday, July 31, 2016

Sexp navigation in `js-mode`

The sexp navigation in js-mode that comes w/ Emacs 25.1.1 is broken, so there is a quite popular smartparens minor mode. All I need from it is (sp-up-sexp) w/ which it's possible to write "move me to the beginning of the current expression", like

(define-key js-mode-map [(meta up)] (lambda () (interactive) (sp-up-sexp -1)))

or even

(defun my-js-expr-start ()
  "Requires smartparens."
    (while (sp-up-sexp -1)))

You don't have to turn smartparens-mode on to use that.

Being curious about the internals of smartparens, I was "severely shocked" at its size. The main part of the mode is 327KB or, according to Github, 7133 SLOC.

7133 lines for a facility of auto-closing quotes, parens & whatnot!

This world is doomed for it'll collapse of its own bloat.

Monday, July 18, 2016

Generating Dependencies Automatically with GNU Make & Browserify

Abstract

For any one to be required to use more force than is absolutely necessary for the job in hand is waste.
— Henry Ford

In the previous post about an example of a build system for JavaScript SPAs, we didn’t cover the topic of auto-discovering dependencies. While not being the most complex one, it oftentimes leads to a rather frustrating experience for the novice user.

In this post we’ll examine several ways of dependency management to aid Make to properly construct its dependency trees.

We’ll use a simple “app” consisting of 3 .js files:

foobar
├── bar.js
├── foo.js
└── main.js

where we’ll compile them from ES2015 to ES5 w/ Babel & combine them into 1 bundle w/ Browserify. The dependency tree for main.js looks very simple:

main.js → bar.js → foo.js

i.e., foo.js & bar.js are commonjs modules; main.js requires bar, which in turn requires foo.

The makefile that we’ll write will do 2 things:

  1. compile all .js files into a separate tree directory;
  2. create a bundle from the files in the separate tree directory.

The dependency problem arises when we modify, say, foo.js. Our build system should automatically recognize that the bundle from the step 2 became outdated & needs to be recreated.

The compilation

As usual, we want to support a single source tree with multiple builds (development & production). Thus it’s inconvenient to put the results of the compilation in the source directory. The simplest way of achieving this is to run Make from an output directory that != the source directory. For example:

the-plan
├── foobar/
│   ├── bar.js
│   ├── foo.js
│   ├── main.js
│   └── main.mk
└── _out/
    └── development/
        ├── .ccache/
        │   ├── bar.js
        │   ├── foo.js
        │   └── main.js
        └── main.js

where foobar is our source directory, _out is the output directory where we run Make, & _out/development/main.js is the bundle.

Let’s start with compiling .js files first. For simplicity we’ll assume that all the npm packages we need are installed in the global mode.

# npm -g i babel-cli babel-preset-es2015 browserify
$ cat ../foobar/main.mk
.DELETE_ON_ERROR:

src := $(dir $(lastword $(MAKEFILE_LIST)))
NODE_ENV ?= development
out := $(NODE_ENV)

.PHONY: compile
compile:

js.src := $(shell find $(src) -name '*.js' -type f)
js.dest := $(patsubst $(src)%.js, $(out)/.ccache/%.js, $(js.src))

ifeq ($(NODE_ENV), development)
BABEL_OPT := -s inline
endif
_BABEL_OPT := --preset $(shell npm -g root)/babel-preset-es2015 $(BABEL_OPT)

$(js.dest): $(out)/.ccache/%.js: $(src)/%.js
»   @mkdir -p $(dir $@)
»   babel $(_BABEL_OPT) $< -o $@

compile: $(js.dest)

If we run it in _out directory:

$ make -f ../foobar/main.mk
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//foo.js -o development/.ccache/foo.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js

$ make -f ../foobar/main.mk
make: Nothing to be done for 'compile'.

To recap what we wrote here:

  • The empty .DELETE_ON_ERROR: target tells Make to remove the produced target, for example, development/.ccache/foo.js in case of the compilation failure. You should always include this line into your makefiles, otherwise, in our case, it’s possible to end up with invalid development/.ccache/foo.js if Babel terminates unexpectedly due to a bug, user signal, etc. Recall that Make thinks about the success in terms of the exit status of a shell command.

  • We collected the names of our source files in js.src; js.dest contains the transformed paths so that

      ../foobar//foo.js
    

    becomes

      development/.ccache/foo.js
    
  • Notice how we wrote the header of the pattern rule:

      $(js.dest): $(out)/.ccache/%.js: $(src)/%.js
    

    by prepending it with $(js.dest) we limited the scope of it.

  • The default output build is ‘development’. We make sure that in the development mode we include source maps for the output .js files. I do not discuss here the command line options for Babel (& the kludge to force Babel to pick up a globally installed preset), for they are irrelevant to the topic.

Bundling

As we transpile the .js files into a mundane ES5, the bundle should be created from the results of the compilation, not from the original files.

$ awk '/bundle/,0' ../foobar/main.mk
bundles.src := $(filter %/main.js, $(js.dest))
bundles.dest := $(patsubst $(out)/.ccache/%.js, $(out)/%.js, $(bundles.src))

ifeq ($(NODE_ENV), development)
BROWSERIFY_OPT := -d
endif
$(bundles.dest): $(out)/%.js: $(out)/.ccache/%.js
»   @mkdir -p $(dir $@)
»   browserify $(BROWSERIFY_OPT) $< -o $@

compile: $(bundles.dest)

Again, if we run it in the output directory, the expected development/main.js appears:

$ make -f ../foobar/main.mk
browserify -d development/.ccache/main.js -o development/main.js

but the makefile falls short of detecting whether the bundle needs to be updated:

$ touch ../foobar/foo.js

$ make -f ../foobar/main.mk
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//foo.js -o development/.ccache/foo.js

Despite the fact that foo.js was indeed recompiled, our bundle remained intact, because we didn’t specify any additional dependency relationships except a forlorn $(out)/main.js: $(out)/.ccache/main.js in the pattern rule.

There are several ways to ameliorate this. We’ll start with

Method 1: The Manual

The addition of a single line to main.mk:

$(out)/main.js: $(js.src)

seems to be able to solve the problem. If you run Make again it sees that one of the prerequisites (foo.js) is newer than the bundle target.

Pros:

  • Fast
  • Easy to maintain in small projects
  • No dependencies on external tools

Cons:

  • Unmanageable in projects w/ a lot of small modules

The biggest impediment here is that the method doesn’t scale. Essentially you resign yourself to doubling the amount of dependency-management work: the 1st time you do it when you write your code, the 2nd time–during the reconstruction of the same dependency graph in the Makefile. This is waste.

It’s also prone to errors. For example, if you have several bundles:

example02/many-foobars
├── one/
│   └── main.js
├── two/
│   └── main.js
├── bar.js
├── foo.js
└── main.mk

then adding the same naïve lines:

$(out)/one/main.js: $(js.src)
$(out)/two/main.js: $(js.src)

to main.mk will lead you to the recompilation of 2 bundles even if you make a change only to 1 of them:

$ make -f ../many-foobars/main.mk
[...]

$ make -f ../many-foobars/main.mk -W ../many-foobars/one/main.js -tn
touch development/one/main.js
touch development/two/main.js

(The -W option means “pretend that the target has been modified”.)

Method 2: Automatic make depend

Instead of specifying the prerequisites manually, we can use an external tool that returns the dependency list, in the Make-compatible format, for each file. One such tool is make-commonjs-depend.

# npm -g i make-commonjs-depend
[...]
$ make-commonjs-depend development/.ccache/main.js
development/.ccache/main.js: \
  development/.ccache/bar.js
development/.ccache/bar.js: \
  development/.ccache/foo.js
development/.ccache/foo.js:
Pros:

  • Easy to maintain

Cons:

  • Could be slow
  • Requires an external tool
  • May rebuild already up-to-date targets

We can write a phony target “depend” & run make depend every time after we add/remove/rename any .js file & include the generated file into our Makefile.

We can also write a special target, $(out)/.ccache/depend.mk, whose recipe creates its target by running the make-commonjs-depend command. In this case, if we include $(out)/.ccache/depend.mk & Make sees that the target is out of date, it remakes $(out)/.ccache/depend.mk & then immediately restarts itself.

$ awk '/depend/,0' ../foobar/main.mk
$(out)/.ccache/depend.mk: $(js.dest)
»   make-commonjs-depend $^ > $@
»   @echo ========== RESTARTING MAKE ==========

include $(out)/.ccache/depend.mk

Here depend.mk file has all compiled .js files as prerequisites thus when any of them needs to be updated Make recompiles such .js files & reruns make-commonjs-depend.

$ rm -rf development
$ make -f ../foobar/main.mk
../foobar/main.makedepend.mk:41: development/.ccache/depend.mk: No such file or directory
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//foo.js -o development/.ccache/foo.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js
make-commonjs-depend development/.ccache/bar.js development/.ccache/foo.js development/.ccache/main.js > development/.ccache/depend.mk
========== RESTARTING MAKE ==========
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js
make-commonjs-depend development/.ccache/bar.js development/.ccache/foo.js development/.ccache/main.js > development/.ccache/depend.mk
========== RESTARTING MAKE ==========
browserify -d development/.ccache/main.js -o development/main.js

Although it works fine, the unnecessary rebuilds could be a pain in big projects. For example, Make doesn’t understand that transpiling main.js is not needed in the case of a bar.js update, but because make-commonjs-depend gives Make a preconfigured graph which states that main.js depends on bar.js, it dutifully rebuilds main.js.

$ touch ../foobar/bar.js

$ make -f ../foobar/main.mk
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js
make-commonjs-depend development/.ccache/bar.js development/.ccache/foo.js development/.ccache/main.js > development/.ccache/depend.mk
========== RESTARTING MAKE ==========
browserify -d development/.ccache/main.js -o development/main.js

On the other hand, if you don’t mind such remakes you may think it’s a small price to pay for having a fully automated dependency graph available after adding only 5 lines of code to the makefile.

Method 3: Variation of Tromey’s Way

The invention of another, more clever way of auto-discovering dependencies is generally attributed to Tom Tromey, who came up with it while working on the automake project in the second half of the 90s.

Instead of having name.mk targets that Make uses to restart itself, every file that needs dependency tracking writes its dependency tree after the compilation step, as a side effect of it.

Pros:

  • Fast
  • Easy to maintain
  • No dependencies on external tools (it uses Browserify)

For example,

$(out)/%.js: $(out)/.ccache/%.js
»   mkdir -p $(dir $@)
»   browserify $< -o $@
»   a-magic-command-to-generate-a-dependency-list > $(basename $<).d

The key here is to generate the prerequisite lists only for the bundles, not for every .js file, & to keep those prerequisite lists in .d files alongside the main.js file in the $(out)/.ccache directory. (The .d extension means nothing special, it’s just a naming convention.)

During the 1st run, when there are no .d files, Make knows nothing about them, so it compiles the .js files, then the bundles. The rule that creates a bundle also produces a corresponding .d file with the list of all the dependencies the bundle depends on.

At this stage we’re at the same point as if we didn’t have any dependencies for the bundles at all, but we can instruct Make to read those .d files at startup later on. In the next run, Make scans the .d files, looks into the provided dependency lists & sees if any of the bundles needs to be updated. After each update the corresponding .d file is updated as well.

The beauty of the method is that it doesn’t care if we reshuffle our code into a completely different set of .js files, as long as we don’t remove any files in the $(out)/.ccache directory; & even if we do remove that directory completely, it still doesn’t matter, for it’ll be the same as doing a clean build from scratch.

$ awk '/bundle/,0' ../foobar/main.mk
bundles.src := $(filter %/main.js, $(js.dest))
bundles.dest := $(patsubst $(out)/.ccache/%.js, $(out)/%.js, $(bundles.src))

define make-depend
@echo Generating $(basename $<).d
@printf '%s: ' $@ > $(basename $<).d
@browserify --no-bundle-external --list $< \
»   | sed s%.\*$<%% | sed s%$(CURDIR)/%% | tr '\n' ' ' \
»   >> $(basename $<).d
endef

ifeq ($(NODE_ENV), development)
BROWSERIFY_OPT := -d
endif
$(bundles.dest): $(out)/%.js: $(out)/.ccache/%.js
»   @mkdir -p $(dir $@)
»   browserify $(BROWSERIFY_OPT) $< -o $@
»   $(make-depend)

compile: $(bundles.dest)

-include $(bundles.src:.js=.d)

Before explaining the new code, let’s see it in action. We clean up $(out) & run make:

$ rm -rf development
$ make -f ../foobar/main.mk
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//foo.js -o development/.ccache/foo.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js
browserify -d development/.ccache/main.js -o development/main.js
Generating development/.ccache/main.d

The generated file development/.ccache/main.d should contain a new rule (a oneliner, w/o a recipe):

$ cat development/.ccache/main.d
development/main.js: development/.ccache/foo.js development/.ccache/bar.js  

Now if we update bar.js:

$ touch ../foobar/bar.js
$ make -f ../foobar/main.mk
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
browserify -d development/.ccache/main.js -o development/main.js
Generating development/.ccache/main.d

Voila! Make accurately recompiles only those files that need to be recompiled: bar.js & the bundle.

Looking into the body of the pattern rule, we see a line that contains the $(make-depend) string. It looks like we’re injecting the value of the variable make-depend into the recipe. This trick is called a canned recipe. make-depend is a multi-line REV (recursively expanded variable), which means that Make expands its value every time it has a need to. You may think of the make-depend variable as a macro or a function with a dynamic scope.

The purpose of the make-depend REV is to write a .d file that should contain a valid Make syntax.

If we run Browserify by hand on a compiled main.js file with --list command line option, Browserify prints a newline-separated list of main.js dependencies:

$ browserify --no-bundle-external --list development/.ccache/main.js
/home/alex/lib/writing/articles/data/gmake-autodeps/_out.blogger/s06/example01/_out/development/.ccache/foo.js
/home/alex/lib/writing/articles/data/gmake-autodeps/_out.blogger/s06/example01/_out/development/.ccache/bar.js
/home/alex/lib/writing/articles/data/gmake-autodeps/_out.blogger/s06/example01/_out/development/.ccache/main.js

This is obviously not valid Make syntax. We ought to:

  1. remove main.js from the list, otherwise we get a circular dependency problem;

  2. transform absolute paths to relative ones, for our pattern rules expect the latter.

This is what the make-depend macro does, not counting the generation of the rule header.

Of course, nothing prevents you from writing a small script that runs Browserify internally & formats the output accordingly. You can even take make-commonjs-depend & write a custom printer for it if you’re feeling brave.
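
E.g., a hypothetical depend.sh (the name & the interface are mine; the pipeline merely mirrors the macro above):

#!/bin/sh
# usage: depend.sh target src > src.d
target=$1 src=$2
# rule header, e.g. "development/main.js: "
printf '%s: ' "$target"
# list the deps, drop the bundle entry point itself (to avoid a
# circular dependency), make the paths relative, join into 1 line
browserify --no-bundle-external --list "$src" \
  | grep -vF "$src" \
  | sed "s%^$PWD/%%" \
  | tr '\n' ' '
echo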

Finally, as we’re generating .d files we should give Make a chance to read them in the next run. This is what

-include $(bundles.src:.js=.d)

line does. The :.js=.d suffix means “in every file name, substitute the .js extension with .d”, e.g., the expanded result looks like

-include development/.ccache/main.d

A minus sign prevents Make from printing a warning if development/.ccache/main.d is not found.
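
(The :from=to substitution reference isn’t specific to .d files; a tiny made-up illustration:)

# prints "foo.d subdir/main.d" during the read-in phase
names := foo.js subdir/main.js
$(info $(names:.js=.d))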

What if we rename foo.js into fool.js (& make the corresponding changes in the code)? In a poorly written build system this could break the build & require users to manually remove .d files.

$ mv ../foobar/foo.js ../foobar/fool.js
$ sed -i "s,'./foo','./fool'," ../foobar/bar.js
$ tree ../foobar/ --noreport
../foobar/
├── bar.js
├── fool.js
├── main.js
└── main.mk

$ make -f ../foobar/main.mk
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//fool.js -o development/.ccache/fool.js
browserify -d development/.ccache/main.js -o development/main.js
Generating development/.ccache/main.d

There were no errors of any kind, because the foo.js leftover happily resides in the $(out)/.ccache directory.

PS. Here is an alternate version of this post that may be more readable on your phone.

Sunday, June 26, 2016

Dump/restore

npm eats inodes for breakfast. A brand-new Angular2 project downloads > 40K files in node_modules just to get started (this includes babel).

Nobody counts inodes, unless for some reason they use a previous-generation filesystem (ext4) where inodes may suddenly become a scarce resource. The symptoms are rather common: there is plenty of free space, but you cannot create a new file.
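
The quick way to confirm the diagnosis is df’s -i flag, which reports inode usage instead of block usage (the device & the numbers here are, of course, made up):

$ df -i /home
Filesystem      Inodes   IUsed IFree IUse% Mounted on
/dev/sda3      3276800 3276800     0  100% /home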

So I decided to outwit myself by dump(8)ing /home to a network drive, reformatting /home using a smaller inode_ratio value to make sure inodes would be abundant, then restore(8)ing from the dump file.
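
Roughly, the procedure looks like this (the device & the NAS paths are made up; ideally the fs should be unmounted or remounted read-only before the dump):

# dump -0f /mnt/nas/home.dump /dev/sda3
# mkfs.ext4 -i 8192 /dev/sda3    # bytes-per-inode: the smaller, the more inodes
# mount /home && cd /home
# restore -rf /mnt/nas/home.dump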

It went fine, except for 1 strange thing. The 1st time I launched Chromium it complained that “Your preference file is corrupted or invalid”. Was it because I was dumping a live fs? It seems that everything else has been restored correctly.

Wednesday, June 8, 2016

An unhealthy tweaking

Being in a state of horror after discovering that perhaps in the next version of FVWM there will be no FvwmWharf module anymore, I did something long overdue: switched to FvwmButtons.

Being more or less satisfied w/ the result, I nevertheless feel that such an activity is a prime example of wasting time for nothing.

Thursday, May 26, 2016

enquire.c

Hey, look what I've found in the archives of comp.sources.misc!

Enquire: Everything you wanted to know about your C Compiler and Machine, but didn't know who to ask

One day Richard Stallman passed by, and mentioned that they needed such a program for GCC.

http://homepages.cwi.nl/~steven/enquire.html

Saturday, May 21, 2016

Creative Marketing

From Stevens' Portals in 4.4BSD paper:

"Ideas similar to portals have appeared in numerous operating systems over the past decade.

The 4.2BSD manual [Joy et al. 1983] defined the portal system call, with seven arguments, and a footnote that it was not implemented in 4.2BSD."

On a side note: what a beautiful idea Portals was. It's a shame that Linux has never caught up with BSD on it.

Thursday, April 7, 2016

Sunrise/Sunset Algo

If you need to implement sunrise/sunset calculations having only a latitude/longitude (& a particular date), go here.

I found that w/ zenith = 90.79 it gives the same rise/set numbers as googling for "<location> sunrise".

Also be careful w/ defining your sin/asin et al.: they should take degrees & return degrees. For example:

// degree-based wrappers around Math's radian-based functions
let sin = (d) => Math.sin(d * (Math.PI / 180))
let asin = (d) => Math.asin(d) * (180 / Math.PI)

I had to do the same while reviving an old timezone viewer, tktz, to make it work again on Fedora 23. Of course I forgot that asin() returns radians & was scratching my head over why I was getting phoney baloney numbers.

Monday, March 28, 2016

A State of Tcl

If you were writing a generator that gives a user several choices, like 'npm init', would you choose a GUI-based approach instead? Judging by the amount & the state of lightweight GUI libs for such a task, GUI was popular in the 1990s; since then everyone has been sticking to CLI mytool --opt1 --foo=bar solutions, for they are easy to write & support.

I thought that today, maybe, it's better to spin off a tiny node server & xdg-open a browser, where the user would click, clack & submit the form. If you're thinking about a GUI, do exactly that.

But then I remembered that once upon a time (many years ago) I loved Tcl!

Well. After playing w/ 8.6.4 for a day I say it's a complete disaster. I don't get why I ever thought of it as a nice language.

The idea was very simple: draw a dialog, user clicks, presses OK, the dialog spits some json & quits. Then another tool reads that json & does all the work that the generator should do.

I won't write about Ttk widgets; they are practically the same & have not changed a bit through all these years. 8.6.4 has fixed an annoying issue w/ HiDPI screens, but the X11 version of it contains a scaling bug: everything scales properly except the fonts, which stay tiny, as if you had a 75dpi monitor. The only remedy I've found is to inject this manual workaround:

if {[tk windowingsystem] == "x11"} {
    # force all fonts to have a platform-dependent default size
    # according to the DPI
    foreach idx [font names] { font configure $idx -size 0 }
}

The main problem w/ modern Tcl is (please don't laugh) its innate inability to properly deal w/ JSON. If you have a checkbox that sets its bound variable to 0 or 1, how would you represent that value in JSON? As a number? A string? How do you know that it's indeed a number? It says 1; I say it's a digit! But to Tcl it's a string. If you have an entry widget where the user can enter "1", would you leave it in JSON as a string or would you auto-convert it to an integer? If the user has entered "no", would you auto-convert it to false? What about nulls?

The sub-problem of the JSON representation nightmare is the total absence of any standard lib for converting Tcl dicts into JSON. There is tcllib's [json::dict2json], which is undocumented, & it's undocumented for a good reason: it doesn't work at all. The Tcl wiki contains a handful of inadequate snippets that are tied to a particular dataset & are not useful as general converters. The only half-working solution I've found is DKF's [tcl2json]. Try to get null w/ it, though.

tl;dr: forget about Tcl.