Monday, July 18, 2016

Generating Dependencies Automatically with GNU Make & Browserify


For any one to be required to use more force than is absolutely necessary for the job in hand is waste.
— Henry Ford

In the previous post about an example build system for JavaScript SPAs, we didn’t cover the topic of auto-discovering dependencies. While it’s not the most complex topic, it often leads to a rather frustrating experience for the novice user.

In this post we’ll examine several approaches to dependency management that help Make construct its dependency trees properly.

We’ll use a simple “app” consisting of 3 .js files:

├── bar.js
├── foo.js
└── main.js

where we’ll compile them from ES2015 to ES5 w/ Babel & combine them into 1 bundle w/ Browserify. The dependency tree for main.js looks very simple:

i.e., foo.js & bar.js are commonjs modules; main.js requires bar, which in turn requires foo (main.js → bar.js → foo.js).
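
To make the tree concrete, the modules can be as trivial as the following sketch. The bodies are made up; only the require chain matters. It’s condensed into 1 runnable file, w/ each module’s real form noted in a comment:

```javascript
// A condensed, runnable sketch of the 3 modules (bodies are invented;
// in the real app each lives in its own file, as the comments show).

// foo.js would be: module.exports = foo
function foo () { return 'foo' }

// bar.js would be: var foo = require('./foo'); module.exports = bar
function bar () { return 'bar(' + foo() + ')' }

// main.js would be: var bar = require('./bar')
console.log(bar())
```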

The makefile that we’ll write will do 2 things:

  1. compile all .js files into a separate tree directory;
  2. create a bundle from the files in the separate tree directory.

The dependency problem arises when we modify, say, foo.js. Our build system should automatically recognize that the bundle from step 2 has become outdated & needs to be recreated.

The compilation

As usual, we want to support a single source tree w/ multiple builds (development & production). Thus it’s inconvenient to put the results of the compilation in the source directory. The simplest way of achieving this is to run Make from an output directory that != the source directory. For example:

├── foobar/
│   ├── bar.js
│   ├── foo.js
│   ├── main.js
│   └──
└── _out/
    └── development/
        ├── .ccache/
        │   ├── bar.js
        │   ├── foo.js
        │   └── main.js
        └── main.js

where foobar is our source directory, _out is the output directory where we run Make, & _out/development/main.js is the bundle.

Let’s start with compiling the .js files first. For simplicity we’ll assume that all the npm packages we need are installed globally.

# npm -g i babel-cli babel-preset-es2015 browserify
$ cat ../foobar/

src := $(dir $(lastword $(MAKEFILE_LIST)))
NODE_ENV ?= development
out := $(NODE_ENV)

.DELETE_ON_ERROR:
.PHONY: compile

js.src := $(shell find $(src) -name '*.js' -type f)
js.dest := $(patsubst $(src)%.js, $(out)/.ccache/%.js, $(js.src))

ifeq ($(NODE_ENV), development)
BABEL_OPT := -s inline
_BABEL_OPT := --preset $(shell npm -g root)/babel-preset-es2015 $(BABEL_OPT)

$(js.dest): $(out)/.ccache/%.js: $(src)/%.js
»   @mkdir -p $(dir $@)
»   babel $(_BABEL_OPT) $< -o $@
endif

compile: $(js.dest)

If we run it in _out directory:

$ make -f ../foobar/
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//foo.js -o development/.ccache/foo.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js

$ make -f ../foobar/
make: Nothing to be done for 'compile'.

To recap what we wrote here:

  • The empty .DELETE_ON_ERROR: target tells Make to remove the produced target (for example, development/.ccache/foo.js) in case of a compilation failure. You should always include this line in your makefiles; otherwise, in our case, it’s possible to end up w/ an invalid development/.ccache/foo.js if Babel terminates unexpectedly due to a bug, a user signal, etc. Recall that Make judges success by the exit status of a shell command.

  • We collected the names of our source files in js.src; js.dest contains the transformed paths, so that each $(src)/X.js maps to $(out)/.ccache/X.js.

  • Notice how we wrote the header of the pattern rule:

      $(js.dest): $(out)/.ccache/%.js: $(src)/%.js

    by prefixing it w/ $(js.dest) we limited its scope: the rule applies only to the files listed in js.dest (this is what Make calls a static pattern rule).

  • The default build is ‘development’. We make sure that in the development mode we include source maps for the output .js files. I don’t discuss the command line options for Babel here (or the kludge to force Babel to pick up a globally installed preset), for they are irrelevant to the topic.
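
To illustrate the scoping w/ a standalone sketch (file names below are made up): w/o the list prefix, the %-rule would apply to anything under the cache directory; w/ it, Make gets a static pattern rule restricted to exactly the listed targets.

```make
# Hypothetical paths, for illustration only.
targets := development/.ccache/foo.js development/.ccache/bar.js

# A static pattern rule: applies to $(targets) & to nothing else.
$(targets): development/.ccache/%.js: ../foobar/%.js
	babel $< -o $@
```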


As we transpile the .js files into mundane ES5, the bundle should be created from the results of the compilation, not from the original files.

$ awk '/bundle/,0' ../foobar/
bundles.src := $(filter %/main.js, $(js.dest))
bundles.dest := $(patsubst $(out)/.ccache/%.js, $(out)/%.js, $(bundles.src))

ifeq ($(NODE_ENV), development)
$(bundles.dest): $(out)/%.js: $(out)/.ccache/%.js
»   @mkdir -p $(dir $@)
»   browserify $(BROWSERIFY_OPT) $< -o $@
endif

compile: $(bundles.dest)

Again, if we run it in the output directory, the expected development/main.js appears:

$ make -f ../foobar/
browserify -d development/.ccache/main.js -o development/main.js

but the makefile falls short of detecting whether the bundle needs to be updated:

$ touch ../foobar/foo.js

$ make -f ../foobar/
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//foo.js -o development/.ccache/foo.js

Despite the fact that foo.js was indeed recompiled, our bundle remained intact, because we didn’t specify any additional dependency relationships except a forlorn $(out)/main.js → $(out)/.ccache/main.js in the pattern rule.

There are several ways to ameliorate this. We’ll start with

Method 1: The Manual

Adding a single line to the makefile:

$(out)/main.js: $(js.src)

seems to solve the problem: if you run Make again, it sees that one of the prerequisites (foo.js) is newer than the bundle target.
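
Make merges prerequisite-only rules w/ the rule that supplies the recipe, so the whole of Method 1 can be sketched like this (using the variables & the pattern rule already defined in the makefile):

```make
# Extra prerequisites for the bundle; no recipe here.
$(out)/main.js: $(js.src)

# The recipe still comes from the static pattern rule defined earlier.
$(bundles.dest): $(out)/%.js: $(out)/.ccache/%.js
	@mkdir -p $(dir $@)
	browserify $(BROWSERIFY_OPT) $< -o $@
```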

Pros:
  • Easy to maintain in small projects
  • No dependencies on external tools
Cons:
  • Unmanageable in projects w/ a lot of small modules

The biggest impediment here is that the method doesn’t scale. Essentially, you end up doing the dependency-management work twice: the 1st time when you write your code, the 2nd time when you reconstruct the same dependency graph in the makefile. This is waste.

It’s also prone to errors. For example, if you have several bundles:

├── one/
│   └── main.js
├── two/
│   └── main.js
├── bar.js
├── foo.js

then adding the same naïve lines:

$(out)/one/main.js: $(js.src)
$(out)/two/main.js: $(js.src)

will lead to the recompilation of both bundles even if you make a change to only 1 of them:

$ make -f ../many-foobars/

$ make -f ../many-foobars/ -W ../many-foobars/one/main.js -tn
touch development/one/main.js
touch development/two/main.js

(The -W option means “pretend that the target has been modified”.)

Method 2: Automatic make depend

Instead of specifying prerequisites manually, we can use an external tool that returns the dependency list for each file in a Make-compatible format. One such tool is make-commonjs-depend.

# npm -g i make-commonjs-depend
$ make-commonjs-depend development/.ccache/main.js
development/.ccache/main.js: \
  development/.ccache/bar.js
development/.ccache/bar.js: \
  development/.ccache/foo.js

Pros:
  • Easy to maintain
Cons:
  • Could be slow
  • Requires an external tool
  • May rebuild already up-to-date targets

We can write a phony target “depend”, run make depend every time after we add/remove/rename any .js file, & include the generated file into our Makefile.
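
The manual variant might look like this (a sketch; ‘depend.mk’ is a placeholder file name, not the one used in this post):

```make
# Run `make depend` after adding/removing/renaming any .js file.
.PHONY: depend
depend: $(js.dest)
	make-commonjs-depend $^ > depend.mk

# The minus sign: don't complain if depend.mk doesn't exist yet.
-include depend.mk
```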

We can also write a special target $(out)/.ccache/, whose recipe creates its target by running the make-commonjs-depend command. In this case, if we include $(out)/.ccache/ & Make sees that the target is out of date, it remakes $(out)/.ccache/ & then immediately restarts itself.

$ awk '/depend/,0' ../foobar/
$(out)/.ccache/ $(js.dest)
»   make-commonjs-depend $^ > $@
»   @echo ========== RESTARTING MAKE ==========

include $(out)/.ccache/

Here the file has all the compiled .js files as prerequisites, thus when any of them needs to be updated, Make recompiles those .js files & reruns make-commonjs-depend.

$ rm -rf development
$ make -f ../foobar/
../foobar/ development/.ccache/ No such file or directory
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//foo.js -o development/.ccache/foo.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js
make-commonjs-depend development/.ccache/bar.js development/.ccache/foo.js development/.ccache/main.js > development/.ccache/
========== RESTARTING MAKE ==========
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js
make-commonjs-depend development/.ccache/bar.js development/.ccache/foo.js development/.ccache/main.js > development/.ccache/
========== RESTARTING MAKE ==========
browserify -d development/.ccache/main.js -o development/main.js

Although it works fine, the unnecessary rebuilds can be a pain in big projects. For example, Make doesn’t understand that transpiling main.js is not needed when bar.js is updated, but because make-commonjs-depend gives Make a preconfigured graph which states that main.js → bar.js, it dutifully rebuilds main.js.

$ touch ../foobar/bar.js

$ make -f ../foobar/
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js
make-commonjs-depend development/.ccache/bar.js development/.ccache/foo.js development/.ccache/main.js > development/.ccache/
========== RESTARTING MAKE ==========
browserify -d development/.ccache/main.js -o development/main.js

On the other hand, if you don’t mind such remakes you may think it’s a small price to pay for having a fully automated dependency graph available after adding only 5 lines of code to the makefile.

Method 3: Variation of Tromey’s Way

Another, cleverer way of auto-discovering dependencies is generally attributed to Tom Tromey, who came up w/ it while working on the automake project in the second half of the 90s.

Instead of having targets that force Make to restart itself, every file that needs dependency tracking writes out its dependency list after the compilation step, as a side effect of it.

Pros:
  • Easy to maintain
  • No dependencies on external tools (it uses Browserify)

For example,

$(out)/%.js: $(out)/.ccache/%.js
»   mkdir -p $(dir $@)
»   browserify $< -o $@
»   a-magic-command-to-generate-a-dependency-list > $(basename $<).d

The key here is to generate the prerequisite lists only for the bundles, not for every .js file, & to keep those prerequisite lists in .d files alongside the main.js file in the $(out)/.ccache directory. (The .d extension means nothing special; it’s just a naming convention.)

During the 1st run, when there are no .d files yet, Make knows nothing about them, so it compiles the .js files, then the bundles. The rule that creates a bundle also produces a corresponding .d file w/ the list of all the dependencies the bundle depends on.

At this stage we’re at the same point as if we didn’t have any dependencies for the bundles at all, but we can instruct Make to read those .d files at startup later on. In the next run, Make scans the .d files, looks at the provided dependency lists & sees if any of the bundles need to be updated. After each update, the corresponding .d file is updated as well.

The beauty of the method is that it doesn’t care if we reshuffle our code into a completely different set of .js files, as long as we don’t remove any files in the $(out)/.ccache directory. And if we do remove that directory completely, it still doesn’t matter, for it’ll be the same as doing a clean build from scratch.

$ awk '/bundle/,0' ../foobar/
bundles.src := $(filter %/main.js, $(js.dest))
bundles.dest := $(patsubst $(out)/.ccache/%.js, $(out)/%.js, $(bundles.src))

define make-depend
@echo Generating $(basename $<).d
@printf '%s: ' $@ > $(basename $<).d
@browserify --no-bundle-external --list $< \
»   | sed 's%.*$<%%' | sed 's%$(CURDIR)/%%' | tr '\n' ' ' \
»   >> $(basename $<).d
endef

ifeq ($(NODE_ENV), development)
$(bundles.dest): $(out)/%.js: $(out)/.ccache/%.js
»   @mkdir -p $(dir $@)
»   browserify $(BROWSERIFY_OPT) $< -o $@
»   $(make-depend)
endif

compile: $(bundles.dest)

-include $(bundles.src:.js=.d)

Before explaining the new code, let’s see it in action. We clean up $(out) & run make:

$ rm -rf development
$ make -f ../foobar/
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//foo.js -o development/.ccache/foo.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//main.js -o development/.ccache/main.js
browserify -d development/.ccache/main.js -o development/main.js
Generating development/.ccache/main.d

The generated file development/.ccache/main.d should contain a new rule (a one-liner, w/o a recipe):

$ cat development/.ccache/main.d
development/main.js: development/.ccache/foo.js development/.ccache/bar.js  

Now if we update bar.js:

$ touch ../foobar/bar.js
$ make -f ../foobar/
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
browserify -d development/.ccache/main.js -o development/main.js
Generating development/.ccache/main.d

Voilà! Make accurately recompiles only those files that need to be recompiled: bar.js & the bundle.

Looking at the body of the pattern rule, we see a line that contains the $(make-depend) string. It looks like we’re injecting the value of the make-depend variable into the recipe. This trick is called a canned recipe. make-depend is a multi-line REV (recursively expanded variable), which means that Make expands its value every time it needs to. You may think of the make-depend variable as a macro or a function w/ a dynamic scope.
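
In its generic form, the trick looks like this (a made-up example, unrelated to our makefile):

```make
# A multi-line REV defined w/ define/endef; $@ & $^ are not expanded
# here but later, per target, when the recipe actually runs.
define announce
@echo building $@
@echo from: $^
endef

# $(announce) pastes the canned recipe into the rule's own recipe.
%.txt: %.src
	$(announce)
	cp $< $@
```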

The purpose of the make-depend REV is to write a .d file that contains valid Make syntax.

If we run Browserify by hand on a compiled main.js file with --list command line option, Browserify prints a newline-separated list of main.js dependencies:

$ browserify --no-bundle-external --list development/.ccache/main.js

This is obviously not valid Make syntax. We ought to:

  1. remove main.js from the list, otherwise we get a circular dependency problem;

  2. transform absolute paths to relative ones, for our pattern rules expect the latter.

This is what the make-depend macro does, aside from generating the rule header.
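
To see the transformation in isolation, here it is replayed in plain shell on a made-up --list output (all the paths & the /home/me/_out prefix are assumptions):

```shell
# What `browserify --list` might print: absolute paths, 1 per line.
list='/home/me/_out/development/.ccache/foo.js
/home/me/_out/development/.ccache/bar.js
/home/me/_out/development/.ccache/main.js'

# The make-depend steps by hand: print the rule header, drop the line
# naming the entry file itself, relativize the paths, join w/ spaces.
depend=$(printf '%s: ' development/main.js
printf '%s\n' "$list" \
  | sed 's%.*main\.js%%' | sed 's%/home/me/_out/%%' | tr '\n' ' ')
echo "$depend"
# → development/main.js: development/.ccache/foo.js development/.ccache/bar.js
```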

Of course, nothing prevents you from writing a small script that runs Browserify internally & formats the output accordingly. You can even take make-commonjs-depend & write a custom printer for it, if you’re feeling brave.

Finally, as we’re generating .d files we should give Make a chance to read them in the next run. This is what

-include $(bundles.src:.js=.d)

line does. The :.js=.d suffix means “in every file name substitute the .js extension w/ .d”, e.g. the expanded result looks like

-include development/.ccache/main.d

A minus sign prevents Make from printing a warning if development/.ccache/main.d is not found.

What if we rename foo.js to fool.js (& make the corresponding changes in the code)? In a poorly written build system this could break the build & require users to manually remove .d files.

$ mv ../foobar/foo.js ../foobar/fool.js
$ sed -i "s,'./foo','./fool'," ../foobar/bar.js
$ tree ../foobar/ --noreport
├── bar.js
├── fool.js
├── main.js

$ make -f ../foobar/
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//bar.js -o development/.ccache/bar.js
babel --preset /opt/lib/node_modules/babel-preset-es2015 -s inline ../foobar//fool.js -o development/.ccache/fool.js
browserify -d development/.ccache/main.js -o development/main.js
Generating development/.ccache/main.d

There were no errors of any kind, because the foo.js leftover happily resides in the $(out)/.ccache directory.

PS. Here is an alternate version of this post that can be more readable on your phone.

Sunday, June 26, 2016


npm eats inodes for breakfast. A brand-new Angular2 project downloads > 40K files in node_modules just to get started (this includes babel).

Nobody counts inodes unless, for some reason, they use a previous-generation filesystem (ext4) where inodes may suddenly become a scarce resource. The symptoms are rather common: there is plenty of free space but you cannot create a new file.

So I decided to outwit myself by dump(8)ing /home to a network drive, reformatting /home using a smaller inode_ratio value to make sure inodes would be abundant, then restore(8)ing from the dump file.

It went fine, except for 1 strange thing. The 1st time I launched Chromium, it complained that “Your preference file is corrupted or invalid”. Was it because I was dumping a live fs? Everything else seems to have been restored correctly.

Wednesday, June 8, 2016

An unhealthy tweaking

Being in a state of horror after discovering that perhaps in the next version of FVWM there will be no FvwmWharf module any more, I did something long overdue: switched to FvwmButtons.

Being more or less satisfied w/ the result, I nevertheless feel that such an activity is a primary example of wasting time for nothing.

Thursday, May 26, 2016


Hey, look what I've found in the archives of comp.sources.misc!

Enquire: Everything you wanted to know about your C Compiler and Machine, but didn't know who to ask

One day Richard Stallman passed by, and mentioned that they needed such a program for GCC.

Saturday, May 21, 2016

Creative Marketing

From Stevens' Portals in 4.4BSD paper:

"Ideas similar to portals have appeared in numerous operating systems over the past decade.

The 4.2BSD manual [Joy et al. 1983] defined the portal system call, with seven arguments, and a footnote that it was not implemented in 4.2BSD."

On a side note: what a beautiful idea Portals was. It's a shame that Linux has never caught up with BSD on it.

Thursday, April 7, 2016

Sunrise/Sunset Algo

If you need to implement sunrise/sunset calculations having only a latitude/longitude (& a particular date), go here.

I found that w/ zenith = 90.79 it gives the same rise/set numbers as googling for "<location> sunrise".

Also be careful when defining your sin/asin et al., which should take degrees & return degrees. For example:

let sin = (d) => Math.sin(d * (Math.PI / 180))
let asin = (d) => Math.asin(d) * (180/Math.PI)
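
A quick standalone sanity check of such wrappers (the tolerances are arbitrary):

```javascript
// Degree-based wrappers, same as above.
let sin = (d) => Math.sin(d * (Math.PI / 180))
let asin = (d) => Math.asin(d) * (180 / Math.PI)

// sin(30°) is 0.5, & the round trip stays in degrees, not radians.
console.log(sin(30))        // ≈ 0.5
console.log(asin(sin(30)))  // ≈ 30
```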

I had to do the same while reviving an old timezone viewer, tktz, to force it to work again on Fedora 23. Of course I forgot that asin() returns radians & was scratching my head over why I was getting phoney baloney numbers.

Monday, March 28, 2016

A State of Tcl

If you were writing a generator that gives a user several choices, like 'npm init', would you choose a GUI-based approach instead? Judging by the amount & the state of lightweight GUI libs for such a task, GUI was popular in the 1990s & since then everyone has been sticking to cli mytool --opt1 --foo=bar solutions, for they are easy to write & support.

I thought that today, maybe, it's better to spin up a tiny node server & xdg-open a browser, where the user would click, clack & submit the form. If you think about GUI--do exactly that.

But then I remembered that once upon a time (many years ago) I loved Tcl!

Well. After playing w/ 8.6.4 for a day, I say it's a complete disaster. I don't get why I ever thought of it as a nice language.

The idea was very simple: draw a dialog, user clicks, presses OK, the dialog spits some json & quits. Then another tool reads that json & does all the work that the generator should do.

I won't write about Ttk widgets, they are practically the same & have not been changed a bit through all this years. 8.6.4 has fixed an annoying issue w/ HiDPI screens but X11 version of it contains a scaling bug, when everything scales properly except the fonts--they stay tiny, as if you have 75dpi monitor. The only remedy I've found it to inject this manual trigger:

if {[tk windowingsystem] == "x11"} {
    # force all fonts to have a platform-dependent default size
    # according to the DPI
    foreach idx [font names] { font configure $idx -size 0 }
}

The main problem w/ modern Tcl is (please don't laugh) its innate inability to properly deal w/ JSON. If you have a checkbox that sets its bound variable to 0 or 1, how would you represent that value in json? As a number? A string? How do you know that it's indeed a number? It says 1--I say it's a digit! But to Tcl it's a string. If you have an entry widget where the user can enter "1", would you leave it in json as a string or auto-convert it to an integer? If the user has entered "no", would you auto-convert it to false? What about nulls?

A sub-problem of the JSON representation nightmare is the total absence of any standard lib for converting Tcl dicts into JSON. There is tcllib's [json::dict2json], which is undocumented--& it's undocumented for a good reason, for it doesn't work at all. The Tcl wiki contains a handful of inadequate snippets that are tied to a particular dataset & are not useful as general converters. The only half-working solution I've found is DKF's [tcl2json]. Try to get null w/ it, though.

tl;dr: forget about Tcl.

Friday, February 26, 2016

Run Debian Chromium on Fedora

Just a quickie. If you have a bunch of Fedora 32bit VMs, then starting from March there won't be any new Chrome for them. Instead of ditching all those precious VMs, I thought of using a pre-compiled Chromium provided by Debian.

It actually works if you're willing to put up w/ the regular rigmarole of (a) finding out "what's the current version of Chromium?" & (b) proper deb → rpm conversions. Here is a makefile that automates all that.

Tuesday, February 23, 2016

JavaScript Tools with GNU Make

In the beginning

I speak what appears to me the general opinion; and where an opinion is general, it is usually correct.
— Mansfield Park

… there was no transcompiling in JavaScript world whatsoever & everyone who was programming back then for the front-side portion of the web was greatly admiring the fact of an instant gratification.

One day Sass appeared from the direction of the Rails campfire site, where folks were having a good time singing Kumbaya. Many looked closely at Sass & thought that adding a new level of abstraction was never a bad thing & quickly joined the movement. Humble designers tried to hold a convention, but their feeble voices were swamped by the music.

Shortly after, CoffeeScript came along. Although compiling it in the browser on-the-fly was possible, rarely did anybody do that, for it was considered uncivil & rude. The sites were lean & jQuery was still a King.

Then a guy who was competing w/ TJ for the number of packages pushed to the npm registry wrote Browserify. It became possible to write isomorphic code before everyone realized that such a word existed.

Starting a project came to mean a process of thinking about a build system.

$ mkdir ~/projects/money-printer
$ cd !$
$ touch █

Some transferred their Rake skills to their brave new SPA world, some tried to employ tiny shell scripts; the majority, though, was unsure what to do & how to behave properly.

Something had to happen, because however it used to be, it used to be somehow; it never happened yet that it was no-how. Thus several nice tools materialized. Although most of them work fine, occasionally I catch myself thinking that, perhaps, those tools are a little unnecessary for my needs.

Theories of galaxies

It turns out, people in different industries had similar problems for years. For instance, sometime during the end of the Middle Ages, Bell Labs engineers were bitterly complaining to each other about how they kept making the classic mistake of debugging an incorrect program because they had forgotten to compile the change.

One day Steve Johnson (the author of yacc) came storming into the office of his colleague. The colleague’s name was Stuart Feldman. After they pitied each other over the miscompilation misfortune, they sketched up on the board a general idea of how to prevent this kind of error in the future. The result of the sketching & the vigorous one-night coding is known as a program called Make.

Many years later Feldman would say: “One of the reasons Make was so simple was that it was only supposed to solve my problem, and help Steve too.”

Why use Make today? Or more importantly, why use Make as a build tool for writing SPAs?

I remember the first time I was forced to read a makefile. It was circa 2003, when the FreeBSD port I was installing failed to compile properly. While trying to resolve the problem, I discovered, to my surprise, that the whole FreeBSD ports system was written in a dialect of the Make language. I didn’t like its syntax, & the whole construction seemed overly complex, unintuitive, weird.

A modern JavaScript developer meets Make only if he tries to install some piece of software that is absent from the collection of packages for his favourite OS. Such software is usually written in old languages like C & uses the autoconf system to generate a bunch of makefiles. The process looks foreign, too exotic, ancient, outdated, uninteresting, not relevant.

Despite its alien nature, Make has managed to become a happy witness to the first mass-produced personal computers, to the eradication of Smallpox, to the invention of WWW, to the collapse of USSR, to the end of apartheid in South Africa, to the introduction of €, to DHH’s “blog in 15 minutes” video, to the end of Great Recession & to SpaceX drone ship landing.

Make today is typically twice as old as a typical web developer. If you learned Make, say, in 1986 (when the first edition of O’Reilly’s Managing Projects with GNU Make came out), you can still employ that knowledge to this very day, usually being the only person in the whole building who can fix some random broken makefile.

“This rubbish doesn’t compile, man.”
“What does it say?”
“Something about a missing rule for a target.”
(covers his face with hands)
“Call Jane.”

The only communities that rejected Make completely were Java & Go. For the former, you could use Make in theory, but in practice no one except you could then parse & maintain your makefile. For the latter, Rob Pike’s idea of a build process so rigorously constrained that no external tool is necessary proved to be a winner. Unfortunately, this is not the case for the JavaScript world.

Over the years, there were many attempts to “fix” Make by either enhancing its syntax, introducing incompatible features, or rethinking the whole idea. The majority of these attempts, if they didn’t fail outright, never acquired much advocacy. Even such cosy tools as Rake have never gained popularity beyond the language domain they are written in & belong to.

The JavaScript community is not unique. It follows the same waves of “its own way”, where the introduction of new revolutionary tools every 6 months inevitably leads to the psychological state called “tools fatigue”.

A dull speaker always talks long

Instead of yet another reintroduction to the particulars of some Make implementation, I’ll try to show how to use GNU Make in a small but real web application. There will be some shortcuts & simplifications along the way; I made them not because of Make limitations but to keep this text short.

A small notice: the text implies that the reader is very comfortable w/ the command line. If that doesn’t describe you to a T (for example, you come from a designer’s background), please stop reading now, go read The Unix Programming Environment book, practice for a month, then return. It’s the only book you’ll ever need to read to become a jolly good Unix user.

Our example is a web-based RSS feed filter. Suppose you want to subscribe to the Back to Work podcast but only listen to episodes where the hosts do not talk about Apple (or vice versa, Apple is your only interest). Or to find all the great shows where the guest was John Roderick? The b2w feed contains > 250 episodes; each episode contains extensive metadata. It’s not hard to programmatically search through the XML & generate a smaller feed for the RSS reader of your choice.

The app consists of 2 parts: (a) a web component, where a user specifies a URI for the feed alongside a couple of filtering patterns, & (b) a small server proxy, required mainly because of the same-origin policy.

The app source tree looks like this:

├── cli/
│   └── grepfeed*
├── client/
│   ├── index.html
│   ├── main.jsx
│   ├── moomintroll.svg
│   └── style.sass
├── lib/
│   ├── dom.js
│   ├── feed.js
│   └── u.js
├── mk/
│   ├──*
│   └── watchman.json
├── server/
│   └── index.js*
├── test/
│   ├── data/
│   │   ├── back2work.xml
│   │   ├── irishhistorypodcast.xml
│   │   └── simple.xml
│   └── test_feed.js
├── Makefile
├── package.json

The lib directory contains shared code, used both by the server component & the “client” side. The HTTP server doesn’t require any additional build step to function, but the web app is all about transcompiling:

  • it is written in a subset of ES2015 & we use Babel to transform the code to ES5;

  • instead of CSS we use a mix of hand-written Sass & plain CSS from the NProgress npm package; the result will be in 1 style.css file;

  • the browser-facing part of the code is written in JSX, thus it requires both an additional compilation step (JSX → ES2015 → ES5) and React libraries at runtime;

  • to employ some Node.js libraries in the browser & to automatically manage the dependencies, we use Browserify; the resulting app will be squeezed into 1 file, main.js;

  • to be able to focus on the coding, instead of typing the same commands in the terminal over & over again, we use Watchman to automatically run Make for us whenever any file that needs recompiling changes.

In real life you always have at least 2 different builds: one for the development phase only, another for a production deployment. In the ‘devel’ version we use source maps; in the production version, code minification.

It is not enough to be able to produce 2 builds; the goal is to have those 2 builds at the same time.

For example, depending on the value of the NODE_ENV environment variable, we can decide how to compile & where to put the output files. There is a whole separate history of having a single source tree but multiple builds per “platform”, which I won’t get into here. We are going to separate the source tree from the compiled output to a point where you may mark the source tree as read-only & not worry about ever accumulating random junk there over time. As an additional bonus, this eliminates the need for any ‘clean’ operations, which never work properly anyway.

This is how the result looks like:

├── development/
│   ├── client/
│   │   ├── index.html
│   │   ├── main.browserify.js
│   │   ├── moomintroll.svg
│   │   ├── style.css
│   │   └──
│   └── lib/
│       ├── dom.js
│       ├── feed.js
│       └── u.js
├── node_modules/ [438 entries exceeds filelimit, not opening dir]
├── production/
│   ├── client/
│   │   ├── index.html
│   │   ├── main.browserify.js
│   │   ├── moomintroll.svg
│   │   └── style.css
│   └── lib/
│       ├── dom.js
│       ├── feed.js
│       └── u.js
└── package.json

The contents of the development/client directory are what our server serves to the end-user. Files in development/lib can be safely ignored; they are temporary & placed there for Browserify, which uses them to produce the development/client/main.browserify.js bundle. You may notice that the contents of the development/client & production/client directories are different. As a matter of fact, the sizes are quite different too:

$ du -h --max-depth=1
3.3M ./development
444K ./production
62M ./node_modules
65M .

This is what you get after leaving out the embedded source maps from main.browserify.js & applying an aggressive JavaScript minification.

You can read the app source code in its repository. It won’t hurt if, before carrying on, you clone it & try to reproduce one of the builds by yourself.

Static assets

We have files in our app that don’t require any transformation: an .svg image of a rather happy Moomintroll & index.html. If we were living in the past & were compiling non-static assets in the same directory as their sources, our .svg & .html files would have required no attention from the build system. But we chose a different route: to move everything outside of the source tree directory.

Our first steps are:

  1. Grab a list of files in the source tree.
  2. Choose a destination for selected files.
  3. Write a rule that specifies how to copy data.
  4. Invoke the rule.

In the root of our source tree we create a file named Makefile. When you run Make, it searches the current directory for a file with that name & starts parsing it.

NODE_ENV ?= development
out := $(NODE_ENV)
src.mkf := $(lastword $(MAKEFILE_LIST))
src := $(dir $(src.mkf))

We defined 4 variables. If you have a NODE_ENV variable already defined in your environment, Make grabs its value, otherwise we set it to the string development. The out variable gets the value of NODE_ENV. We will use $(out) everywhere in our Makefile where we need to prefix a file destination.

src.mkf gets the (relative) path to the Makefile itself. Ignore for now how it manages to do that. During the definition of the src variable, we invoke Make’s internal $(dir) function to cut off the file name portion of the src.mkf value. $(dir) is very similar to dirname(1) or Node’s path.dirname().
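A quick shell sketch may help make the analogy concrete; $(dir) behaves like dirname(1) with the trailing slash kept (the ../grepfeed path is illustrative):

```shell
# $(dir ../grepfeed/Makefile) in Make yields "../grepfeed/";
# dirname(1) gives the same answer minus the trailing slash
path=../grepfeed/Makefile
dir="$(dirname "$path")/"
echo "$dir"    # ../grepfeed/
```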

Steps 1-2: grab source & destination

static.src := $(wildcard $(src)/client/*.html $(src)/client/*.svg)
static.dest := $(subst $(src), $(out), $(static.src))

Here we define another 2 variables: the source of our static assets & its destination.

$(wildcard glob1 glob2 ...) is a Make function that internally uses fnmatch(3) to get a list of files. If it doesn’t match anything it returns an empty string. Think of $(wildcard) as a primitive analogue of ls(1); it’s not recursive & uses glob patterns instead of regexps.
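The ls(1) analogy can be tried in the shell directly; this sketch fabricates a tiny tree (file names made up to mirror the app) & globs it the way $(wildcard) would:

```shell
cd "$(mktemp -d)"
mkdir client
touch client/index.html client/moomintroll.svg client/main.jsx
# the shell analogue of $(wildcard $(src)/client/*.html $(src)/client/*.svg);
# the .jsx file is left out, exactly as $(wildcard) would leave it out
echo client/*.html client/*.svg
```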

$(subst FROM, TO, TEXT) returns a new string with all matches of FROM replaced by TO. E.g. the next line in the Make language

$(subst lamb,lambda,Mary had a little lamb)

is equivalent to JavaScript

"Mary had a little lamb".replace(/lamb/g, "lambda")

The only difference is that $(subst) doesn’t support any regexps.

What we did in static.dest is replace /foo/bar/grepfeed/client/index.html with development/client/index.html.
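The same mapping can be sketched with sed(1) (the paths are illustrative); note that on the Make side it is plain text substitution, no regexps involved:

```shell
src=/foo/bar/grepfeed
out=development
f=$src/client/index.html
# the shell analogue of $(subst $(src),$(out),$(static.src))
echo "$f" | sed "s|$src|$out|"    # development/client/index.html
```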

Step 3: a rule

Now we can write a custom pattern rule that copies source files to their destination:

$(out)/%: $(src)/%
» mkdir -p $(dir $@)
» cp -a $< $@

The Make language is all about rules. You may think of rules as functions (or rather, procedures) that take 2 parameters: a target & a source. The body of a “function” is any number of shell commands prefixed by a TAB character (marked by » above & everywhere below in this text).

target: source1 source2 ...
» body

Terms “source” & “body” are non-standard; I use them in this section only for clarity. The official GNU Make terms are “prerequisites” (also “dependencies”) & “recipe”, e.g.

target: prerequisite1 prerequisite2 ...
» recipe

The % character in the target & in the source is a wildcard. The cryptic $@ & $< inside the body (recipe) of the rule are automatic variables. When (& only when) Make invokes the rule, it substitutes $@ with the target name & $< with the source name. There can be several sources (prerequisites, dependencies); $< means the first one, thus Make has another autovar, $^, that means “the whole list”.

The exact meaning of $@, $<, $^ often escapes newcomers. Here is a picture to help you remember which autovar corresponds to what

The problem with our rule is that it’s too broad. % in $(src)/% can match anything, not only client/index.html but client/main.jsx too. It’s possible to severely limit the applicability of a pattern rule by prefixing it with an explicit list of targets:

$(static.dest): $(out)/%: $(src)/%
» mkdir -p $(dir $@)
» cp -a $< $@
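The whole rule can be tried out in isolation. The sketch below fabricates a tiny source tree in a temporary directory & builds one target with it; the srcdir name & the file contents are made up, & GNU Make is assumed to be installed:

```shell
set -e
cd "$(mktemp -d)"
mkdir -p srcdir/client
echo '<html></html>' > srcdir/client/index.html

# recipes must start with a real TAB; printf's \t provides it
{
  printf 'out := development\n'
  printf 'src := srcdir\n'
  printf 'static.src := $(wildcard $(src)/client/*.html)\n'
  printf 'static.dest := $(subst $(src),$(out),$(static.src))\n'
  printf '$(static.dest): $(out)/%%: $(src)/%%\n'
  printf '\tmkdir -p $(dir $@)\n'
  printf '\tcp -a $< $@\n'
} > Makefile

make development/client/index.html
make development/client/index.html    # 2nd run: the target is up to date
```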

Step 4: invoking the rule

There is no way to explicitly invoke a pattern rule. But if we ask Make to create a target that matches some rule, Make checks for the match & internally transforms the pattern rule into several simple file-based rules.

If we run in some temporal directory

$ make -f ../grepfeed/Makefile development/client/index.html

(-f CLO tells Make what file to read instead of Makefile in the current directory.)

Make automatically creates this rule on-the-fly:

development/client/index.html: /foo/bar/grepfeed/client/index.html
»  mkdir -p development/client
»  cp -a /foo/bar/grepfeed/client/index.html development/client/index.html

Or in an action:

$ make -f ../grepfeed/Makefile development/client/index.html development/client/moomintroll.svg
mkdir -p development/client/
cp -a ../grepfeed//client/index.html development/client/index.html
mkdir -p development/client/
cp -a ../grepfeed//client/moomintroll.svg development/client/moomintroll.svg

but if we provide a file name that does not match any pattern, Make aborts:

$ make -f ../grepfeed/Makefile foo/index.html bar/moomintroll.svg
make: *** No rule to make target 'foo/index.html'.  Stop.

It’s a little inconvenient to pass a list of file names to Make directly, for the list can be huge. We can write another rule that has a target with an arbitrary name but whose sources are the actual list of the desired targets.

compile: development/client/index.html development/client/moomintroll.svg

or even better:

compile: $(static.dest)

This rule doesn’t have to have any recipe (body). Then we can execute Make as

$ make -f ../grepfeed/Makefile compile

One of the most attractive Make features is that if you write your rules with caution, prudence & tact, it won’t rebuild targets that are up-to-date.

$ rm -rf development
$ make -f ../grepfeed/Makefile compile
mkdir -p development/client/
cp -a ../grepfeed//client/index.html development/client/index.html
mkdir -p development/client/
cp -a ../grepfeed//client/moomintroll.svg development/client/moomintroll.svg

$ make -f ../grepfeed/Makefile compile
make: Nothing to be done for 'compile'.

$ touch ../grepfeed/client/moomintroll.svg
$ make -f ../grepfeed/Makefile compile
mkdir -p development/client/
cp -a ../grepfeed//client/moomintroll.svg development/client/moomintroll.svg

How does Make decide which target is up-to-date? In the simplest possible way: by checking the last modification time of a target & its source. Over the years there were multiple attempts to enhance this algo by looking, for example, at a message digest of a file, but nobody has bothered to implement them efficiently enough to be included in GNU Make.
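The comparison Make performs is essentially what the -nt (“newer than”) operator of test(1) does; a sketch (the -nt operator is supported by most shells’ test builtins, though it is not strictly POSIX):

```shell
cd "$(mktemp -d)"
touch target
sleep 1            # guarantee a later mtime for the source
touch source
# a target is outdated when any of its sources is newer
if [ source -nt target ]; then echo outdated; else echo up-to-date; fi
```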

The next Make appeal comes from the realization that by writing makefiles you’re constructing an acyclic graph of targets & their dependencies. So far we have written 1 node with 2 leaves, each of which is generated via a pattern rule.

The arrow means “depends on.”


Debugging facilities are where the original GNU Make distribution falls short. Partially this comes from the dynamic nature of Make, for it is impossible to fully answer what would happen without actually doing it.

There is no REPL of any kind. Some primitive hacks exist, for example, that could help mainly to experiment with internal Make functions like $(filter) or $(patsubst) without manually creating a makefile & running it.

ims> .pwd
ims> src = ../grepfeed/
ims> out = development
ims> static.src = $(wildcard $(src)/client/*.html $(src)/client/*.svg)
ims> . $(static.src)
../grepfeed//client/index.html ../grepfeed//client/moomintroll.svg
ims> . $(subst $(src), $(out), $(static.src))
  development/client/index.html  development/client/moomintroll.svg

There is also a forked version of GNU Make called remake, which can show additional information about targets, plus it contains a real debugger.


If you don’t want to install any additional tools, prepare to grieve.

The most annoying GNU Make misconduct is the inability to print the value of a variable without modifying makefiles. A clever trick, popularized by John Graham-Cumming, consists of adding a special pattern rule to a makefile. A modified version of the rule splits a variable value into separate lines; I find the trick most useful for displaying lists of files:

pp-%:
» @echo "$(strip $($*))" | tr ' ' \\n

Then, we can print the value of static.dest

$ make -f ../grepfeed/Makefile pp-static.dest
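A self-contained sketch of the trick (the js.dest value is made up; GNU Make is assumed; xargs -n1 stands in for the tr pipeline to sidestep shell quoting inside printf):

```shell
set -e
cd "$(mktemp -d)"
{
  printf 'js.dest := development/lib/dom.js development/lib/feed.js\n'
  # pp-% prints the value of the variable named after the "pp-" prefix,
  # one word per line
  printf 'pp-%%:\n'
  printf '\t@echo $($*) | xargs -n1\n'
} > Makefile
make pp-js.dest
```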

If you want to see which targets will be remade, try the -n & -t options together:

$ make -f ../grepfeed/Makefile -tn compile
touch development/client/index.html
touch development/client/moomintroll.svg


Before transforming any of the sass/js/jsx files we need to make sure we have all the tools installed. In package.json, among other things, we have:

  • node-sass
  • babel-cli
  • babel-preset-es2015
  • babel-preset-react
  • browserify
  • uglify-js

We can shift the responsibility onto the user by asking in a readme to “install those in the global mode” or we can be more polite & write a simple rule that checks that our packages are installed after each change in package.json.

export NODE_PATH = $(realpath node_modules)

node_modules: package.json
»   npm install --loglevel=error --depth=0 $(NPM_OPT)
»   touch $@

package.json: $(src)/package.json
»   cp -a $< $@

We have here 2 new simple file-based rules.

We need to copy package.json from the source code directory because if our output directory isn’t a descendant of the $(src), npm fails to find package.json at all, or picks up a wrong one.

Explicitly setting NODE_PATH is required for Babel because when the source code is in a different subtree, Babel searches for packages in node_modules directory downwards starting from a particular .js file.

Notice that we are telling Make to export NODE_PATH to child processes (such as the CLI wrapper of Babel) & that it’s not a regular variable, but a macro.

Regular Make variables, distinguished in their definition by := (as in foo := bar), have nothing interesting about them; they work exactly as you expect, by setting the value of the left-hand side immediately. Macros, on the other hand, create only a stub that is not evaluated until the macro is accessed.

When we run Make for the first time, the directory node_modules may not exist yet, thus had we defined NODE_PATH as a regular variable, its value would have been an empty string & Babel would have failed to find any Node.js modules. But when it’s a macro, Make evaluates it when somebody (a child process) tries to read it. At that point node_modules definitely exists, & Make expands it to a full path with the help of its internal $(realpath) function.

To test all this, run Make in the $(out) directory:

$ make -f ../grepfeed/Makefile node_modules
cp -a ../grepfeed//package.json package.json
npm install --loglevel=error --depth=0 --cache-min 99999999

> node-sass@3.4.2 install /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/node-sass
> node scripts/install.js

Binary downloaded and installed at /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/node-sass/vendor/linux-ia32-47/binding.node

> spawn-sync@1.0.15 postinstall /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/spawn-sync
> node postinstall

> node-sass@3.4.2 postinstall /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/node-sass
> node scripts/build.js

` /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/node-sass/vendor/linux-ia32-47/binding.node ` exists. 
 testing binary.
Binary is fine; exiting.
grepfeed@0.0.1 /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out
├── babel-cli@6.5.1 
├── babel-polyfill@6.5.0 
├── babel-preset-es2015@6.5.0 
├── babel-preset-react@6.5.0 
├── browserify@13.0.0 
├── node-sass@3.4.2 
└── uglify-js@2.6.2 

touch node_modules

and again, to check if we indeed wrote 2 rules properly:

$ make -f ../grepfeed/Makefile node_modules
make: 'node_modules' is up to date.

Npm gets slower & slower with every release so this could take a while. There is NPM_OPT in the node_modules recipe; use it to pass any additional options to npm, for example:

$ make -f ../grepfeed/Makefile node_modules NPM_OPT="--cache-min 99999999"


To compile .sass files to .css we employ the same algo we used for static assets.

node-sass := node_modules/.bin/node-sass
SASS_OPT := -q --output-style compressed
ifeq ($(NODE_ENV), development)
SASS_OPT := -q --source-map true
endif
sass.src := $(wildcard $(src)/client/*.sass)
sass.dest := $(patsubst $(src)/%.sass, $(out)/%.css, $(sass.src))

$(out)/client/%.css: $(src)/client/%.sass
»   @mkdir -p $(dir $@)
»   $(node-sass) $(SASS_OPT) --include-path node_modules -o $(dir $@) $<

$(sass.dest): node_modules

compile: $(sass.dest)

Here we use NODE_ENV for the first time to modify the compiler behaviour: to include source maps in the development mode & to turn on the minification in the production mode.

The output .css files depend on the node_modules target, which depends on package.json, which means that if we modify the latter, Make considers our .css files outdated & remakes them.

You may not like the useless rebuild every time you fix a typo in package.json, but it ensures that after updating the version string of a css package (like NProgress), you won’t end up with old code in $(out).

$ grep import ../grepfeed/client/style.sass
@import "nprogress/nprogress"

The usage of NODE_ENV for turning minification on/off through modifying node-sass command line options seems easy & convenient, but it’s inherently un-Unix: the minification step should be done by a separate command.

A more civilized way to do conversions would be to use chains of implicit rules. In a production mode

.css → .uncompressed_css → .sass

& in a development mode

.css → .sass

I’ll leave this to you as homework. For tips on how to achieve that, read the section about the .js files transformation below.

ES2015 & Browserify

As our app is written in a subset of ES2015 we need another pattern rule to convert .js files to ES5. The lib directory is the location of the ES2015 code.

babel := node_modules/.bin/babel
ifeq ($(NODE_ENV), development)
BABEL_OPT := -s inline
endif
js.src := $(wildcard $(src)/lib/*.js)
js.dest := $(patsubst $(src)/%.js, $(out)/%.js, $(js.src))

$(js.dest): node_modules

$(out)/%.js: $(src)/%.js
»   @mkdir -p $(dir $@)
»   $(babel) --presets es2015 $(BABEL_OPT) $< -o $@

There is no sign of minification because the destination files in $(out)/lib are temporary.

The same goes for .jsx files (of which we have only 1) in client directory.

jsx.src := $(wildcard $(src)/client/*.jsx)
jsx.dest := $(patsubst $(src)/%.jsx, $(out)/%.js, $(jsx.src))

$(jsx.dest): node_modules
# we use .jsx files only as input for browserify
.INTERMEDIATE: $(jsx.dest)

$(out)/client/%.js: $(src)/client/%.jsx
»   @mkdir -p $(dir $@)
»   $(babel) --presets es2015,react $(BABEL_OPT) $< -o $@

One thing is different here: the temporary output is placed in $(out)/client, the directory that our server uses as its root for static files. After all compilation steps are finished there should be no temporary files left. Make doesn’t know that the result of the .jsx transcompiling is a temporary link in the chain, thus we mark such targets as intermediate by adding them as dependencies to a special .INTERMEDIATE target. You’ll see shortly what happens to such targets.

This rule contains another shortcut: the transformation from JSX to ES5 is done in 1 step. Ideologically this is Not Right, because after the JSX conversion we should get plain ES6 which we could then convert to ES5 code.

To get a final bundle with the name $(out)/client/main.browserify.js we write a simple file-based rule:

browserify := node_modules/.bin/browserify
browserify.dest.sfx := .es5
ifeq ($(NODE_ENV), development)
browserify.dest.sfx := .js
endif

bundle1 := $(out)/client/main.browserify$(browserify.dest.sfx)
$(bundle1): $(out)/client/main.js $(js.dest)
»   @mkdir -p $(dir $@)
»   $(browserify) $(BROWSERIFY_OPT) $< -o $@


Notice how we manually add the list of files in $(out)/lib to the bundle dependencies & that our modest js target is empty for now.

This rule contains a catch: in the development mode, the chain is simple

main.browserify.js → .js deps

where everything is compiled with source maps. In the production mode, there is an additional link:

main.browserify.js → main.browserify.es5 → .js deps

where main.browserify.js, despite its name, is created not by browserify but by a separate uglifyjs program.

# will be empty in development mode
es5.dest := $(patsubst %.es5, %.js, $(bundle1))

UGLIFYJS_OPT := --screw-ie8 -m -c
%.js: %.es5
»   node_modules/.bin/uglifyjs $(UGLIFYJS_OPT) -o $@ -- $<

ifneq ($(browserify.dest.sfx), .js)
js: $(es5.dest)
# we don't need .es5 files around
.INTERMEDIATE: $(bundle1)
else
js: $(bundle1)
endif

compile: js

The js target gets its prerequisites depending on the NODE_ENV value.

The whole dependency graph of the JavaScript files looks like this (ellipse-shaped nodes are intermediates; dashed ones are production-mode only):

Now we can test the production mode:

$ NODE_ENV=production make -f ../grepfeed/Makefile js
node_modules/.bin/babel --presets es2015  ../grepfeed//lib/feed.js -o production/lib/feed.js
node_modules/.bin/babel --presets es2015  ../grepfeed//lib/dom.js -o production/lib/dom.js
node_modules/.bin/babel --presets es2015  ../grepfeed//lib/u.js -o production/lib/u.js
node_modules/.bin/babel --presets es2015,react  ../grepfeed//client/main.jsx -o production/client/main.js
node_modules/.bin/browserify  production/client/main.js -o production/client/main.browserify.es5
node_modules/.bin/uglifyjs --screw-ie8 -m -c -o production/client/main.browserify.js -- production/client/main.browserify.es5
rm production/client/main.browserify.es5 production/client/main.js

$ NODE_ENV=production make -f ../grepfeed/Makefile js
make: Nothing to be done for 'js'.
$ touch ../grepfeed/lib/u.js
$ NODE_ENV=production make -f ../grepfeed/Makefile js
node_modules/.bin/babel --presets es2015  ../grepfeed//lib/u.js -o production/lib/u.js
node_modules/.bin/babel --presets es2015,react  ../grepfeed//client/main.jsx -o production/client/main.js
node_modules/.bin/browserify  production/client/main.js -o production/client/main.browserify.es5
node_modules/.bin/uglifyjs --screw-ie8 -m -c -o production/client/main.browserify.js -- production/client/main.browserify.es5
rm production/client/main.browserify.es5 production/client/main.js

Although we didn’t explicitly put an rm command in any of the recipes, Make automatically removes targets that are prerequisites of the .INTERMEDIATE target.
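The removal can be reproduced with a toy chain of implicit rules (the file names & the tr step are made up; GNU Make is assumed):

```shell
set -e
cd "$(mktemp -d)"
echo hello > main.src
{
  printf '%%.es5: %%.src\n'
  printf '\tcp $< $@\n'
  printf '%%.js: %%.es5\n'
  printf '\ttr a-z A-Z < $< > $@\n'
  # mark the middle link of the chain as intermediate
  printf '.INTERMEDIATE: main.es5\n'
  printf 'js: main.js\n'
} > Makefile
make js    # note the automatic "rm main.es5" at the end of the output
cat main.js
```

As a side note, main.es5 would be treated as intermediate even without the explicit .INTERMEDIATE line, because it is produced by a chain of implicit rules; the explicit marking just makes the intent visible.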

A watched pot never boils

2 of the 4 stand-alone utils we use in our makefile have a built-in “watch” feature & browserify has a separate (& quite popular) watchify wrapper. It’s hard to imagine a project that needs to recompile only the files that watchify watches, or a project that uses only node-sass. It’s also hard to imagine a reason why you would include the “watch” feature in your CLI program in the first place, knowing very well that your tool will never be the only third-party tool in a project.

The only valid reason I can think of is a conscientious endeavour to set a world benchmark for software bloat, but we won’t get into that.

Instead of running 3 separate processes in parallel we will use watchman. It will watch the files we specify & run Make on every change automatically. If we have written the Makefile properly, only the files that are outdated (in the $(out) directory) will be recompiled.

To make this more developer-friendly we can play a confirmation sound when Make finishes without errors; run a separate terminal window for watchman output & raise the window in case of a compilation error.

The watchman configuration is not exactly the easiest one, for there are 2 modes of providing it: from stdin in a JSON format or via command line options. The latter is more limited & the former is too verbose. We’ll use the JSON mode only to escape from the shell quoting issues.

We add another target to the makefile:

watch:
»   watchman trigger-del $(src) assets
»   @mkdir -p $(out)
»   m4 -D_SRC="$(src)" -D_TTY=`tty` \
»   »   -D_OUT_PARENT=`pwd` \
»   »   -D_MAKE="$(MAKE)" -D_MK="$(src.mkf)" \
»   »   $(src)/mk/watchman.json | watchman -n -j

The m4 macro processor is absolutely not required, you can replace it with any other macro processor you like; we use it only to transform the $(src)/mk/watchman.json file:

[
    "trigger", "_SRC", {
        "expression": ["anyof",
            ["pcre", "^(client|lib)/[^#]+$", "wholename"],
            ["pcre", "^package.json$", "wholename"]
        ],
        "name": "assets",
        "command": [
            "_MAKE", "-C", "_OUT_PARENT", "-f", "_MK", "compile"
        ],
        "append_files": false,
        "stderr": ">_TTY",
        "stdout": ">_TTY"
    }
]

I won’t describe what all this means, please refer to the watchman manual for the particulars. What we need to note is that on each file change, watchman will run the $(src)/mk/ script that in turn will run Make as

make -C <the output directory> -f $(src.mkf) compile

where -C instructs Make to chdir before parsing the makefile provided in the -f CLO.
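A minimal demonstration of the -C & -f combination (the out/srcdir names are made up; GNU Make is assumed):

```shell
set -e
top=$(mktemp -d)
mkdir "$top/out" "$top/srcdir"
# a makefile whose only target prints the current directory
printf 'where:\n\t@pwd\n' > "$top/srcdir/Makefile"
# make chdirs to $top/out first, then reads the makefile relative to it
make -C "$top/out" -f ../srcdir/Makefile where
```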

$(src)/mk/ is closely tied to my Fedora installation, so you will need to modify it for your machine:


#!/bin/sh
# See `watch` target in Makefile.

# clear xterm history
printf "\033c"
# what is to be done
printf "\033[0;33m%s\033[0;m\n" "$*"

# run make
"$@"
ec=$?

if [ $ec -eq 0 ]; then
    play $media/message.oga 2> /dev/null
else
    play $media/bell.oga 2> /dev/null
    # raise xterm window
    printf "\033[05t"
fi

exit $ec

If everything works fine, you open a new xterm window, run make -f ../grepfeed/Makefile watch there & forget about it. On any compilation error, that xterm window pops up, alerting us that the build has failed.


As we see, with a little help from shell scripting & a little knowledge of the Make language it is possible to construct a build system for an SPA that uses all the latest JavaScript tools under the hood. There is 0 magic in it & no dependencies on any “plugins”. For why would you need a “plugin” to use a program that is already capable of transforming input?

Much more could be said about the Make language itself. We wrote our build system in 1 big makefile only to stay simple; you don’t have to be such a simpleton in your projects. There was no talk about what a list is in Make terms, nothing about scoping rules, user-defined functions, canned recipes, etc.

I didn’t cover giant topics like auto-discovering dependencies for .js files (we cheated by explicitly stating the dependencies for the output browserify bundle) or parallel jobs.

If you’re interested in GNU Make & want to know more, start with its official manual, which covers most of the Make language details. After that, read Robert Mecklenburg’s Managing Projects with GNU Make book, which will feed you with many ideas that you might otherwise miss. If that is not enough, read The GNU Make Book by John Graham-Cumming. There is nothing to read about Make beyond that book, for it contains the maximum hardcore stuff you will ever extract about the topic.

I want only to remind you that it doesn’t matter what toolchain you choose for a project (Make-based or not). If you fail to deliver a working app, no build system in the world will save you. Nobody cares about your polished infrastructure, for it’s the app that is important to the end user.


PS. Here is an alternate version of this post that can be more readable on your phone.

Monday, February 15, 2016

Pandoc MathJax Self-contained

If you've ever used MathJax, you've probably noticed that for everything it does it injects script tags w/ various modules, loads fonts on-demand, etc. This is the reason why pandoc, for example, is unable to produce a truly stand-alone .html file w/ MathJax, where all formulas are pre-rendered or rendered on-the-fly but w/o any external requests.

At 1st I've tried to monkey patch MathJax.Ajax.Require() for dependency discovery & have generated 1 big file w/ all the required modules for PreviewHTML output format, like:

<% nm = ENV['MATHJAX_SRC'] || "node_modules/mathjax" -%>
<%= File.join nm, "MathJax.js" %>
<%= File.join nm, "jax/input/TeX/config.js" %>

It worked, served its purpose, but was a rough piece of horseplay.

What I really wanted is something like `pandoc -t html5 -o - | mathjax-embed` that would dump a pre-rendered html suitable for the offline use.

Then I remembered that we can always render html (w/ the mathjax script tag) in phantomjs and save the modified DOM. The process should be quite simple: load html, inject a piece of JS w/ the mathjax config, inject a script tag w/ src=mathjax-entry-point, wait until it finishes transforming the DOM, print.

Here is a small phantomjs-script that does that:

Here is a rendered example (no JS required & no external resources).

1 caveat: it doesn't embed fonts, thus CommonHTML & HTML-CSS mathjax output formats won't look good. But it works fine for SVG & PreviewHTML ones.

Monday, February 8, 2016

Ruby mail & Base64 Content Transfer Encoding

If you need to parse emails that for some reason still use prehistoric charsets (like koi8-u), mail gem fails to decode bodies of such messages properly.

$ cat message.koi8u.mbox
From Mon Feb  8 22:26:51 2016
Subject: Kings
Date: Mon, 08 Feb 2016 20:26:51 +0000
MIME-Version: 1.0
Message-Id: <>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset=koi8-u

LCDT1c3VpCC2pNLV 08HMyc0uCvcgy8XE0s/Xycgg0MHMwdTByCwgzc/XIM7
$ irb
2.1.3 :001 > require 'mail'
2.1.3 :002 > m = 'message.koi8u.mbox'
2.1.3 :003 > m.body.decoded
"\xEE\xC1\xC4\xD7\xCF\xD2\xA6 [...]\n"
2.1.3 :004 > m.body.decoded.encoding

I.e., the result is total garbage.

But as we can obtain a charset name from Mail::Message#charset method, we can just manually convert the string to UTF-8:

2.1.3 :005 > m.body.decoded.force_encoding(m.charset).encode 'utf-8'
"Надворі вже смеркло, і, тьмою повитий,\n
Дрімає, сумує Ієрусалим.\n
В кедрових палатах, мов несамовитий,\n
Давид походжає і, о цар неситий,\n
Сам собі говорить: \"Я... Ми повелим!\n"

Sunday, January 3, 2016

An Oral History of Unix as an epub

During the summer-fall of 1989, Professor Michael S. Mahoney (of Princeton University) recorded a series of interviews w/ Bell Labs people who were involved in the creation of Unix. For example, dmr or McIlroy (Alan Turing always wanted to win a McIlroy Award, but didn't qualify).

This interview project was called An Oral History of Unix. Until the last week I had no idea of its existence. Judging from the text length (& comments in the transcriptions like "end of side A"), each conversation was an hour-long or more.

Unfortunately, the format the transcriptions are in is an ancient version of MS Word, & the html version of it contains these hilarious lines:

<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=windows-1252">
<META NAME="Generator" CONTENT="Microsoft Word 97">

I don't know about you, but the last time I saw similarly crafted pages was more than 15 years ago.

Of course, as you may guess, the encoding in the content type header doesn't match the encoding of the file:

$ curl -sI | grep Content-Type
Content-Type: text/html; charset=UTF-8

It's like 1999 all over again!

Ok, enough w/ that. We can't write to Professor because he passed away in 2008. What we can do is to fix the presentation of the pages or, what I chose to do, to make them more readable on Kindle. I.e. if we generate a TOC & feed the (fixed) html to Calibre, it generates a valid epub file that we then can convert to .mobi or .azw3. The build scripts can be found here. The final result (epub, mobi, pdf):

Enjoy the reading!

Sunday, December 20, 2015

Dynamic PATH in GNU Make

Sometimes you may have several targets, where the 1st one creates a new directory & puts some files in it, & the 2nd target expects the newly created directory to be added to PATH. For example:

$ make -v | head -1
GNU Make 4.0

$ cat
PATH := toolchain:$(PATH)

src/.configure: | toolchain src
        cd src && ./
        touch $@

toolchain:
        mkdir $@
        printf "#!/bin/sh\necho foo" > $@/
        chmod +x $@/

src:
        mkdir $@
        cp $@

toolchain target here creates the directory w/ new executables. src target emulates unpacking a tarball w/ script in it that runs, expecting it to be in PATH:

$ cat

echo PATH: $PATH

If we run this example, it will unfortunately fail:

$ make -f 2>&1 | cut -c -72
mkdir toolchain
printf "#!/bin/sh\necho foo" > toolchain/
chmod +x toolchain/
mkdir src
cp src
cd src && ./
PATH: toolchain:/home/alex/.rvm/gems/ruby-2.1.3/bin:/home/alex/.rvm/gems

./ line 5: command not found
recipe for target 'src/.configure' failed
make: *** [src/.configure] Error 127

The error is in the line where is invoked:

cd src && ./

As soon as we chdir to src, the toolchain directory in the PATH becomes unreachable. If we try to use $(realpath) it won’t help, because when the PATH variable is set there is no toolchain directory yet & $(realpath) will expand to an empty string.

What if PATH were an old school macro that was reevaluated every time it was accessed? If we change PATH := to:

path.orig := $(PATH)
PATH = $(warning $(shell echo PWD=`pwd`))$(realpath toolchain):$(path.orig)

Then PATH becomes a recursively expanded variable & the handy $(warning) function will print to stderr the current working directory at exactly the moment PATH is being evaluated (it won’t mangle the PATH value because $(warning) always expands to an empty string).

$ rm -rf toolchain src ; make -f 2>&1 | cut -c -100
mkdir toolchain PWD=/home/alex/lib/writing/
printf "#!/bin/sh\necho foo" > toolchain/
chmod +x toolchain/
mkdir src PWD=/home/alex/lib/writing/
cp src
cd src && ./ PWD=/home/alex/lib/writing/
PATH: /home/alex/lib/writing/

touch src/.configure

As we see, PATH was accessed 3 times: before the printf & cp invocations & before ./ (for running ./ itself there is no need to consult PATH, as it is invoked via an explicit path).

Saturday, December 12, 2015


While upgrading to Fedora 23, I've discovered New Horizons of Awesomeness in gtk3. (I think it should be the official slogan for all the new gtk apps in general.)

If you don't use a compositor & select ubuntu-style theme:

  $ grep theme-name ~/.config/gtk-3.0/settings.ini
  gtk-theme-name = Ambiance  

modern apps start looking very indie in fvwm:

Granted, it's not 1997 anymore, we all have big displays w/ a lot of lilliputian pixels, but such a waste of screen estate seems a little unnecessary to me.

Turns out it's an old problem that has no solution, except for the handy "use Gnome" advice. There is a hack, but I don't think I'm in such a desperate position as to employ it. A quote from the README:

  I use $LD_PRELOAD to override several gdk and glib/gobject APIs to
  intercept related calls gtk+ 3 uses to setup CSD.  

I have no words. All we can do to disable the gtk3 decoration is to preload a custom library that mocks some rather useful part of gtk3 api. All praise Gnome!

In seeking a theme that has contrast (e.g. !gray text on gray backgrounds) I've found that (a) the old default theme looks worse than Motif apps from the 1990s:

  $ GTK_THEME=Raleigh gtk3-demo

Which is a pity because gtk2 Raleigh theme was much prettier:

& (b) my favourite GtkPaned widget renders equally horrifically everywhere. Even the highly voted Clearlooks-Phenix theme manages to make it practically imperceptible to the eye:

A moral of the story: don't write desktop apps (but all kids know this already), ditch gtk apps you run today for they all will become unusable tomorrow (but what do I know? I still use xv as a photo viewer).

Sunday, November 8, 2015

Why Johnny Still Can't Encrypt

Before reading "Why Johnny Still Can't Encrypt" I'd read "Why Johnny Can't Encrypt". Boy it was hilarious!

In the original paper they asked 12 people to send an encrypted message to 5 people. In the process the participants had to stumble upon several traps, like the need to distinguish a key algo type because 1 of the recipients used an 'old' style RSA key.

The results were funny to read:

'One of the 12 participants (P4) was unable to figure out how to encrypt at all. He kept attempting to find a way to "turn on" encryption, and at one point believed that he had done so by modifying the settings in the Preferences dialog in PGPKeys.'

'P1, P7 and P11 appeared to develop an understanding that they needed the team members' public keys, but still did not succeed at correctly encrypting email. P2 never appeared to understand what was wrong, even after twice receiving feedback that the team members could not decrypt his email.'

'(P5) so completely misunderstood the model that he generated key pairs for each team member rather than for himself, and then attempted to send the secret in an email encrypted with the five public keys he had generated. Even after receiving feedback that the team members were unable to decrypt his email, he did not manage to recover from this error.'

'P6 generated a test key pair and then revoked it, without sending either the key pair or its revocation to the key server. He appeared to think he had successfully completed the task.'

'P11 expressed great distress over not knowing whether or not she should trust the keys, and got no further in the remaining ten minutes of her test session.'

The new paper, "Why Johnny Still Can't Encrypt", is uninspiring. They used a JS OpenPGP implementation (Mailvelope), available as a Chrome/Firefox plugin. Before reading the sequel I'd installed the plugin to judge it for myself.

Mailvelope is fine if you understand that it operates on an arbitrary block of text; it doesn't (& cannot) 'hook' into GMail in any way, except for trying to parse encoded text blocks & looking for editable DIVs. It can be confusing if you don't get that selecting the recipient in the GMail compose window has nothing to do with the encryption: it's easy to send a mail to one person while having encrypted the message with another person's public key.

In other aspects I've found Mailvelope pretty obvious.

Having 'achieved' the grandiose task of exchanging public keys between 2 emails & sending encrypted messages, I finally read the paper.

Boy it was disappointing.

In contrast w/ the original PGP study, they resorted to the simplest possible tasks: user A should generate a key pair, ask user B for his public key, & send an encrypted email. They got 20 pairs of A-B users. Only 1 pair successfully sent/read a message.

The 1 pair.

This is why humanity is doomed.

Monday, September 14, 2015

wordnet & wordnut

Here is a tiny new Emacs major mode for browsing a local WordNet lexical database:

I was very surprised not to find an abundance of similar modes in the wild.

Its most useful features are:

  • Completions. For example, do M-x wordnut-search RET arc TAB.
  • Pressing Enter on any word in the *WordNut* buffer. This way you can browse the WordNet db indefinitely.
  • History. The keybindings are the usual ones: `l' to go back, `r' to go forward.
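An init-file fragment like the following would wire it up. A hedged sketch, assuming the package is installed as `wordnut' (e.g. from MELPA), that the WordNet `wn' binary is on PATH, & that `wordnut-lookup-current-word' is the command for the word at point; the key choices are, of course, just my own:

```elisp
;; Hypothetical init.el fragment; the command names follow the
;; mode's naming convention but are assumptions, not gospel.
(require 'wordnut)

;; Prompt for a word (w/ completion against the WordNet index):
(global-set-key (kbd "C-c w") #'wordnut-search)

;; Look up the word under the cursor:
(global-set-key (kbd "C-c W") #'wordnut-lookup-current-word)
```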