Sunday, June 26, 2016

Dump/restore

npm eats inodes for breakfast. A brand-new Angular2 project downloads > 40K files in node_modules just to get started (this includes babel).

Nobody counts inodes unless, for some reason, they use a previous-generation filesystem (ext4) where inodes may suddenly become a scarce resource. The symptoms are rather common: there is plenty of free space but you cannot create a new file.

So I decided to outwit the problem by dump(8)ing /home to a network drive, reformatting /home using a smaller inode_ratio value to make sure inodes would be abundant, then restore(8)ing from the dump file.
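
Roughly, the whole procedure looks like this (a sketch only; the device name, the mount point of the network drive & the -i value are placeholders--see mke2fs(8) for the bytes-per-inode ratio that suits you):

# dump the live /home to a file on the network drive
dump -0 -f /mnt/backup/home.dump /home
# recreate the fs w/ a smaller bytes-per-inode ratio (= more inodes)
umount /home
mkfs.ext4 -i 8192 /dev/sdXN
mount /dev/sdXN /home
# restore(8) is run from inside the freshly formatted fs
cd /home && restore -rf /mnt/backup/home.dump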

It went fine, except for 1 strange thing. The 1st time I launched Chromium it complained that “Your preference file is corrupted or invalid”. Was it because I was dumping a live fs? It seems that everything else has been restored correctly.

Wednesday, June 8, 2016

An unhealthy tweaking

Being in a state of horror after discovering that perhaps the next version of FVWM will have no FvwmWharf module any more, I did something long overdue: switched to FvwmButtons.

Being more or less satisfied w/ the result,

I nevertheless feel that such an activity is a prime example of wasting time for nothing.

Thursday, May 26, 2016

enquire.c

Hey, look what I've found in the archives of comp.sources.misc!

Enquire: Everything you wanted to know about your C Compiler and Machine, but didn't know who to ask

One day Richard Stallman passed by, and mentioned that they needed such a program for GCC.

http://homepages.cwi.nl/~steven/enquire.html

Saturday, May 21, 2016

Creative Marketing

From Stevens' Portals in 4.4BSD paper:

"Ideas similar to portals have appeared in numerous operating systems over the past decade.

The 4.2BSD manual [Joy et al. 1983] defined the portal system call, with seven arguments, and a footnote that it was not implemented in 4.2BSD."

On a side note: what a beautiful idea Portals was. It's a shame that Linux has never caught up with BSD on it.

Thursday, April 7, 2016

Sunrise/Sunset Algo

If you need to implement sunrise/sunset calculations having only a latitude/longitude (& a particular date), go here.

I found that w/ zenith = 90.79 it gives the same rise/set numbers as googling for "<location> sunrise".

Also be careful to define your sin/asin et al. so that they take degrees & return degrees. For example:

let sin = (d) => Math.sin(d * (Math.PI / 180))
let asin = (d) => Math.asin(d) * (180/Math.PI)

I had to do the same while reviving an old timezone viewer, tktz, to make it work again on Fedora 23. Of course I forgot that asin() returns radians & was scratching my head over why I was getting phoney-baloney numbers.

Monday, March 28, 2016

A State of Tcl

If you write a generator that gives a user several choices, like 'npm init', would you choose a GUI-based approach instead? Judging by the number & the state of lightweight GUI libs for such a task, GUI was popular in the 1990s & since then everyone has been sticking to CLI mytool --opt1 --foo=bar solutions, for they are easy to write & support.

I thought that today, maybe, it's better to spin up a tiny node server & xdg-open a browser, where the user would click, clack & submit the form. If you think about a GUI--do exactly that.

But then I remembered that once upon a time (many years ago) I loved Tcl!

Well. After playing w/ 8.6.4 for a day I say it's a complete disaster. I don't get why I ever thought of it as a nice language.

The idea was very simple: draw a dialog, user clicks, presses OK, the dialog spits some json & quits. Then another tool reads that json & does all the work that the generator should do.

I won't write about Ttk widgets: they are practically the same & have not changed a bit through all these years. 8.6.4 has fixed an annoying issue w/ HiDPI screens, but the X11 version of it contains a scaling bug where everything scales properly except the fonts--they stay tiny, as if you had a 75dpi monitor. The only remedy I've found is to inject this manual trigger:

if {[tk windowingsystem] == "x11"} {
    # force all fonts to have a platform-dependent default size
    # according to the DPI
    foreach idx [font names] { font configure $idx -size 0 }
}

The main problem w/ modern Tcl is (please don't laugh) its innate inability to properly deal w/ JSON. If you have a checkbox that sets its bound variable to 0 or 1, how would you represent that value in json? As a number? A string? How do you know that it's indeed a number? It says 1--I say it's a digit! But to Tcl it's a string. If you have an entry widget where the user can enter "1", would you leave it in json as a string or would you auto-convert it to an integer? If the user has entered "no", would you auto-convert it to false? What about nulls?

The sub-problem of the JSON representation nightmare is the total absence of any standard lib for converting Tcl dicts into JSON. There is tcllib's [json::dict2json], which is undocumented--& it's undocumented for a good reason, for it doesn't work at all. The Tcl wiki contains a handful of inadequate snippets that are tied to a particular dataset & are not useful as general converters. The only half-working solution I've found is DKF's [tcl2json]. Try to get a null out of it, though.

tl;dr: forget about Tcl.

Friday, February 26, 2016

Run Debian Chromium on Fedora

Just a quickie. If you have a bunch of Fedora 32bit VMs, then starting from March there won't be any new Chrome for them. Instead of ditching all those precious VMs, I thought of using a pre-compiled Chromium provided by Debian.

It actually works if you're willing to put up w/ the regular rigmarole of (a) finding out "what's the current version of Chromium?" & (b) proper deb → rpm conversions. Here is a makefile that automates all that.

Tuesday, February 23, 2016

JavaScript Tools with GNU Make

In the beginning

I speak what appears to me the general opinion; and where an opinion is general, it is usually correct.
— Mansfield Park

… there was no transcompiling in the JavaScript world whatsoever & everyone who was programming back then for the front-end portion of the web was greatly admiring the instant gratification.

One day Sass appeared from the direction of the Rails campfire, where folks were having a good time singing Kumbaya. Many looked closely at Sass & thought that adding a new level of abstraction was never a bad thing & quickly joined the movement. Humble designers tried to hold a convention but their feeble voices were swamped by the music.

Shortly after, CoffeeScript came along. Although compiling it in the browser on-the-fly was possible, rarely did anybody do that, for it was considered uncivil & rude. The sites were lean & jQuery was still King.

Then a guy who was competing with TJ for the number of packages pushed to the npm registry wrote Browserify. It became possible to write isomorphic code before everyone realized that such a word existed.

Starting a project came to mean thinking about a build system first.

$ mkdir ~/projects/money-printer
$ cd !$
$ touch █

Some transferred their Rake skills to their brave new SPA world, some tried to employ tiny shell scripts, the majority, though, was unsure what to do & how to behave properly.

Something had to happen, because however it used to be, it used to be somehow; it never happened yet that it was no-how. Thus several nice tools materialized. Although most of them work fine, occasionally I catch myself thinking that, perhaps, those tools are a little unnecessary for my needs.

Theories of galaxies

It turns out people in different industries had had similar problems for years. For instance, sometime toward the end of the Middle Ages, Bell Labs engineers were bitterly complaining to each other about how they kept making the classic mistake of debugging an incorrect program because they had forgotten to compile the change.

One day Steve Johnson (the author of yacc) came storming into the office of his colleague. The colleague's name was Stuart Feldman. After they pitied each other over the miscompilation misfortune, they sketched out on the board a general idea of how to prevent this kind of error in the future. The result of the sketching & the vigorous one-night coding is known as the program called Make.

Many years later Feldman would say, “One of the reasons Make was so simple was that it was only supposed to solve my problem, and help Steve too.”

Why use Make today? Or more importantly, why use Make as a build tool for writing SPAs?

I remember the first time I was forced to read a makefile. It was circa 2003, when the FreeBSD port I was installing failed to compile properly. While trying to resolve the problem, I’ve discovered, to my surprise, that the whole FreeBSD ports system was written in a dialect of Make language. I didn’t like its syntax & the whole construction seemed overly complex, unintuitive, weird.

A modern JavaScript developer meets Make only if he tries to install some piece of software that is absent from the package collection of his favourite OS. Such software is usually written in old languages like C & uses the autoconf system to generate a bunch of makefiles. The process looks foreign: too exotic, ancient, outdated, uninteresting, not relevant.

Despite its alien nature, Make has managed to become a happy witness to the first mass-produced personal computers, to the eradication of Smallpox, to the invention of WWW, to the collapse of USSR, to the end of apartheid in South Africa, to the introduction of €, to DHH’s “blog in 15 minutes” video, to the end of Great Recession & to SpaceX drone ship landing.

Make today is typically twice as old as a typical web developer. If you learned Make, say, in 1986 (when the first edition of O’Reilly’s Managing Projects with GNU Make came out), you can still employ that knowledge to this very day, usually being the only person in the whole building who can fix some random broken makefile.

“This rubbish doesn’t compile, man.”
“What does it say?”
“Something about a missing rule for a target.”
(covers his face with hands)
“Call Jane.”

The only communities that rejected Make completely were Java & Go. For the former, you could use Make in theory, but in practice no one except you could then parse & maintain your makefile. For the latter, Rob Pike’s idea of build conventions so rigorously constrained that no external tool is necessary proved to be a winner. Unfortunately, this is not the case for the JavaScript world.

Over the years, there were many attempts to “fix” Make by either enhancing its syntax, introducing incompatible features, or rethinking the whole idea. The majority of these attempts, if they did not fail outright, never acquired much of a following. Even such cosy tools as Rake have never gained popularity beyond the language domain they are written in & belong to.

The JavaScript community is not unique. It rides the same waves of “its own way”, where the introduction of new revolutionary tools every 6 months inevitably leads to the psychological state called “tools fatigue”.

A dull speaker always talks long

Instead of yet another reintroduction to the particulars of some Make implementation, I’ll try to show how to use GNU Make in a small but real web application. There will be some shortcuts & simplifications along the way; I made them not because of Make limitations but to keep this text short.

A small notice: the text assumes that the reader is very comfortable with the command line. If that doesn’t describe you to a T (for example, you come from a designer’s background), please stop reading now, go read The Unix Programming Environment book, practice for a month, then return. It’s the only book you will ever need to read to become a jolly good Unix user.

Our example is a web-based RSS feed filter. Suppose you want to subscribe to the Back to Work podcast but only listen to episodes where the hosts do not talk about Apple (or, vice versa, Apple is your only interest). Or to find all the great shows where the guest was John Roderick? The b2w feed contains > 250 episodes; each episode contains extensive metadata. It’s not hard to programmatically search through the XML & generate a smaller feed for the RSS reader of your choice.

The app consists of 2 parts: (a) a web component, where a user specifies a URI for the feed alongside a couple of filtering patterns & (b) a small server proxy, required mainly because of a same-origin policy.

The app source tree looks like this:

/home/alex/lib/software/alex/grepfeed/grepfeed
├── cli/
│   └── grepfeed*
├── client/
│   ├── index.html
│   ├── main.jsx
│   ├── moomintroll.svg
│   └── style.sass
├── lib/
│   ├── dom.js
│   ├── feed.js
│   └── u.js
├── mk/
│   ├── build.sh*
│   └── watchman.json
├── server/
│   └── index.js*
├── test/
│   ├── data/
│   │   ├── back2work.xml
│   │   ├── irishhistorypodcast.xml
│   │   └── simple.xml
│   └── test_feed.js
├── Makefile
├── package.json
└── README.md

The lib directory contains shared code, used both by the server component & the “client” side. The HTTP server doesn’t require any additional build step to function, but the web app is all about transcompiling:

  • it is written in a subset of ES2015 & we use Babel to transform the code to ES5;

  • instead of CSS we use a mix of hand-written Sass & plain CSS from NProgress npm package. The result will be in 1 style.css file.

  • the browser-facing part of the code is written in JSX, thus it requires both an additional compilation step (JSX is first turned into ES2015, which is then compiled to ES5; i.e., ES5 → ES2015 → JSX, reading → as “is built from”) and React libraries at runtime.

  • to employ some Node.js libraries in the browser & to automatically manage the dependencies we use Browserify. The resulting app will be squeezed into 1 file, main.browserify.js.

  • to be able to focus on the coding, instead of typing the same commands in the terminal over & over again, we use Watchman to automatically run Make for us whenever any of the files that need recompiling change.

In real life you always have at least 2 different builds: one for the development phase only, another for a production deployment. In the ‘devel’ version we use source maps, for the production version–code minification.

It is not enough to be able to produce 2 builds; the goal is to have those 2 builds at the same time.

For example, depending on the value of the NODE_ENV environment variable we can decide how to compile & where to put the output files. There is a whole separate story of having a single source tree but multiple builds per “platform”, which I won’t get into here. We are going to separate the source tree from the compiled output to the point where you may mark the source tree as read-only & not worry about ever accumulating random junk there over time. As an additional bonus this eliminates the need for any ‘clean’ operations, which never work properly anyway.

This is what the result looks like:

/home/alex/lib/software/alex/grepfeed/_out
├── development/
│   ├── client/
│   │   ├── index.html
│   │   ├── main.browserify.js
│   │   ├── moomintroll.svg
│   │   ├── style.css
│   │   └── style.css.map
│   └── lib/
│       ├── dom.js
│       ├── feed.js
│       └── u.js
├── node_modules/ [438 entries exceeds filelimit, not opening dir]
├── production/
│   ├── client/
│   │   ├── index.html
│   │   ├── main.browserify.js
│   │   ├── moomintroll.svg
│   │   └── style.css
│   └── lib/
│       ├── dom.js
│       ├── feed.js
│       └── u.js
└── package.json

The contents of the development/client directory is what our server serves to the end user. Files in development/lib can be safely ignored; they are temporary & placed there for Browserify, which uses them to produce the development/client/main.browserify.js bundle. You may notice that the contents of the development/client & production/client directories are different. As a matter of fact, the sizes are quite different too:

$ du -h --max-depth=1
3.3M ./development
444K ./production
62M ./node_modules
65M .

This is what you get after leaving out embedded source maps from main.browserify.js & an aggressive JavaScript minification.

You can read the app source code at https://github.com/gromnitsky/grepfeed. It won’t hurt if, before carrying on, you clone it & try to reproduce one of the builds by yourself.

Static assets

We have files in our app that don’t require any transformation: an .svg image of a rather happy Moomintroll & index.html. If we were living in the past & compiling non-static assets in the same directory as their sources, our .svg & .html files would have required no attention from a build system. But we chose a different route–to move everything outside of the source tree directory.

Our first steps are:

  1. Grab a list of files in the source tree.
  2. Choose a destination for selected files.
  3. Write a rule that specifies how to copy data.
  4. Invoke the rule.

In the root of our source tree we create a file named Makefile. When you run Make, it searches the current directory for a file with that name & starts parsing it.

NODE_ENV ?= development
out := $(NODE_ENV)
src.mkf := $(lastword $(MAKEFILE_LIST))
src := $(dir $(src.mkf))

We defined 4 variables. If you have the NODE_ENV variable already defined in your environment, Make grabs its value; otherwise we set it to the string development. The out variable gets the value of NODE_ENV. We will use $(out) everywhere in our Makefile later on where we need to prefix a file destination.
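
(?= means “assign only if the variable isn’t set already”. A quick way to see it in action outside of our project, w/ a throwaway demo.mk:)

$ cat demo.mk
NODE_ENV ?= development
$(info NODE_ENV is $(NODE_ENV))
all: ;

$ make -f demo.mk
NODE_ENV is development
$ NODE_ENV=production make -f demo.mk
NODE_ENV is production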

src.mkf gets the (relative) path to the Makefile itself. Ignore for now how it manages to do that. During the definition of the src variable, we invoke the internal Make function $(dir) to cut off the file name portion of the src.mkf value. $(dir) is very similar to dirname(1) or Node’s path.dirname().
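
For example, when Make is run as make -f ../grepfeed/Makefile … (that’s how we’ll run it below, from a sibling _out directory), the 2 variables end up as:

src.mkf = ../grepfeed/Makefile
src     = ../grepfeed/

Note the trailing slash that $(dir) leaves in place--it’s why you’ll see harmless double slashes like ../grepfeed//client/index.html in the output later.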

Steps 1-2: grab source & destination
static.src := $(wildcard $(src)/client/*.html $(src)/client/*.svg)
static.dest := $(subst $(src), $(out), $(static.src))

Here we define another 2 variables: the source of our static assets & its destination.

$(wildcard glob1 glob2 ...) is a Make function that internally uses fnmatch(2) to get a list of files. If it doesn’t match anything, it returns an empty string. Think of $(wildcard) as a primitive analogue of ls(1); it’s not recursive and uses glob patterns instead of regexps.

$(subst FROM, TO, TEXT) returns a new string with all matches of FROM replaced by TO. E.g. the next line in Make language

$(subst lamb,lambda,Mary had a little lamb)

is equivalent to JavaScript

"Mary had a little lamb".replace(/lamb/g, "lambda")

The only difference is that $(subst) doesn’t support any regexps.

What we did in static.dest is replace /foo/bar/grepfeed/client/index.html with development/client/index.html.
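
(If you need something pattern-aware, there is $(patsubst), which we’ll use later for .sass & .js files. A standalone comparison, again w/ a throwaway demo.mk:)

$ cat demo.mk
$(info $(subst in,out,index.html input.js))
$(info $(patsubst %.js,%.min.js,index.html input.js))
all: ;

$ make -f demo.mk
outdex.html output.js
index.html input.min.js

$(subst) replaces every occurrence of a substring; $(patsubst) works on whole whitespace-separated words & understands the % wildcard.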

Step 3: a rule

Now we can write a custom pattern rule that copies source files to their destination:

$(out)/%: $(src)/%
» mkdir -p $(dir $@)
» cp -a $< $@

Make language is all about rules. You may think of rules as functions (or rather, procedures) that take 2 parameters: target & source. A body of a “function” is any number of shell commands prefixed by a TAB character (marked by » above and everywhere below in this text).

target: source1 source2 ...
» body

Terms “source” & “body” are non-standard. I use them in this section only for clarity. The official GNU Make terms are “prerequisites” (also “dependency”) & “recipe”, e.g.

target: prerequisite1 prerequisite2 ...
» recipe

The % character in the target & in the source is a wildcard. The cryptic $@ & $< inside the body (recipe) of the rule are automatic variables. When (& only when) Make invokes the rule, it substitutes $@ with the target name & $< with a source name. There can be several sources (prerequisites, dependencies); $< means the first one, thus Make has another autovar, $^, that means “the whole list”.

The exact meaning of $@, $<, $^ often escapes newcomers. Here is a picture to help you remember which autovar corresponds to what.
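
For instance, in a made-up rule w/ 2 prerequisites (not a part of our Makefile), the autovars expand as in the comments:

style.css: style.sass nprogress.css
» # $@ = style.css                   (the target)
» # $< = style.sass                  (the 1st prerequisite)
» # $^ = style.sass nprogress.css    (all of them)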

The problem with our rule is that it’s too broad: % in $(src)/% can match anything, not only client/index.html but client/main.jsx too. It’s possible to severely limit the applicability of a pattern rule by prefixing it with an explicit list of targets:

$(static.dest): $(out)/%: $(src)/%
» mkdir -p $(dir $@)
» cp -a $< $@
Step 4: invoking the rule

There is no way to explicitly invoke a pattern rule. But if we ask Make to create a target that matches some pattern, Make checks for the match & internally transforms the pattern rule into simple file-based rules.

If we run in some temporary directory

$ make -f ../grepfeed/Makefile development/client/index.html

(-f CLO tells Make what file to read instead of Makefile in the current directory.)

Make automatically creates this rule on-the-fly:

development/client/index.html: /foo/bar/grepfeed/client/index.html
»  mkdir -p development/client
»  cp -a /foo/bar/grepfeed/client/index.html development/client/index.html

Or in action:

$ make -f ../grepfeed/Makefile development/client/index.html development/client/moomintroll.svg
mkdir -p development/client/
cp -a ../grepfeed//client/index.html development/client/index.html
mkdir -p development/client/
cp -a ../grepfeed//client/moomintroll.svg development/client/moomintroll.svg

but if we provide a file name that does not match any pattern, Make aborts:

$ make -f ../grepfeed/Makefile foo/index.html bar/moomintroll.svg
make: *** No rule to make target 'foo/index.html'.  Stop.

It’s a little inconvenient to pass a list of file names to Make directly, for the list can be huge. We can write another rule that has a target with an arbitrary name but in sources has an actual list of the desired targets.

compile: development/client/index.html development/client/moomintroll.svg

or even better:

compile: $(static.dest)

This rule doesn’t have to have any recipe (body). Then we can execute Make as

$ make -f ../grepfeed/Makefile compile
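
One caveat: compile is not a real file & we never create one. If a file named compile ever appears in the current directory, Make starts comparing timestamps against it & may decide there is nothing to do. The usual safeguard (worth adding here) is to declare such targets phony:

.PHONY: compile
compile: $(static.dest)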
Aftermath

One of the most attractive Make features is that, if you write your rules with caution, prudence & tact, it won’t rebuild targets that are up to date.

$ rm -rf development
$ make -f ../grepfeed/Makefile compile
mkdir -p development/client/
cp -a ../grepfeed//client/index.html development/client/index.html
mkdir -p development/client/
cp -a ../grepfeed//client/moomintroll.svg development/client/moomintroll.svg

$ make -f ../grepfeed/Makefile compile
make: Nothing to be done for 'compile'.

$ touch ../grepfeed/client/moomintroll.svg
$ make -f ../grepfeed/Makefile compile
mkdir -p development/client/
cp -a ../grepfeed//client/moomintroll.svg development/client/moomintroll.svg

How does Make decide which target is up to date? In the simplest possible way: by checking the last modification time of a target & its sources. Over the years there were multiple attempts to enhance this algo by looking, for example, at a message digest of a file, but nobody has bothered to actually implement them efficiently enough to be included in GNU Make.

The next Make appeal comes from the realization that by writing makefiles you’re constructing an acyclic graph of targets & their dependencies. So far we have written 1 node with 2 leaves, each of which is generated via a pattern rule.

The arrow means “depends on.”

Debug

Debug facilities are where the original GNU Make distribution falls short. Partially it comes from the dynamic nature of Make, for it is impossible to fully answer what would happen without actually doing it.

There is no REPL of any kind. Some primitive hacks exist, for example https://github.com/gromnitsky/ims, that help mainly to experiment with internal Make functions like $(filter) or $(patsubst) without manually creating a makefile & running it.

ims> .pwd
[...]/grepfeed/_out
ims> src = ../grepfeed/
ims> out = development
ims> static.src = $(wildcard $(src)/client/*.html $(src)/client/*.svg)
ims> . $(static.src)
../grepfeed//client/index.html ../grepfeed//client/moomintroll.svg
ims> . $(subst $(src), $(out), $(static.src))
  development/client/index.html  development/client/moomintroll.svg
ims>

There is also a forked version of GNU Make called remake that can show additional information about targets; plus it contains a real debugger.

Bare-bone

If you don’t want to install any additional tools, prepare to grieve.

The most annoying GNU Make misconduct is the inability to print a variable value without modifying the makefile. A clever trick, popularized by John Graham-Cumming, consists of adding a special pattern rule to a makefile. The modified version of the rule below splits a variable value into separate lines, for I find the trick most useful for displaying lists of files:

pp-%:
» @echo "$(strip $($*))" | tr ' ' \\n

Then, we can print the value of static.dest

$ make -f ../grepfeed/Makefile pp-static.dest
development/client/index.html
development/client/moomintroll.svg

When you want to print what targets will be remade, try -n & -t options together:

$ make -f ../grepfeed/Makefile -tn compile
touch development/client/index.html
touch development/client/moomintroll.svg

npm

Before transforming any of the sass/js/jsx files we need to make sure we have all the required tools installed. In package.json, among other things, we have:

  • node-sass
  • babel-cli
  • babel-preset-es2015
  • babel-preset-react
  • browserify
  • uglify-js

We can switch the responsibility to the user by asking in a readme to “install those in global mode” or we can be more polite & write a simple rule that checks if our packages are installed after each change in package.json.

export NODE_PATH = $(realpath node_modules)

node_modules: package.json
»   npm install --loglevel=error --depth=0 $(NPM_OPT)
»   touch $@

package.json: $(src)/package.json
»   cp -a $< $@

We have here 2 new simple file-based rules.

We need to copy package.json from the source code directory because if our output directory isn’t a descendant of the $(src), npm fails to find package.json at all, or picks up a wrong one.

Explicitly setting NODE_PATH is required for Babel because, when the source code is in a different subtree, Babel searches for packages in node_modules directories starting from the directory of a particular .js file.

Notice that we are telling Make to export NODE_PATH to child processes, such as for a CLI wrapper of Babel & that it’s not a regular variable, but a macro.

Regular Make variables, distinguished in their definition by := (as in foo := bar), have nothing interesting about them; they work exactly as you expect, setting the value of the left-hand side immediately. Macros, on the other hand, create only a stub that is not evaluated until the macro is accessed.

When we run Make for the first time, the directory node_modules may not exist yet, thus had we defined NODE_PATH as a regular variable, its value would have been an empty string & Babel would have failed to find any Node.js modules. But when it’s a macro, Make evaluates it when somebody (a child process) tries to read it. At that point node_modules definitely exists, & Make expands it to a full path with the help of its internal $(realpath) function.
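
A tiny detached illustration of the difference (demo.mk & the /tmp/demo directory we run it from are made-up names):

$ cat demo.mk
var   := $(realpath node_modules)
macro  = $(realpath node_modules)

all: node_modules
»   @echo "var is [$(var)]"
»   @echo "macro is [$(macro)]"

node_modules:
»   mkdir $@

$ make -f demo.mk
mkdir node_modules
var is []
macro is [/tmp/demo/node_modules]

var was expanded while the makefile was being parsed, when node_modules didn’t exist yet; macro is expanded only when the recipe runs, after the node_modules prerequisite has been built.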

To test all this, run Make in the output directory (the place where you want node_modules & package.json to appear):

$ make -f ../grepfeed/Makefile node_modules
cp -a ../grepfeed//package.json package.json
npm install --loglevel=error --depth=0 --cache-min 99999999

> node-sass@3.4.2 install /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/node-sass
> node scripts/install.js

Binary downloaded and installed at /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/node-sass/vendor/linux-ia32-47/binding.node

> spawn-sync@1.0.15 postinstall /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/spawn-sync
> node postinstall


> node-sass@3.4.2 postinstall /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/node-sass
> node scripts/build.js

` /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out/node_modules/node-sass/vendor/linux-ia32-47/binding.node ` exists. 
 testing binary.
Binary is fine; exiting.
grepfeed@0.0.1 /home/alex/lib/writing/articles/javascript-tools-with-gnu-make/_out.blogger/s06/grepfeed/_out
├── babel-cli@6.5.1 
├── babel-polyfill@6.5.0 
├── babel-preset-es2015@6.5.0 
├── babel-preset-react@6.5.0 
├── browserify@13.0.0 
├── node-sass@3.4.2 
└── uglify-js@2.6.2 

touch node_modules

and again, to check if we indeed wrote 2 rules properly:

$ make -f ../grepfeed/Makefile node_modules
make: 'node_modules' is up to date.

Npm gets slower & slower with every release so this could take a while. There is NPM_OPT in the node_modules recipe; use it to pass any additional options to npm, for example:

$ make -f ../grepfeed/Makefile node_modules NPM_OPT="--cache-min 99999999"

Sass

To compile .sass files to .css we employ the same algo we have used for static assets.

node-sass := node_modules/.bin/node-sass
SASS_OPT := -q --output-style compressed
ifeq ($(NODE_ENV), development)
SASS_OPT := -q --source-map true
endif
sass.src := $(wildcard $(src)/client/*.sass)
sass.dest := $(patsubst $(src)/%.sass, $(out)/%.css, $(sass.src))

$(out)/client/%.css: $(src)/client/%.sass
»   @mkdir -p $(dir $@)
»   $(node-sass) $(SASS_OPT) --include-path node_modules -o $(dir $@) $<

$(sass.dest): node_modules

compile: $(sass.dest)

Here we use NODE_ENV for the first time to modify the compiler behaviour: to include source maps when we are in a developer mode & to turn on the minification for a production mode.

Output .css files depend on the node_modules target, which depends on package.json, which means that if we modify the latter, Make considers our .css files outdated & remakes them.

You may not like a useless rebuilding every time you fix a typo in package.json, but it ensures that after updating a version string of a css package (like Nprogress), you won’t end up with an old code in $(out).

$ grep import ../grepfeed/client/style.sass
@import "nprogress/nprogress"

The usage of NODE_ENV for turning on/off minification through modifying node-sass command line options seems easy & convenient, but it’s inherently un-Unix: the minification step should be done by a separate command.

A more civilized way to do conversions would be to use chains of implicit rules. In a production mode

.css → .uncompressed_css → .sass

& in a development mode

.css → .sass

I’ll live this to you as a homework. For tips of how to achieve that, read the section about .js files transformation below.

ES2015 & Browserify

As our app is written in a subset of ES2015, we need another pattern rule to convert .js files to ES5. The lib directory is the location of the ES2015 code.

babel := node_modules/.bin/babel
ifeq ($(NODE_ENV), development)
BABEL_OPT := -s inline
endif
js.src := $(wildcard $(src)/lib/*.js)
js.dest := $(patsubst $(src)/%.js, $(out)/%.js, $(js.src))

$(js.dest): node_modules

$(out)/%.js: $(src)/%.js
»   @mkdir -p $(dir $@)
»   $(babel) --presets es2015 $(BABEL_OPT) $< -o $@

There is no sign of minification because the destination files in $(out)/lib serve temporary purposes only.

The same goes for .jsx files (of which we have only 1) in client directory.

jsx.src := $(wildcard $(src)/client/*.jsx)
jsx.dest := $(patsubst $(src)/%.jsx, $(out)/%.js, $(jsx.src))

$(jsx.dest): node_modules
# we use .jsx files only as input for browserify
.INTERMEDIATE: $(jsx.dest)

$(out)/client/%.js: $(src)/client/%.jsx
»   @mkdir -p $(dir $@)
»   $(babel) --presets es2015,react $(BABEL_OPT) $< -o $@

One thing is different here: the temporary output is placed in $(out)/client, which is the directory our server uses as its root for static files. After all compilation steps are finished there should be no temporary files left. Make doesn’t know that the result of .jsx transcompiling is a temporary link in the chain, thus we mark such targets as intermediate by adding them as dependencies of the special .INTERMEDIATE target. You’ll see shortly what happens to such targets.
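
Here is .INTERMEDIATE in miniature (all the file names are made up): a.c is built from a.b, a.b is built from a.a, & a.b is declared intermediate:

$ cat demo.mk
a.c: a.b
»   cp $< $@
a.b: a.a
»   cp $< $@
.INTERMEDIATE: a.b

$ touch a.a
$ make -f demo.mk a.c
cp a.a a.b
cp a.b a.c
rm a.b
$ make -f demo.mk a.c
make: 'a.c' is up to date.

Make deletes the intermediate file itself (& tells us so), & its absence alone doesn’t force a rebuild on the next run. The same thing will happen to our main.js & (in production mode) main.browserify.es5.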

This rule contains another shortcut: the transformation from JSX to ES5 is done in 1 step. Ideologically this is Not Right because after JSX conversion we should get plain ES6 which we could then convert to ES5 code.

To get a final bundle with the name $(out)/client/main.browserify.js we write a simple file-based rule:

browserify := node_modules/.bin/browserify
browserify.dest.sfx := .es5
ifeq ($(NODE_ENV), development)
browserify.dest.sfx := .js
BROWSERIFY_OPT := -d
endif

bundle1 := $(out)/client/main.browserify$(browserify.dest.sfx)
$(bundle1): $(out)/client/main.js $(js.dest)
»   @mkdir -p $(dir $@)
»   $(browserify) $(BROWSERIFY_OPT) $< -o $@

js:

Notice how we manually add the list of files in $(out)/lib to the bundle dependencies & that our modest js target is empty for now.

This rule contains a catch: in development mode the chain is simple

main.browserify.js → .js deps

where everything is compiled with source maps. For a production mode, there is an additional link:

main.browserify.js → main.browserify.es5 → .js deps

where main.browserify.js, despite its name, is created not by browserify but by a separate uglifyjs program.

# will be empty in development mode
es5.dest := $(patsubst %.es5, %.js, $(bundle1))

UGLIFYJS_OPT := --screw-ie8 -m -c
%.js: %.es5
»   node_modules/.bin/uglifyjs $(UGLIFYJS_OPT) -o $@ -- $<

ifneq ($(browserify.dest.sfx), .js)
js: $(es5.dest)
# we don't need .es5 files around
.INTERMEDIATE: $(bundle1)
else
js: $(bundle1)
endif

compile: js

js target gets its prerequisites depending on NODE_ENV value.

The whole dependency graph of JavaScript files looks like this (ellipse-shaped nodes are intermediates; dashed ones are production-mode only):

Now we can test the production mode:

$ NODE_ENV=production make -f ../grepfeed/Makefile js
node_modules/.bin/babel --presets es2015  ../grepfeed//lib/feed.js -o production/lib/feed.js
node_modules/.bin/babel --presets es2015  ../grepfeed//lib/dom.js -o production/lib/dom.js
node_modules/.bin/babel --presets es2015  ../grepfeed//lib/u.js -o production/lib/u.js
node_modules/.bin/babel --presets es2015,react  ../grepfeed//client/main.jsx -o production/client/main.js
node_modules/.bin/browserify  production/client/main.js -o production/client/main.browserify.es5
node_modules/.bin/uglifyjs --screw-ie8 -m -c -o production/client/main.browserify.js -- production/client/main.browserify.es5
rm production/client/main.browserify.es5 production/client/main.js

$ NODE_ENV=production make -f ../grepfeed/Makefile js
make: Nothing to be done for 'js'.
$ touch ../grepfeed/lib/u.js
$ NODE_ENV=production make -f ../grepfeed/Makefile js
node_modules/.bin/babel --presets es2015  ../grepfeed//lib/u.js -o production/lib/u.js
node_modules/.bin/babel --presets es2015,react  ../grepfeed//client/main.jsx -o production/client/main.js
node_modules/.bin/browserify  production/client/main.js -o production/client/main.browserify.es5
node_modules/.bin/uglifyjs --screw-ie8 -m -c -o production/client/main.browserify.js -- production/client/main.browserify.es5
rm production/client/main.browserify.es5 production/client/main.js

Albeit we didn’t explicitly put in any of the recipes a rm command, Make automatically removes targets that were prerequisites in .INTERMEDIATE target.

A watched pot never boils

2 of the 4 stand-alone utils we use in our makefile have a built-in “watch” feature & browserify has a separate (& quite popular) watchify wrapper. It’s hard to imagine a project that needs to recompile only those files that watchify watches, or a project that only uses node-sass. It’s also hard to imagine a reason why you would include a “watch” feature in your CLI program in the first place, knowing very well that your tool will never be the only third-party tool in a project.

The only valid reason I can think of is a conscientious endeavour to set a world benchmark for software bloat, but we won’t get into that.

Instead of running 3 separate processes in parallel we will use watchman. It will look out for the files we specify & run Make automatically on every change. If we have written the Makefile properly, only files that are outdated (in the $(out) directory) will be recompiled.

To make this more developer-friendly we can play a confirmation sound when Make finishes without errors; run a separate terminal window for watchman output & raise the window in case of a compilation error.

watchman configuration is not exactly the easiest, for there are 2 ways of providing it: from stdin in a json format or via command line options. The latter is more limited & the former is too verbose. We’ll use the json form only to escape the shell quoting issues.

We add another target to the makefile:

watch:
»   watchman trigger-del $(src) assets
»   @mkdir -p $(out)
»   m4 -D_SRC="$(src)" -D_TTY=`tty` \
»   »   -D_OUT_PARENT=`pwd` \
»   »   -D_MAKE="$(MAKE)" -D_MK="$(src.mkf)" \
»   »   $(src)/mk/watchman.json | watchman -n -j

m4 macro processor is absolutely not required, you can replace it with any other you like; we use it only to transform $(src)/mk/watchman.json file:

[
    "trigger",
    "_SRC",
    {
        "expression": [
            "anyof",
            ["pcre", "^(client|lib)/[^#]+$", "wholename"],
            ["pcre", "^package.json$", "wholename"]
        ],
        "name": "assets",
        "command": [
            "_SRC/mk/build.sh",
            "_MAKE", "-C", "_OUT_PARENT", "-f", "_MK", "compile"
        ],
        "append_files": false,
        "stderr": ">_TTY",
        "stdout": ">_TTY"
    }
]

I won’t describe what all this means, please refer to watchman manual for the particulars. What we need to note is that on each file change, watchman will run $(src)/mk/build.sh script that in turn will run Make as

make -C $(src) -f $(src)/Makefile compile

where -C instructs Make to chdir before parsing the makefile provided in the -f CLO.
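
In other words (running from the directory that holds both grepfeed & _out), the following 2 invocations do the same thing, except that w/ -C Make also prints the ‘Entering/Leaving directory’ lines & sets $(CURDIR) accordingly:

$ make -C _out -f ../grepfeed/Makefile compile
$ (cd _out && make -f ../grepfeed/Makefile compile)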

$(src)/mk/build.sh is closely tied to my Fedora installation, so you will need to modify it for your machine:

#!/bin/sh

# See `watch` target in Makefile.

# clear xterm history
printf "\033c"
# what is to be done
printf "\033[0;33m%s\033[0;m\n" "$*"

# run make
"$@"

ec=$?
media=/usr/share/sounds/freedesktop/stereo

if [ $ec -eq 0 ]; then
    play $media/message.oga 2> /dev/null
else
    play $media/bell.oga 2> /dev/null
    # raise xterm window
    printf "\033[05t"
fi

exit $ec

If everything works fine, you open a new xterm window, run make -f ../grepfeed/Makefile watch there & forget about it. On any compilation error, that xterm window pops up, alerting us that build has failed.

Conclusion

As we see, with a little help of shell scripting & a little knowledge of Make language it is possible to construct a build system for a SPA that uses all the latest JavaScript tools under the hood. There is 0 magic in it & no dependencies on any “plugins”. For why would you need a “plugin” to use a program that is already capable of transforming input?

Much more could be said about the Make language itself. We wrote our build system as 1 big makefile only to stay simple; you don’t have to be such a simpleton in your projects. There was no talk about what a list is in Make terms, nothing about scoping rules, user-defined functions, canned recipes, etc.

I didn’t cover giant topics like auto-discovering dependencies for .js files (we have cheated by explicitly stating the dependencies for the output browserify bundle) or parallel jobs.

If you’re interested in GNU Make & want to know more, start with its official manual that covers most of the Make language details. After that, read Robert Mecklenburg’s Managing Projects with GNU Make book that will feed you with many ideas that you might otherwise be missing out. If that will be not enough, read The GNU Make Book by John Graham-Cumming. There is nothing to read about Make beyond that book for it contains maximum hardcore staff you will ever extract about the topic.

I want only to remind you that it doesn’t matter what toolchain you choose for a project (Make-based or not). If you fail to deliver a working app, no build system in the world will save you. Nobody cares about your polished infrastructure, for it’s the app that is important to the end user.

Enjoy!

PS. Here is an alternate version of this post that can be more readable on your phone.

Monday, February 15, 2016

Pandoc MathJax Self-contained

If you've ever used MathJax, you've probably noticed that for everything it does it injects script tags w/ various modules, loads fonts on-demand, etc. This is the reason why pandoc, for example, is unable to produce a truly stand-alone .html file w/ MathJax, where all formulas are pre-rendered or rendered on-the-fly but w/o any external requests.

At 1st I've tried to monkey patch MathJax.Ajax.Require() for dependency discovery & have generated 1 big file w/ all the required modules for PreviewHTML output format, like:

<script>
<% nm = ENV['MATHJAX_SRC'] || "node_modules/mathjax" -%>
<%= File.read File.join nm, "MathJax.js" %>
<%= File.read File.join nm, "jax/input/TeX/config.js" %>
...
</script>

It worked, served its purpose, but was a rough piece of horseplay.

What I really wanted is something like `pandoc file.md -t html5 -o - | mathjax-embed` that would dump a pre-rendered html suitable for the offline use.

Then I remembered that we can always render html (w/ the mathjax script tag) in phantomjs and save the modified DOM. The process should be quite simple: load the html, inject a piece of JS w/ the mathjax config, inject a script tag w/ src=mathjax-entry-point, wait until it finishes transforming the DOM, print.

Here is a small phantomjs-script that does that: https://github.com/gromnitsky/mathjax-embed.

Here is a rendered example (no JS required & no external resources).

1 caveat: it doesn't embed fonts, thus CommonHTML & HTML-CSS mathjax output formats won't look good. But it works fine for SVG & PreviewHTML ones.

Monday, February 8, 2016

Ruby mail & Base64 Content Transfer Encoding

If you need to parse emails that for some reason still use prehistoric charsets (like koi8-u), mail gem fails to decode bodies of such messages properly.

$ cat message.koi8u.mbox
From alice@example.com Mon Feb  8 22:26:51 2016
From: alice@example.com
To: bob@example.net
Subject: Kings
Date: Mon, 08 Feb 2016 20:26:51 +0000
MIME-Version: 1.0
Message-Id: <1@example.com>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset=koi8-u

7sHE18/SpiDX1sUg083F0svMzywgpiwg1NjNz8Ag0M/XydTJyiwK5NKmzcGk
LCDT1c3VpCC2pNLV 08HMyc0uCvcgy8XE0s/Xycgg0MHMwdTByCwgzc/XIM7
F08HNz9fJ1MnKLArkwdfJxCDQz8jPxNbB pCCmLCDPIMPB0iDOxdPJ1MnKLA
rzwc0g08/CpiDHz9fP0snU2DogIvEuLi4g7ckg0M/XxczJzSEK
$ irb
2.1.3 :001 > require 'mail'
true
2.1.3 :002 > m = Mail.read 'message.koi8u.mbox'
[...]
2.1.3 :003 > m.body.decoded
"\xEE\xC1\xC4\xD7\xCF\xD2\xA6 [...]\n"
2.1.3 :004 > m.body.decoded.encoding
#<Encoding:ASCII-8BIT>

I.e., the result is total garbage.

But as we can obtain a charset name from Mail::Message#charset method, we can just manually convert the string to UTF-8:

2.1.3 :005 > m.body.decoded.force_encoding(m.charset).encode 'utf-8'
"Надворі вже смеркло, і, тьмою повитий,\n
Дрімає, сумує Ієрусалим.\n
В кедрових палатах, мов несамовитий,\n
Давид походжає і, о цар неситий,\n
Сам собі говорить: \"Я... Ми повелим!\n"

Sunday, January 3, 2016

An Oral History of Unix as an epub

During the summer-fall of 1989, Professor Michael S. Mahoney (of Princeton University) recorded a series of interviews w/ Bell Labs people who were involved in the creation of Unix. For example, dmr or McIlroy (Alan Turing always wanted to win a McIlroy Award, but didn't qualify).

This interview project was called An Oral History of Unix. Until last week I had no idea of its existence. Judging from the text length (& comments in the transcriptions like "end of side A"), each conversation was an hour long or more.

Unfortunately, the format the transcriptions are in is an ancient version of MS Word, & the html version of it contains these hilarious lines:

<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=windows-1252">
<META NAME="Generator" CONTENT="Microsoft Word 97">

I don't know about you, but the last time I saw similarly crafted pages was more than 15 years ago.

Of course as you may guess an encoding in the content type header doesn't match the encoding of the file:

$ curl -sI http://www.princeton.edu/~hos/mike/transcripts/weinberger.htm | grep Content-Type
Content-Type: text/html; charset=UTF-8

It's like 1999 all over again!

Ok, enough w/ that. We can't write to the Professor because he passed away in 2008. What we can do is fix the presentation of the pages or, what I chose to do, make them more readable on Kindle. I.e. if we generate a TOC & feed the (fixed) html to Calibre, it generates a valid epub file that we can then convert to .mobi or .azw3. The build scripts can be found here. The final result (epub, mobi, pdf): http://gromnitsky.users.sourceforge.net/lit/an-oral-history-of-unix/.

Enjoy the reading!

Sunday, December 20, 2015

Dynamic PATH in GNU Make

Sometimes you may have several targets, where the 1st one creates a new directory, puts some files in it & the 2nd target expects newly created directory to be added to PATH. For example:

$ make -v | head -1
GNU Make 4.0

$ cat example-01.mk
PATH := toolchain:$(PATH)

src/.configure: | toolchain src
        cd src && ./configure.sh
        touch $@

toolchain:
        mkdir $@
        printf "#!/bin/sh\necho foo" > $@/foo.sh
        chmod +x $@/foo.sh

src:
        mkdir $@
        cp configure.sh $@

toolchain target here creates the directory w/ new executables. src target emulates unpacking a tarball w/ configure.sh script in it that runs foo.sh, expecting it to be in PATH:

$ cat configure.sh
#!/bin/sh

echo PATH: $PATH
echo
foo.sh

If we run this example, configure.sh will unfortunately fail:

$ make -f example-01.mk 2>&1 | cut -c -72
mkdir toolchain
printf "#!/bin/sh\necho foo" > toolchain/foo.sh
chmod +x toolchain/foo.sh
mkdir src
cp configure.sh src
cd src && ./configure.sh
PATH: toolchain:/home/alex/.rvm/gems/ruby-2.1.3/bin:/home/alex/.rvm/gems

./configure.sh: line 5: foo.sh: command not found
example-01.mk:4: recipe for target 'src/.configure' failed
make: *** [src/.configure] Error 127

The error is in the line where configure.sh is invoked:

cd src && ./configure.sh

As soon as we chdir to src, the toolchain directory in the PATH becomes unreachable. If we try to use $(realpath) it won't help, because when the PATH variable is set there is no toolchain directory yet & $(realpath) will expand to an empty string.

What if PATH was an old school macro that was reevaluated every time it was accessed? If we change PATH := to:

path.orig := $(PATH)
PATH = $(warning $(shell echo PWD=`pwd`))$(realpath toolchain):$(path.orig)

Then PATH becomes a recursively expanded variable & a handy $(warning) function will print to the stderr the current working directory exactly in the moment PATH is being evaluated (it won't mangle the PATH value because $(warning) always expands to an empty string).

$ rm -rf toolchain src ; make -f example-02.mk 2>&1 | cut -c -100
mkdir toolchain
example-02.mk:2: PWD=/home/alex/lib/writing/gromnitsky.blogspot.com/posts/2015-12-20.1450641571
printf "#!/bin/sh\necho foo" > toolchain/foo.sh
chmod +x toolchain/foo.sh
mkdir src
example-02.mk:2: PWD=/home/alex/lib/writing/gromnitsky.blogspot.com/posts/2015-12-20.1450641571
cp configure.sh src
cd src && ./configure.sh
example-02.mk:2: PWD=/home/alex/lib/writing/gromnitsky.blogspot.com/posts/2015-12-20.1450641571
PATH: /home/alex/lib/writing/gromnitsky.blogspot.com/posts/2015-12-20.1450641571/toolchain:/home/ale

foo
touch src/.configure

As we see, PATH was accessed 3 times: before printf/cp invocations & after ./configure.sh (because for ./ there is no need to consult PATH).

Saturday, December 12, 2015

GTK3

While upgrading to Fedora 23, I've discovered New Horizons of Awesomeness in gtk3. (I think it should be the official slogan for all the new gtk apps in general.)

If you don't use a compositor & select ubuntu-style theme:

  $ grep theme-name ~/.config/gtk-3.0/settings.ini
  gtk-theme-name = Ambiance  

modern apps start looking very indie in fvwm:

https://lh3.googleusercontent.com/-77JmxzfMAm8/Vmwu061FLJI/AAAAAAAAAiw/P0HrryUvhrI/s640-Ic42/gtk3-demo-ambiance.png

Granted, it's not 1997 anymore, we all have big displays w/ a lot of lilliputian pixels, but such a waste of screen estate seems a little unnecessary to me.

Turns out it's an old problem that has no solution, except for the handy "use Gnome" advice. There is the https://github.com/PCMan/gtk3-nocsd hack but I don't think I'm in such a desperate position as to employ it. A quote from the README:

  I use $LD_PRELOAD to override several gdk and glib/gobject APIs to
  intercept related calls gtk+ 3 uses to setup CSD.  

I have no words. All we can do to disable the gtk3 decoration is to preload a custom library that mocks some rather useful part of gtk3 api. All praise Gnome!

While seeking a theme that has contrast (i.e. non-gray text on gray backgrounds) I've found that (a) the old default theme looks worse than Motif apps from the 1990s:

  $ GTK_THEME=Raleigh gtk3-demo  
https://lh3.googleusercontent.com/-NT7umB_tPJc/Vmwu1GMn1jI/AAAAAAAAAi0/DdEE3LnBZyQ/s640-Ic42/gtk3-demo-raleigh.png

Which is a pity because gtk2 Raleigh theme was much prettier:

https://lh3.googleusercontent.com/-qyN_DZbufhA/Vmwu0sCcDNI/AAAAAAAAAio/CFabyRWEVvA/s800-Ic42/gtk2-demo-raleigh.png

& (b) my favourite GtkPaned widget renders equally horrifically everywhere. Even the highly voted Clearlooks-Phenix theme manages to make it practically imperceptible to the eye:

https://lh3.googleusercontent.com/-aRrShlI2Lmw/Vmwu0kfj-tI/AAAAAAAAAis/LSOBlcyTAKI/s640-Ic42/clearlooks-phenix-gtk3-theme.png

A moral of the story: don't write desktop apps (but all kids know this already), ditch gtk apps you run today for they all will become unusable tomorrow (but what do I know? I still use xv as a photo viewer).

Sunday, November 8, 2015

Why Johnny Still Can't Encrypt

Before reading "Why Johnny Still Can't Encrypt" I'd read "Why Johnny Can't Encrypt". Boy it was hilarious!

In the original paper they asked 12 people to send an encrypted message to 5 people. In the process the participants had to stumble over several traps, like the need to distinguish the key algo type because 1 of the recipients used an 'old'-style RSA key.

The results were funny to read:

'One of the 12 participants (P4) was unable to figure out how to encrypt at all. He kept attempting to find a way to "turn on" encryption, and at one point believed that he had done so by modifying the settings in the Preferences dialog in PGPKeys.'

'P1, P7 and P11 appeared to develop an understanding that they needed the team members' public keys, but still did not succeed at correctly encrypting email. P2 never appeared to understand what was wrong, even after twice receiving feedback that the team members could not decrypt his email.'

'(P5) so completely misunderstood the model that he generated key pairs for each team member rather than for himself, and then attempted to send the secret in an email encrypted with the five public keys he had generated. Even after receiving feedback that the team members were unable to decrypt his email, he did not manage to recover from this error.'

'P6 generated a test key pair and then revoked it, without sending either the key pair or its revocation to the key server. He appeared to think he had successfully completed the task.'

'P11 expressed great distress over not knowing whether or not she should trust the keys, and got no further in the remaining ten minutes of her test session.'

The new paper "Why Johnny Still Can't Encrypt" is uninspiring. They used a JS OpenPGP implementation (Mailvelope), available as a Chrome/Firefox plugin. Before reading the sequel I installed the plugin to judge it by myself.

Mailvelope is fine if you understand that it operates just on an arbitrary block of text; it doesn't (& cannot) 'hook' into GMail in any way except for trying to parse encoded text blocks & looking for editable DIVs. It can be confusing if you don't get that selecting the recipient in the GMail compose window has nothing to do with the encrypting: it's easy to send a mail to bob@example.com where you encrypted the message with alice@example.com's PK.

In other aspects I've found Mailvelope pretty obvious.

Having 'achieved' the grandiose task of exchanging public keys between 2 email accounts & sending encrypted messages, I finally read the paper.

Boy it was disappointing.

In contrast w/ the original PGP study, they resorted to the simplest possible tasks: user A should generate a key pair, ask user B for his PK, & send an encrypted email. They got 20 pairs of A-B users. Only 1 pair successfully sent/read a message.

The 1 pair.

This is why humanity is doomed.

Monday, September 14, 2015

wordnet & wordnut

Here is a tiny new Emacs major mode for browsing local WordNet lexical database: https://github.com/gromnitsky/wordnut

I was very surprised not to find an abundance of similar modes in the wild.

https://raw.github.com/gromnitsky/wordnut/master/screenshot1.png

Its most useful features are:

  • Completions. For example, do M-x wordnut-search RET arc TAB.
  • Pressing Enter in *WordNut* buffer inside any word. In this way you can browse the WordNet db indefinitely.
  • History. Keybindings are very usual: `l' to go back, `r' to go forward.

Sunday, August 16, 2015

What is Ruby power_assert gem & why you may need it

After upgrading from Ruby 2.1.3 to 2.2.2 I noticed a new bundled gem called power_assert. It turns out that test-unit has required it for about a year now. That was a 2nd surprise, because I thought that everyone had moved to minitest many years ago & test-unit was left alone for backward-compatibility's sake.

A 'power assert'-enabled test-unit has an enhanced version of assert() that can take a block & in case of failure print the value of each object in a method chain. If no block is given to this new assert(), the old version is invoked.

$ cat example-1.rb
require 'test/unit'

class Hello < Test::Unit::TestCase
  def test_smoke
    assert { 3.times.include? 10 }
  end
end

$ ruby example-1.rb | sed -n '/==/,/==/p'
===============================================================================
Failure:
      assert { 3.times.include? 10 }
                 |     |
                 |     false
                 #<Enumerator: 3:times>
test_smoke(Hello)
/home/alex/.rvm/gems/ruby-2.2.2@global/gems/power_assert-0.2.2/lib/power_assert.
rb:29:in `start'
example-1.rb:5:in `test_smoke'
     2:
     3: class Hello < Test::Unit::TestCase
     4:   def test_smoke
  => 5:     assert { 3.times.include? 10 }
     6:   end
     7: end
===============================================================================

As I understand, Kazuki Tsujimoto (the author of power_assert gem) got the idea for a pretty picture for a method chain from the Groovy language. Before power_assert gem we could only use Object.tap() for peeking into the chain:

> ('a'..'c').to_a.tap {|i| p i}.map {|i| i.upcase }
["a", "b", "c"]
[
  [0] "A",
  [1] "B",
  [2] "C"
]

Using power_assert we can write an enhanced version of Kernel.p() where, in the spirit of the new assert(), it prints a fancy picture if a user provides a block for it:

$ cat super_duper_p.rb
require 'power_assert'

def p *args
  if block_given?
    PowerAssert.start(Proc.new, assertion_method: __callee__) do |pa|
      val = pa.yield
      str = pa.message_proc.call
      if str == "" then Kernel.p(val) else puts str end
      val
    end
  else
    Kernel.p(*args)
  end
end

$ cat example-2.rb
require './super_duper_p'

p {3.times.to_a.map {|i| "i=#{i}" }.include? 3}
p [1,2,3], [4,5,6], "7"
p { [1,2,3] }

$ ruby example-2.rb
p {3.times.to_a.map {|i| "i=#{i}" }.include? 3}
     |     |    |                   |
     |     |    |                   false
     |     |    ["i=0", "i=1", "i=2"]
     |     [0, 1, 2]
     #<Enumerator: 3:times>
[1, 2, 3]
[4, 5, 6]
"7"
[1, 2, 3]

Unfortunately, it won't work in irb.

If you're like the rest of us who prefer minitest instead of test-unit, you'll need a separate gem for it.