Higher level Makefiles

March 11, 2019 (1243 words)

Make is what ties it all together. Makefiles are some arcane gizmo magic. They are also old, old enough that they exist everywhere. Some of you might believe that Makefiles are just for building code. Indeed, they can be used for this:

hello: $(subst .c,.o,$(wildcard *.c))
    @$(CC) $(LDFLAGS) -o $@ $^
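
This leans on make’s built-in implicit rule for turning each .c file into a .o file. If you would rather spell it out, a minimal sketch of the equivalent pattern rule (assuming your compiler flags live in CFLAGS) looks like this:

%.o: %.c
    @$(CC) $(CFLAGS) -c -o $@ $<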

Where the power of make really comes in is its ability to encode village knowledge and arcane corner cases. When I start new projects I always put two things in there immediately: a Readme.md and a Makefile. Both serve as documentation on how to operate and build the project. I have the rule that “just typing make should do all the necessary things for a normal development cycle”. This includes potentially installing missing software on new computers.

One of the goals of having a top-level Makefile is that all the common, non-obvious tasks can be encapsulated and documented there.

You want to get started? Type “make”.

You want to build the code? “make build”.

You want to sync the most recent code + data? “make sync”.

You want to ask the servers to test your branch? “make remote-test”.

Well, you get the idea…
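
Pulled together, the top of such a Makefile could look something like the sketch below. The recipe bodies are placeholders: buildserver, the rsync paths and the remote checkout command are hypothetical stand-ins for whatever your project actually needs.

BRANCH := $(shell git rev-parse --abbrev-ref HEAD)

default: build

build:
    @$(MAKE) -j hello

sync:
    @git pull --rebase
    @rsync -az buildserver:/data/ ./data/

remote-test:
    @ssh buildserver "cd project && git fetch && git checkout $(BRANCH) && make test"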

Some basic recipes

Here follow some basic recipes for different tasks using Makefiles:

Cleaning

One rule that we’ve all implemented when using Makefiles for building code is “clean”: something that nukes everything from orbit and starts over. If you want this to be portable, you can always use $(RM) as the command.

clean:
    @$(RM) hello *.o

Installing software

One trick that I like is to automatically install software that might be needed when you run rules. You can do the following:

lint: .provision
    @swift-format -lint $(wildcard *.swift)

.provision:
    @which swift-format > /dev/null 2>&1 || brew install swift-format
    @touch $@
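
If you end up depending on several tools, the same trick generalizes into a pattern rule with one stamp file per tool, so that lint would depend on .provision-swift-format instead. This is my own variation, and it assumes every tool name matches its brew formula:

.provision-%:
    # $* is the pattern stem, i.e. the name of the missing tool
    @which $* > /dev/null 2>&1 || brew install $*
    @touch $@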

Don’t forget to add .provision to your .gitignore file and your clean rule!
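
With that, the clean rule from earlier grows slightly. A sketch, also marking clean as .PHONY so make never confuses it with a real file:

.PHONY: clean

clean:
    @$(RM) hello *.o .provision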

Testing

I’m a big fan of testing. While I would advocate for simply running your unit tests as part of the build process, and formatting the output like compile errors so that they stay relevant and don’t get forgotten, there is an argument for putting functional / integration tests in a separate rule. I usually simply call this rule “test”. This might be a simple functional test that processes a file, dumps out some debug information and compares it to a golden output:

test:
    @gizmo export ./data/sphere.obj ./data/sphere.model
    @gizmo dump ./data/sphere.model ./data/sphere.debug
    @diff ./data/sphere.debug ./code/sphere.golden-output
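
The trailing diff is what turns this into a test: diff exits with a non-zero status when the files differ, which makes the rule fail. When the expected output legitimately changes, a companion rule that regenerates the golden file can be handy. A sketch, reusing the same hypothetical gizmo commands:

update-golden:
    @gizmo export ./data/sphere.obj ./data/sphere.model
    @gizmo dump ./data/sphere.model ./code/sphere.golden-output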

Default options

I’m usually forgetful of my arguments and options; my muscle memory just contains “make”. Furthermore, a lot of integrations just call make without any arguments.

It so happens that if you call make without any arguments, it will execute the first rule it encounters in the Makefile. So for example, if you want each invocation of make to use all of your CPUs:

default:
    @$(MAKE) -j hello

hello: $(subst .c,.o,$(wildcard *.c))
    @$(CC) $(LDFLAGS) -o $@ $^

This can of course be extended into more elaborate schemes.
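
For example, instead of letting -j spawn an unbounded number of jobs, you can cap it at the number of CPUs on the host. A sketch, assuming nproc (Linux) or sysctl (macOS) is available:

NPROC := $(shell nproc 2> /dev/null || sysctl -n hw.ncpu)

default:
    @$(MAKE) -j$(NPROC) hello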

Makefile in Docker

You can do some pretty fancy footwork by running make inside a Docker container for more stable builds, regardless of which host platform you happen to be on. The rule below builds the Linux executable of the source code with the help of the jtilander/dev-debug image:

default:
    @docker run --rm -v $(PWD):/home/jenkins jtilander/dev-debug make hello

hello: $(subst .c,.o,$(wildcard *.c))
    @$(CC) $(LDFLAGS) -o $@ $^
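
A companion rule that drops you into an interactive shell in the same container is handy when you need to poke at the build environment by hand. A sketch, assuming the image ships with bash:

shell:
    @docker run --rm -it -v $(PWD):/home/jenkins jtilander/dev-debug bash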