Thursday, January 28, 2016

The Medium experiment wrap-up

Eight months ago, I decided to try Medium as the platform on which to post my essays. Over this time I have published a handful of posts there—8, to be precise, which is... a very shy number—but the results have been quite satisfactory: the WYSIWYG composer is excellent, the analytics tools are simple but to the point, the looks are great, and the community is nice (though I haven't been able to tap into it just yet).

But where have things failed?

You will have to read more on this topic in a post appropriately titled The Medium experiment wrap-up that I have published on the shiny new blog section of my personal homepage. But, to sum things up:

Medium has failed as the place where I want to post my articles first, though it remains a great secondary place for content redistribution and promotion.

On that site, you can also find externalized copies of the Medium posts I have written so far as well as a commenting system that you may find less intimidating (if only because it doesn't force you to create an account). Yes, this custom website is my new experiment for publication, and I have yet to decide if it will end up finally replacing The Julipedia. Stay tuned.

Sunday, May 24, 2015

Hello, Medium!

11 years. Next month will mark 11 years since the creation of The Julipedia, the blog you are reading now and the blog that got me started on this writing journey. 11 years that have brought 690 posts (yeah, yeah, not that many for such a long time).

And after all this time, it finally hit me: personal blogs have lost their original appeal. It is time for a change. But a change… to what?

Is Medium the answer?

I don't know, but to see what I think on this topic, follow onto my very first post appropriately titled Hello, Medium! and let me know what you think. I'll keep experimenting there, so make sure to follow my @jmmv profile to not miss a beat.

Julio Merino

Thursday, May 21, 2015

Offloading maintenance tasks to Travis CI

From day one, the Kyua source tree has had docstring annotations for all of its symbols. The goal of such docstrings is to document the code for the developers of Kyua: these docstrings were never intended to turn into pre-generated HTML documentation because Kyua does not offer an API once installed.

As you might have noticed, Doxygen is an optional component of the build and it used to run on each make invocation. This changed "recently". Nowadays, Doxygen is only run asynchronously on Travis CI to report docstring inconsistencies post-submission (see the DO=apidocs matrix entry if you are impatient). Combined with feature branches that are only merged into master when green, this is as good as the previous approach of running Doxygen along the build. Scratch that: this is even better because running Doxygen locally on each build took significant resources and penalized edit/build/test cycles.

In this article, I am going to guide you through the specifics of running Doxygen as a separate build in Travis CI. You can extrapolate this example to other "maintenance" tasks that you wish to run on each push—say, building and uploading manpages (which I still have to get to), verifying the style of your source tree, or running IWYU.

Background: docstrings

Since I started writing Python code at Google in 2009, I have become a fan of docstrings.

Having to explicitly document the purpose of each function via a textual description of its arguments, its return values, and any possible exceptions serves to make the code clearer and, more importantly, forces the developer to think about the real purpose of each function. More than once I've caught myself unable to concisely explain what a function does, which in turn led to a refactoring of such code.

For this reason, Kyua has had docstrings everywhere since day one. (I have even annotated shell scripts with docstrings, despite the fact that Doxygen cannot parse them!) As an example, see the docstring for the randomly selected engine::check_reqs function:

/// Checks if the requirements specified by the test case are met.
/// \param md The test metadata.
/// \param cfg The engine configuration.
/// \param test_suite Name of the test suite the test belongs to.
/// \param work_directory Path to where the test case will be run.
/// \return A string describing the reason for skipping the test,
/// or empty if the test should be executed.
std::string
engine::check_reqs(const model::metadata& md,
                   const config::tree& cfg,
                   const std::string& test_suite,
                   const fs::path& work_directory)
{ ... }

Docstring linting

Keeping docstrings in sync with the code is very important—out of date documentation is harmful!—but validating documentation is never easy. One way to perform some minimum validation is to use Doxygen: when Doxygen runs, it spits out diagnostic messages if, for example, the list of documented parameters does not match the actual parameters of a function, or if the return value is not documented for a function that returns a value.

In the past, a post-build hook in Kyua's Makefile triggered a run of Doxygen to sanity-check the contents of these docstrings by looking for warning messages in the output and then printing those at the end of the make invocation. The generated HTML files were discarded.

However, running Doxygen in a moderately sized codebase such as Kyua's, which clocks in at ~50K lines of code, takes a significant amount of time. For years, this had annoyed me to the point where I came up with local shell aliases to rebuild only a subset of the source tree without triggering Doxygen—particularly because in a dual-core system, Doxygen easily clogs one of the cores for the majority of the build time.

Recently, though, I figured I could delegate the execution of Doxygen to Travis CI and thus only validate the docstrings at push time. In fact, this approach can be generalized to asynchronously run other maintenance tasks but, for illustration purposes, I am focusing only on Doxygen.

Is Travis enough?

By moving the docstrings sanity-check operation to Travis, I lost the continuous validation that happened every time I typed make. As a result, there is a higher chance for individual commits to break docstrings unintentionally.

But that's just fine.

Per the Kyua Git workflow policies, changes should not be committed directly into master: it is all too easy to commit something that later fails in Travis and thus requires an embarrassing follow-up check-in of the kind "Fix previous". It is much better to do all work in a local branch (even for apparent one-commit fixes!), push the new branch to GitHub, let Travis go for a test run, see if there are any failures, use git rebase -i master along with the edit or fixup actions, and make the set of commits sane from the ground up without amendments to fix obvious mistakes. (Think of Darcs if you will.) With a green build for the branch, merging into master becomes trivial and, more importantly, safe.
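In commands, that flow looks roughly like this (the branch name is hypothetical):

```shell
git checkout -b my-fix master    # do all work on a feature branch
# ...edit, commit, repeat...
git push origin my-fix           # push; Travis test-runs the branch
git rebase -i master             # edit/fixup commits until history is clean
git push -f origin my-fix        # update the branch after rewriting it
git checkout master
git merge --ff-only my-fix       # merge only once the branch is green
```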

Therefore, requiring changes to be pushed into master only after getting a green build from Travis ensures that master never gets bogus commits with invalid docstrings. The visible effects are the same as before, so this is good enough.

Are you convinced yet? Let's dive in.

Dealing with Doxygen false negatives

The first problem to deal with before integrating Doxygen into Travis is the false negatives in docstring validation: that is, the cases where Doxygen complains about a docstring that is actually correct. I had trained myself to ignore these false negatives, but a mental process does not translate to automation: failures for invalid docstrings can only be enforced if the output is deterministic and clean. In other words: we need Doxygen to return 0 if all docstrings look good and 1 if any do not. But because of false negatives, we cannot trust the 1 return values. (I blame Doxygen's C++ parser. Parsing C++ is very difficult and the only reasonable way of doing so these days is by using LLVM's libraries. Anything else is bound to make mistakes.)

The good thing is that the false negatives are deterministic, so I wrote a small AWK script (see check-api-docs.awk) that receives the output of Doxygen, strips out any known false negatives, and returns success if no new errors remain or failure if any unknown errors are found. Plugging this into the Makefile results in a check-api-docs target that can be properly used in an automated environment (see for the full details):

check-api-docs: api-docs/api-docs.tag
 @$(AWK) -f $(srcdir)/admin/check-api-docs.awk \
     api-docs/doxygen.out

api-docs/api-docs.tag: $(builddir)/Doxyfile $(SOURCES)
 @mkdir -p api-docs
 @rm -f api-docs/doxygen.out api-docs/doxygen.out.tmp
 $(AM_V_GEN)$(DOXYGEN) $(builddir)/Doxyfile \
     >api-docs/doxygen.out.tmp 2>&1 && \
     mv api-docs/doxygen.out.tmp api-docs/doxygen.out

With this done, we now have a check-api-docs make target that we can depend on as part of the build. This target fails the build only if there are new docstring problems. (Yes, we can manually invoke this target if we so desire.)
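For reference, here is a stripped-down sketch of what such a filter can look like. The pattern below is a made-up example of a false negative; the real check-api-docs.awk in the Kyua tree recognizes several verified ones:

```shell
# A minimal filter in the spirit of check-api-docs.awk (hypothetical pattern).
cat >check-api-docs.awk <<'EOF'
# Known false negatives: complaints manually verified to be bogus.
/explicit link request to .* could not be resolved/ { next }

# Any other diagnostic is a real problem: report it and remember it.
/warning:|error:/ { print; found = 1 }

END { exit found }
EOF

# A known false negative is filtered out and the filter exits 0.
echo "foo.cpp:10: warning: explicit link request to 'errno' could not be resolved" \
    | awk -f check-api-docs.awk \
    && echo "filter passed"

# An unknown warning is printed and the filter exits 1.
echo "foo.cpp:20: warning: parameter 'md' is not documented" \
    | awk -f check-api-docs.awk \
    || echo "filter failed as expected"
```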

Hooking the run into Travis

The Travis configuration file supports specifying a single command to fetch the required dependencies and another command to execute the build. Builds can be configured both by predefined settings, such as the compilers to use or the operating systems to build on, and by manually specified environment variables.

The most naive approach to running maintenance tasks would be to add their actions to the all target in the Makefile so that a simple make invocation from the build script ran the maintenance task. This is overkill though: the maintenance job, which is not going to yield different results in every job matrix entry, will be executed for all entries and thus will stress an already overloaded worker pool. In particular, installing Doxygen in the builder takes a significant amount of time because of Doxygen's dependency on TeX and running Doxygen sucks precious CPU resources.

What is the alternative then? Easy: have a single entry in the matrix running the maintenance task. Can you do that? Yes. How? With environment variables.

Travis allows you to add arbitrary entries to the job matrix in the configuration file by specifying combinations of environment variables that are passed to the scripts of the build. Using this feature, I introduced a global DO variable that tells the scripts what is being done: apidocs to verify the API documentation and build to execute the actual build of Kyua (see and for an example).
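The build script can then dispatch on this DO variable. Here is a minimal sketch of that dispatch; for illustration it only prints the command it would run, whereas the real scripts in the Kyua tree execute the steps directly:

```shell
# Sketch: dispatch the Travis build based on the DO environment variable.
run_step() {
    case "$1" in
        apidocs)
            # Validate docstrings only; no need for a full build and test run.
            echo "make check-api-docs"
            ;;
        distcheck)
            echo "make distcheck"
            ;;
        *)
            echo "unknown DO value: $1" 1>&2
            return 1
            ;;
    esac
}

run_step "${DO:-apidocs}"
```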

With this new DO variable, we can customize the environment entries in the matrix to introduce a new run for the Doxygen invocation (see .travis.yml for full details on this code snippet):

    env:
        - DO=apidocs
        - DO=distcheck AS_ROOT=no
        - DO=distcheck AS_ROOT=yes UNPRIVILEGED_USER=no
        - DO=distcheck AS_ROOT=yes UNPRIVILEGED_USER=yes

But this is still suboptimal. Travis builds a matrix of all possible combinations given by the operating systems, compilers, and the environment entries you defined. Running Doxygen on the source tree is independent of all these parameters: it does not matter what operating system you are running on or what compiler is used to build the source tree: Doxygen will yield the same output every single time.

Therefore, adding DO=apidocs as an entry to the matrix makes the number of build combinations explode, which is not acceptable because it is wasteful.

We can do better. We can tell Travis to exclude matrix entries that are unnecessary. To do so, we need to pick an arbitrary combination of settings to serve as the "baseline" for our maintenance tasks and then we have to disable all other matrix entries for this particular environment combination:

    matrix:
        exclude:
            - compiler: gcc
              env: DO=apidocs
            - compiler: gcc
              env: DO=distcheck AS_ROOT=yes UNPRIVILEGED_USER=no
            - compiler: gcc
              env: DO=distcheck AS_ROOT=yes UNPRIVILEGED_USER=yes

Having to think of exclusions is not the most pleasant thing to do, but is easy enough if you have a small set of combinations. (It'd be easier and nicer if one could just list all matrix entries explicitly.)

Anyway: voilà! That gives you a new entry in your build matrix to represent the new maintenance task. See a green build and a red build for a couple of examples of how things look.

Tuesday, April 14, 2015

On Bazel and Open Source

This is a rare post because I don't usually talk about Google stuff here, and this post is about Bazel: a tool recently published by Google. Why? Because I love its internal counterpart, Blaze, and believe that Bazel has the potential to be one of the best build tools if it is not already.

However, Bazel currently has some shortcomings when it comes to catering to an important kind of project in the open source ecosystem: the projects that form the foundation of open source operating systems. This post is, exclusively, about that kind of project.

For this essay more than ever: the opinions in this post are purely my own and I have no affiliation with the Blaze team. But yes, I have used Blaze for years.

And for those who don't know me, why am I writing this? Because, first and foremost, I am a "build system junkie" and thus have a general interest in this topic. And second, because I have written various open source software components and packaged countless projects for various operating systems, including NetBSD, FreeBSD, and Fedora; all this for longer than I've been at Google. In fact, I was NetBSD's sole Gnome 2.x maintainer for about 3 years—yeah, call me a masochist. These activities led me to learn a lot about build systems, about the way a great bunch of upstream maintainers think and behave, and a ton about how to write portable software that can be built and installed with minimum fuss. I'm far from an expert on the topic though.

Let's get started.

About three weeks ago, Google released Bazel: the open source variant of Google's internal build system known as Blaze. During the six years I have been at Google, I have heard various individuals wishing for an open source version of Blaze and, finally, it has happened! This is a big milestone and, all things considered, a great contribution to the open source community. Kudos to the team that pulled this off.

What I would like to do with this post is, for the most part, guide you through how a sector of the open source world currently builds software and, to a lesser extent, present why Bazel is not yet a suitable build system for this specific use case. By "open source world" I am specifically referring to the ecosystem of low-level applications that form a Unix-like operating system these days, the majority of which are written in C, C++, and interpreted languages such as Python. There certainly are plenty of other use cases for which Bazel makes a lot of sense (think higher-level apps, Android, etc.), but I won't be talking about these here because I do not know their needs.

What is Bazel?

Bazel, just as Blaze, is an exemplary build system. As its tagline "{Fast, Correct} - Choose two" claims, Bazel is a fast build system and a correct build system. Correct in this context means that Bazel accurately tracks dependencies across targets and triggers rebuilds whenever the slightest thing changes. Fast in this context refers to the fact that Bazel is massively parallel and that, thanks to accurate dependency tracking, Bazel only rebuilds the targets that really need to be rebuilt.

But the above two qualities are just a byproduct of something more fundamental, which in my opinion is the killer feature in Bazel.

Bazel build rules are defined in BUILD files, and the build rules are specified at a very high semantic level. Compared to make(1), where you specify dependencies among files or phony targets, Bazel tracks dependencies across "concepts". You define libraries; you define binaries; you define scripts; you define data sets. Whatever it is that you define, the target has a special meaning to Bazel, which in turn allows Bazel to perform more interesting analyses on the targets. Also, thanks to this high level of abstraction, it is very hard to write incorrect build rules (thus helping enforce the correctness property mentioned above).

Consider the following made-up example:

    cc_binary(
        name = "my_program",
        srcs = ["main.cpp"],
        deps = [":my_program_lib"],
    )

    cc_library(
        name = "my_program_lib",
        srcs = [
            "module1.cpp",
            "module2.cpp",
        ],
    )

    cc_test(
        name = "module1_test",
        srcs = ["module1_test.cpp"],
        deps = [
            ":my_program_lib",
        ],
    )
This simple BUILD file should be readable to anyone. There is a definition of a binary program, its backing library, and a test program. All the targets have an explicit "type" and the properties they accept are type-specific. Bazel can later use this information to decide how to best build and link each target against the others (thus, for example, hiding all the logic required to build static or shared libraries in a variety of host systems).

Yes. It's that simple. Don't let its simplicity eclipse the power underneath.

The de-facto standard: the autotools

The open source world is a mess of build tools, none of which is praised by the majority; this is in contrast to Blaze, about which I have not heard any Googler complain—and some of us are true nitpickers. There are generic build systems like the autotools, cmake, premake, SCons, and Boost.Build; and there are language-specific build systems like PIP for Python, PPM for Perl, and Cabal for Haskell. (As an interesting side note, Boost.Build is probably the system that resembles Bazel the most conceptually... but Boost.Build is actively disliked by anyone who has ever tried to package Boost and/or fix any of its build rules.)

Of all these systems, the one that eclipses the others for historical reasons (at least for the use case we are considering) is the first one: the autotools, which is the common term used to refer to the Automake, Autoconf, Libtool, and pkg-config combo. This system is ugly because of its arcane syntax—m4, anyone?—and, especially, because it does a very poor job at providing a highly semantical build system: the details of the underlying operating system leak through the autotools' abstractions constantly. This means that few people understand how the autotools work and end up copy/pasting snippets from anywhere around the web, the majority of which are just wrong.

However, despite the autotools' downsides, the workflow they provide—configure, build, test, and install for everyone, plus an optional dist step for the software publisher—is extremely well-known. What's more important is that any binary packaging system out there—say RPM, debhelper, or pkgsrc—can cope with autotools-based software packages with zero effort. In fact, anything that does not adhere to the autotools workflow is a nightmare to repackage.
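For reference, that workflow boils down to the following well-known incantations (the exact configure flags vary per package):

```shell
./configure          # adapt the package to the host system
make                 # build
make check           # run the test suite, if any
make install         # install into the configured prefix
make dist            # (for publishers) roll a distribution tarball
```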

The autotools have years of mileage via thousands of open source projects and are truly mature. If used properly—which in itself is tricky, although possible thanks to their excellent documentation—the results are software packages that are trivial to build and that integrate well with almost any system.

What I want to say with all this is that the autotools are the definition—for better or worse—of how build systems need to behave in the open source world. So, when a new exciting build tool appears, it must be analyzed through the "autotools distortion lenses". Which is what I'm doing here for Bazel.

Issue no. I: Cross-project dependency tracking

Blaze was designed to work for Google's unified codebase and Bazel is no different. The implication of a unified source tree is that all dependencies for a given software component exist within the tree. This is just not true in the open source world where the vast majority of software packages have dependencies on other libraries or tools, which is a good thing. But I don't see how Bazel copes with this yet.

Actually, the problem is not only about specifying dependencies and checking for their existence: it is about being able to programmatically know how to use such dependencies. Say your software package needs libfoo to be present: that's easy enough to check for, but it is not so easy to know that you need to pass -I/my/magic/place/libfoo-1.0 to the compiler and -pthread -L/some/other/place/ -Wl,-R/yet/more/stuff -lfoo to the linker to make use of the library. The necessary flags vary from installation to installation if only because the Linux ecosystem is a mess on its own.

The standard practice in the open source world is to use pkg-config for build-time dependency discovery and compiler configuration. Each software package is expected to install a .pc file that, in the usual case, records the compiler and linker flags required to use the corresponding library. At build time, the depending package searches for the needed library through the installed .pc files, extracts the flags, and uses them. This has its own problems but works well enough in practice.
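As a concrete illustration, an installed libfoo (a hypothetical library, as above) might ship a libfoo.pc along these lines:

```text
prefix=/opt/libfoo
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: libfoo
Description: Hypothetical example library
Version: 1.0
Cflags: -I${includedir}/libfoo-1.0
Libs: -L${libdir} -lfoo
```

A depending package then runs pkg-config --cflags libfoo and pkg-config --libs libfoo at build time to obtain the right compiler and linker invocations for that particular installation.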

I am sure it is possible to shell out to pkg-config in Bazel to integrate with other projects. After all, the genrule feature provides raw access to Python to define custom build rules. But, by doing that, it is very easy to lose Bazel's promises of correct builds because writing such low-level build rules in a bulletproof manner is difficult.

Ergo, to recap this section: the first shortcoming is that Bazel does not provide a way to discover external dependencies in the installed system and to use them in the correct manner. Providing an "official" and well-tested set of build rules for pkg-config files could be a possible solution to this problem.

Issue no. II: Software autoconfiguration

Another very common need of open source projects is to support various operating systems and/or architectures. Strictly speaking, this is not a "need" but a "sure, why not" scenario. Let me elaborate on that a bit more.

Nowadays, the vast majority of open source developers target Linux as their primary platform and they do so on an x86-64 machine. However, that does not mean that those developers intentionally want to ban support for other systems; in fact, these developers will happily accept portability fixes to make their software run on whatever their users decide to port the software to. You could argue that this is a moot point because the open source world is mostly Linux on Intel... but not so fast. The portability problems that arise between different operating systems also arise between different Linux distributions. Such is the "nice" (not) world of Linux.

The naïve solution to this problem is to use preprocessor conditionals to detect the operating system or hardware platform in use and then decide what to do in each case. This is harmful because the code quickly becomes unreadable and because this approach is not "future-proof". (I wrote a couple of articles years ago, Making Packager-Friendly Software: part 1, part 2, on this topic.) It seems to me that, today, this might be the only possible solution for projects using Bazel... and this solution is not a good one.

The open source world deals with system differences via run-time configuration scripts, or simply "configure scripts". configure scripts are run before the build and they check the characteristics of the underlying system to adjust the source code to the system in use—e.g. does your getcwd system call accept NULL as an argument for dynamic memory allocation? configure-based checks can be much more robust than preprocessor checks (if written properly).
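For illustration, a configure.ac fragment performing that getcwd check might look as follows. This is a sketch, not Kyua's actual check, and the HAVE_GETCWD_DYN symbol is a made-up name:

```m4
dnl Does getcwd(3) accept NULL to dynamically allocate the result buffer?
AC_RUN_IFELSE(
    [AC_LANG_PROGRAM(
        [[#include <stdlib.h>
          #include <unistd.h>]],
        [[char* cwd = getcwd(NULL, 0);
          return cwd != NULL ? EXIT_SUCCESS : EXIT_FAILURE;]])],
    [AC_DEFINE([HAVE_GETCWD_DYN], [1],
        [Define to 1 if getcwd(NULL, 0) allocates memory for the result])],
    [])
```

The source code can then conditionalize on HAVE_GETCWD_DYN via config.h, keying the logic on the feature itself rather than on the operating system's name.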

I suspect that one could use a traditional configure script with Bazel. After all, the main goal of configure is to create a config.h file with the settings of the underlying system, and this can be done regardless of the build system in use. Unfortunately, this is a very simplistic view of the whole picture. Integrating autoconf in a project is much more convoluted and requires tight integration with the build system to get a software package that behaves correctly (e.g. a package that auto-generates the configure script when its inputs are modified). Attempting to hand-tune rules to plug configure into Bazel will surely result in non-reproducible builds (though that'd be the user's fault, of course).

There are other alternatives to software autoconfiguration as a pre-build step. One of them is Boost.Config, which has traditionally been (in the BSD world) troublesome because it relies on preprocessor conditionals. A more interesting one, which I have not yet seen implemented and for which I cannot find the original paper, is using fine-grained build rules that generate one header file per tested feature.

All this is to say that Bazel should support integration with autoconf out of the box or provide a similar system to perform configuration-time dynamic checks. This has to be part of the platform because it is difficult to implement this and most users cannot be trusted to write proper rules; it's just too easy to get them wrong.

Issue no. III: It's not only about the build

In the "real world of open source", users of a software package do not run the software they build from the build tree. They typically install the built artifacts into system-wide locations like /usr/bin/ by simply typing make install after building the package—or they do so via prebuilt binary packages provided by their operating system. Developers generate distribution tarballs of their software by simply typing make dist or make distcheck, both of which create deterministic archives of the source files needed to build the package in a standalone environment.

Bazel does not support this functionality yet. All that Bazel supports are build and test invocations. In other words: Bazel builds your artifacts in a pure manner... but then... how do these get placed in the standard locations? Copying files out of the bazel-bin directory is not an option because putting files in their target locations may not be as simple as copying them (see shared libraries).

Because Bazel supports highly semantical target definitions, it would be straightforward to implement support for an install-like or a dist-like target—and do so in an infinitely-saner way than what's done in other tools. However, these need native support in the tool because the actions taken in these stages are specific to the target types being affected.

One last detail in all this puzzle is that the installation of the software is traditionally customized at configuration time as well. The user must be able to choose the target directories and layout for the installed files so that, say, the libraries get placed under lib in Debian-based systems and lib64 in RedHat-based systems. And the user must be able to select which optional dependencies need to be enabled or not. These choices must happen at configuration time, which as I said before is not a concept currently provided by Bazel.
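With the autotools, these decisions are expressed at configure time via standard flags; for instance (the --without-foo switch is a made-up example of disabling an optional dependency):

```shell
./configure --prefix=/usr --libdir=/usr/lib64 --without-foo
```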

Issue no. IV: The Java "blocker"

All of the previous "shortcomings" in Bazel are solvable! In fact, I personally think solving each of these issues would be very interesting engineering exercises of their own. In other words: "fixing" the above shortcomings would transform Bazel from "just" a build system to a full solution to manage traditional software packages.

But there is one issue left that is possibly the biggest of all: Bazel is written in Java, and Java is a large dependency that has traditionally had severe FUD around it. Many of the open source projects that would like to escape their current build tools are small projects and/or projects not written in Java. For these cases, introducing Java as a dependency can be seen as a big no-no.

Java is also an annoying dependency to have in a project. Java virtual machines are not particularly known for their portability: the "write once, run anywhere" motto is not completely true. By using Java, one closes the door to pretty much anything that is not x86 or x86-64, and anything that is not Linux, OS X, or Windows. Support for Java on other operating systems or architectures is never official and is always unstable for some reason or another. Heck, even most interpreted languages have better runtime support for a wider variety of platforms! (But maybe that's not an issue: the platforms mentioned before are pretty much the only platforms worth supporting anyway...)

The reason this is a problem is two-fold. The first goes back to the portability issue mentioned above: many open source developers do not like narrowing their potential user base by using tools that will limit their choices. The second is that open source developers are, in general, very careful about the dependencies they pull in because they like keeping their dependency set reduced—ever noticed why there are so many "lightweight" and incomplete reimplementations of the wheel?

So it would seem that Bazel for Java-agnostic open source projects is a hard sell.

But not so fast; things could be improved in this area as well! It is plausible that Bazel makes use of a relatively limited set of Java features. Therefore, it might be relatively easy to make Bazel work (if it doesn't already) with any of the open-source JVM/classpath implementations. If that were done, one could package Bazel together with such an open source JVM and ship both as a self-contained bundle, permitting the use of Bazel on pretty much any platform with ease.

Target users

So where does all the above leave Bazel? What kind of projects would use Bazel in the open source world? Remember that we are considering the low-level packages that form our Unix-based operating systems, not high-level applications.

On the one hand, we have gazillions of small projects. These projects are "happy" enough with the autotools or the tools specific to their language: they do not have complex build rules, their build times are already fast enough, and the distribution packagers are happy to not need alien build rules for these projects. Using Bazel would imply pulling in a big dependency "just" to get... nicer-looking build files. Hardly worthwhile.

On the other hand, we have a bunch of really large projects that could certainly benefit from Bazel. Of these, there are two kinds:

The first kind of large open source project is a project composed of tons of teeny tiny pieces. Here we have things like X.Org, Gnome, and KDE. In these cases, migration to a new build system is very difficult: many separate "teams" need to coordinate; there must be a way to track build-time dependencies across the pieces; and, as each individual piece is small, each individual maintainer will be wary of introducing a heavy component like Bazel as a dependency. But it could be done. In fact, X.Org migrated from imake to the autotools and KDE from the autotools to cmake, and both projects pulled the task off.

The second kind of large open source project is a project with a unified source tree. This is the project that most closely resembles the source tree that Blaze targets, and the project that could truly benefit from adopting Bazel. Examples of this include Firefox and FreeBSD. But migrating these projects to a new build system is an incredibly difficult endeavor: their build rules are currently complex and developer productivity could suffer during the transition. But it could be done. In fact, one FreeBSD developer maintains a parallel build system for FreeBSD known as "meta-mode". meta-mode attempts to solve the same problems Bazel solves regarding correctness and fast builds on a large codebase... but meta-mode is still make and... well, not pleasant to deal with, to put it mildly. For a project like FreeBSD, all the issues above could be easily worked around—with the exception of Java. Introducing Java as a dependency in the FreeBSD build system would be very difficult politically, but maybe it could be done? I don't know; I guess it'd depend on the JVM being used (after all, GCC used to ship with GCJ in the past).


Despite all the above, I think Bazel is a great tool. It is great that Google could open source Blaze and it is great that the world can now take advantage of Bazel if they so choose. I am convinced that Bazel will claim certain target audiences and that it will shine in them; e.g. dropping Gradle in favor of Bazel for Android projects? That'd be neat.

But the above makes me sad because these relatively simple shortcomings can get in the way of adoption, even for test-run purposes: many developers won't experience the real benefits of having an excellent build tool if they don't even try Bazel, and if they don't try Bazel they will fall into the trap of reinventing the wheel in incomplete ways. We have too many wheels in this area already.

Get what I'm saying? Go give Bazel a try right now!

That's it for today. Don't leave before joining the bazel-discuss mailing list. And, who knows, maybe you are a "build system junkie" too and will find the above inspiring enough to work on solutions to the issues I raised.

Friday, March 20, 2015

Nexus 9, focused writing, and more

About three weeks ago, I got a Nexus 9 and its accompanying Folio case+keyboard at work with the main goal of test-driving Google's mobile apps.

Being "free" hardware for testing I could not turn it down, but at first I honestly was not sure what to do with it: I already got a Nexus 10 last year and exchanged it soon after for a Nexus 7 because I did not like its bulky feeling. The Nexus 7, on the other hand, is the perfect size for reading news, articles, and books, which is basically the only thing I (used to) do with the tablet.

The Nexus 9

Let me say upfront that the Nexus 9 is a really nice tablet. I have been carrying it around every day since I got it just because I want to use it—I'll get to why soon; bear with me for a second.

Hardware-wise, the tablet looks great and feels well-finished, which is something I could not have said of the Nexus 10. The screen is gorgeous and, interestingly, its 4:3 aspect ratio feels just right. (Yes, yes, this is the aspect ratio all screens used to have back in the day. Going back to basics?)

Software-wise... we have Android Lollipop here and, well, let's just say I am not impressed. The hardware in the Nexus 9 is more powerful than what I've got on my Nexus 7 so the tablet is indeed more fluid (how could it not)... but it still feels sluggish in general. Anyway...

The Folio keyboard

I have mixed feelings about the Folio keyboard+case combo.

As a case, I am always afraid it won't do its job because it attaches to the tablet purely with magnets, so I'd better not drop it on the floor, ever. On the other hand, this works great should you need to attach your tablet to your fridge; keep that in mind.

As a keyboard, the layout is cramped due to the keyboard's size—almost like a full-sized keyboard but not really; I suspect that if you have big hands (I don't), you won't enjoy typing on this. However, the overall feeling of the keys is reasonably good for a chiclet keyboard, which permits touch-typing at decent speeds.

Unfortunately, the designers of the Folio have gotten this "view" of how things should be in the Android world that I personally cannot understand. No Caps Lock? OK, I can let that go because I can't really code comfortably in this thing... but no ESC key? Really? This is a great keyboard... except it isn't, because no ESC key means there is no good way to use SSH (assuming you are a vi(1) user, or a set -o vi user). In fact, if it weren't for this little detail, this device would be a great on-the-road SSH client.

Lastly, little-known (?) fact: the Folio's keyboard is completely useless on metallic surfaces. If you happen to place it on a metallic table, the circuitry gets confused thinking that you have folded the case and shuts the keyboard off; a huge problem for me at work, where many relaxation areas have metallic tables. Watch out for your local cafe's tables as well!

No trackpad?

The Folio does not have a trackpad, nor a pointing stick, nor a trackball: that's correct, no pointing device. Or does it?

For the first couple of days, the lack of a trackpad or similar device was annoying. I am extremely used to my MacBook Pro's trackpad and my Magic Mouse with its support for gestures, and none of this was to be seen on this setup.

Except... hold on... there is a touch screen! The feeling of raising a hand to touch the Nexus 9's screen was odd at first, particularly because I hate poking at computer screens due to how dirty they get. But this is something you get used to quickly, very very quickly. The ability to just poke at things feels natural and turns out to be quite useful. In fact, I have caught myself poking at my MacBook's screen once or twice already. Oops!

And, finally, the use case

So what could I use this device for? I didn't know what to do with it really, but I did want to give it an honest try.

Read email? It is good enough, but not as convenient as a desktop email client when you receive hundreds of messages per day. Read books? It's quite heavy and big: the Nexus 7 fits the bill much better. Coding or systems administration? See the issues with SSH above. Watching videos? I have a Chromecast, thank you very much.

This device did not provide me anything that I could not do with any of my other devices; in fact, Android, for a power user, feels limiting: the inability to see more than one app at once, the sluggishness of switching between already-open apps, and the reduced set of supported keystrokes make some tasks just too difficult.

But I think I discovered the true potential of a tablet with a keyboard (for me, that is): writing. Nonstop writing. And by that I mean no copyediting; just purely writing text without applying any fancy formatting, nor messing with the layout of an article, nor anything else.

Why? Because the whole setup provides an immersive experience for whatever you are doing: apps are full screen and notifications can be easily disabled and/or ignored. The keyboard surely lacks special keys, but it has all you need to type text. With the right writing app, it is really easy to get into the flow and be freed from constant distractions.

In fact, I have already drafted multiple articles for work and for this blog much faster than I could have on my laptop or workstation. It is that great. For pure article drafting, I am in love.

The killer app: JotterPad

And what can you write with?

My first attempt was to use Drive Docs—Google's full-blown word processor—because it comes pre-installed. Yes, it works, but it has a "problem": Docs has enough features to be distracting. Because the features are there, I invariably get sucked into setting the right typeface, or the right heading structure... As any good writer (not me) will tell you, avoiding all editing during the initial stages of writing is a very good way to create a first draft: a braindump you can later iterate on.

So I went out to research "plain-text" writing apps, and offline at that if at all possible. I tested a bunch and there is one that quickly caught my attention: JotterPad.

JotterPad is this little text editor that follows what-I-think-are all the Material Design guidelines and thus flows correctly in the Android Lollipop environment. It is a simple app that just "feels right" and just works, which, sadly, is hard to find these days.

Feature-wise, JotterPad is very limited, but I do think that this is its killer point. A lightweight app that behaves nicely and does its advertised job correctly. With JotterPad and its "Typewriter mode" setting enabled, you can write pages and pages non-stop without even realizing it. Seriously, if you have anything to write and you have your tablet around, give this little app a try.

(I know, I know, this is a sales pitch. But, in fact, this particular app is what triggered me to write this article in the first place and the authors are very receptive to feedback. So they kinda deserve it!)


So yes, I now think that a tablet operating system with a real keyboard has one specific use-case for me: focused work and, in particular, distraction-free writing. Your mileage will certainly vary, but those are my current thoughts regarding this device and software combination.

(Numbers? This post clocks in at about 1250 words and I jotted down the first 1000-word draft in like 30 minutes! That's about 33 WPM, which I think is decent considering that this was not just typing: it was composing text from scratch.)

Saturday, February 28, 2015

Kyua turns parallel

After three months of intensive work on Kyua's executor Git branch, I am happy to announce that the new execution engine, whose crown feature is the ability to run test cases in parallel, has just landed in master and passes all self-tests!

You can head over to the commit message for more details on the merge, read the NEWS entries, and skim through the history of the executor branch to understand how this feature has been built.

One caveat: the history will look surprisingly short for a project that has spanned over three months. The reason is that, in the executor branch, I have routinely been using git rebase -i master to build a reduced set of changes that tell the story behind the implementation without distracting commits of the kind "Fix this little bug I forgot about" here and there. An unfortunate side-effect of this is that the temporal history of the commits makes no sense, and also that all the work I've been doing is not properly accounted for in GitHub's nice activity graphs; oh well, I consider a sane semantic history more important than these tiny details.

Why is this work important? Why is running tests in parallel such a big deal?

First, because the computing industry has fully moved into multiprocessor systems and thus taking advantage of multiple cores is something that any piece of modern software should do. As a little fun fact, Kyua is now able to stress my Mac Mini when running tests, spinning its fans to the maximum; this did not happen at all before.

But second, and I would say more importantly, because many tests are not resource-hungry, or the system resources they stress do not overlap the resources used by other tests. For example: a significant number of tests spend most of their run time waiting on disk I/O or timers, which in turn causes the whole test suite to run for much longer than it otherwise would. Parallelization allows these long-running but not-heavy tests to run without blocking forward progress.

Let me also add that a secondary goal of this work is to optimize the inner workings of Kyua by reducing the system call overhead. In particular, eliminating one fork(2) call from every test case has been an explicit design goal. This especially helps Kyua when running on OS X, as fork is particularly expensive on the Darwin kernel (yes, citation needed).

As a very unscientific example: running the Kyua, ATF, Lutok, and shtk test suites with the old sequential execution engine takes about 2 minutes and 15 seconds in my dual-core FreeBSD virtual machine running on a dual-core Mac Mini. With the new implementation, the total run time goes down to 1 minute and 3 seconds using a parallelism setting of 4 (roughly a 2.1x speedup). Pretty cool I would say, but your mileage may (will) vary!

Are we done yet?

The merge of the executor branch marks the beginning of a major restructuring of Kyua's internals. As things are today, only the kyua test command has been switched to using the new execution engine, and not fully: only the execution of the test's body and cleanup routines happen through the executor; listing of test cases still happens as it did before. Similarly, both kyua list and kyua debug still use the out-of-process, testers-based, sequential implementation.

Therefore, there is a bunch of tricky work left to be done: the old test case execution engine (the runner plus the out-of-process testers) need to be fully removed, which in turn means that their functionality has to first be integrated into the new executor; there is a need for a feature to explicitly mark test programs as "exclusive", which is a prerequisite for tests that modify system-wide settings (as is done in the FreeBSD test suite); and an important regression needs to be fixed.

So... if we are not done, why merge now?

Because the current code is complete enough as a first piece of the whole puzzle. Even if the implementation does not yet meet my personal quality standards, the behavior of the code is already 80% of the way to my goal of fully switching to the new execution backend. You, as an end user, care about the behavior (not so much about the implementation), so by doing the merge now you can already start taking advantage of the new parallel execution functionality.

Also, because I am tired of managing a relatively large set of commits with git rebase -i. At this point, the set of commits that build the executor provide a good foundation for the code and its design. From now on, any other improvements to this codebase, such as the addition of new features or the correction of the existing regressions, should be properly tracked in the Git history.

And lastly because, at the beginning of 2015, I set myself the personal goal of getting this code merged by the end of February... so I just made the deadline! Which reminds me I gotta plan what the year's timeline looks like to reach Kyua 1.0.

Can I try it?

Of course! Please do!

There is no release available yet, but you can obviously fetch the code from the GitHub project page and build it on your own! If you do that, do not forget to set parallelism=4 (or some other value greater than 1) in your ~/.kyua/kyua.conf file to enable the new behavior.

In fact, I am not going to cut a new release just yet because some of the issues mentioned above are of the "release-blocking" severity and thus must be resolved first. What I am going to do, though, is file bugs for each known issue so that they can be properly tracked.

Have fun and please share any feedback you may have!

Monday, February 16, 2015

Unused parameters in C and C++

Today I would like to dive into the topic of unused parameters in C and C++: why they may happen and how to properly deal with them—because smart compilers will warn you about their presence should you enable -Wunused-parameter or -Wextra, and even error out if you are brave enough to use -Werror.

Why may unused parameters appear?

You would think that unused parameters should never exist: if the parameter is not necessary as an input, it should not be there in the first place! That's a pretty good argument, but it does not hold when polymorphism enters the picture: if you want to have different implementations of a single API, such an API will have to provide, on input, a superset of all the data required by all the possible implementations.

The obvious case of the above is having an abstract method implemented by more than one subclass (which you can think of as a function pointer within a struct in the case of C). In this scenario, the caller of this abstract method may be handling a generic condition but the various specific implementations may or may not use all the input data.

Our example

An example taken straight from Kyua is the compute_result method, whose purpose is to determine the status of a test case after termination based on the outputs of the test program, including: the program's exit code, its standard output, its standard error, and files that may be left in the transient work directory. The signature of this abstract method looks like this:

virtual model::test_result compute_result(
    const optional< process::status >& status,
    const fs::path& work_directory,
    const fs::path& stdout_path,
    const fs::path& stderr_path) const = 0;

Kyua implements this interface three times: once for plain test programs, once for ATF-based test programs, and once for TAP-compliant test programs. This interface receives all test-related post-termination data as inputs so that the different implementations can examine whichever parts (possibly not all) they require to compute the result.

In concrete terms: the plain interface only looks at the exit status; the ATF interface looks both at the exit status and at a file that is left in the work directory; and the TAP interface looks both at the exit status and the standard output of the program.

When you face a scenario like this where you have a generic method, it is clear that your code will end up with functions that receive some parameters that they do not need to use. This is alright. However, as obvious as it may be to you, the compiler does not know that and therefore assumes a coding error, warning you along the way. Not helpful.

Two simple but unsuitable alternatives

A first mechanism around this, which only works in C++, is to omit the parameter name in the function definition. Unfortunately, doing so means you cannot reference the parameter by name any longer in your documentation and, furthermore, this solution does not work for C.

A second mechanism is to introduce side-effect free statements in your code of the form (void)unused_argument_name;. Doing this is extremely ugly (for starters, you have to remember to keep such statements in sync with reality) and I fear is not guaranteed to silence the compiler—because, well, the compiler will spot a spurious statement and could warn about it as well.

Because these two solutions are suboptimal, I am not going to invest any more time on them. Fortunately, there is a third alternative.

Tagging unused parameters with compiler attributes

The third and best mechanism around this is to explicitly tag the unused parameters with the __attribute__((unused)) GCC extension as follows:

model::test_result compute_result(
    const optional< process::status >& status,
    const fs::path& work_directory __attribute__((unused)),
    const fs::path& stdout_path __attribute__((unused)),
    const fs::path& stderr_path __attribute__((unused))) const;

But this, as shown, is not portable. How can you make it so?

Making the code portable

If you want your code to work portably across compilers, then you have to go a bit further because the __attribute__ decorators are not standard. The most basic abstraction macro you'd think of is as follows:

#define UTILS_UNUSED __attribute__((unused))

... which you could parameterize, via a configure-time substitution, along these lines (the @ATTRIBUTE_UNUSED@ placeholder name here is illustrative):

#define UTILS_UNUSED @ATTRIBUTE_UNUSED@

... so that your configure script could determine what the right mechanism to mark a value as unused on your platform is and perform the replacement. This is not trivial, so take a look at Kyua's compiler-features.m4 to get some ideas.

Such a simple macro then lets you write:

model::test_result compute_result(
    const optional< process::status >& status,
    const fs::path& work_directory UTILS_UNUSED,
    const fs::path& stdout_path UTILS_UNUSED,
    const fs::path& stderr_path UTILS_UNUSED) const;

... which gets us most of the way there, but not fully.

Going further

The UTILS_UNUSED macro shown above lets the compiler know that the argument may be unused and that this is acceptable. Unfortunately, if an argument is marked as unused but it is actually used, the compiler will not tell you about it. Such a thing can happen once you modify the code months down the road and forget to modify the function signature. If this happens, it is a recipe for obscure issues, if only because you will confuse other programmers when they read the code and cannot really understand the intent behind the attribute declaration.

My trick to fix this, which I've been using successfully for several years, is to define a macro that also wraps the argument name; say: UTILS_UNUSED_PARAM(stdout_path). This macro does two things: first, it abstracts the definition of the attribute so that configure may strip it out if the attribute is not supported by the underlying compiler; and, second and more importantly, it renames the given argument by prefixing it with the unused_ string. This renaming is where the beauty lies: the name change will forbid you from using the parameter via its given name and thus, whenever you have to start using the parameter, you will very well know to remove the macro from the function definition. It has worked every single time since!

Here is what the macro looks like (straight from Kyua's file):

#define UTILS_UNUSED_PARAM(name) unused_ ## name UTILS_UNUSED

And here is how the macro would be used in our example above:

/// This is a Doxygen-style docstring.
/// Note how, in this comment, we must refer to our unused
/// parameters via their modified name.  This also spills to our
/// public API documentation, making it crystal-clear to the
/// reader that these parameters are not used.  Because we are
/// documenting here a specific implementation of the API and not
/// its abstract signature, it is reasonable to tell such details
/// to the user.
/// \param status Status of the exit process.
/// \param unused_work_directory An unused parameter!
/// \param unused_stdout_path Another unused parameter!
/// \param unused_stderr_path Yet another unused parameter!
/// \return The computed test result.
model::test_result compute_result(
    const optional< process::status >& status,
    const fs::path& UTILS_UNUSED_PARAM(work_directory),
    const fs::path& UTILS_UNUSED_PARAM(stdout_path),
    const fs::path& UTILS_UNUSED_PARAM(stderr_path)) const;

What about Doxygen?

As I just mentioned Doxygen above, there is one extra trick to get our macros working during the documentation extraction phase. Because Doxygen does not implement a full-blown C/C++ parser—although I wish it did, and nowadays this is relatively easy thanks to LLVM!—you have to tell Doxygen how to interpret the macro. Do so by adding the following line to the Doxyfile control file:

PREDEFINED += "UTILS_UNUSED_PARAM(name)=unused_ ## name"

So, what about you? Do you keep your code warning-free by applying similar techniques?