Automating tree and graph visualisation unit tests in Python and Django

Categories: Django, Programming

This article deals with how to systematically test visualisations of tree and graph data structures, rendered as SVG by JavaScript in a frontend browser, from within a Python backend test framework.

Problem

We would like to define test cases within an existing unit test framework in Python (and optionally Django) so that we can programmatically make assertions on the frontend JavaScript runtime and its final rendered output.

The renderings are in the form of SVG, which is XML, and the layout algorithms require a DOM which supports SVG, including the getBBox method.

Solution

We will use the browser automation tool Selenium to manage the rendering browser from within our unit tests; for Django, we can build on LiveServerTestCase. To avoid launching a user interface, which we do not need for these tests, we will use the PhantomJS driver for Selenium, a headless sandboxed WebKit browser with a DOM.

For representing and transporting tree and graph structures we will use the DOT notation, and for rendering we will use D3. A set of JavaScript libraries which can parse DOT notation, manipulate graphs, lay them out, and render them to SVG using D3 are: dagre, dagre-d3, graphlib, and graphlib-dot.

Generating the DOT representation can be achieved through several stable Python libraries, namely: graphviz, PyGraphviz, or pydot.
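
For illustration, a minimal sketch using the graphviz package to build a digraph equivalent to the one used in the first test further below; PyGraphviz or pydot could be used equivalently.

# minimal sketch using the graphviz package (pip install graphviz); producing
# the DOT source does not require the Graphviz binaries to be installed
import graphviz

graph = graphviz.Digraph()
graph.node('1')
graph.node('2')
graph.edge('1', '2', label='label')

# graph.source now holds the DOT representation of the digraph
dot = graph.source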

Individual assertions in our unit tests will be expressed using the (synchronous) arbitrary JavaScript evaluation function available on the sandboxed browser instance.

Implementation

We will create an abstract base test case for dealing with the Selenium driver, as well as providing utilities for injecting appropriate JavaScript libraries and inspecting the browser’s console for messages. Furthermore, we will use Django’s Static File Finders in order to supply the browser with an HTML container as well as JavaScript libraries, although this is not obligatory (see below).
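
As a rough sketch, the settings assumed throughout this article might look as follows; the static directory layout and the environment variable are purely illustrative.

# settings.py (sketch); paths and defaults here are assumptions
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# location of the PhantomJS executable used by the Selenium driver
PHANTOMJS_EXECUTABLE = os.environ.get(
    'PHANTOMJS_EXECUTABLE', '/usr/local/bin/phantomjs')

# directory holding the HTML container and JavaScript libraries, resolved at
# test time through the Static File Finders
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)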

Defining the Base Test Case

Assuming you have a setting for the PhantomJS executable in settings.PHANTOMJS_EXECUTABLE, defaulting to “/usr/local/bin/phantomjs”, the base test case can be defined as follows:

from django.conf import settings
from django.core.cache import get_cache
from django.test import LiveServerTestCase
from django.contrib.staticfiles import finders

class VisualisationLiveTestCase(LiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        # Selenium
        from selenium.webdriver.phantomjs.webdriver import WebDriver
        from selenium.webdriver.support.wait import WebDriverWait
        # PhantomJS driver for Selenium
        cls.driver = WebDriver(executable_path = settings.PHANTOMJS_EXECUTABLE)
        # typical renderable pane dimensions for browser
        cls.driver.set_window_size(1024, 768)
        cls.wait  = WebDriverWait(cls.driver, timeout = 10)
        super(VisualisationLiveTestCase, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        # CAVEAT EMPTOR! quit must appear before call to superclass
        cls.driver.quit()
        super(VisualisationLiveTestCase, cls).tearDownClass()

    def setUp(self):
        super(VisualisationLiveTestCase, self).setUp()
        # clear any caches and so on
        get_cache('default').clear()

    def tearDown(self):
        # optionally if using transaction management and you don't need
        # persistence across unit tests
        # rollback()
        super(VisualisationLiveTestCase, self).tearDown()

    def add_script(self, path):
        """
        Dynamically adds a <script> element with the given JavaScript file as
        the 'src' attribute, loaded through the Django Static Files Framework.
        Therefore, *path* is a relative path capable of being interpreted via
        any of the installed Static File Finders.
        """
        resolved_path = finders.find(path, all = False)
        script = """
        var script = document.createElement('script');
        script.setAttribute("type", "text/javascript");
        script.setAttribute("src", "{path}");
        document.getElementsByTagName("head")[0].appendChild(script);
        """.format(path = resolved_path)
        self.driver.execute_script(script)

    @property
    def console(self):
        """
        Obtains the browser's console which contains all messages sent to it.
        """
        return self.driver.get_log('browser')

Defining a Concrete Test Case

We can now use the above base test case to create test cases with specific combinations of JavaScript libraries as well as base rendering HTML containers.

class TreeVisualisationLiveTestCase(VisualisationLiveTestCase):

    def setUp(self, *args, **kwargs):
        super(TreeVisualisationLiveTestCase, self).setUp(*args, **kwargs)
        # relative path to static HTML file used for rendering container
        self.driver.get(finders.find('test/tree.html', all = False))
        # the required JavaScript libraries
        self.add_script('js/d3/3.4.8/production/d3.min.js')
        self.add_script('js/dagre/0.1.0/debug/dagre.js')
        self.add_script('js/dagre/d3/0.1.5/debug/dagre-d3.js')
        self.add_script('js/graphlib/0.7.4/debug/graphlib.js')
        self.add_script('js/graphlib/dot/0.4.10/debug/graphlib-dot.js')

Of course, it is perfectly possible to define the JavaScript libraries in the HTML container, or to use an entirely different deployment method. The test case could also run against the live test server by serving the HTML view through Django as you normally would.
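
For instance, a sketch of that live-server variant, assuming a hypothetical URL which serves the rendering container (with its JavaScript libraries already included) through a regular Django view:

class TreeVisualisationLiveServerTestCase(VisualisationLiveTestCase):

    def setUp(self, *args, **kwargs):
        super(TreeVisualisationLiveServerTestCase, self).setUp(*args, **kwargs)
        # '/test/tree/' is a hypothetical URL mapped to a view which returns
        # the HTML container with the JavaScript libraries already included
        self.driver.get(self.live_server_url + '/test/tree/')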

We can now add assertions which inspect the return values of ad hoc JavaScript evaluations within the sandboxed browser. In the following example we query the nodes of the tree and verify their existence through their labels. Notice that the types of values are preserved between JavaScript and Python.

    def test_dot_simple(self):
        dot = """digraph { 1; 2; 1 -> 2 [label=\\"label\\"] }"""
        function = """return graphlibDot.parse("{dot}").nodes()""".format(
            dot = dot)
        self.assertEqual(["1", "2", ], self.driver.execute_script(function))

A more involved test relies on the existence of a script in the HTML container (or another JavaScript library included by some other means) which provides the function run(dot), expecting a DOT string and returning the width and height of the SVG canvas after the tree has been laid out and rendered.

    def test_dot_complex(self):
        dot = """
        // The graph name and the semicolons are optional
        graph graphname {
            a -- b -- c
            b -- d
        }
        """
        # invoke run(dot) and pass it the above dot string as its argument
        result = self.driver.execute_script("return run(arguments[0]);", dot)
        # ensure the rendered SVG has the anticipated dimensions
        self.assertEqual(result, {u'width': u'146', u'height': u'208'})
        # ensure the correct messages were printed to the console in the browser
        self.assertTrue('run' in self.console[0]['message'])
        self.assertTrue('done' in self.console[1]['message'])

The included JavaScript, shown below, also sends two messages to the console, which we check through the assertions above.

var run = function run(dot) {
    console.info("running");
    var svg = d3.select('svg');
    var graph = graphlibDot.parse(dot);
    var renderer = new dagreD3.Renderer();
    var layoutGraph = renderer.run(graph,
        svg.append('g').attr('transform', 'translate(20, 20)'));
    svg
    .attr('width', layoutGraph.graph().width + 40)
    .attr('height', layoutGraph.graph().height + 40);
    console.info("done");
    return { width: svg.attr('width'), height: svg.attr('height') };
};

Our HTML container simply features the following SVG element.

<svg id="svg" width="800" height="600"></svg>

Visually Inspecting

We can generate an effective screen capture of the browser’s rendering in order to visually inspect the output of the unit test.

self.driver.save_screenshot('filename.png')

Example tree visualisation
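
As a variation, a minimal sketch which captures a screenshot for every test on teardown; the 'screenshots' directory is an assumption and must already exist.

    def tearDown(self):
        # name the capture after the currently executing test method
        self.driver.save_screenshot(
            'screenshots/%s.png' % self._testMethodName)
        super(TreeVisualisationLiveTestCase, self).tearDown()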

Alternatives

Instead of DOT notation we can send JSON representations of the trees and graphs; this will remove the DOT generation dependencies in the Python backend, as well as the DOT parsing dependencies in the JavaScript frontend, but at the cost of having to define an ad hoc JSON representation for the structures.
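
As an illustration only, such an ad hoc representation might be as simple as the following; the exact shape is an assumption, not a standard.

import json

# an entirely illustrative node/edge shape for the same two-node tree
tree = {
    'nodes': [{'id': '1'}, {'id': '2'}],
    'edges': [{'source': '1', 'target': '2', 'label': 'label'}],
}

# serialised and handed to the frontend, which must know how to interpret it
payload = json.dumps(tree)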

SVG can be generated (asynchronously) at the backend using a variety of tools, and then either embedded in the HTML response, loaded independently and injected in the rendered DOM, or converted into a binary image format and loaded as such. This removes all rendering dependencies from the frontend JavaScript, at the cost of deployment and implementation dependencies in the backend. Benefits of this approach include sharable caching of rendered output, as well as efficiency for small mobile devices which may exhibit an unacceptable lag when rendering complex layouts.
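
A sketch of this backend rendering alternative using the graphviz package, which in this case does require the Graphviz binaries to be installed on the server:

import graphviz

dot = 'digraph { 1; 2; 1 -> 2 [label="label"] }'

# renders the DOT source to SVG bytes via the locally installed Graphviz
# binaries; the result can be embedded in a template, served as a static
# asset, or converted into a binary image format
svg = graphviz.Source(dot).pipe(format='svg')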

Caveats

Attempting to use any SVG rendering logic which relies on getBBox will fail in standalone node.js as it does not feature a DOM, and using a pluggable DOM such as jsdom will not fix the issue as it does not support the getBBox method.

Using standalone PhantomJS through sub-process communication from Python instead of a structured approach as with Selenium dramatically increases the complexity of testing and of deploying the test suite.

Conclusion

Given the programmatic testability of the entire pipeline, starting from a Django view, through serialisation of the data structures in DOT notation or otherwise, to layout and rendering into SVG on the frontend, we should always automate its testing rather than rely on manual visual inspection of browser UIs.

Extensions

It should be possible to run a unit test framework within the frontend JavaScript and integrate its output with the currently executing unit test controlling the frontend sandbox, instead of, or in addition to, assertions relying on ad hoc JavaScript evaluation in the sandbox.
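
A rough sketch of such an integration, assuming the frontend test runner publishes its results on a hypothetical window.__testResults global once its suite has completed:

    def test_frontend_suite(self):
        # window.__testResults is a hypothetical global which the frontend
        # test runner is assumed to set when it has finished
        self.wait.until(lambda driver: driver.execute_script(
            'return window.__testResults !== undefined;'))
        results = self.driver.execute_script('return window.__testResults;')
        self.assertEqual(0, results['failures'])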

Acknowledgements

Gratitude to Dr Felix Effenberger and Chris Pettitt for their feedback.