Visualizing Latency Part 3: Rendering Event Data

Now that I have introduced the D3 latency heatmap chart component and explained what binning is, I can discuss the primary use case of the chart: rendering event data.

What is event data?

First, I must explain what I mean by event data. For a fuller treatment, please read Analytics For Hackers: How To Think About Event Data, but allow me to summarize: Event data describes actions performed by entities. It has three key pieces of information: action, timestamp, and state. It is typically rich, denormalized, nested, schemaless, append-only, and frequently extremely large. Some examples of event data include system log records, financial market trades, crime records, or user activities within an application.

When I created the D3 latency heatmap chart component, my primary use case was visualizing the latency of a queue-based report generation system. This system logs every report generation event, along with important data such as start time and duration, to a table in a SQL Server database. I imagine there are thousands (or millions) of systems storing event data in SQL tables — and there’s absolutely nothing wrong with that — but event data is also frequently stored in append-only files on filesystems, in stream processing systems like Apache Kafka, or in distributed databases like Apache Cassandra.

When rendering event data, the key decisions are around binning:

  1. What sizes should the bins have?
  2. How should binning be implemented?
  3. Where should binning be performed?

What sizes should the bins have?

This question was discussed in my blog post where I explained what binning is. The short answer is “it depends on your chart size and data distribution.” As you create your chart, be prepared to experiment with several different bin sizes.

How should binning be implemented?

Let’s explore a few common alternatives for implementing binning.

SQL

The process of binning is conceptually equivalent to the process of a SQL GROUP BY, which leads to one possible implementation: in SQL itself.

Consider a table of events with the following (slightly unrealistic) definition:

CREATE TABLE dbo.events(
    id INT IDENTITY NOT NULL PRIMARY KEY,
    dt datetime NOT NULL,
    val float NOT NULL
);

Let’s say we want to bin the x-axis (time) by day, and the y-axis (value) into buckets of 10 (i.e. a bin for values from 0-10, another from 10-20, and so on). Here’s one way to write that query in SQL:

SELECT CAST(dt AS DATE) AS xval,
       FLOOR(val / 10) * 10 AS yval,
       COUNT(*) AS count
FROM dbo.events
GROUP BY CAST(dt AS DATE), FLOOR(val / 10) * 10

Given the above query, changing the size of the bins is straightforward: simply adjust the CAST and FLOOR expressions in both the SELECT list and the GROUP BY clause.

How fast is this query? I measured the above query on my development laptop in Microsoft SQL Server 2016, against a table containing 1 billion records, along with various candidate optimizations:

  • Base case: 10.23 seconds, 97.7 million records/second.
  • COUNT(1) instead of COUNT(*): statistically indistinguishable from the base case. As expected, COUNT(1) has no performance impact.
  • Data sorted by dt then val (CREATE CLUSTERED INDEX IX_events_dt_val ON dbo.events(dt, val)): statistically indistinguishable from the base case. I was hoping to eliminate the need for sorts in the query, but perhaps the query optimizer can’t tell that the GROUP BY expressions are order-preserving.
  • Columnstore table (CREATE CLUSTERED COLUMNSTORE INDEX): 3.238 seconds, 308.8 million records/second. Reads are much faster, but writes might be slower.
  • Memory-optimized table (CREATE TABLE … WITH (MEMORY_OPTIMIZED=ON)): failed with insufficient memory. Apparently 16GB allocated to SQL Server isn’t enough.
  • Memory-optimized table with 10,000,000 records: 1.052 seconds, 9.5 million records/second. The throughput figure is highly suspicious, but memory-optimized tables are generally optimized for OLTP, not analytics.

Other possibilities which I did not test include:

  • Different GROUP BY expressions to see if they are more efficient or the optimizer can determine that they are order-preserving
  • An event table that stores date and time in separate columns, as a data warehouse might
  • Memory optimized tables with columnstore indexes
  • Memory optimized tables with natively compiled queries

Conclusion: Don’t spend too much time on query optimization, but consider storing your events in a columnstore table rather than a rowstore table.

Imperative (Traditional) Programming Languages

Another approach to binning is to do it in an imperative programming language (e.g. JavaScript, C#, or Java). Here are a few potential avenues to explore:

  1. Accumulate counts in a dictionary/map/hash table keyed by (xval, yval) (see the sketch below).
  2. Use something like LINQ and express the query in a form similar to the SQL query above.
  3. If the data is already properly ordered, and your GROUP BY expressions are order-preserving, use that ordering to process the data piece-by-piece rather than keeping everything in RAM.

I’m personally intrigued by #3, as I doubt most general-purpose analytical systems or libraries use this optimization technique.
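
To make option #1 concrete, here is a minimal sketch in Java (assuming Java 16+ records; the Event shape, Bin key, and bucket size are illustrative assumptions, not taken from any real system). It accumulates counts in a hash map keyed by (xval, yval), mirroring the SQL query above:

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class EventBinner {
    // Hypothetical event shape: a timestamp plus an observed value (e.g. a latency)
    record Event(LocalDateTime dt, double val) {}

    // Composite bin key: xval = day, yval = lower bound of the value bucket.
    // Records give us equals()/hashCode() for free, so they work as map keys.
    record Bin(LocalDate xval, double yval) {}

    // Bin by day on the x-axis and by buckets of `bucketSize` on the y-axis,
    // accumulating counts in a HashMap keyed by (xval, yval).
    static Map<Bin, Long> bin(List<Event> events, double bucketSize) {
        Map<Bin, Long> counts = new HashMap<>();
        for (Event e : events) {
            Bin key = new Bin(
                    e.dt().toLocalDate(),                            // CAST(dt AS DATE)
                    Math.floor(e.val() / bucketSize) * bucketSize);  // FLOOR(val / 10) * 10
            counts.merge(key, 1L, Long::sum);
        }
        return counts;
    }
}

Each resulting (xval, yval, count) entry then corresponds directly to one cell of the heatmap.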

Stream Processing Systems

Another possible place to perform binning is within a stream processing system. Imagine a near-realtime D3 latency chart rendering binned data from an Apache Flink cluster that continuously aggregates and bins raw event data from a Kafka event bus, or from an Amazon Kinesis Data Analytics SQL query.

Where should binning be performed?

My recommendations are as follows:

  1. If you’re dealing with streaming data, do it in the streaming analytics system (Apache Flink, etc.)
  2. If your data set is small, and you want to support responsive dynamic rebinning of the data (e.g. a dropdown where the user can select whether they want to bin the x-axis by day, month, quarter, or year), pull the raw event data into the web browser and perform the binning there. Pay careful attention to the amount of data you are transferring over the network and the amount of RAM you require. Consider presorting the events by date when retrieving the data from your data store.
  3. If your data set is medium-sized, or you want to use third-party libraries, pull the raw event data into the web server and perform the binning there. Network bandwidth and RAM usage are far less of a concern here than within a web browser, but you must still mind them. Consider presorting the events by date when retrieving the data from your data store.
  4. If your data set is very large, consider binning inside the data storage system (e.g. the SQL server) as part of data retrieval (a.k.a. move compute to data not data to compute).

What do I mean by “small”? The answer is, of course, “it depends”, but here’s how I try to break it down. Let’s say we want to be able to render the chart in 5 seconds or less, and that we’ll allocate 2.5 seconds to downloading the data and 2.5 seconds to the JavaScript execution and rendering. If we estimate the target (95th percentile) user has an Internet connection of 1MB/sec, we can transfer no more than 2.5MB of data. If we transfer the data compressed, and the compression achieves a 10:1 ratio, this is 25MB of raw data, which I imagine shouldn’t be a problem to store in RAM. If a single event is 100 bytes uncompressed, we can transfer no more than 250,000 events to the browser.
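
For what it is worth, here is the same back-of-the-envelope arithmetic as a small Java sketch; every number below is one of the assumptions stated above, not a measurement:

public class TransferBudget {
    public static void main(String[] args) {
        double downloadBudgetSeconds = 2.5;         // half of the 5-second rendering budget
        double bandwidthBytesPerSecond = 1_000_000; // ~1 MB/sec for the 95th-percentile user
        double compressionRatio = 10.0;             // assumed compression ratio
        double bytesPerEvent = 100.0;               // assumed uncompressed event size

        double compressedBytes = downloadBudgetSeconds * bandwidthBytesPerSecond; // 2.5 MB
        double rawBytes = compressedBytes * compressionRatio;                     // 25 MB
        double maxEvents = rawBytes / bytesPerEvent;                              // 250,000 events

        System.out.printf("Maximum events to send to the browser: %,.0f%n", maxEvents);
    }
}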

Naturally, the above recommendations do not take into account all possible considerations. For example, if you pull the raw event data into a web browser, you may now have to solve a new problem: How to keep the data in the web browser up-to-date? This problem doesn’t exist if you perform the binning where the data resides. On the other hand, if you bin inside your data storage system, what additional I/O, CPU, or cache pressure will you add to the server, and how will this interact with the existing utilization profile of the storage system? As with everything, it’s all about tradeoffs.

Visualizing Latency Part 2: What is Binning?

As described on Brendan Gregg’s Latency Heat Maps page, a latency heat map is a visualization in which each column of data is a histogram of the observations for that time interval. (See Brendan Gregg’s page for example visualizations.)

As with histograms, the key decision that needs to be made when using a latency heat map is how to bin the data. Binning is the process of dividing the entire range of values into a series of intervals and then counting how many values fall into each interval. That said, there is no “best” number of bins, and different bin sizes can reveal different features of the data. Ultimately, the number of bins depends on the distribution of your data set as well as the size of the rendering area.

With latency heatmaps, binning often must be performed twice: once for the x-axis (time) and once for the y-axis (interval of observed values).

Allow me to demonstrate this visually. Here is an Excel file with the historical daily stock price of GE, courtesy of Yahoo! Finance. I rendered the close prices with the D3 latency heatmap component using four different binning strategies:

  • 12 vertical bins
  • 30 vertical bins
  • Bin by year-month
  • Bin by year

Each chart shows a slightly different perspective on the same data.

You may find you need to experiment with multiple binning strategies until you arrive at a latency heatmap chart with the appropriate level of detail for your use case.

Visualizing Latency Part 1: D3 Latency Heatmap

This blog post is the first in a series about how to visualize latency, which is very useful for debugging certain classes of performance problems.

A latency heatmap is a particularly useful tool for visualizing latency. For a great treatment of latency heatmaps, please read Brendan Gregg’s Latency Heat Maps page and the ACM Queue article Visualizing System Latency.

On the right, you can see a latency heatmap generated from a job queueing system which shows a number of interesting properties, not least of which is that the system appears to be getting slower over time.

In order to make creating latency heatmaps easier, I decided to create a reusable D3 latency heatmap chart component. The goal of this component is to handle all the hard work of chart rendering on behalf of the user, so that a user needs to do little more than combine the chart component with their raw data on a web page. Additionally, animating the chart is quite straightforward (see github.com/sengelha/d3-latency-heatmap/samples/example2.html for an example).

My D3 latency heatmap chart component is open source and available on GitHub at https://github.com/sengelha/d3-latency-heatmap.

Creating this chart required me to overcome a number of interesting challenges, such as:

  • How to create a reusable D3 chart component? (Ultimately I based my code on Mike Bostock’s Towards Reusable Charts proposal)
  • How to effectively use D3 scales for rendering non-points
  • How to correctly use D3’s .data(), .enter(), and .exit() to support in-place updates (required for animation)

Feel free to reach out to me with any questions or suggestions!

Data-Driven Code Generation of Unit Tests Part 5: Closing Thoughts

In the previous posts in this series, I walked through the idea of performing data-driven code generation for unit tests, as well as how I implemented it in three different programming languages and build systems.  This post contains some final thoughts about the effort.

Was it worth it?
Almost certainly. Although it required substantial up-front effort to set up the unit test generators, this approach found numerous previously-undetected bugs, both in my implementation of the calculation library and in legacy implementations. It is straightforward to write code generators that test all possible combinations of parameters to the calculations, ensuring that the resulting code coverage is excellent. Adding tests for a new calculation is as simple as adding a line to a single file.

Which build system was easiest for integrating code generation?

  1. Visual Studio/MSBuild (it basically works out of the box)
  2. Maven
  3. CMake

Which templating language was the best?

  1. Jinja2/T4 (tied)
  2. StringTemplate (a distant 3rd; I would strongly consider evaluating alternative templating languages for generating Java code)

What’s next?
Code generation opens up a vast number of possibilities for future enhancements. The existing code generators could be improved to only generate code when something changes in order to improve compilation times. More unit tests could be defined within the code generator templates to test invalid parameters, NaNs, etc. Binding libraries (e.g. wrapping the Java calculation library in a set of Spark SQL user-defined aggregates, or the C++ library into a set of PostgreSQL user-defined aggregates) can all be code generated from the same metadata.csv (more on this later).

Data-Driven Code Generation of Unit Tests Part 4: C#, MSBuild, T4, MS Unit Test

This blog post explains how I used C#, MSBuild, T4 Text Templates, and the Microsoft Unit Test Framework for Managed Code to perform data-driven code generation of unit tests for a financial performance analytics library. If you haven’t read it already, I recommend starting with Part 1: Background.

As mentioned in Part 2: C++, CMake, Jinja2, Boost, all performance analytics metadata is stored in a single file called metadata.csv. This file drives all code generation and is what helps ensure inter-platform consistency.

I must admit, I was pleasantly surprised to discover that Microsoft provides a template-based code generation engine (T4) out of the box with Visual Studio. Because of this, supporting code generation within a Visual Studio project is as easy as creating a file in your project with the extension .tt. The key to making it work is that the file must be marked as using the TextTemplatingFileGenerator custom tool, which Visual Studio does for you automatically.

I decided the easiest thing for me to do was to create a single .tt file that parses metadata.csv and generates a single C# file with all unit tests for all calculations. I also found it rather convenient to include utility functions within the template itself using the <#+ ... #> (class feature) stanza.

The template file I created looked something like:

<#@ import namespace="System.Collections.Generic" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Linq" #>
using ...
<#
    string fileName = this.Host.ResolvePath("..\\..\\..\\metadata.csv");
    var lines = File.ReadLines(fileName);
    var header = lines.First().Split(',');
    // Notice how this for loop will run once per calculation in metadata.csv
    foreach (var line in lines.Skip(1)) {
        // Create a dictionary with the calculation's attributes for
        // use by the code generator
        var arr = line.Split(',');
        Dictionary<string, string> dict = new Dictionary<string, string>();
        for (int i = 0; i < header.Length; ++i) {
            dict[header[i]] = arr[i];
        }
#>

namespace PerformanceAnalyticsUnitTest
{
    [ExcludeFromCodeCoverage]
    [TestClass]
    public class <#= UnderscoreToPascalCase(dict["function_name"]) #>Test {
        [TestMethod]
        public void Test<#= UnderscoreToPascalCase(dict["function_name"]) #>ArrayUnannualized() {
            ...
        }

        ...
    }
}

<#
    }
#>

<#+
    public string UnderscoreToPascalCase(string str) {
        ...
    }
#>

I also made sure that the generated files were excluded from source control by adding them to the .gitignore file — as a reminder, generated source is output, not source code, and should not be checked in to source control.

I ran into a few minor annoyances, such as the source code sometimes not being generated at the proper time in the build cycle, but that was about it. Integrating code generation into a Visual Studio project is about as easy as it gets!

Data-Driven Code Generation of Unit Tests Part 3: Java, Maven, StringTemplate, JUnit

This blog post explains how I used Java, Apache Maven, StringTemplate, and JUnit to perform data-driven code generation of unit tests for a financial performance analytics library. If you haven’t read it already, I recommend starting with Part 1: Background.

As mentioned in Part 2: C++, CMake, Jinja2, Boost, all performance analytics metadata is stored in a single file called metadata.csv. This file drives all code generation and is what helps ensure inter-platform consistency.

In order to integrate code generation into the Maven build process, I was forced to create three separate Java projects:

  1. A Maven project which implements the unit test generation using metadata.csv and StringTemplate (java-gentest-srcgen)
  2. A Maven project which builds a Maven plugin which calls the unit test generator at the right point in the Maven build lifecycle (java-gentest-maven-plugin)
  3. A Maven project which implements the calculations and uses the Maven plugin to generate the source code (java-lib)

All three projects are tied together using a single parent POM which looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  ...
  <packaging>pom</packaging>
  ...

  <modules>
    <module>java-gentest-srcgen</module>
    <module>java-gentest-maven-plugin</module>
    <module>java-lib</module>
  </modules>
</project>

Unit test source code generator

The Java unit test generator project (java-gentest-srcgen) creates a JAR with a single class, JavaUnitTestGenerator, with a single public method generate().  This method parses metadata.csv and calls StringTemplate to generate a unit test class for each found calculation.

The code looks something like:

public class JavaUnitTestGenerator {
    // A simple logging interface defined in this project so we can integrate
    // into any logging system, including Maven's
    private SimpleLogger logger;
    private File targetDirectory;

    public JavaUnitTestGenerator setLogger(SimpleLogger logger) {
        this.logger = logger;
        return this;
    }

    public JavaUnitTestGenerator setTargetDirectory(File targetDirectory) {
        this.targetDirectory = targetDirectory;
        return this;
    }

    public void generate() throws Exception {
        // The Maven build process adds metadata.csv as a resource to
        // the JAR so that the JAR becomes self-contained       
        try (InputStream is = getClass().getResourceAsStream("metadata.csv")) {
            try (InputStreamReader sr = new InputStreamReader(is)) {
                try (BufferedReader br = new BufferedReader(sr)) {
                    String[] headers = br.readLine().split(",");

                    String line;
                    while ((line = br.readLine()) != null) {
                        String[] arr = line.split(",");
                        Map<String, String> kvp = new HashMap<String, String>();

                        for (int i = 0; i < arr.length; ++i) {
                            kvp.put(headers[i], arr[i]);
                        }
                        generateUnitTest(kvp);
                    }
                }
            }
        }
    }

    private void generateUnitTest(Map<String, String> kvp) throws Exception {
        // Instantiate the template
        URL url = Resources.getResource("unit-test.stg");
        STGroup g = new STGroupFile(url, "US-ASCII", '<', '>');
        ST template = g.getInstanceOf("unit_test");
        for (Map.Entry<String, String> entry : kvp.entrySet()) {
            if (template.getAttributes().containsKey(entry.getKey())) {
                // Skip attributes whose value is "false" so that <if(attr)>
                // evaluates to false in the template
                if (!entry.getValue().equals("false")) {
                    template.add(entry.getKey(), entry.getValue());
                }
            }
        }
        // Add various transformations and manipulations to the attributes
        // found in metadata.csv that cannot easily be done in StringTemplate's
        // templating language       
        template.add("algorithm_type_pascal_case", toPascalCase(kvp.get("algorithm_type")));
        template.add("function_name_pascal_case", toPascalCase(kvp.get("function_name")));
        template.add("function_name_camel_case", toCamelCase(kvp.get("function_name")));
        ...

        // Generate the source code file
        File tgtFile = new File(targetDirectory, toPascalCase(kvp.get("function_name")) + "UnitTest.java");
        logger.info("Generating " + tgtFile + "...");
        try (FileOutputStream os = new FileOutputStream(tgtFile)) {
            try (OutputStreamWriter osw = new OutputStreamWriter(os)) {
                STWriter stWriter = new AutoIndentWriter(osw);
                template.write(stWriter);
            }
        }
    }
}

The template itself (unit-test.stg) looks something like:

group javagentest;

unit_test(algorithm_type,
          algorithm_type_pascal_case,
          function_name,
          ...) ::= <<

package com.morningstar.perfanalytics.tests;

...

public class <function_name_pascal_case>UnitTest {
    // BEGIN ARRAY TESTS
    @Test
    public void testArrayUnannualized() {
        ....
        double expected = <expected_value_unannualized>;
        double actual = ...;
        assertEquals(expected, actual, 0.00001);
    }

    ....
>>

Unfortunately, because StringTemplate’s templating language is so weak, we’re stuck with the following limitations:

  1. The JavaUnitTestGenerator class must perform a number of presentation-oriented transformations on attributes found in metadata.csv, such as case conversions or creating small Java snippets. This means that the JavaUnitTestGenerator and the template are extremely tightly coupled in non-obvious ways.
  2. The lack of a for-loop construct in the template language means we cannot perform tricks like we did in Part 2, where we generate all possible combinations of annualization, frequency, etc. for a given calculation.

Maven plugin

The Maven plugin project creates a maven-plugin that wraps the unit test source code generator. The source code looks like:

@Mojo(name = "generate", requiresProject = true, threadSafe = false, requiresDependencyResolution = ResolutionScope.COMPILE_PLUS_RUNTIME, defaultPhase = LifecyclePhase.GENERATE_SOURCES)
public class JavaUnitTestCodegenMojo extends AbstractMojo {
    @Parameter(defaultValue = "${project}")
    private MavenProject project;

    @Parameter(property = "outputDirectory", defaultValue = "${project.build.directory}/generated-test-sources/unit-tests")
    private File outputDirectory;

    @Override
    public void execute() throws MojoExecutionException {
        getLog().info("Generating unit tests for Java calculation library");

        // MojoLogger adapts Maven's logging class to the SimpleLogger
        // interface in the java-gentest-srcgen project
        JavaUnitTestGenerator g = new JavaUnitTestGenerator()
            .setLogger(new MojoLogger(getLog()))
            .setTargetDirectory(this.outputDirectory);
        try {
            g.generate();
        } catch (Exception ex) {
            throw new MojoExecutionException("Error generating source code", ex);
        }

        project.addTestCompileSourceRoot(outputDirectory.getPath());
    }
}

Note how the plugin instructs the calling project to add the generated source code directory to the test compile source root. This way, the calling project doesn’t need to remember to configure this in its pom.xml.

Calculation library

The calculation library contains the implementation of all calculations. It references the Maven plugin project in its pom.xml in order to have the Maven plugin create its unit tests as part of the build process. The relevant stanza looks like this:

  ...
  <build>
    <plugins>
      <!-- Generate unit test code -->
      <plugin>
        <groupId>com.morningstar.perfanalytics</groupId>
        <artifactId>perfanalytics-java-gentest-maven-plugin</artifactId>
        <version>...</version>
        <executions>
          <execution>
            <goals>
              <goal>generate</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

The Java library also has a few other, hand-written unit tests, in the normal directory (src/test/java/...). The build system knows how to compile and run both the hand-written and the machine-generated unit tests at build time.

I found this approach reliable, and mostly straightforward, but with a few drawbacks. First, the unit test generator, as coded, will generate the source code every time, even if nothing has changed. This means we will perform a lot of unnecessary re-compilation, slowing down builds. Second, weaknesses in the templating language make it frustrating to use; I have to frequently switch back-and-forth between the template file and the Java code to achieve my desired result. Third, it’s unfortunate that we have to write a custom Maven plugin just to perform source code generation at build time; it’d be nice if there was a simpler way. Fourth, the versions of all the Maven projects must be kept in sync at all times; fortunately, the Maven release plugin does this for us automatically.
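
One possible mitigation for the first drawback is to render each template into a String (for example, via a StringWriter) and only write the output file when its content has actually changed, so the Java compiler's incremental build sees an untouched file. Here is a minimal sketch; the writeIfChanged helper is a hypothetical addition, not part of the generator described above:

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class GeneratedSourceWriter {
    // Hypothetical helper: write `content` to `target` only when it differs from
    // the file's current contents, leaving the timestamp alone otherwise so that
    // the build tool does not recompile an unchanged generated file.
    static boolean writeIfChanged(File target, String content) throws IOException {
        if (target.exists()) {
            String existing = new String(Files.readAllBytes(target.toPath()), StandardCharsets.UTF_8);
            if (existing.equals(content)) {
                return false; // unchanged: skip the write
            }
        }
        Files.write(target.toPath(), content.getBytes(StandardCharsets.UTF_8));
        return true;
    }
}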

Next time I’ll talk about doing this in C#, MSBuild, T4 Text Templates, and the Microsoft Unit Test Framework for managed code!

Data-Driven Code Generation of Unit Tests Part 2: C++, CMake, Jinja2, Boost

This blog post explains how I used CMake, Jinja2, and the Boost Unit Test framework to perform data-driven code generation of unit tests for a financial performance analytics library.  If you haven’t read it already, I recommend starting with Part 1: Background.

All performance analytics metadata is stored in a single metadata file called metadata.csv.  This file contains the complete list of calculations, and for each calculation, its settings (i.e. how it differs from other calculations), including properties like:

  1. How many parameters does the calculation take (1, 2, or 3)?
  2. Does the calculation have an online (streaming) implementation?
  3. Does the calculation support annualization?
  4. What is the default annualization mode?
  5. Given a predefined set of inputs, what are the expected values of the calculation for various combinations of time period, annualization, etc.

The file looks something like:

algorithm_type,function_name,num_parameters,minimum_arr_size,supports_streaming,supports_annualization,default_annualization,expected_value_unannualized,expected_value_annualized_daily,expected_value_annualized_weekly,expected_value_annualized_monthly,expected_value_annualized_quarterly,expected_value_annualized_semiannually,expected_value_annualized_daily_200_day_year
absolute_statistics,calculation1,1,1,true,false,never,7.283238516,-999,-999,-999,-999,-999,-999
...
relative_statistics,calculation2,3,1,true,true,always,0.189846006,69.34125385,9.871992334,2.278152077,0.759384026,0.379692013,37.96920129
...

I use CSV rather than JSON or YAML because it can be easily read by CMake during the build process (more below).

A Jinja2 template defines all unit tests for a given calculation.  It uses the attributes found in metadata.csv to determine how to generate the appropriate source code.  For example, if the calculation does not support annualization per the supports_annualization flag, the Jinja2 template will ignore (not generate) the unit tests which test annualization support.

Each calculation has a number of possible combinations to test for, such as:

  1. Test the online vs. offline versions of the calculation
  2. Test the various annualization settings (always, never, calculation default)
  3. Test the various pre-defined annualization periods (daily, weekly, monthly, etc.)
  4. etc.

The Jinja2 template uses for loops extensively to make sure that it tests all possible combinations of all of the above parameters. It looks something like:

{% for calc_type in calc_types %}
{% for annualize in annualizes %}
{% for frequency in frequencies %}

BOOST_AUTO_TEST_CASE(test_{{ function_name }}_{{calc_type}}_annualize_{{annualize}}_frequency_{{frequency}})
{
    ....
}

{% endfor %}
{% endfor %}
{% endfor %}

As you can imagine, the resulting code coverage of the unit tests is excellent.

A Python script, render_jinja.py, knows how to read metadata.csv and pass the appropriate values to Jinja2 in order to generate the unit tests for a given function.  The meat of the Python script looks like:

import csv
import jinja2

function_name = ...
output_file = ...
template_file = ...

with open('../../metadata.csv', 'r') as f:
    mr = csv.DictReader(row for row in f if not row.startswith('#'))
    for row in mr:
        if row['function_name'] == function_name:
            fn_metadata = row
            break

# Generate unit test template
env = jinja2.Environment(loader=jinja2.FileSystemLoader('.'), trim_blocks=True)
template = env.get_template(template_file)
result = template.render(fn_metadata)
output_file.write(result)

The build system uses CMake.  It too reads metadata.csv to get a list of calculations, calls render_jinja.py on each calculation to generate the unit test code C++ file, and then compiles and executes the unit tests. Here’s a sample of the CMake build file:

cmake_minimum_required(VERSION 2.8)
project(perfanalytics-cpp-test)

enable_testing()

if (WIN32)
  add_definitions(-DBOOST_ALL_NO_LIB)
  set(Boost_USE_STATIC_LIBS ON)
else()
  add_definitions(-DBOOST_TEST_DYN_LINK)
endif()
find_package(Boost COMPONENTS unit_test_framework REQUIRED)

set(TEST_COMMON_SRC memory_stream.cpp)

# Populate CALC_NAMES from metadata.csv
file(STRINGS ${CMAKE_CURRENT_SOURCE_DIR}/metadata.csv CALC_METADATA)
set(index 1)
list(LENGTH CALC_METADATA COUNT)
while(index LESS COUNT)
  list(GET CALC_METADATA ${index} line)

  if (NOT "${line}" MATCHES "^#")
    # convert line to a CMake list
    string(REPLACE "," ";" l ${line})
    list(GET l 1 calc_name)
    list(GET l 4 supports_streaming)
    list(APPEND CALC_NAMES ${calc_name})
    list(APPEND CALC_SUPPORTS_STREAMING ${supports_streaming})
  endif()

  math(EXPR index "${index}+1")
endwhile(index LESS COUNT)

# Note how we generate source into the binary directory.  This
# is important -- generated source is *output*, not source,
# and should not be checked into source control.
foreach(fn ${CALC_NAMES})
  add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/generated/${fn}_unit_test.cpp
    COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/render_jinja.py -o ${CMAKE_CURRENT_BINARY_DIR}/generated/${fn}_unit_test.cpp -f ${fn} -t unit_test_template.cpp.j2
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/render_jinja.py ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_template.cpp.j2 ${CMAKE_CURRENT_SOURCE_DIR}/../../metadata.csv
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
    COMMENT "Generating C++ unit test ${CMAKE_CURRENT_BINARY_DIR}/generated/${fn}_unit_test.cpp"
  )
  list(APPEND TESTCASE_SRC ${CMAKE_CURRENT_BINARY_DIR}/generated/${fn}_unit_test.cpp)
endforeach()

foreach (testSrc ${TESTCASE_SRC})
  get_filename_component(testName ${testSrc} NAME_WE)

  # Test against static library
  add_executable(cpp_static_${testName} ${testSrc} ${TEST_COMMON_SRC})
  target_link_libraries(cpp_static_${testName} perfanalytics_cpp_static ${Boost_LIBRARIES})
  add_test(NAME cpp_static_${testName} COMMAND cpp_static_${testName})

  # Test against shared library
  if (BUILD_SHARED_LIBRARY)
    add_executable(cpp_shared_${testName} ${testSrc} ${TEST_COMMON_SRC})
    target_link_libraries(cpp_shared_${testName} perfanalytics_cpp_shared ${Boost_LIBRARIES})
    add_test(NAME cpp_shared_${testName} COMMAND cpp_shared_${testName})
  endif()
endforeach(testSrc)

A single script, build.sh, ties everything together.  While the full build.sh supports a number of command-line options (e.g. -c, --clean for a clean build; -d, --debug for a debug build; -r, --release for a release build), the core of the script looks like:

BUILD_TYPE=Debug # or Release
if [ ! -d $BUILD_TYPE ]; then mkdir $BUILD_TYPE; fi
cd $BUILD_TYPE
cmake .. -DCMAKE_BUILD_TYPE=$BUILD_TYPE
cmake --build . --config $BUILD_TYPE
env CTEST_OUTPUT_ON_FAILURE=1 ctest -C $BUILD_TYPE
cpack -C $BUILD_TYPE

Windows uses an equivalent script called build.cmd.

I am quite happy with the results.  Adding a new calculation is almost as simple as writing the implementation of the calculation and adding a single line to metadata.csv. The unit tests are comprehensive and provide great code coverage.  New test patterns (e.g. what should happen if you pass in NULL to a calculation?) can be added to all calculations at once, simply by editing the Jinja2 template file. Everything works across Windows, Mac OS, and Linux.

The only remaining frustration that I have is that the build system will often re-generate the unit test source code, and recompile the unit tests, even though nothing has changed. This notably slows down build times.  I’m hopeful this can be solved with some further work on the CMake build file, but I’ll leave that for another time.