hTest

Declarative unit testing, for everyone

Defining tests

Tests are defined and grouped using object literals with a defined structure. Each of these objects can either be a test or contain child tests. All properties work across both: if a property doesn’t directly apply to a group, it inherits down to the tests it contains. This allows you to specify only what differs in each test, and makes it easier to evolve the test suite over time.
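For example, a group can define run once, with each child test providing only its own arguments and expected result. This is a minimal sketch, assuming child tests are nested under a tests property:

export default {
	name: "Addition",
	run: (...args) => args.reduce((a, b) => a + b, 0), // shared by all child tests
	tests: [
		{args: [1, 2], expect: 3},
		{args: [1, 2, 3], expect: 6},
	],
}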

Property index

All properties are optional.

| Property | Type | Description |
|----------|------|-------------|
| run | Function | The code to run. |
| args | Array | Arguments to pass to the running function. |
| arg | Any | A single argument to pass to the running function. |
| data | Object | Data that will be accessible to the running function as this.data. |
| name | String or Function | A string that describes the test. |
| getName | Function | A function that generates the test name dynamically. |
| description | String | A longer description of the test or group of tests. |
| id | String | A unique identifier for the test. |
| expect | Any | The expected result. |
| throws | Boolean, Error subclass, or Function | Whether an error is expected to be thrown. |
| maxTime | Number | The maximum time (in ms) that the test should take to run. |
| maxTimeAsync | Number | The maximum time (in ms) that the test should take to resolve. |
| map | Function | A mapping function to apply to the result and expected value before comparing them. |
| check | Function | A custom function that takes the result and the expected value (if present) as arguments and returns a boolean indicating whether the test passed. |
| skip | Boolean | Whether to skip the test(s). |

Defining the test

Defining the code to be tested (run)

run defines the code to run, as a function. It can be either sync or async. It is common to define a single run function on a parent or ancestor and differentiate child tests via args and data (described below).

Argument(s) to pass to the testing function (args and arg)

There are two ways to pass arguments to the running function: args, an array of arguments, or arg, a shorthand for passing a single argument.

arg is internally rewritten to args, so in any function that runs with the current test as its context you can simply use this.args without having to explicitly check for this.arg.
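For instance, the two child tests below are equivalent (same tests nesting as above):

export default {
	run: x => x * 2,
	tests: [
		{args: [21], expect: 42}, // args: an array of arguments
		{arg: 21, expect: 42},    // arg: single-argument shorthand, rewritten to args: [21]
	],
}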

Context parameters (data)

data is an optional object with data that will be accessible to the running function as this.data. A test’s data is merged with its parent’s data, so you can define common data at a higher level and override it where needed. It is useful for differentiating the behavior of run() across groups of tests without having to redefine it or pass repetitive arguments.
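A sketch of inherited data varying run()'s behavior across a group (same tests nesting as above; the precision key is made up for illustration):

export default {
	run (n) {
		return n.toFixed(this.data.precision);
	},
	data: {precision: 2}, // inherited by all child tests
	tests: [
		{arg: Math.PI, expect: "3.14"},
		{arg: Math.PI, data: {precision: 4}, expect: "3.1416"}, // merged over the parent's data
	],
}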

Describing the test

Names and name generators (name and getName())

name is a string that describes the test. It is optional, but recommended, as it makes it easier to identify the test in the results.

name can also be a name generator function: it is called with the same context and arguments as run() and should return the name as a string. You can also provide the generator explicitly, via getName. This is useful when you want to give the root of a group of tests a fixed name while still providing a name generator for its child tests. In fact, if name is a function, it is rewritten as getName internally.

Single names are not inherited, but name generator functions are.

Name generators are useful for providing a default name for tests that you can override on a case-by-case basis via name. You may find this.level useful in a name generator, as it tells you how deep in the hierarchy the test is, allowing you to provide depth-sensitive name patterns.

If no name is provided, it defaults to the first argument passed to run, if any.
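A sketch combining a fixed name for the root with a name generator for child tests (same tests nesting as above):

export default {
	name: "Uppercasing",
	getName (str) {
		return `Uppercase "${str}"`; // receives the same arguments as run()
	},
	run: str => str.toUpperCase(),
	tests: [
		{arg: "hello", expect: "HELLO"},
		{arg: "héllo", expect: "HÉLLO", name: "Handles non-ASCII"}, // overrides the generated name
	],
}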

Description (description)

description is an optional longer description of the test or group of tests.

Id (id)

This is an optional unique identifier for the test that can be used to refer to it programmatically.

Setting expectations

All of these properties define the criteria for a test to pass.

To make it easier to interpret the results, each test can only have one main pass criterion: result-based, error-based, or time-based. For example, you can use maxTime and maxTimeAsync together, but not with expect or throws.

If you specify multiple criteria, nothing will break, but you will get a warning.

Result-based criteria (expect)

expect defines the expected result, so you'll be using it the most.

Error-based criteria (throws)

If you are testing that an error is thrown, you can use throws. throws: true will pass if any error is thrown, but you can also have more granular criteria: set throws to an Error subclass to require that the thrown error is an instance of it, or to a function that receives the thrown error and returns whether it is acceptable.
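A sketch of the three forms (same tests nesting as above), assuming the function form receives the thrown error and should return a boolean:

export default {
	run (json) {
		return JSON.parse(json);
	},
	tests: [
		{arg: "{", throws: true},                          // passes if any error is thrown
		{arg: "{", throws: SyntaxError},                   // thrown error must be a SyntaxError
		{arg: "{", throws: e => e instanceof SyntaxError}, // custom check on the thrown error
	],
}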

Time-based criteria (maxTime, maxTimeAsync)

The time a test takes is always measured and displayed. If the test returns a promise, the time it took to resolve is also measured, separately. To test performance-sensitive functionality, you can set maxTime or maxTimeAsync to specify the maximum time (in ms) that the test should take to run (or resolve, respectively).
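For example (the 100 ms budget is arbitrary):

export default {
	name: "Sorting 10,000 numbers stays fast",
	run () {
		const numbers = Array.from({length: 10_000}, Math.random);
		return numbers.sort((a, b) => a - b);
	},
	maxTime: 100, // fail if run() takes longer than 100 ms
}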

Customizing how the result is evaluated

The properties in this section center around making it easier to specify result-based tests (i.e. those with expect values).

Defining the checking logic (check)

By default, if you provide an expect value, the test will pass if the result is equal to it (using deep equality). However, often you don’t really need full equality, just to verify that the result passes some kind of test, or that it has certain things in common with the expected output.

check allows you to provide a custom function that takes the actual result and the expected value as arguments and returns a boolean indicating whether the test passed. If the return value is not a boolean, it is coerced to one. You can use any existing assertion library, but hTest provides a few helpers in /src/check.js (import htest/check):

All of these take parameters and return a checking function:

There are also the following checking functions that can be used directly:

Instead of providing a custom checking function, you can also tweak the default one — the equals() function. Simply pass an object literal with desired options as the value of check, and hTest will produce the proper checking function for you. The supported options are:

Examples

import * as check from "../node_modules/htest.dev/src/check.js";

export default {
	run: Math.random,
	args: [],
	check: check.between({min: 0, max: 1}),
}

You can even do logical operations on them:

import getHue from "../src/getHue.js";
import * as check from "../node_modules/htest.dev/src/check.js";

export default {
	run (color) { return getHue(color) },
	args: ["green"],
	expect: 90,
	check: check.and(
		check.is("number"),
		check.proximity({epsilon: 1})
	)
}

Or, for nicer syntax:

import getHue from "../src/getHue.js";
import {and, is, proximity} from "../node_modules/htest.dev/src/check.js";

export default {
	run (color) { return getHue(color) },
	args: ["green"],
	expect: 90,
	check: and(
		is("number"),
		proximity({epsilon: 1})
	)
}

Or, pass an object literal with options (as described above) and hTest can build the right checking function behind the scenes:

export default {
	arg: [1.01, 2, 3.042],
	expect: [1, 2, 3],
	check: {deep: true, epsilon: .1}
}

Mapping the result and expected value (map)

In some cases you want to do some post-processing on both the actual result and the expected value before comparing them (whether via check or the default equality check). map allows you to provide a mapping function that will be applied to both the result and the expected value before any checking. If the test’s return value is an array, each individual item will be mapped separately.
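A sketch with a custom mapping function (not one of the provided helpers), assuming per-item mapping also applies to the expected array:

export default {
	run: () => ["Red", "Green", "Blue"],
	expect: ["red", "green", "blue"],
	map: s => s.toLowerCase(), // applied to each item of the result and the expected value
}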

Some commonly needed functions can be found in /src/map.js (import htest/map). The following are mapping function generators, i.e. take parameters and generate a suitable mapping function:

These can be used directly:

Skipping tests (skip)

Often, we have written tests for parts of the API that are not yet implemented. It doesn't make sense to remove these tests, but they should also not make the test suite fail. You can set skip: true to skip a test. The number of skipped tests is shown separately.
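For example (parseColor is a hypothetical, not-yet-implemented function):

export default {
	name: "Not yet implemented",
	run: () => parseColor("lab(50% 40 59.5)"), // hypothetical function that does not exist yet
	expect: {l: 50, a: 40, b: 59.5},
	skip: true, // counted separately; does not fail the test suite
}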