Tests

Dependencies

Dependencies are managed with Yarn, so you’ll have to install it first.

Then, to install dev dependencies:

yarn

We use electron-mocha, an Electron-compatible version of mocha, to test cozy-desktop. See the options in test/mocha.opt.

Unit, integration & scenario tests require that you have a Cozy stack up. It’s also expected that you have an instance registered for cozy.localhost:8080 with the test passphrase. You can start a cozy-stack via the provided docker-compose file:

docker-compose up

See requirements for details on how to set up docker and docker-compose.

Unit tests

To test a class in isolation, method by method:

yarn test:unit
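
For illustration only, a minimal electron-mocha spec looks like the sketch below; the module under test (Node’s path.basename) is just a stand-in, not an actual cozy-desktop class:

// test/unit/example.js (hypothetical spec, for illustration only)
const assert = require('assert')
const { basename } = require('path') // stand-in for the class under test

describe('basename', () => {
  it('returns the last component of a path', () => {
    assert.strictEqual(basename('/foo/bar.txt'), 'bar.txt')
  })

  it('ignores a trailing separator', () => {
    assert.strictEqual(basename('/foo/bar/'), 'bar')
  })
})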

Integration tests

You can run the integration suite to test the communication between cozy-desktop and a remote cozy stack:

yarn test:integration

Scenarios

Recently we have been writing more and more test scenarios as plain data in ./test/scenarios/*/scenario.js files. Those can then be used to capture the local and/or remote events generated by the scenarios’ actions (by running the yarn capture script; see yarn capture --help for more information). The captured events are stored in the corresponding ./test/scenarios/**/{parcel|local|remote}/ subdirectory. Finally, we can use those captures as fixtures in tests (see test/scenarios/**/scenario.js). The main benefit is that both local and remote events can come in any order, so using versioned test input makes the tests repeatable and reliable, and allows us to try different event sequences.
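
For illustration, a scenario module is roughly shaped like the sketch below; the keys used here are hypothetical, so check the existing files under test/scenarios/ for the real format:

// test/scenarios/<name>/scenario.js (hypothetical sketch, the real keys may differ)
module.exports = {
  // initial content of the synchronized directory
  init: [{ path: 'dir/' }, { path: 'dir/file' }],
  // actions replayed (locally or remotely) to produce the captured events
  actions: [{ type: 'mv', src: 'dir/file', dst: 'dir/renamed' }],
  // state both sides are expected to converge to
  expected: { tree: ['dir/', 'dir/renamed'] }
}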

  • TODO: Refactor dev/capture* and test/support/helpers/scenarios.js
  • TODO: The local/remote wording is confusing. Use source/target instead?
  • TODO: Rename local/ subdirs to chokidar/
  • TODO: Use captures instead of real actions when run from the remote side (so we can test different event sequences)
  • TODO: Enable and fix the last failing scenarios
  • TODO: Eventually stop asserting the whole chain in every scenario to make the build faster
  • TODO: Eventually find a way to test a few loop effects

Options

It’s possible to run all tests at once:

yarn test

To run a specific set of tests (here, the pouch tests):

yarn mocha test/unit/pouch.js

For more output, you can activate debug logs:

DEBUG=1 yarn ...

Or even dump the logs of a given level directly to the console:

DEBUG=1 TESTDEBUG=info yarn ...

Coverage

FIXME: Coverage is currently unavailable.

You can enable coverage metrics for any npm command with the coverage.sh script.

Examples:

./scripts/coverage.sh yarn test
./scripts/coverage.sh yarn test-unit

Please note that code coverage is only measured for unit tests by default. Integration tests have another purpose, so they are deliberately excluded, even when running ./scripts/coverage.sh yarn test-integration explicitly.

Please also note that we don’t measure coverage on the GUI for now.

Implementation details:

  1. yarn mocha:coverage is the same as yarn mocha except it also loads ./test/support/coverage.js.
  2. ./test/support/coverage.js uses istanbul to instrument the code in a way compatible with electron-mocha.
  3. babel-plugin-istanbul inserts instrumentation code when Babel transpiles the sources from modern ECMAScript to plain JavaScript (see the sketch after this list).
  4. The mocha tests are run and generate an lcov-style report (including HTML output).
  5. Finally, when run on the CI, we tell Travis to upload the report to the Codecov service.
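
As a sketch only (the actual configuration lives in the repository and may differ), enabling babel-plugin-istanbul usually boils down to a Babel configuration along these lines:

// babel.config.js (illustrative only; the plugin is typically enabled just for test/coverage runs)
module.exports = {
  env: {
    test: {
      plugins: ['istanbul']
    }
  }
}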

Property based testing

In theory, property-based testing is a way to generalize some unit tests. Here, we use it more as fuzzing / integration tests.

The property-based tests are not currently run on CI: it would take some work to do so. For now, they are mainly meant to be run manually by an experienced developer to find bugs that manual testing hasn’t found.

We have two types of property based testing:

  • local_watcher, where we simulate filesystem events and, at the end, check that PouchDB has all the information about the files and directories in the synchronized path. It’s called local_watcher because it mostly tests the local watcher, but it also touches other classes like Prep and Merge.

  • two_clients, where we start a cozy-stack, create an instance, and run 2 cozy-desktop clients on this instance. Then we do things like creating files on both clients and, at the end, we check that the two clients have the same data as the stack (in CouchDB).

For both, we have separated the generation of a test case from running it, and we have no shrinking strategy (there is no good JS library for that). The generated test cases are in JSON format, and it’s possible to write a test case manually to exhibit a particular behavior.
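
Purely as an illustration (the operation names below are hypothetical; the real schema is whatever ./test/generate_property_json.js produces, so look at the existing files under test/property/), a generated test case is basically an ordered list of filesystem operations:

[
  { "op": "mkdir", "path": "dir" },
  { "op": "create_file", "path": "dir/file" },
  { "op": "mv", "src": "dir/file", "dst": "file" }
]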

You can generate a test with this command:

$ ./test/generate_property_json.js local_watcher | jq . > test/property/local_watcher/generated.json

And then, you can run it with:

$ COZY_FS_WATCHER=channel yarn test:property --grep generated

If you want to run several tests to find new bugs, there is also the ./test/mass_property_tests.sh script.