
Commit a0dc8e1

Update readme, test-runner.md
1 parent e8681c3 commit a0dc8e1


5 files changed: +35 additions, -23 deletions


README.md

Lines changed: 33 additions & 23 deletions
@@ -14,7 +14,7 @@ To generate the speed metrics in the article, I created a node application (part
 - [Goals](#goals)
 - [Testing in general](#testing-in-general)
 - [Documentation: writing tests that outline the functionality of the application](#documentation-writing-tests-that-outline-the-functionality-of-the-application)
-- [Philosophy: "What" should we test? What level of "granularity" are we aiming for?](#philosophy-%22what%22-should-we-test-what-level-of-%22granularity%22-are-we-aiming-for)
+- [Philosophy: "What" should we test? What level of "granularity" are we aiming for?](#philosophy-what-should-we-test-what-level-of-granularity-are-we-aiming-for)
 - [State: the pros and cons of sharing state between tests](#state-the-pros-and-cons-of-sharing-state-between-tests)
 - [Coverage: the extent to which one should measure test coverage](#coverage-the-extent-to-which-one-should-measure-test-coverage)
 - [Tips](#tips)
@@ -32,15 +32,15 @@ To generate the speed metrics in the article, I created a node application (part
 - [mocha-parallel-tests](#mocha-parallel-tests)
 - [Popularity and Community Comparison](#popularity-and-community-comparison)
 - [Speed Comparison](#speed-comparison)
-- [What do "serial" and "parallel" mean?](#what-do-%22serial%22-and-%22parallel%22-mean)
+- [What do "serial" and "parallel" mean?](#what-do-serial-and-parallel-mean)
 - [Benchmarks](#benchmarks)
 - [Ease of Use Comparison](#ease-of-use-comparison)
 - [Amount of necessary configuration/dependencies](#amount-of-necessary-configurationdependencies)
 - [Writing the tests](#writing-the-tests)
 - [Running the tests](#running-the-tests)
 - [Failure Reporting and Debugging Comparison](#failure-reporting-and-debugging-comparison)
 - [Works with your framework and environment of choice (React, Redux, Electron, etc) Comparison](#works-with-your-framework-and-environment-of-choice-react-redux-electron-etc-comparison)
-- [Full Comparison (with "Nice to Haves")](#full-comparison-with-%22nice-to-haves%22)
+- [Full Comparison (with "Nice to Haves")](#full-comparison-with-nice-to-haves)
 - [Recommendations](#recommendations)
 - [Conclusion](#conclusion)
 - [Want to contribute?](#want-to-contribute)
@@ -210,18 +210,28 @@ Being the most established of the testing frameworks, Mocha enjoys a solid place
 
 Now that we know a bit about each framework, let's look at some of their popularity, publish frequency, and other community metrics.
 
-| | Weekly Downloads | Last Publish | Publishes in 1 Year | Contributors |
-| -------------------- | ---------------- | ------------ | ------------------- | ------------ |
-| Jest | 7.2 million | 2020-05-05 | 27 | 1083 |
-| Mocha | 4.3 million | 2020-04-24 | 11 | 439 |
-| AVA | 227,179 | 2020-05-08 | 20 | 243 |
-| mocha-parallel-tests | 18,097 | 2020-02-08 | 4 | 14 |
+![chart-popularity-slide](images/chart-popularity-slide.png)
+![chart-downloads-slide](images/chart-downloads-slide.png)
+
+> Charts made with <https://npm-stat.com/charts.html?package=ava&package=jest&package=mocha&from=2015-01-01&to=2020-05-27>
+
+Overall, we can see that _all_ the frameworks are rising in popularity. To me, this indicates that more people are writing JavaScript applications and testing them - which is quite exciting. The fact that none of them are on a downward trend makes all of them viable in this category.
+
+| | Weekly Downloads \* | Last Publish | Publishes in 1 Year | Contributors |
+| -------------------- | ------------------- | ------------ | ------------------- | ------------ |
+| Jest | 7.2 million | 2020-05-05 | 27 | 1083 |
+| Mocha | 4.3 million | 2020-04-24 | 11 | 439 |
+| AVA | 227,179 | 2020-05-08 | 20 | 243 |
+| mocha-parallel-tests | 18,097 | 2020-02-08 | 4 | 14 |
+
+\* Weekly Downloads as of May 15, 2020
 
 🥇Jest is clearly the most popular framework with 7.2 million weekly downloads. It was published most recently and is updated very frequently. Its popularity can be partially attributed to the popularity of the React library. Jest is shipped with `create-react-app` and is recommended for use in React's documentation.
 
-🥈Mocha comes in second place with 4.3 million weekly downloads. It was the de facto standard long before Jest hit the scene and is the test runner of many, many applications.
+🥈Mocha comes in second place with 4.3 million weekly downloads. It was the de facto standard long before Jest hit the scene and is the test runner of many, many applications. It isn't published as frequently as the other two, which I believe is a testament to it being tried, true, and more stable.
 
-🥉AVA has 227,179 weekly downloads, an order of magnitude fewer than the most popular frameworks. This may be due to its (arguably niche) focus on minimalism or it having a small team that doesn't have the resources to promote the library.
+🥉AVA has 227,179 weekly downloads, an order of magnitude fewer than the most popular frameworks. It is published frequently, which positively signals a focus on improvement and iteration. Its smaller user base may be due to its (arguably niche) focus on minimalism or its small team not having the resources to promote the library.
 
 `mocha-parallel-tests` has 18,097 weekly downloads and doesn't enjoy as frequent updates as the major three. It's extremely new and is not a standalone framework - it runs Mocha tests in parallel.
 
@@ -269,7 +279,7 @@ To generate the speed metrics in the article, I created a node application that
 
 A caveat with all benchmarking tests: the hardware environment (the make, model, RAM, processes running, etc) will affect measured results. For this reason, we'll only be considering the speeds relative to each other.
 
-🥇`mocha-parallel-tests` is the clear winner in this run. 🥈AVA is close behind (and actually ran faster than `mocha-parallel-tests` in a few of the runs.) 🥉Jest is also fast, but seems to have a bit more overhead than the other two.
+🥇`mocha-parallel-tests` is the clear winner in this run (and most runs). 🥈AVA is close behind (and actually ran faster than `mocha-parallel-tests` in a few of the runs). 🥉Jest is also fast, but seems to have a bit more overhead than the other two.
 
 Mocha lags far behind the parallel runners - which is to be expected because it runs tests in serial. If speed is your most important criterion (and its drawbacks are not an issue), you'll see a 200-1000% increase in test speed using `mocha-parallel-tests` instead (depending on your machine, `node` version, and the tests themselves).
 
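As a toy illustration of why the parallel runners pull ahead (the numbers and the `fakeTest` helper below are hypothetical, not the benchmark code): two tasks that each wait one second take roughly two seconds back-to-back, but roughly one second when overlapped.

```js
// Toy model of serial vs parallel scheduling - real runners use worker processes,
// but the timing intuition is the same.
const fakeTest = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function serial() {
  console.time('serial');
  await fakeTest(1000);
  await fakeTest(1000);
  console.timeEnd('serial'); // ~2000ms
}

async function parallel() {
  console.time('parallel');
  await Promise.all([fakeTest(1000), fakeTest(1000)]);
  console.timeEnd('parallel'); // ~1000ms
}

serial().then(parallel);
```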
@@ -283,11 +293,11 @@ I'll split "ease of use" into a few categories:
 
 ### Amount of necessary configuration/dependencies
 
-| | Configuration | Dependencies |
-| ---------------------------- | ---------------------- | ------------------------ |
-| Jest | Everything is included | built-in |
-| AVA | Sensible defaults | some externals necessary |
-| Mocha & mocha-parallel-tests | Many, many options | most externals necessary |
+| | Configuration | Dependencies |
+| ---------------------------- | -------------------------------------- | ------------------------------------------------------------------------------------ |
+| Jest | close-to-zero-config: lots of defaults | All dependencies included: snapshot testing, mocking, coverage reporting, assertions |
+| AVA | Sensible defaults | some externals necessary. Included: snapshot testing, assertions |
+| Mocha & mocha-parallel-tests | Many, many options | most externals necessary (all if in-browser) |
 
 🥇Jest takes the cake in this department. Using its defaults wherever possible, you could have close to zero configuration.
 
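To make "close to zero configuration" concrete, here is a minimal sketch of a Jest setup (the file and the specific fields are illustrative, not taken from this repository; with no config file at all, Jest still picks up files in `__tests__/` folders or files ending in `.test.js`):

```js
// jest.config.js - a sketch; every field below is optional and has a sensible default.
module.exports = {
  testEnvironment: 'node', // use a plain Node environment instead of the default jsdom
  collectCoverage: true, // coverage reporting is built in, no extra packages required
  coverageDirectory: 'coverage', // where the coverage report gets written
};
```

By contrast, the equivalent Mocha setup would mean separately installing and wiring up an assertion library, a coverage tool, and a mocking library.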
@@ -319,7 +329,7 @@ I'll split "ease of use" into a few categories:
 - Good documentation (slightly opaque and a lot to read through), lots of tutorials and examples (in and out of Mocha's docs)
 - Assertions\*, coverage reporting, snapshot tests, mocking modules and libraries (everything) must be imported from elsewhere
 
-\* node's built-in `assert` is commonly used with Mocha for assertions. While it's not built into Mocha, it can be easily imported: `const assert = require('assert')`.
+\* node's built-in `assert` is commonly used with Mocha for assertions. While it's not built into Mocha, it can be easily imported: `const assert = require('assert')`. If testing in-browser, you wouldn't have access to `assert` and would have to use a library like `chai`.
 
 For mocha-parallel-tests, run tests as you would with Mocha. There is a caveat:
 
@@ -336,7 +346,7 @@ For mocha-parallel-tests, run tests as you would with Mocha. There is a caveat:
 Mocha's influence on test-writing is undeniable. From [Mocha's getting started section](https://mochajs.org/#getting-started), we can see how tests are organized in nested `describe` blocks that can contain any number of `it` blocks which make test assertions.
 
 ```js
-const assert = require('assert');
+const assert = require('assert'); // only works in node
 describe('Array', function() {
   describe('#indexOf()', function() {
     it('should return -1 when the value is not present', function() {
@@ -349,7 +359,7 @@ describe('Array', function() {
 [Chai's `expect`](https://www.chaijs.com/) is commonly used instead of assert:
 
 ```js
-const { expect } = require('chai');
+const { expect } = require('chai'); // works in both node and browser
 
 it('should return -1 when the value is not present', function() {
   expect([1, 2, 3].indexOf(4)).to.equal(-1);
@@ -397,11 +407,11 @@ Since the frameworks have drastically different styles and similar capabilities,
 
 | | Summary |
 | ---------------------------- | ------------------------------ |
-| Jest | interactive CLI |
+| Jest | interactive CLI or GUI |
 | Mocha & mocha-parallel-tests | non-interactive CLI or browser |
 | AVA | non-interactive CLI |
 
-🥇Jest has an incredible interactive command line interface. (Using [Majestic](https://github.com/Raathigesh/majestic/) adds a web-based GUI to the experience.) There are numerous options for choosing which tests run and updating snapshots. It watches for test file changes in watch mode and _only runs the tests that have been updated_. There isn't as much of a need to use `.only` because filtering terms is a breeze in its interactive CLI.
+🥇Jest has an incredible interactive command line interface. (Using [Majestic](https://github.com/Raathigesh/majestic/) adds a web-based GUI to the experience.) There are numerous options for choosing which tests run and updating snapshots - all keyboard-driven. It watches for test file changes in watch mode and _only runs the tests that have been updated_. There isn't as much of a need to use `.only` because filtering terms is a breeze.
 
 ![Jest CLI1](images/jest-cli1.png)
 ![Jest CLI2](images/jest-cli2.png)
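For readers unfamiliar with `.only`, here is a quick sketch using the Mocha/Jest-style globals shown earlier (the test bodies are placeholders): marking one test with `.only` tells the runner to skip every other test in the file, which is exactly the manual filtering that Jest's interactive watch mode makes largely unnecessary.

```js
describe('Array', function() {
  it.only('is the only test that runs while `.only` is present', function() {
    // ...assertions here
  });

  it('is skipped until the `.only` above is removed', function() {
    // ...assertions here
  });
});
```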
@@ -490,7 +500,7 @@ Let's recap our findings and fill in some gaps with our "nice to haves." (MPT =
 As you can see, all the frameworks are incredibly robust for most testing needs. However, if you picked one at random, it might not work for a specific use case. It's not an easy choice, but here's how I'd break it down:
 
 - 🏅Mocha is recommended if you want your tests to run in any environment. It's incredibly community-supported and is extendable with your favorite 3rd-party packages. Using `mocha-parallel-tests` would give you a speed advantage.
-- 🏅Jest is recommended if you want a popular framework that has everything built in with very little configuration necessary. It's the jack-of-all-trades of test runners. It has a delightful command line experience. Finally, it's an excellent pair with React.
+- 🏅Jest is recommended if you want to get tests up and running quickly. It has everything built in and requires very little configuration. The command line and GUI experience is unmatched. Finally, it's the most popular and makes an excellent pair with React.
 - 🏅AVA is recommended if you want a minimalist framework with no globals. AVA is fast, easy to configure, and you get ES-Next transpilation out of the box. It's a good fit if you don't want hierarchical `describe` blocks and want to support a smaller project.
 
 ## Conclusion

docs/test-runner.md

Lines changed: 2 additions & 0 deletions
@@ -20,6 +20,8 @@ This application is a test-runner that can:
 - create the same tests that are compatible with the testing frameworks above
 - run those tests with a comparison of the times it takes to execute them
 
+My goal was to create something similar to the [TodoMVC project](http://todomvc.com/), which compared the same "todo" app across different frameworks - React, Backbone, Ember, Vanilla, etc. For my test runner, I generate the same tests with syntax that's compatible with each test runner, capture the times they take to run, and output a report at the end.
+
 The number and length of the authored tests simulate a "true" test run in a significantly sized enterprise codebase. Each test runner has a template that will run the _same exact_ test blocks and take the _same exact_ amount of time in each block. (This is done with a `setTimeout` with a time that increases with each iteration of the loop that generates the test block.)
 
 To account for a bias in ordering, the scripts corresponding to each test runner are shuffled. This ensures that the suites for each test runner are never called in the same sequence.
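As a rough sketch of the generated suites described above (the suite size, delay values, and names are illustrative assumptions, not the project's actual generator output), a Mocha-flavored suite might look like this:

```js
// Sketch: each framework gets an equivalent generated suite, so the total simulated
// "work" is identical across runners.
const NUM_TESTS = 100; // hypothetical number of generated test blocks
const BASE_DELAY_MS = 5; // hypothetical base delay

describe('generated suite', function() {
  for (let i = 1; i <= NUM_TESTS; i++) {
    // The delay grows with each iteration of the generating loop, and block #i
    // takes the same amount of time no matter which framework runs it.
    it(`generated test #${i}`, function(done) {
      setTimeout(done, BASE_DELAY_MS * i);
    });
  }
});
```

The shuffling described above then ensures no runner benefits simply from going first or last.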

images/chart-downloads-slide.png (97.7 KB)

images/chart-popularity-slide.png (94.8 KB)

images/speed-test-results.png (215 KB)
