README.md
- [Goals](#goals)
- [Testing in general](#testing-in-general)
- [Documentation: writing tests that outline the functionality of the application](#documentation-writing-tests-that-outline-the-functionality-of-the-application)
- [Philosophy: "What" should we test? What level of "granularity" are we aiming for?](#philosophy-what-should-we-test-what-level-of-granularity-are-we-aiming-for)
- [State: the pros and cons of sharing state between tests](#state-the-pros-and-cons-of-sharing-state-between-tests)
- [Coverage: the extent to which one should measure test coverage](#coverage-the-extent-to-which-one-should-measure-test-coverage)
- [Tips](#tips)
- [mocha-parallel-tests](#mocha-parallel-tests)
- [Popularity and Community Comparison](#popularity-and-community-comparison)
- [Speed Comparison](#speed-comparison)
- [What do "serial" and "parallel" mean?](#what-do-serial-and-parallel-mean)
- [Benchmarks](#benchmarks)
- [Ease of Use Comparison](#ease-of-use-comparison)
- [Amount of necessary configuration/dependencies](#amount-of-necessary-configurationdependencies)
- [Writing the tests](#writing-the-tests)
- [Running the tests](#running-the-tests)
- [Failure Reporting and Debugging Comparison](#failure-reporting-and-debugging-comparison)
- [Works with your framework and environment of choice (React, Redux, Electron, etc) Comparison](#works-with-your-framework-and-environment-of-choice-react-redux-electron-etc-comparison)
- [Full Comparison (with "Nice to Haves")](#full-comparison-with-nice-to-haves)
- [Recommendations](#recommendations)
- [Conclusion](#conclusion)
- [Want to contribute?](#want-to-contribute)
Now that we know a bit about each framework, let's look at their popularity, publish frequency, and other community metrics.
> Charts made with <https://npm-stat.com/charts.html?package=ava&package=jest&package=mocha&from=2015-01-01&to=2020-05-27>
Overall, we can see that _all_ the frameworks are rising in popularity. To me, this indicates that more people are writing JavaScript applications and testing them - which is quite exciting. Since none of them is on a downward trend, all of them remain viable in this category.
| | Weekly Downloads \* | Last Publish | Publishes in 1 Year | Contributors |
| --- | --- | --- | --- | --- |
🥇Jest is clearly the most popular framework with 7.2 million weekly downloads. It was published most recently and is updated very frequently. Its popularity can be partially attributed to the popularity of the React library. Jest is shipped with `create-react-app` and is recommended for use in React's documentation.
🥈Mocha comes in second place with 4.3 million weekly downloads. It was the de facto standard long before Jest hit the scene and is the test runner of many, many applications. It isn't published as frequently as the other two, which I believe is a testament to it being tried, true, and more stable.
🥉AVA has 227,179 weekly downloads, an order of magnitude fewer than the most popular frameworks. The gap may be due to its (arguably niche) focus on minimalism, or to a small team that doesn't have the resources to promote the library. Still, it is published frequently, which positively signals a focus on improvement and iteration.
`mocha-parallel-tests` has 18,097 weekly downloads and doesn't enjoy as frequent updates as the major three. It's also extremely new, and it isn't a framework in its own right - it runs existing Mocha tests in parallel.
A caveat with all benchmarking tests: the hardware environment (the make, model, RAM, processes running, etc.) will affect measured results. For this reason, we'll only consider the speeds relative to each other.
🥇`mocha-parallel-tests` is the clear winner in this run (and most runs). 🥈AVA is close behind (and actually ran faster than `mocha-parallel-tests` in a few of the runs). 🥉Jest is also fast, but seems to have a bit more overhead than the other two.
Mocha lags far behind the parallel runners - which is to be expected because it runs tests in serial. If speed is your most important criterion (and its drawbacks are not an issue), you'll see a 200-1000% increase in test speed using `mocha-parallel-tests` instead (depending on your machine, `node` version, and the tests themselves).
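For illustration, here's a minimal sketch of making that swap, assuming the drop-in CLI that `mocha-parallel-tests` provides (the `--max-parallel` flag and its behavior may vary by version):

```bash
# add the parallel runner alongside your existing Mocha setup
npm install --save-dev mocha-parallel-tests

# run the same spec files, but across parallel worker processes
npx mocha-parallel-tests test/ --max-parallel 4
```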
I'll split "ease of use" into a few categories:
### Amount of necessary configuration/dependencies

| | Configuration | Dependencies |
| --- | --- | --- |
| Jest | Close-to-zero config: lots of defaults | All dependencies included: snapshot testing, mocking, coverage reporting, assertions |
| AVA | Sensible defaults | Some externals necessary. Included: snapshot testing, assertions |
| Mocha & mocha-parallel-tests | Many, many options | Most externals necessary (all if in-browser) |
🥇Jest takes the cake in this department. Using its defaults wherever possible, you could have close to zero configuration.
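As a sketch of what "close to zero configuration" means in practice: a Jest test needs no imports or setup, since `test` and `expect` are injected as globals (the file name below assumes Jest's default `*.test.js` discovery pattern):

```js
// sum.test.js - picked up and run by `npx jest` with no config file
test('adds two numbers', () => {
  expect(1 + 2).toBe(3);
});
```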
- Good documentation (slightly opaque and a lot to read through), lots of tutorials and examples (in and out of Mocha's docs)
- Assertions\*, coverage reporting, snapshot tests, mocking modules and libraries (everything) must be imported from elsewhere
\* Node's built-in `assert` is commonly used with Mocha for assertions. While it's not built into Mocha, it can be easily imported: `const assert = require('assert')`. If testing in-browser, you wouldn't have access to `assert` and would have to use a library like `chai`.
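For example, a hedged sketch of the same style of assertion with `chai`'s `expect` interface, assuming `chai` is loaded as a browser global via a `<script>` tag:

```js
// `chai` assumed to be a global provided by a <script> include
const { expect } = chai;

expect([1, 2, 3].indexOf(4)).to.equal(-1);
```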
For `mocha-parallel-tests`, run tests as you would with Mocha. There is a caveat:
Mocha's influence on test-writing is undeniable. From [Mocha's getting started section](https://mochajs.org/#getting-started), we can see how tests are organized in nested `describe` blocks that can contain any number of `it` blocks, which make test assertions.
```js
const assert = require('assert'); // only works in node

describe('Array', function() {
  describe('#indexOf()', function() {
    it('should return -1 when the value is not present', function() {
      assert.equal([1, 2, 3].indexOf(4), -1);
    });
  });
});
```
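Assuming the spec file lives in Mocha's default `./test` directory, running it needs no flags at all:

```bash
npx mocha
```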
| | Interface |
| --- | --- |
| Jest | interactive CLI |
| Mocha & mocha-parallel-tests | non-interactive CLI or browser |
| AVA | non-interactive CLI |
🥇Jest has an incredible interactive command line interface. (Using [Majestic](https://github.com/Raathigesh/majestic/) adds a web-based GUI to the experience.) There are numerous options for choosing which tests run and updating snapshots - all keyboard-driven. It watches for test file changes in watch mode and _only runs the tests that have been updated_. There isn't as much of a need to use `.only` because filtering terms is a breeze.
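For a feel of that workflow, here's a sketch of starting watch mode (the menu text below is paraphrased and varies by Jest version):

```bash
npx jest --watch

# Watch Usage (paraphrased):
#  › Press p to filter by a filename regex pattern.
#  › Press t to filter by a test name regex pattern.
#  › Press u to update failing snapshots.
#  › Press q to quit watch mode.
```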
Let's recap our findings and fill in some gaps with our "nice to haves." (MPT = `mocha-parallel-tests`.)
As you can see, all the frameworks are incredibly robust for most testing needs. However, if you picked one at random, it might not work for a specific use case. It's not an easy choice, but here's how I'd break it down:
- 🏅Mocha is recommended if you want your tests to run in any environment. It's incredibly community-supported and extendable with your favorite 3rd-party packages. Using `mocha-parallel-tests` would give you a speed advantage.
- 🏅Jest is recommended if you want to get tests up and running quickly. It has everything built in and requires very little configuration. The command line and GUI experience is unmatched. Finally, it's the most popular and makes an excellent pair with React.
- 🏅AVA is recommended if you want a minimalist framework with no globals. AVA is fast, easy to configure, and you get ES-Next transpilation out of the box. It's a good fit if you don't want hierarchical `describe` blocks and want to support a smaller project. A minimal example follows.
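As a sketch of AVA's no-globals, flat style - `test` is imported explicitly rather than injected (newer AVA versions may require ESM `import` syntax instead of `require`):

```js
const test = require('ava');

// no describe hierarchy; each test stands on its own
test('indexOf returns -1 when the value is not present', t => {
  t.is([1, 2, 3].indexOf(4), -1);
});
```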
docs/test-runner.md
This application is a test-runner that can:
- create the same tests that are compatible with the testing frameworks above
- run those tests with a comparison of the times it takes to execute them
My goal was to create something similar to the [TodoMVC project](http://todomvc.com/), which compared the same "todo" app across different frameworks (React, Backbone, Ember, Vanilla, etc.). For my test runner, I generate the same tests with syntax compatible with each runner, capture the time each takes to run, and output a report at the end.
The number and length of the authored tests simulate a "true" test run in a significantly sized enterprise codebase. Each test runner has a template that will run the _same exact_ test blocks and take the _same exact_ amount of time in each block. (This is done with a `setTimeout` whose delay increases with each iteration of the loop that generates the test blocks.)
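Here's a minimal sketch of what such a generation loop could look like, assuming Mocha-style `it` blocks and an illustrative 10ms step (the repo's actual template and timings may differ):

```js
// Generates `count` identical test blocks; the simulated "work" in each
// block takes `stepMs` longer than in the previous one.
function generateTests(count, stepMs = 10) {
  for (let i = 0; i < count; i += 1) {
    it(`generated block ${i}`, (done) => {
      setTimeout(done, i * stepMs); // delay grows with each iteration
    });
  }
}
```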
To account for a bias in ordering, the scripts corresponding to each test runner are shuffled. This ensures that the suites for each test runner are never called in the same sequence.
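One standard way to do that shuffle is the Fisher-Yates algorithm; a sketch (the script names are hypothetical):

```js
// Returns a new array with the runner scripts in random order,
// so no suite is consistently first or last across runs.
function shuffle(scripts) {
  const result = [...scripts];
  for (let i = result.length - 1; i > 0; i -= 1) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

shuffle(['run-jest.js', 'run-ava.js', 'run-mocha.js', 'run-mpt.js']);
```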