Marcus Node Bits: mocha is cool both as framework and test runner

Posted by Marcus Hammarberg on February 6, 2014
I'm writing down some of the things I've picked up while learning about Node, Express and Mongo. Here are all the posts in the series:
When it comes to testing frameworks on the JavaScript and Node stack there's an abundance of choices, just as for any of your needs. Once again you need to look around, ask around and make up your own mind.


The two frameworks that get mentioned the most are Jasmine and Mocha. The way you write your tests in these frameworks is very much alike, and the main difference is in how you run them. Jasmine is executed in a browser (as I understand it) while Mocha is run from the terminal, though it can be run in the browser too. I like the terminal, as you know.

Ok, there are also test runners that aren't test frameworks, like Karma and probably some more. These tools run any kind of tests (Jasmine, Mocha or what have you). At first I tried to use one of those, but then I realised that Mocha comes with a test runner that has suited all my needs up to now. And it was a much easier setup.

I've stuck with Mocha since I think the runner is very capable and the syntax is nice. Finally, it was really easy to get started with, whereas there are a few more pieces to puzzle together with Jasmine.
Mocha is installed with ease, just like any node package:
npm install mocha
If you add the "-g" flag you'll be able to run mocha tests in any directory. Remember, from my package.json post, that you should include it in your local node_modules as well. In my opinion, once you have pulled down a project you should just have to run "npm install" and then be able to run it. No extra steps should be required.
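For reference, the relevant part of package.json could look something like this (the version number is just an example from around the time of writing):

```json
{
  "devDependencies": {
    "mocha": "~1.17.0"
  }
}
```

With that in place, "npm install" pulls mocha into your local node_modules.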

The syntax

The syntax of mocha (and, to be honest, most JavaScript testing frameworks I've seen) resembles the Ruby tool RSpec. It's a pretty straightforward API, and for most of your needs you only need to care about three to four functions:
  • describe(description, func) – this is the top-level function of your tests. It takes a string description of the thing you are describing (or testing, if you have your tester hat on) and then a function with it-statements. This would be the test fixture in xUnit frameworks.
    • These can be nested, and we'll get back to how to write this without going crazy.
  • it(description, func) – these are your actual specifications that poke your system and do the tests. The test methods, if you want to compare with xUnit-style tests.
  • beforeEach(func) and before(func) – these are functions that are executed before... Before what, you might ask; well, that has to do with where you place them. If you place them inside a describe block (which is where I most commonly use them) then the beforeEach block is run before each it-statement. They can also be placed outside any describe and will then be run for all the describes. They are suitable for setting up your system under test, for example.
  • afterEach(func) and after(func) – and there's a teardown version too, of course. The same rules apply for those.
Mocha is much bigger than this, but with this simple API I've managed most of my testing needs so far.

The done() parameter

One thing that really messed things up for me in the beginning was when I left out, or didn't handle, the done parameter to my it-statements. I think (and will hopefully be corrected on this if I'm wrong) that mocha runs its tests asynchronously, as many things are with Node. This means that we need to inform mocha when our test code is done, so that mocha can pick it up again and process the next test.

This sounds complicated, but in reality done just comes as a parameter to the functions in our describe, before/after and it-statements. That parameter, done, is a function that we call when our test is complete. Like this:

Failing to call the done function will result in a timeout, and you'll be faced with an error message saying that the timeout (2000 ms by default) was exceeded.

It confused the crap out of me at first. Call done() at the end of your it-statements and it goes away.

How I write the tests and stay sane counting parentheses

This just follows the way of writing that I sketched out in this post, but it was when writing mocha tests that I first found the great use of writing my code in this manner. Because, like I said, the describe functions are just that: functions. But when a complete test case is written out they look like something completely different. It's not uncommon that they end in });});});}); and everyone is just fine with that.

Let's take an example; let's pretend we're writing the specifications for how users behave in our system. This includes the validation and storage (and probably some more). I would write this in sequences, like this:
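A first skeleton along those lines could look something like this (the suite names are made up for the example; the describe stand-in is only there so the sketch runs under plain node, since mocha normally provides it as a global):

```javascript
// Stand-in so this sketch runs under plain `node`; the mocha CLI
// provides the real describe as a global. It records the suite
// names so we can see that everything is wired up.
var suites = [];
function describe(name, fn) { suites.push(name); fn(); }

describe('users', function () {
  describe('validation', function () {
    // it-statements for validation go here
  });

  describe('storage', function () {
    // it-statements for storage go here
  });
});
```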
Already at this point you could run the tests. The result wouldn't say much, since there are no real tests in there, but it's a good check to see that you haven't messed anything up.
I often fill out empty it-functions for some of the features as well. This doesn't go against the TDD rhythm of having just one failing test, since writing the its like this will just render them "inconclusive". I think it's a nice way of structuring your work and getting an overview.
Under each describe I would then write stubs for the it-statements, like this:
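Something like the sketch below, with made-up spec names (again with plain-node stand-ins at the top; through the mocha CLI the real describe/it come as globals):

```javascript
// Stand-ins so this sketch runs under plain `node`; the mocha CLI
// provides the real describe/it as globals. The stand-in `it`
// collects the stubs, mimicking how mocha treats an it-statement
// without a function body as pending.
var pending = [];
function describe(name, fn) { fn(); }
function it(name, fn) { if (!fn) pending.push(name); }

describe('users', function () {
  describe('validation', function () {
    it('should reject a user without a name');
    it('should reject a duplicate email address');
  });

  describe('storage', function () {
    it('should save a valid user');
    it('should find a saved user by email');
  });
});
```

With the spec reporter, stubs like these show up as pending specs, listed by name under their describes.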
Running the tests (with mocha -u bdd -R spec, see below) as written above produces this nice little output in our console:

And you're now ready to start working on your first test.

Mocha test runner and reporters

The mocha test runner is a command line tool that you run from your command prompt. The Visual Studio guy in me is complaining about this fact, but the newly born terminal guy is ok with it and holds the Visual Studio guy's hand (yeah, there are quite a few of me in there… should probably do something about that some day).

Ok, the test runner. At its core it's very simple: run mocha and it will run all tests (as in all the .js files with describe functions in them) in the test directory. But there are a lot of switches, and here are the ones that I often use:
  • -u bdd – with this switch you set the style of writing. BDD style means that you write describe and it in your tests, i.e. what I've described above. There's also "tdd" and "exports" but I don't use those.
  • -R spec – this is how you want the result of your test execution to be presented, i.e. the reporter. I like the specification style since it reads nicely, especially if you pay attention to how you formulate and nest your specifications. There are loads of other reporters (10+ on my machine). Noteworthy are:
    • dot - .F.. you get it, huh? 
    • list - writes sentences from your specs
    • markdown - that produces markdown pages 
    • min - a minimal reporter that just gives you number of passed/failed/inconclusive 
  • -w – this handy little flag starts watching your directory for changes. That is, it keeps executing the tests on each change. I often issue this command in a separate terminal and keep it running as I go. I have had some problems getting this command to pick up new files, but a quick stop-and-start resolves that. And you stop it with CTRL+C, if you didn't know that.
As I said, there are loads more switches, but these are the ones that I've used the most.
One last thing about switches: if you end up having a lot of them you can save them in a separate file called mocha.opts, or as a script in your package.json like I described before. Put the mocha.opts file in your test directory and just enter each switch on a separate line; it's just a simple text file.
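A mocha.opts with the switches I use above would simply be:

```
-u bdd
-R spec
```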
I've come to favour adding a node under scripts in the package.json file and just adding the switches there. I often create two scripts, npm test and npm run watch_test: one with the -w flag and one without.
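That scripts node could look something like this (the switches shown are just the ones from the examples above):

```json
{
  "scripts": {
    "test": "mocha -u bdd -R spec",
    "watch_test": "mocha -u bdd -R spec -w"
  }
}
```

The first one runs with npm test; since watch_test is a custom script it's run with npm run watch_test.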

I wanted to talk about should here too, but this post is a bit long already. Until next time.

