Beginning BDD with Django - Part Two
Part two of a two-part tutorial on Behaviour Driven Development with Django. In this part we use Behave to write and run our tests.
This is the second in a two-part series attempting to answer the questions:
- Why should I consider using BDD?
- What are the key concepts?
- How can I use it to test my Django project?
In the first half of this series we outlined the benefits of BDD and scoped and wrote a Gherkin feature file for our Filter Users
feature. In this part we’ll use Behave to hook up our feature file to an automated test suite.
This guide has been tested to work with the following stack:
- Python 3.3
- Django 1.7
- Factory Boy 2.4.1
- Splinter 0.7.0
- Selenium 2.44.0
- Django Behave 0.1.2
- PhantomJS 1.9.8
Revisiting Our Feature File
For reference, let's take a quick look at the Filter Users feature we wrote in the first part of this series:
filter_users.feature
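The original listing isn't reproduced here; based on the steps exercised later in this article, it looked something like this (the table values are illustrative):

```gherkin
Feature: Filter users
  As a visitor
  I want to filter the list of users by interest
  So that I can find people who share my interests

  Scenario: Filter the list of users by interest
    Given there are a number of interests
      | interest |
      | Django   |
      | Cycling  |
    And there are many users, each with different interests
      | name    | interest        |
      | Jessica | Django, Cycling |
      | Mike    | Django          |
    Given I am a logged in user
    When I filter the list of users by Django
    Then I see 2 users
```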
Remember that? Great! Let’s get started.
Dependencies
First off, we’ll need to install our dependencies:
- Behave will run our BDD tests.
- Django Behave will let us run our Behave tests via the Django test runner.
- PhantomJS will drive our interactions with the browser.
- Splinter sits on top of PhantomJS (and others) and will help us write simpler, more elegant test code.
- Factory Boy will allow us to generate Users and Interests to use in our tests.
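With the version stack listed above, installing the Python packages might look like this (PhantomJS ships as a standalone binary, so install it separately and make sure it's on your PATH):

```shell
pip install behave django-behave splinter selenium factory_boy
```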
After installing all of the above, update settings.py:
- Add `django_behave` to `INSTALLED_APPS`
- Set `TEST_RUNNER = 'django_behave.runner.DjangoBehaveTestSuiteRunner'`
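Together, the relevant additions to settings.py look roughly like this (the app list is abridged):

```python
# settings.py (additions only)
INSTALLED_APPS = [
    # ... your existing apps ...
    'django_behave',
]

TEST_RUNNER = 'django_behave.runner.DjangoBehaveTestSuiteRunner'
```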
We'll be using Django's built-in test runner throughout this tutorial. But if you prefer to use pytest, then you should check out pytest-django and pytest-bdd.
Folder Structure
Next we'll need to create a new `bdd` app where we can save our existing feature file as `filter_users.feature`:
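Behave looks for a `features` directory containing the feature files, an `environment.py` and a `steps` folder, so the new app might be laid out like this (a sketch):

```
bdd/
├── __init__.py
└── features/
    ├── environment.py
    ├── filter_users.feature
    └── steps/
        └── filter_users.py
```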
We could instead include feature folders inside individual existing Django applications. However, utilising one central bdd application allows us to share the same environment for all of our tests, whilst accounting for situations where individual tests cases span multiple Django applications.
Remember to add `bdd` to `INSTALLED_APPS` in your settings.py file.
Setting Up Our Test Environment
Creating Factories
Because we've already written our feature file, we know that we'll need `User` and `Interest` records in the database to run our test scenarios. To create these, we'll use Factory Boy, a fixtures replacement tool.
We'll set up our factories in the same application where our `User` and `Interest` models are defined:
factories.py
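The original listing isn't reproduced here; a minimal sketch, assuming a custom `User` model with `first_name`, `email` and a many-to-many `interests` field, might look like:

```python
# factories.py - a sketch; model and field names are assumptions
import factory

from .models import Interest, User


class InterestFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Interest

    # Each instance gets a unique name: interest1, interest2, ...
    # (the exact starting number depends on your Factory Boy version)
    name = factory.Sequence(lambda n: 'interest%d' % n)


class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = User

    first_name = 'Standard'
    email = factory.Sequence(lambda n: 'user%d@example.com' % n)

    @factory.post_generation
    def interests(self, create, extracted, **kwargs):
        # Allows UserFactory(interests=[...]) to attach m2m objects
        if create and extracted:
            for interest in extracted:
                self.interests.add(interest)
```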
Let's go over what's going on here:
First, we define the model we want to instantiate by setting `model` inside the `class Meta` block.
Next, we define defaults for the corresponding model fields. In our example, all of our users will have the first name of ‘Standard’ unless we specify otherwise.
For the email field (in our `UserFactory`) and name field (in our `InterestFactory`), we can use a factory sequence, so that each object in our factory is unique. If we now create two instances of `InterestFactory`, they will each have a unique name: the first will be `interest1`, the second `interest2`.
Finally, to define the many-to-many relationship between `User` and `Interest`, we need to set up our interests as a method using the `post_generation` hook.
Voila! Now we’re all set to create objects in our tests. For example, we can:
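For instance (the values here are illustrative):

```python
# Create a user with default values
user = UserFactory()

# Override a default
jessica = UserFactory(first_name='Jessica')

# Create interests and attach them via the post_generation hook
django = InterestFactory(name='Django')
cycling = InterestFactory()  # gets the next auto-generated name
fan = UserFactory(interests=[django, cycling])
```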
Configuring environment.py
We can use our environment.py file to define what should happen before and after certain points in our tests. There are several hooks we can utilise, but for our example, we’re going to focus on:
- `before_all`: code defined here will run before all of our tests begin. We'll use this hook to set up our browser.
- `after_all`: code defined here will run after all of our tests finish. We'll use this hook to quit our browser.
- `before_scenario`: code defined here runs before each individual scenario. We'll use this to set up (and tear down) our database. This will help us keep our data clean between each scenario.
Our example:
environment.py
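A minimal sketch (assuming PhantomJS is on your PATH):

```python
# environment.py - a sketch
from splinter import Browser


def before_all(context):
    # Launch one headless browser for the entire test run
    context.browser = Browser('phantomjs')


def before_scenario(context, scenario):
    # Per-scenario database setup/teardown goes here, keeping
    # data clean between scenarios
    pass


def after_all(context):
    context.browser.quit()
```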
The `context` variable is an instance of `behave.runner.Context`. It holds contextual information during the running of tests, so we can also attach additional values to it and retrieve them later.
Running Our Tests
Now we've set up our environment, we're ready to run our tests! In your terminal run:
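With django-behave installed as the test runner, the usual Django test command picks up our feature files:

```shell
python manage.py test bdd
```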
You'll see the scenarios fail with a list of undefined steps. Why? Because Behave can't find any instructions (known as steps) for each of our scenarios. Conveniently, Behave provides us with some default snippets. Copy these from your terminal and paste them into the `filter_users.py` file, grouping common steps together:
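Once grouped, the generated snippets look something like this (the step text mirrors your feature file, and `assert False` marks each step as unimplemented):

```python
# filter_users.py - default snippets, grouped by keyword
from behave import given, when, then


@given('there are a number of interests')
def impl(context):
    assert False


@given('there are many users, each with different interests')
def impl(context):
    assert False


@given('I am a logged in user')
def impl(context):
    assert False


# ... one snippet per distinct When and Then step ...
@when('I filter the list of users by Django')
def impl(context):
    assert False


@then('I see 2 users')
def impl(context):
    assert False
```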
Step functions are defined using step decorators, here shown as `@given`, `@when` and `@then`. A single `from behave import *` makes all of them available, so you don't need to import each one individually.
Step decorators use a string to match your Gherkin feature file step - this must be an exact match for the test to run correctly.
The decorated function (in this case `def impl()`) can be named anything - it doesn't matter. The only thing you must do is pass it the `context` that we mentioned earlier.
Writing Test Code
Let’s go through each of our steps and write our test code.
1. Given there are a number of interests
For this step, we'll need to use our `InterestFactory` to create the interests listed in our feature file. We can access the name of each interest by looping over the rows of `context.table`, using the `interest` column heading as a key.
filter_users.py
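A sketch of this step, using the `InterestFactory` from earlier:

```python
@given('there are a number of interests')
def impl(context):
    # Each row of the Gherkin table is dict-like, keyed by heading
    for row in context.table:
        InterestFactory(name=row['interest'])
```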
2. And there are many users, each with different interests
In this step we create our users by:
- Splitting the items listed under our 'interest' heading into list items
- Fetching the interests (that we created in our last step) from the database
- Creating a new user with our `UserFactory`, passing in the interest objects
filter_users.py
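A sketch of those three steps; the `name` column heading and comma-separated interest values are assumptions about the feature file's table:

```python
@given('there are many users, each with different interests')
def impl(context):
    for row in context.table:
        # Split 'Django, Cycling' into ['Django', 'Cycling']
        names = [n.strip() for n in row['interest'].split(',')]
        # Fetch the Interest objects created in the previous step
        interests = Interest.objects.filter(name__in=names)
        # Pass them through the factory's post_generation hook
        UserFactory(first_name=row['name'], interests=interests)
```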
3. Given I am a logged in user
To log in a user, we navigate to the login page and interact with the login form. Here we can start to appreciate the power of Splinter for browsing, finding and filling in form fields.
filter_users.py
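A sketch; the URL, the form field names and the availability of the live server address as `context.server_url` are assumptions about your project (and a matching user must already exist, created with `UserFactory`, for example):

```python
@given('I am a logged in user')
def impl(context):
    browser = context.browser
    browser.visit(context.server_url + '/accounts/login/')
    # Splinter fills inputs by their `name` attribute
    browser.fill('username', 'standard@example.com')
    browser.fill('password', 'password')
    browser.find_by_css('input[type=submit]').first.click()
```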
At this point it might be helpful to see our tests running in a ‘real’ browser. To do this, we need to install Selenium.
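Selenium is just another pip package:

```shell
pip install selenium
```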
Now we can tell Splinter to run our tests using Firefox (rather than the default PhantomJS):
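In environment.py, the switch is a one-word change to the browser driver:

```python
def before_all(context):
    # Firefox (via Selenium) instead of the default PhantomJS
    context.browser = Browser('firefox')
```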
Running with Firefox is significantly slower than with PhantomJS, so unless I'm debugging, I tend to stick with running PhantomJS locally and test with other, heavier browsers on my continuous integration server.
4. When I filter the list of users by …
We can combine each of our filter steps into one single step by using Behave's step parameters. First, we need to change our feature file, wrapping our filter value in quotes so it can be captured as a parameter:
filter_users.feature
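For example, the step might change from a literal value to a quoted one:

```gherkin
When I filter the list of users by "Django"
```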
This allows us to write one (and only one) step implementation that covers all of our filter steps:
filter_users.py
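A sketch of the parameterised step; the quoted value is captured and passed to the function (clicking a link is an assumption about how the filters are rendered):

```python
@when('I filter the list of users by "{interest}"')
def impl(context, interest):
    # The captured value arrives as the `interest` argument
    context.browser.click_link_by_text(interest)
```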
5. Then I see … users
Finally, we can use the same pattern to count the number of users in our results.
filter_users.feature
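Following the same pattern, the counting step becomes:

```gherkin
Then I see "2" users
```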
And in our python file:
filter_users.py
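A sketch; the `.user` CSS class is an assumption about how each result is rendered:

```python
@then('I see "{count}" users')
def impl(context, count):
    results = context.browser.find_by_css('.user')
    assert len(results) == int(count)
```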
Wrapping Up
That’s it for our test code! Now it’s over to you to write application code to make these failing scenarios pass.
I hope you’ve enjoyed reading these articles as much as I’ve enjoyed writing them. If you have any questions or comments, don’t hesitate to leave them below.