Thanks to Paul Grenyer for asking me (and Allan Kelly) about when to write story tests. He's not the first person to ask me about this, so I thought I'd share my advice with you here.
I believe that part of the confusion about when to write tests for user stories arises because people tend to jump into talking about "tests" too quickly. There's a subtle difference between Acceptance Criteria and Acceptance Tests, but these terms are often bandied about as if they mean the same thing.
Liz Keogh has an excellent blog post on Acceptance Criteria vs. Scenarios, where she explains that acceptance criteria are general rules covering system behaviour from which executable examples (Scenarios) can be derived. In his book "Agile Estimating and Planning", Mike Cohn calls these rules "Conditions of Satisfaction"; you can simply list them as bullet points on the back of story cards. As Liz points out, you can uncover acceptance criteria by having conversations with your stakeholders about example scenarios. So acceptance criteria and example scenarios are a bit like the chicken and the egg - it's not always clear which comes first, so iterate!
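To make the distinction concrete, here's a rough sketch in Python - the free-shipping rule, the £50 threshold, and the shipping_cost function are all invented for illustration, not from any real system. The criterion is the general rule; the scenarios are executable examples derived from it:

    # Acceptance criterion (a general rule, as listed on the story card):
    #   "Orders of £50 or more qualify for free shipping."

    def shipping_cost(order_total):
        """Hypothetical stand-in for the system under test."""
        return 0.0 if order_total >= 50 else 4.95

    # Example scenarios derived from the criterion, as pytest tests:

    def test_order_at_threshold_ships_free():
        # Scenario: an order of exactly £50 qualifies for free shipping
        assert shipping_cost(50.00) == 0.0

    def test_order_below_threshold_pays_standard_rate():
        # Scenario: a £49.99 order pays the standard shipping rate
        assert shipping_cost(49.99) == 4.95

Notice how one short rule generates several concrete examples - that's the conversation Liz describes.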
In contrast, acceptance tests are essentially scripts - manual or automated - that detail specific steps to check that a feature behaves as expected. You don't need to write out all of these tests in order to plan the next iteration. Instead, you can use acceptance criteria to clarify the scope of each user story, so the team understands just enough detail to agree on what to work on in the next iteration. You can also use acceptance criteria to split "epic" stories - optional or nice-to-have acceptance criteria may be bundled into later stories.
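For contrast, here's what an acceptance test might look like once the specific steps are spelled out - again a purely hypothetical sketch, with the ShoppingCart class stubbed in just so the example runs:

    class ShoppingCart:
        """Hypothetical system under test, stubbed so the example runs."""
        def __init__(self):
            self.items = []
        def add(self, name, price):
            self.items.append((name, price))
        def total(self):
            return sum(price for _, price in self.items)

    def test_cart_totals_multiple_items():
        # Step 1: start with an empty cart
        cart = ShoppingCart()
        # Step 2: add two items with known prices
        cart.add("book", 12.50)
        cart.add("pen", 2.00)
        # Step 3: verify the total matches the sum of the prices
        assert cart.total() == 14.50

The same steps could equally be written as a manual script; the point is that a test spells out the steps, where a criterion only states the rule.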
So when do we write the acceptance criteria? Well, the whole team can get together to flesh out the user stories with acceptance criteria as part of iteration planning, but this can take a while. Larger or distributed teams may prefer to do this activity before the planning meeting - a trio of customer, developer, and tester is usually all you need. If you work this way, remember to review the acceptance criteria with the whole team before committing to delivering a story in the next iteration.
Now we get to when to write the tests. Acceptance test scripts can be written in the same iteration as the code, before or in parallel with developing it. If you're lucky, your customer has time to sit down and do this with you; otherwise, a developer and tester pairing together can do it. Remember to check with your customer that you've captured the intent of the user story before going ahead and implementing the code.
Of course, I'm not suggesting that you write all of the acceptance tests for all the stories at the beginning of the iteration. Rather, when work starts on a story, the first thing to do is to get really clear about which tests should pass - following an Acceptance Test-Driven Development (ATDD) approach. This can be done by writing one acceptance test at a time or the whole set for the story.
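As a rough sketch of that rhythm (the discount feature, the apply_discount function, and the SAVE10 code are all invented for illustration), the acceptance test is written and agreed first, and is expected to fail until the feature code exists:

    import pytest

    def apply_discount(total, code):
        # Deliberately unimplemented: in ATDD the test below is written
        # and agreed with the customer before this code is built.
        raise NotImplementedError

    @pytest.mark.xfail(raises=NotImplementedError,
                       reason="feature not yet implemented")
    def test_valid_code_gives_ten_percent_off():
        # Acceptance test agreed with the customer up front
        assert apply_discount(100.0, "SAVE10") == 90.0

Once the feature is implemented, the xfail marker comes off and the test must pass for the story to be accepted.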
Summing up, explore acceptance criteria and example scenarios as part of planning and write the tests close to when you write the code. Please do post any further questions and suggestions as comments.
I was discussing this very topic with a client yesterday. Also present was Benjamin Mitchell, who told us how one of his teams used to mark the card with a yellow dot in the planning session to indicate that tests needed to be written. The tester would then write the tests during the iteration and mark the card with another dot (I forget which colour) to indicate they were ready.
Posted by: allan kelly | 07 July 2011 at 07:09 PM
Great post. I'd never really thought there might be a difference between tests and criteria, but, when pointed out, it actually seems pretty obvious.
Where does BDD fit into this? Is that the Acceptance Criteria, but in the form of tests? Or is it not relevant at all in this context?
Posted by: Mike Pearce | 07 July 2011 at 10:05 PM
Thanks, Mike. I hadn't thought so much about the difference until recently. BDD is all about helping people express acceptance criteria and acceptance tests in a way that everyone can understand.
Posted by: Rachel Davies | 07 July 2011 at 10:10 PM
Thanks for sharing that, Allan. I've worked with several teams that use sticky dots to indicate the state of a story card, although nowadays, with Kanban becoming more popular, it seems common to move cards into a column on the team board to indicate their state.
Posted by: Rachel Davies | 07 July 2011 at 10:19 PM
In my team, the product owner takes responsibility for specifying the acceptance criteria before we even take the story for implementation in the sprint. This is an effort to ensure we only take on stories that are highly ready. The PO comes from a technical background, so he is capable of writing the criteria in Gherkin, but we don't require that - plain English is also considered ready.
The team then demonstrates the criteria using functional automated test tools or, if that's not possible, a physical demonstration.
Posted by: sachin kundu | 10 July 2011 at 09:40 AM
Thanks for sharing what your team does, Sachin. I think a lot of development teams would like to work with a PO who has time to do this before Sprint Planning. The only slight downside of having the PO do this alone is that there's a chance conversations with the team don't happen. Often, fleshing stories out with tests helps identify ways to build a simpler solution. Perhaps you don't have this issue with such a technical PO.
Posted by: Rachel Davies | 10 July 2011 at 04:34 PM
The way you described acceptance tests seems similar to test cases. Is that right?
Posted by: Chris Chan | 14 July 2011 at 09:22 AM
Chris, well sort of. Acceptance tests are test cases. However, not all test cases are acceptance tests. Acceptance tests are the subset of test cases that relate to checking that the agreed acceptance criteria have been met.
Posted by: Rachel Davies | 14 July 2011 at 12:00 PM
Nice post.
I usually have a req workshop before sprint planning, especially with distributed teams, where the Scrum team turns the acceptance criteria into a couple of test cases or examples. Works great to create a common understanding across a distributed team.
Posted by: Cesario Ramos | 20 July 2011 at 03:01 PM
Rachel,
While distinguishing between "acceptance criteria" and "acceptance tests" is logical and probably fine at a local team level, the most credible resources on User Stories do not make that distinction.
You mentioned Mike Cohn's book, but he also refers to the "acceptance criteria" you speak of as "acceptance tests". Further, in Ron Jeffries' (the co-inventor of user stories) "3 C's" article:
http://xprogramming.com/articles/expcardconversationconfirmation/
He refers to "defining" the acceptance tests vs. "implementing" the acceptance tests.
I sometimes refer to "creating" the Story Tests vs. "executing" or "automating the story tests."
Further, I would worry that "criteria" would take the focus off a test strategy and might lead us back to "the system shall" style prose requirements.
Again, though, I think it is perfectly fine for any team to set up whatever terminology they like. I personally prefer to use industry standard terminology (when it exists formally or by consensus) when I can because I think it helps the global Agile community.
In this case, IMO, Ron and Mike's terminology of "Acceptance Tests" is the consensus standard.
Posted by: Scrumcrazy.wordpress.com | 07 June 2012 at 11:08 AM
Spot on! Acceptance criteria and acceptance tests are independent entities, and the terminology can easily confuse learners. It's easy to forget that stories with acceptance criteria are not contemporary substitutes for functional specs! Thanks for the article.
Posted by: Stephanie | 12 December 2012 at 03:55 AM
Great post. This is the very thing that I'm seeing many clients struggle with. Because they use the terms as synonyms rather than as different entities, it drives poor behavior: teams over-specify the story before it's brought into the sprint/iteration.
Posted by: William F. Nazzaro | 24 January 2013 at 02:12 AM