Tuesday, October 30, 2012

Test data for BDD

My team has been running into issues with our functional tests.  Two things seem to be at the core of them:
  1. Running a single test takes enough time to disrupt the flow
  2. Setting up test data for them has been painful
Out of these the second has been the major concern.  We've got some ideas for speeding up the test runs (like using Grails interactive mode), but haven't started using them throughout the team.
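To make the interactive-mode idea concrete, here's roughly what it buys us (the spec name is made up): instead of paying the JVM startup cost on every run, you keep one Grails session open and rerun a single test from the prompt:

    $ grails
    grails> test-app functional: CheckoutSpec
    (edit code, then rerun)
    grails> test-app functional: CheckoutSpec

Because the JVM stays warm between invocations, the second run skips most of the startup overhead that makes single-test runs so disruptive today.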

For test data the team has decided to go with a static database.  I have some concerns with this approach: I'm used to building up test data per test, and I've seen shared test data lead to pollution between tests.
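For contrast, here's a minimal sketch of the build-it-up style I'm used to, written with Spock (Account is a hypothetical domain class):

    import spock.lang.Specification

    class AccountBalanceSpec extends Specification {

        def setup() {
            // Each test builds exactly the rows it needs up front.
            new Account(name: 'test-account', balance: 0).save(flush: true)
        }

        def cleanup() {
            // ...and tears them down, so the next test starts clean.
            Account.findAllByName('test-account')*.delete(flush: true)
        }

        void "a new account starts with a zero balance"() {
            expect:
            Account.findByName('test-account').balance == 0
        }
    }

With a shared static database there's no cleanup() step, and that's exactly where the pollution I've seen tends to creep in.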

An advantage of our application is that a majority of the data stored in it is generated during normal processing.  So, in theory, once we've got a set of data we can work with it in isolation.

A disadvantage is that it takes a while to generate all of the data for the full set, so we can't produce our data quickly.  The real issue is when we update our data model.  In general we've been willing to wait the days it might take for all accounts to be updated.  Running tests against a static database means we'd need to be able to generate the test data for it, or wait for the released code to process it and then copy it over to the static system.

Maybe, though, we could cut a path between static and live by limiting how much data we pull in and working only on that subset.  We'd generate the data before each test run, then reset it for the next run - or at least refresh it.
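A rough sketch of what that reset could look like (TestDataSeeder and the table names are invented): before each run we wipe only the tables our subset touches and reseed them, so every run starts from the same small, known state:

    import groovy.sql.Sql

    class TestDataSeeder {
        Sql sql   // connection supplied by the test harness

        void reset() {
            // Wipe only the tables the subset touches; order matters
            // if there are foreign keys between them.
            ['payment', 'account'].each { table ->
                sql.execute('delete from ' + table)
            }
            seed()
        }

        void seed() {
            // Reseed the small, known starting state.
            sql.executeInsert('insert into account (name, balance) values (?, ?)',
                              ['seed-account', 0])
        }
    }

Calling reset() from a before-hook would give us the isolation of generated data without waiting on the full set.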

I'm still leery of not exploring edge cases, but maybe functional tests aren't the place for that.  Should we just let them thread through the system to make sure everything is communicating, and rely on integration and unit tests to cover all of the variations?  Seems plausible - though I think I should do more reading on BDD and the disciplines around it.

Monday, October 22, 2012

First iteration under XP

Unfortunately, I went on vacation on the last day of the iteration, missing the demo and retrospective.  From my perspective we stubbed our toes a bit.

We focused on our most recent app, which had been developed using BDD.  It was the team's first shot at it, so there were definitely rough edges in our practice of it.

We're struggling a little with defining what our tester should be doing.  Our current focus is helping develop acceptance tests and doing some exploratory testing.  This felt awkward during the first week because our tester didn't have access to our git repository and was sick (and working from home) at the beginning of the sprint, so we never communicated or shared the acceptance tests we had already developed.  I hope getting access to the git repo will help some of this.  Regardless, we need to take time to discuss the tests together.

Another painful spot was a story that ended up being much larger than expected.  What made this painful was that we recognized the extra work and discussed it as developers, but neglected to create a task to make it visible.  So when we ran into trouble we ended up splitting the story at the last minute.

Finally, we had some difficulties working through the automation of our acceptance tests.  Between learning how to write the tests and learning the tools (Cucumber and Geb), we stumbled a couple of times.
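For a flavor of what we were wrestling with, a typical Geb interaction looks roughly like this (the URL, selectors, and page are invented); nearly every line hides something - selectors, form shortcuts, waiting - that we had to learn as we went:

    import geb.Browser

    Browser.drive {
        go "http://localhost:8080/app/login"

        // Geb's form shortcuts: assigns to <input name="username">, etc.
        $("form.login").username = "tester"
        $("form.login").password = "secret"
        $("input", name: "login").click()

        // The page title after the browser navigates.
        assert title == "Dashboard"
    }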

While I enjoyed Florida, I was a little sad not to be with the team as they started the second week of this.

Sunday, October 7, 2012

Transitioning to XP

My team at work is in the process of reading The Art of Agile Development by James Shore and Shane Warden.  I gather it was initially brought to the team as a way to improve how we write stories.  We've decided to go further with it and try to rethink our development process.

We have seven developers maintaining several applications, with generally two or three of them getting focused attention (more than just bug fixes) at any given time.  These apps vary greatly in how much automated testing they have and how much technical debt they've built up.

Our first step is going to have one person handling any necessary maintenance work for all but one of the apps, while everyone else works on a release of the remaining one.

Some concerns we've got:
  • How quickly can we minimize the maintenance role?  Because it's no fun being the only one on the outside.
  • We don't have a coach. The book seems good, but experience is better.
  • We've still got a bunch to learn and internalize, from TDD to visible charts to retrospectives.  While it all makes sense, it's not second nature yet, so we'll have to be mindful - not the worst thing to have to be. :)
  • We can't focus on just one app; switching between them will be necessary to continue delivering value for each of them.
  • We've got a customer identified for our first product, but it's not as clear cut who could serve that role for the other apps.
I'm certain we'll have more concerns and challenges as we go along, but hopefully we'll keep moving forward.