
Friday, April 3, 2015

Concerned with HIP sprints

During our planning I very often hear maintenance or architecture work come up and immediately be deferred with "that's a HIP sprint item."  Ultimately this feels like a cop-out.  Granted, that's not what the Scaled Agile Framework (SAFe) is going for with its advocacy of the HIP (Hardening, Innovation, and Planning) sprint.  It seems more a place for preparing for a release and dealing with those things that can't be done beforehand.

Granted, my company is mostly focused on updating existing applications and adding features to them.  (That's not to say nothing new is being developed, just that brand-new work is the smaller share.)  So while we'll occasionally have those preparation items, it seems they should be some stories in your iteration instead of the focus of an entire iteration.  And even if they took over an iteration, would that need a special name?

I suspect this is a situation some teams have found themselves in, where they needed a HIP sprint and it's been generalized into a rule to help protect others.  In the end I feel it flies in the face of sustainable pace, since it provides a dumping ground for things that should be done in every sprint.  Granted, we're supposed to have a mix of these items in each sprint already, but the HIP sprint offers the illusion of a crutch to fall back on.  (To be sure, some coaching is needed to ensure we're not so focused on product development that we neglect keeping the system working.)

I'm certainly giving short shrift to a topic that has had some thoughtful discussion, but it's something that's currently on my mind.

Monday, January 19, 2015

Fixing Appium functional tests

Lately I've been trying to get functional testing for our mobile application up and running.  Some initial work was done for it, making use of Cucumber and Appium.  Sadly, it hasn't been a focus, so while we had things working for a bit, we broke them with the upgrade to Xcode 6, which caused a delay while Appium was updated to work with some of its changes.

One of the interesting side effects was that calling click() on the WebElement used to work, but after the upgrade those tests failed.  I found a discussion that led me to try tap() instead.  I fumbled for a little while, since:

  • We have a hybrid app and we've focused all our work in the WEB_VIEW context.  It turns out calling tap() in that context doesn't work; you have to switch to the NATIVE_APP context.
  • Then I tried the tap(fingers, WebElement, duration) variation of the method, but that would crash Appium with the error "uncaughtException: Cannot read property 'x' of undefined".
    • I haven't nailed this down, but my guess is that it's because I found the WebElement in the WEB_VIEW context, then switched to the NATIVE_APP context to tap.
  • Finally, I took the x, y coordinates of the WebElement and used the tap(fingers, x, y, duration) method to actually perform the tap.
Then I ran into another issue: the location of the WebElement wasn't quite right.  First off, we are still running in compatibility mode, which caused element locations to be fairly far off when using the iPhone 6 simulator (at least that's our current theory); things seemed more consistent on the iPhone 5 simulator.  Even then, the y coordinate was still off by a small amount that we had to correct with an offset of 26 pixels.  Our theory is that the iOS 7 change making the status bar transparent might be behind this.
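The workaround ended up looking roughly like the sketch below.  The Appium java-client calls appear only as comments (they need a running simulator), and the helper name, the sample coordinates, and the 26-pixel offset are our own empirical values, so verify them against your devices:

```java
// Sketch of the tap workaround: find the element in WEB_VIEW, adjust its
// coordinates for the status bar, then tap in NATIVE_APP.
public class TapWorkaround {
    // Assumed status-bar adjustment on the iPhone 5 simulator.
    static final int STATUS_BAR_OFFSET = 26;

    // Translate an element location found in the WEB_VIEW context into
    // the point we tap in the NATIVE_APP context.
    static int[] tapPoint(int webViewX, int webViewY) {
        return new int[] { webViewX, webViewY + STATUS_BAR_OFFSET };
    }

    public static void main(String[] args) {
        // In the real test the sequence was roughly:
        //   WebElement el = driver.findElement(...);  // in WEB_VIEW context
        //   Point loc = el.getLocation();
        //   driver.context("NATIVE_APP");             // tap only works natively
        //   driver.tap(1, x, y, 250);                 // tap(fingers, x, y, duration)
        int[] p = tapPoint(100, 200);
        System.out.println(p[0] + "," + p[1]);  // prints 100,226
    }
}
```

The offset is applied before the tap rather than baked into the page, so if the status-bar theory turns out to be wrong there's a single constant to change.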

Tuesday, October 30, 2012

Test data for BDD

My team has been running into issues with our functional tests.  Two things seem to be at the core of it:
  1. Running a single test takes enough time to disrupt the flow
  2. Setting up test data for them has been painful
Of these, the second has been the major concern.  We have some ideas for speeding up test runs (like using Grails interactive mode), but haven't started using them throughout the team.

For test data the team has decided to go with a static database.  I have some concerns with this approach because I'm used to building up the test data, and I've had shared test data lead to pollution between tests.

An advantage we have with our application is that a majority of the data stored in it is generated during normal processing.  So, in theory, once we've got a set of data we can work with it in isolation.

A disadvantage is that it takes a while before we've generated all of the data for the full set, so we can't quickly regenerate our data.  The real issue arises when we update our data model.  In general we've been willing to wait the days it might take for all accounts to be updated.  Running tests against a static database means we'd need to be able to generate the test data for it, or wait for the released code to process it and copy it over to the static system.

Though maybe we could cut a path between the static and live approaches by limiting how much data we pull in and working only on that subset.  We'd generate the data each time before running tests against it, and expect to reset it (or at least refresh it) for the next run.
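The reset step in that middle path might look something like this.  It's an illustrative sketch only: a file stands in for the test database, and every name here is hypothetical rather than anything from our codebase.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Restore the working dataset from a pristine snapshot so each test run
// starts from the same known state, avoiding pollution from earlier runs.
public class TestDataSnapshot {
    static void restore(Path snapshot, Path working) throws Exception {
        Files.copy(snapshot, working, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws Exception {
        Path snapshot = Files.createTempFile("snapshot", ".db");
        Files.writeString(snapshot, "pristine subset");
        Path working = Files.createTempFile("working", ".db");
        Files.writeString(working, "polluted by the last run");

        restore(snapshot, working);
        System.out.println(Files.readString(working));  // prints: pristine subset
    }
}
```

The point of the sketch is the shape of the hook, not the mechanism: with a real database the restore would be a scripted import of the generated subset before the test suite runs.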

I'm still leery of not exploring edge cases, but maybe functional tests aren't the place for that.  Should we just let them thread through the system to make sure everything is communicating, and rely on integration and unit tests to explore all of the variations and make sure they're covered?  Seems plausible, though I think I should do more reading on BDD and the various disciplines around it.

Monday, October 22, 2012

First iteration under XP

Unfortunately, I went on vacation the last day of the iteration, missing the demo and retrospective.  From my perspective we stubbed our toes a bit.

We focused on our most recent app, which had been developed using BDD.  It was the team's first shot at it, so there were definitely rough edges in our practice of it.

We're struggling a little with defining what our tester should be doing.  Our current focus is helping with developing acceptance tests and doing some exploratory testing.  This felt awkward during the first week because our tester didn't have access to our git repository and was sick/working from home at the beginning of the sprint, so we didn't communicate or share the acceptance tests we had already developed.  I hope getting access to the git repo will help some of this.  Regardless, we need to take time to discuss them.

Another painful spot was a story that ended up being much larger than expected.  What made this painful was that we recognized there was more work and discussed it as developers, but neglected to create a task to expose it.  So when we ran into some trouble we ended up splitting the story at the last minute.

Finally, we had some difficulties working through the automation of our acceptance tests.  Between learning how to write the tests and learning the tools (Cucumber and Geb) we stumbled a couple of times.

While I enjoyed Florida, I was a little sad to not be with the team as they start the second week of this.

Sunday, October 7, 2012

Transitioning to XP


My team at work is in the process of reading The Art of Agile Development by James Shore and Shane Warden.  I gather initially it was brought to the team as a way to learn how to improve our writing of stories. We've decided to go further with it and try to rethink our development process.

We have seven developers maintaining several applications, with generally two or three of them getting focused attention (more than just fixing bugs) at any given time.  These apps vary greatly in how much automated testing has been done for them and how much technical debt they've built up.

Our first step/attempt is going to involve one person trying to do any necessary maintenance work for all but one of the apps. Everyone else will be doing a release on the remaining one.

Some concerns we've got:
  • How quickly can we minimize the maintenance role?  Because it's no fun being the only one on the outside.
  • We don't have a coach. The book seems good, but experience is better.
  • We've still got a bunch to learn and internalize, from TDD to visible charts to retrospectives.  While it all makes sense, it's not second nature yet, so we'll have to be mindful (not the worst thing to have to be). :)
  • We can't just focus on one app and switching between them will be necessary to continue to deliver value for each of them.
  • We've got a customer identified for our first product, but the other apps aren't as clear cut for who could serve that role.
I'm certain we'll have more concerns and challenges as we go along, but hopefully we'll keep moving forward.