Monday 9 July 2012

Business Model Development for iterative capability improvement.


Well, there has been talk on the grapevine about new and upcoming ways of business modelling, and talk of disruption of typical methods.
Off the back of that I had a go at creating a business modelling concept that may provoke some thought.  I have two groups of hexagons with a bridge between the two, and the arrows denote the possible path of opportunity between the customer at the top and the supplier at the bottom.  A point to note is that the internal supplier revenue to fund capability is coloured to match the customer input, as ultimately building capability comes from iterative investment and delivery throughout the whole model on an ongoing basis.

Diagram: flow of activity through the model; revenue return re-invests in capability.
As you can see it is relatively straightforward, but it clearly shows that there are varying elements on the customer side that need consideration. It is not just solving a problem that will make a proposition acceptable; a large part is played by the customer's perception of the supplier.  Customer experience flows directly into the customer relationship, and this is the gateway to the proposition even being considered.

Wednesday 20 June 2012

The release sprint

I had a conversation this afternoon based upon a process diagram I had put together that contained different sprint types: zero, development and release.  A debate ensued along the lines that test, quality and delivery artefacts should be produced all the time, and that at the end of every development sprint there should be a shippable product. Not so, says I; I say a "potentially shippable" product.

Herein lies the crux of the difference; the release sprint is all about taking something "potentially shippable" and making it "shippable" or "shipped".  The backlog items for a release sprint are typically all the tasks to put the code/product into production, so they can include:
  • Deployment of the newly coded product into its production environment
  • Populating the new product with production data from the previous live version
  • Training / documentation handover for operational support teams
  • Formal QA activities if in a regulated environment (eg FDA / FSA)
  • Failover and fallback planning in case of a failed deployment

...however, they should specifically not include developing any further functionality.

A rule of thumb is that a release sprint should take no longer than a single iteration of the development sprints that have been played. If this is not enough time then there are sub processes that need further tuning so that played and accepted story work is closer to "shippable" when it is "done".

Wednesday 6 June 2012

My Agile day off - comparative estimation in the home

At risk of taking my work home with me I found myself planning my domestic tasks for the day yesterday in an agile manner.

My to-do list was:
  • Fix chicken mesh around the vegetable beds to stop the hens eating the new plants.
  • Bath the Dog
  • Adjust the wheel geometry on the car
  • Do the online banking and invoicing
  • Tidy the workshop
  • A bit of carpentry
  • Help with kids homework
So I assigned a complexity to each one, taking the "bath the dog" as my 1 point starter task:
  • 3 points.  Fix chicken mesh around the vegetable beds.
  • 1 point.  Bath the Dog
  • 3 points.  Adjust the wheel geometry on the car
  • 2 points.  Do the online banking and invoicing
  • 2 points.  Tidy the workshop
  • 2 points.  A bit of carpentry
  • 1 point.  Help with kids homework
Giving me a total of 14 points of complexity to deliver in one day, albeit with no idea what my velocity was.

From a planning perspective the task that was "needed" most was to sort out the chicken mesh around the veg' beds before the hens ate the plants; other items were less urgent so could go further down the list.

"So that the hens don't ruin the new vegetable plants, as a vegetable grower, I need a way of stopping the chickens getting to the plants"

I started jobs at 10.30 (lazy start, I know) and had completed the first job by 1pm. Two and a half hours to deliver 3 points!?  So at this point I was able to re-plan.  There was no way I was going to be able to do another 10 points in the afternoon; more like six, as I had four or five hours left.

Next was a bit of lunch then on to bathing the dog, followed by tidying the workshop, the banking and helping the kids with their homework - all done by about 5pm and all taking around 30 minutes per point.  My initial 3 point story had put me off the geometry job, and the carpentry wasn't important compared to other tasks and can wait till next weekend, but the rest all got done.

So what did I learn?  I'm pretty accurate at estimating against things I have done before and things similar to previous jobs (tidying a workshop is the same activity as tidying a garden, for instance), and I am less accurate at estimating against work I have not done.  As it turned out, my 3 point job of fixing the chicken mesh should have been a 5 point story. As for the geometry on the car: looking at my velocity of 1 point per half hour, is it a 1.5 hour job? I shall find out next weekend.

When I draw up the job list for my next "iteration" (day off) I shall take a learned velocity of 9 into account, although I will probably stretch for 10 or 11.
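The day's arithmetic can be sketched in a few lines (the task names and point values are taken from the list above; the structure is my own):

```python
# Numbers from the day's to-do list; "bath the dog" was the 1 point baseline.
tasks = {
    "fix chicken mesh": 3,
    "bath the dog": 1,
    "adjust wheel geometry": 3,
    "online banking and invoicing": 2,
    "tidy the workshop": 2,
    "a bit of carpentry": 2,
    "help with kids homework": 1,
}

planned = sum(tasks.values())            # 14 points planned for the day

done = ["fix chicken mesh", "bath the dog", "tidy the workshop",
        "online banking and invoicing", "help with kids homework"]
delivered = sum(tasks[t] for t in done)  # 9 points actually delivered

# Learned velocity for the next "iteration": plan 9, stretch to 10 or 11.
next_plan = delivered
```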

Friday 25 May 2012

Invest - what is a good Agile story part 2

Whilst wandering across the Agile landscape I came across an excellent mnemonic for getting a good set of story cards together; so let's INVEST in writing a good story:

Stories should be: Independent, Negotiable, Valuable, Estimable, Small, Testable:
  • Independent - One user story should be independent of another (as much as possible). Dependencies between stories make planning, prioritization, and estimation much more difficult. Often enough, dependencies can be reduced by either combining stories into one or by splitting the stories differently.
  • Negotiable - A user story is negotiable. The "Card" of the story is just a short description which does not include details. The details are worked out during the "Conversation" phase. A "Card" with too much detail on it actually limits conversation with the customer.
  • Valuable - Each story has to be of value to the customer (either the user or the purchaser). One very good way of making stories valuable is to get the customer to write them. Once a customer realises that a user story is not a contract and is negotiable, they will be much more comfortable writing stories.
  • Estimable - The developers need to be able to estimate (at a ballpark even) a user story to allow prioritization and planning of the story. Problems that can keep developers from estimating a story are: lack of domain knowledge (in which case there is a need for more Negotiation/Conversation); or if the story is too big (in which case the story needs to be broken down into smaller stories).
  • Small - A good story should be small in effort, typically representing no more than 2-3 person-weeks of effort. A story larger than that is more prone to errors in scoping and estimation.
  • Testable - A story needs to be testable for the "Confirmation" to take place. Remember, we do not develop what we cannot test. If you can't test it then you will never know when you are done. An example of non-testable story: "software should be easy to use".
I like this a lot.

Tuesday 22 May 2012

Death by Technical Stories - what is a good Agile story card?

I have received and given training in what makes a "good" Agile story many times and seen many many more story cards that fill me with a sense of dread.

A good healthy Agile project needs people to not only understand what they need to implement, but also why and who for.  I know this has been written on the internet hundreds of times but it is worth writing again:
  • So that...
  • As...
  • I need...

Now these are business headings (or Interaction Driven ones depending on what you read):

"So that" I can use a credit card online, "as" a web channel customer, "I need" a screen to enter my credit card details

Anyone picking up the story card can immediately see what it is all about, and if they turn it over they should get a set of acceptance criteria that define the "how":

"Must be able to input card name, long card number, expiry date, and CVV number"

In writing cards from a user perspective it is easy to weigh up their necessity and business value as part of backlog prioritisation and brokering; and since every card is effectively a placeholder for a conversation, it also tells you who to go and talk to if things are not clear.
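Purely as an illustration, a card of this shape could be held as a small record in a tracking tool; the class and field names below are my own invention, not any particular tool's API:

```python
from dataclasses import dataclass, field

# Illustrative only: a card holds the three business headings plus the
# acceptance criteria that go on the back of the card.
@dataclass
class StoryCard:
    so_that: str                 # the business value
    as_a: str                    # a proper person or user group
    i_need: str                  # the capability required
    acceptance_criteria: list = field(default_factory=list)

    def front(self) -> str:
        # What anyone picking up the card reads first.
        return f"So that {self.so_that}, as {self.as_a}, I need {self.i_need}"

card = StoryCard(
    so_that="I can use a credit card online",
    as_a="a web channel customer",
    i_need="a screen to enter my credit card details",
    acceptance_criteria=[
        "Must be able to input card name, long card number, "
        "expiry date, and CVV number"
    ],
)
```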

The worst stories, the ones that will quickly kill an agile project, are technical ones; they should be forbidden:
  • "So that I can index the database as a DBA I need..."
  • "So that I can edit the codebase more effectively as a developer I need to install XYZ tool"
  • "So that my class structure is better formed as a software engineer I need a book on Python"

As there is no way of seeing business value in technical stories, there is no way any product owner can "support" a story or indeed prioritise it over other features.  These cards should never be written, and anyone who says they can't write a business story relating to a technical task needs asking: why are you doing the technical task then?

  • Do - put something precise in each section that demonstrates the business value, and pick a proper person or user group as the "As"
  • Don't - write technical "So that"s or "I need"s, or, equally bad, put "a developer", "a user" or "a manager" etc. in the "As"

Also, work with the product owners and BAs to come up with a good "SMART" set of acceptance criteria to go on the back of the card, as these drive the whole TDD element of the Agile development process.

And finally for everyone who says you've got to put technical stuff somewhere; correct - it goes on the Task cards that underpin the stories, but these are only defined within the sprint Backlog, once the user stories have been selected to play in a sprint.

Happy story writing!

Thursday 17 May 2012

Who gets involved in what during the Agile cycle

Well, the short answer is that in a highly collaborative environment everyone gets involved where they can add something to the discussion (not just prolong a discussion with anecdotes and trivia, though).

Following a conversation I was having with a colleague (a very pragmatic agile manager) the other day I produced the following picture which neatly describes the "layers of the cake" and who gets involved.  Solid lines are primary involvement and dotted lines are more informal, informing or secondary.


I think it's pretty self-explanatory, cognisant that there will always be times when people across the spectrum get involved in odd tasks. I was, however, very pleased with the form and simplicity of it - so thank you Matt Taylor, and I hope you don't mind my choice of colours.

Tuesday 15 May 2012

Brokering the backlog - Estimation and manipulation of time to complete in Agile

As the Agile lifecycle progresses and more stories are broken down and estimated there will come a time where a reasonable prediction can be made as to how long it will take to complete a release (or "theme" sub components if an earlier estimate is required). This is simply based on the amount of outstanding complexity points to be delivered within the backlog of stories, divided by the measured (and predicted) velocity of the sprint teams involved in the development.

Step 1. Activity: All user story cards are estimated for complexity.
   Result: Backlog items have a complexity estimate.
   Comment: Epic cards that are not yet broken down into stories should also be estimated, by comparison with other epics that have already been broken down.

Step 2. Activity: The velocity of the teams is predicted.
   Result: A predicted delivery velocity per sprint iteration going forwards.
   Comment: Team velocity from the early sprints will have been captured (burn-down charts etc.), and a prediction can be made for the total "per sprint" (all teams) value going forwards, potentially increasing with learning and practice, or decreasing if resources change. This can be averaged out to keep the maths simple if only a small change in velocity is predicted; for large-scale changes the calculation should be broken up to reflect the steps in the velocity profile.

Step 3. Activity: Velocity is compared to complexity.
   Result: A calculated number of sprints needed to deliver the backlog.
   Comment: Backlog complexity / sum of team velocities = number of sprints needed.
   e.g. 1150 points of backlog / (team one average velocity of 30 points + team two average velocity of 34 points) = 18 sprints (36 weeks).

Step 4. Activity: The backlog is accepted, the backlog is reduced, or resource capacity is increased.
   Result: Total backlog complexity and delivery velocity is either re-factored or accepted.
   Comment: Either:
   a) Product owners and roadmap owners agree that the timescale estimate for delivery is in line with corporate goals (it should be noted that this estimate is not a commitment to delivery and has no contingency or overhead for release processes);
   or
   b) Product owners and roadmap owners feel the timescale estimate for delivery is not aligned with corporate goals. The options are then to reduce complexity in the backlog by removing stories, which requires the brokering of functionality amongst the POs to see what can be excluded from the release; or to increase development velocity by adding team members, potentially adding whole scrum teams if needed.
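The step 3 calculation is easy to sketch in code; a minimal Python illustration, using the worked figures above (the function name and the rounding up to whole sprints are my own choices):

```python
import math

def sprints_needed(backlog_points, team_velocities, weeks_per_sprint=2):
    """Backlog complexity divided by the sum of team velocities,
    rounded up to whole sprints."""
    per_sprint = sum(team_velocities)
    sprints = math.ceil(backlog_points / per_sprint)
    return sprints, sprints * weeks_per_sprint

# Worked example: 1150 points, team velocities of 30 and 34 points per sprint.
sprints, weeks = sprints_needed(1150, [30, 34])
# 1150 / 64 = 17.97, so 18 sprints, i.e. 36 weeks
```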

Monday 14 May 2012

Break it down - getting to User Stories with Themes

So the organisation has a goal? Some broad ideals on a roadmap for delivery over the next 18 months / 3 years or whatever.  That's all very well, but what does it mean in real Agile terms?

If you ask the people at the top for the detail of what is going into the "Platinum Release" scheduled for next year, they won't really know; but they do know that they really, really need it to stay ahead of the competition / win that new business etc.

This is where you can use some agile terms to get a level of insight and begin the agile breakdown.  I normally propose that organisations consider each of the top level milestones on the roadmap and assign "themes" to them which bullet out the contents.  At board level there will be some comprehension that a certain new product will have different things in it, and even at this level this is all useful information despite the very broad nature of the content.

If you considered a fictitious Microsoft Office 20XX product to be the next "product release", then the "themes" within could be "Word", "Excel", "Powerpoint", "Outlook", "Frontpage", "Publisher" and "Access".  At board level this is a good enough level of detail; no one is being asked to specify "cross product cut and paste" as a requirement. However, even here there are levels of prioritisation that could be deployed as needed; I would suggest Word and Excel are probably more strategically important than Frontpage, for example.

These "Themes" can be taken on by groups of product owners to start creating "Epic stories" around each theme and in turn, with the help of business analysts and other parties, these epics can be broken down further into "User Stories".

The hierarchy of understanding therefore is as follows:
  • Roadmap
  • Themes
  • Epics
  • User Stories
This can be neatly mapped onto the organisation so that at the top level the CxO view is at Roadmap and Theme; the Product owners/Business Owners at Theme and Epic; and the Product owners/Team members at Epic and User Story level.

As Agile is effectively transparent, when it comes to having to broker the requirements (assuming they don't all fit, as they never do) there is a platform whereby, at the top level, whole branches of the development effort can be measured and considered, as well as the individual components on every branch.  This means that someone could say "we are losing Frontpage to secure the release" just as easily as "we are losing cross product cut and paste as we can't afford the complexity".

Themes are not the solution to every project; however, they do provide a handy "Epic-epic" that is understandable at board level, without the directors having to get involved with "all that agile story mumbo jumbo".

Thursday 10 May 2012

Sprint Length - the beating heart of the Agile process

I had a conversation yesterday about the length of sprints, as there were a range of opinions in the room.  In my world, the two week sprint is ideal (as opposed to the "Scrum" norm of four weeks) for five main reasons:
  • It's an appropriate amount of time for teams to be focussed internally on what they have been set to achieve.
  • From a higher management perspective, seeing progress every couple of weeks is nice, whereas something longer (like monthly) seems a bit too "admin" (akin to attending monthly sales report meetings) and a bit waterfall.
  • If all goes wrong and a sprint is wasted, management can "swallow" losing two weeks of time, but longer is less acceptable: "what do you mean we lost a month - this is supposed to be Agile?"
  • It gives an opportunity to review the supporting Agile processes (CI, TDD etc.) regularly so that they can be tuned or re-engineered if there are problems; there is also more retrospective review as a whole.
  • Shorter sprints mean less work-in-progress at any one time, so you have potentially fewer integration problems and less risk of regression test failure if CI processes are not mature.
Some organisations are pushing the boundaries a bit and going for one week sprints.  In my mind, if the organisation is "Agile-mature", so that both the automated processes and the manual ones such as sprint planning can be slick and quick, then it's perfectly do-able; but if many companies tried this I think they would find they are doing at least one full day of sprint prep and retrospective for every iteration, therefore losing at least 20% of the available coding time to the process.
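That 20% figure is just a ratio; a tiny sketch, assuming one full day of planning and retrospective per iteration (the function name is illustrative):

```python
def process_overhead(sprint_days, ceremony_days=1.0):
    """Fraction of a sprint lost to planning and retrospective ceremonies."""
    return ceremony_days / sprint_days

one_week = process_overhead(5)    # 1 day in 5: 20% of coding time lost
two_week = process_overhead(10)   # 1 day in 10: a more comfortable 10%
```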

The final point I would make on sprint lengths is that whatever you do choose, don't change it for the sake of it.  There is little point playing a two week sprint, followed by a three week sprint, followed by a two and then a one, as all it does is "upset" the rest of the organisation. "Is it sprint planning / show and tell this week?" - "No idea, what was last week?"

So much that is external to the scrum teams needs to interact with the Agile process (Product owners, stakeholders, IT support, BAU, release managers etc) that they need the calendar to be constant so they can plan their workload.  A whole organisation adopting Agile should have the sprint length as the constant heartbeat in the middle of it that everyone can "hear" in the background.

Wednesday 9 May 2012

Test execution in Agile - PDD to TDD

Yesterday there was a wide-ranging discussion about the absolute need to get automated testing up and running to underpin Agile development practices at the company I am currently consulting for.  My stance is a simple one: without a well-tuned level of automation, the time taken to execute and maintain a test execution set that gives coverage of the developed code and assures regression will soon start to outweigh the time taken to develop the features.  Sadly, the company has "had a go" in the past but never really got it up and running.

The Agile concept with regards to testing and automation is straightforward.  In phase zero set up your tools; get FitNesse, Ruby, Watir, Bamboo, BuildMaster, Jenkins (formerly Hudson), Maven or whatever you choose sorted, so you have your baseline builds and a test platform. Why not even script a smoke test or basic sanity test that can be automated and scheduled overnight for starters?  Get the "screens green" early, so people start to look at them.

Now, for every sprint: after sprint planning the developers or QA/testers get to the priority stories first and script a set of unit tests that encompass the acceptance criteria on the back of each of the story cards.  Get them within an executable test harness so they can be run really easily. Run the job, and bits of the "screens" go red. Great; now it's the developers' job to get the screens to green again by delivering those stories and writing just enough code to get the unit tests to pass (they don't call it Test Driven Development for nothing). Deliver coded functions, run (and pass) all scripted tests, commit code to the baseline, get the new baseline code, rinse and repeat.
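A minimal sketch of that red-to-green loop, using a hypothetical acceptance criterion (a 16 digit card number check, my own example, not tied to any of the tools named above):

```python
# The test comes first, written straight from a hypothetical acceptance
# criterion: "long card number must be 16 digits".
def test_card_number():
    assert is_valid_card_number("4111111111111111")
    assert not is_valid_card_number("1234")

# ...then just enough code to turn the screen green, and no more.
def is_valid_card_number(number: str) -> bool:
    return number.isdigit() and len(number) == 16

test_card_number()  # red until the function above exists and passes
```

In practice the test would live in a harness (pytest, unittest, FitNesse fixtures or similar) and run on every commit; the point is only the ordering: test first, code second.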

At the end of each sprint all these created tests can be added to the scheduled test execution run, which (assuming there has been an integration of the codebase) can then execute every night, or at least on a regular basis, to give a regression result.  Effectively, the regression pack starts at basically nothing and then builds with every sprint, hand in hand with the functionality.  If you are really organised you can build the regression set quicker (as soon as tests are created), but you just have to be careful that all stories that are in play (with tests written) are "done" in the source repository before regression is run, otherwise you will get failing scripts.

Of course this just deals with the unit tests; meanwhile the proper "Testers" are writing functional tests that could potentially be automated as well, but that's another story.

As for PDD - well, that's Panic Driven Development, not uncommon in organisations that have had a false start at Agile, failed to get it going and been left with a lot of pieces and problems. Better avoided than implemented!