Large-Scale Epic Ranking

Updated: Nov 9

How do you facilitate 120 people ranking 40 epics in just one day?

With a few small modifications, Mike Cohn's Theme Screening technique, described in Agile Estimating and Planning (2005), works very well at this scale.


The basic technique is:

  1. Identify the selection criteria that guide your decision making. Examples might include cost savings, customer retention, and market differentiation.

  2. Choose a baseline epic that is well understood by the audience and is likely to be neither the highest nor the lowest scoring against the selection criteria. Mark "=" in each of the baseline epic's cells.

  3. Compare each epic with the baseline epic against each selection criterion. If it is better than the baseline, place a "+" in the cell; if it is worse, a "-"; if it is the same, an "=". Repeat for each epic.

  4. Scoring +1 for each "+" and -1 for each "-", add up the scores in each column to get the total score for each epic. The highest-ranked epic has the highest score.
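The scoring above is simple enough to sketch in a few lines of Python. The epic names and criteria counts here are illustrative placeholders, not from the workshop:

```python
# Theme Screening scoring: "+" beats the baseline, "-" is worse, "=" is the same.
SYMBOL_SCORE = {"+": 1, "=": 0, "-": -1}

def rank_epics(ratings):
    """ratings maps epic -> list of "+"/"="/"-" symbols, one per criterion.
    Returns (epic, score) pairs sorted best-first by total score."""
    totals = {
        epic: sum(SYMBOL_SCORE[s] for s in symbols)
        for epic, symbols in ratings.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical ratings against three criteria:
ratings = {
    "Baseline": ["=", "=", "="],             # the baseline scores 0 by definition
    "Self-service portal": ["+", "+", "-"],
    "Legacy migration": ["-", "=", "+"],
}
print(rank_epics(ratings))
```

Note that the baseline always totals zero, which is what makes it a useful anchor: every other epic's score reads directly as "better" or "worse" than it.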


Variations on a Theme

This is how to modify the technique to work at a very large scale:



  1. On a large wall, attach butcher paper and lay out the ranking table. Attach the epics left to right as the header row of the table. Use large Post-its so that the epics can be easily moved if necessary.

  2. Identify the baseline epic before the event and propose it to the group. It may be sensible to have more than one candidate baseline, just in case.

  3. Identify the selection criteria at the event. Ask each table (5-8 people) to brainstorm selection criteria. Affinity-group the criteria and dot-vote to reduce the list to the top 3-5. Hold a confidence vote before proceeding.

  4. Give each presenter 3 minutes to describe their epic, with 2 minutes for questions.

  5. Modify the scoring technique to reach consensus quickly and avoid ties. Give each table green, red, and blue sticky dots. For each selection criterion, each table briefly discusses the presented epic (2 minutes) and decides whether it is better (green dot), worse (red dot), or the same (blue dot) as the baseline. Attach the dot in the appropriate cell of the ranking table.

  6. Score +1 for each green dot, -1 for each red dot, and 0 for each blue dot.
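The dot tally can be sketched the same way. One useful property (discussed below) is that a missing dot contributes nothing, which is exactly the same as a blue dot. The table names and criteria here are hypothetical:

```python
# Dot-vote tally for one epic: green = better, red = worse, blue = same.
DOT_SCORE = {"green": 1, "red": -1, "blue": 0}

def epic_score(dots):
    """dots: list of (table, criterion, color) tuples for one epic.
    An abstaining table simply has no entry, which scores the same as blue."""
    return sum(DOT_SCORE[color] for _, _, color in dots)

dots = [
    ("table-1", "cost savings", "green"),
    ("table-2", "cost savings", "green"),
    ("table-1", "customer retention", "red"),
    # table-2 abstained on customer retention: no entry, contributes 0
]
print(epic_score(dots))  # 1
```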

Conclusion

Did it work? Yes, it did. We were able to zero in on the epics to be prioritized, and the group drafted a lean canvas for each in the afternoon of the second day. Naturally, we had some challenges along the way.

  1. We were slow to start. Everybody was unfamiliar with the flow, but after the first few epics we established a good rhythm. If you are facilitating this exercise, watch out for presenters who say "you know me, I'll keep this brief". They never do. Use a time-box and stick to it.

  2. The target number of epics for the workshop was 22. By the start of the first morning the list had grown to 40, and it grew to over 60 as attendees realized key initiatives were missing. Although we made good progress, it was apparent to everyone in the room that we had too many epics, so before the second day we culled some of the lower-value or future epics and refactored others.

  3. The great thing about the voting approach is that the scores are not skewed by the number of votes cast: if we lose a table, or a table abstains, we simply assume a blue dot (a score of 0).

  4. The colored dots made for a very good information radiator; stepping back, it was easy to see from the clustering of green or red dots which epics were the most and least important.



© 2020 FiveWhyz LLC
