Agile Journey Index

Here we have more detail on the Agile Journey Index, which is a companion to our assessment offering.

The Agile Journey Index© (AJI) is an assessment tool designed to provide a quick snapshot of agility, assessing the level of competency of an organisation or team in Agile software development.

The AJI looks at 19 practices divided into three categories: Plan, Do, and Check. They have been summarised here for the reader’s convenience (from the Agile Journey Index Handbook).

Category 1 – Plan

The first category is planning. It collects all the practices related to envisioning the product and planning its implementation.

  1. User Stories

Most agile teams use User Stories to express their requirements. This is in contrast to teams that use Use Cases, which tend to be longer, require more specification of the user interface, and spell out alternate paths. User Stories have the advantage of brevity up front, capturing the detail closer to the time of implementation, ideally in the form of executable acceptance tests. Delaying the detail produces better results because more is known of the system and the changing market by the time the stories are elaborated.

  2. Product Backlog

The product backlog is key in Scrum and other Agile Frameworks. It is a prioritized list of features. This usually includes new work, but can also include defects reported by customers or meta stories such as catching up on neglected test automation. The team pulls work from this queue when they are ready for more. Adjusting the priority of the list can determine the course of the project.
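
This pull model can be sketched as a priority queue. Here is a minimal illustration in Python (the items and ranks are made up for the example, and are not part of the AJI itself):

```python
import heapq

class ProductBacklog:
    """A prioritized list of work items; lower rank means higher priority."""

    def __init__(self):
        self._items = []  # heap of (rank, title) pairs

    def add(self, rank, title):
        heapq.heappush(self._items, (rank, title))

    def pull_next(self):
        """The team pulls the top item from this queue when ready for more."""
        rank, title = heapq.heappop(self._items)
        return title

backlog = ProductBacklog()
backlog.add(2, "Customer-reported defect: login timeout")
backlog.add(1, "New feature: export to CSV")
backlog.add(3, "Meta story: catch up on neglected test automation")
print(backlog.pull_next())  # -> "New feature: export to CSV"
```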

  3. Estimation

Some teams do not have formal methods of estimation. Some use a model called COCOMO II, but that requires lines of code as input, which can be equally difficult to estimate. Most agile teams use “Planning Poker”, which is derived from Wideband Delphi. Each team member privately gauges how much effort and complexity each story requires, and everyone reveals their estimate at the same time.

The person with the highest estimate justifies why it is so high; the person with the lowest states why it is so low. After two minutes of discussion, the process may repeat a few times. The value of this technique lies not just in the numbers but in the resulting discussion.

Sometimes it becomes clear that there is not enough data to estimate a story. That is good to find out early, and the team may vote with a ‘?’ card to signal it. The “Sequence of Values” used for Planning Poker in Mike Cohn’s work is based on a scale that occurs in nature (a modified Fibonacci sequence). The purpose of using that scale is to allow finer granularity at the common small numbers of 1-13, and to avoid arguing small differences between larger numbers.

Planning Poker is designed to offer a way to estimate a large pile of requirements quickly in order to sketch a release plan. More detailed estimates are made at the beginning of an iteration, when stories are broken down into internal tasks.
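
A minimal sketch of one voting round, assuming the modified Fibonacci scale (the names and values are illustrative):

```python
# The "Sequence of Values" (a modified Fibonacci scale); '?' signals
# that there is not enough data to estimate the story yet.
SCALE = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def reveal(estimates):
    """Summarise one simultaneous reveal of Planning Poker cards.

    `estimates` maps each team member to a value from SCALE, or '?'.
    """
    if "?" in estimates.values():
        return "Not enough data to estimate -- good to find out early."
    low = min(estimates, key=estimates.get)
    high = max(estimates, key=estimates.get)
    if estimates[low] == estimates[high]:
        return f"Consensus: {estimates[high]} points."
    return (f"{high} explains why {estimates[high]} is high, "
            f"{low} explains why {estimates[low]} is low; discuss and re-vote.")

print(reveal({"Ana": 3, "Ben": 8, "Caro": 5}))
```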

  4. Release Plan

Agile features multiple levels of planning, including release planning, iteration planning, and daily goals. Agile also admits that estimating software is an imprecise art, so it is necessary to start, gain empirical experience with the team and domain, and adjust based on that experience. Agile still calls for a rough plan at the beginning; it simply does not assume the unknowable can be understood. This does not mean we fail to plan Agile projects. It is all the more vital to plan them. But the plan focuses on what we know, discounts what we don’t, and takes a slightly different shape.

The iron triangle of project management says one of the following three factors must be variable: project content, number of people, and end date. We do not usually have the luxury of adding people and educating them in time to contribute to this release. The end date is often fixed to meet a trade show or other market need. And if the date moves, more content pressure pours into the release, which can push the date out further. So while projects with fixed scope can do Agile by varying the end date, it is preferable to vary the scope and guarantee the end date.

Note the invisible fourth leg of the Iron Triangle is Quality. There is no compromise on Quality on agile projects. Deferring work that builds in quality or repairs trouble spots costs time later, when we are no longer in proactive control. Any rush to omit steps that bake in quality will make the project take longer and cost more, when we measure not the rush to code check-in, but the time until the customer can productively use our solution.

That leaves scope as our variable. The release plan reflects this by showing one set of features as ‘must do’ elements (based on our conservative velocity, or rate of work). The next set are our ‘stretch goals’. The team will probably do them all, but if someone is out sick for two weeks with the swine flu, or our customer’s industry regulations change so as to introduce new requirements, the new items will push the lowest-priority ‘stretch goals’ out of the release. As Paul Gibson said at IBM: “Shame on development if we don’t ship our must do requirements. Shame on marketing if they promise our stretch goals.”
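
A minimal sketch of drawing the must-do/stretch line from a conservative velocity (the stories, points, and velocity are illustrative, not from the handbook):

```python
def release_plan(stories, velocity, iterations):
    """Draw the 'must do' line from a conservative velocity.

    `stories` is a list of (title, points) in priority order; `velocity`
    is points per iteration. Everything past the cut becomes a stretch goal.
    """
    budget = velocity * iterations
    must_do, stretch, spent = [], [], 0
    for title, points in stories:
        spent += points
        (must_do if spent <= budget else stretch).append(title)
    return must_do, stretch

stories = [("Login", 5), ("Export", 8), ("Search", 13), ("Theming", 5)]
must, maybe = release_plan(stories, velocity=10, iterations=2)
print("Must do:", must)        # ['Login', 'Export']
print("Stretch goals:", maybe) # ['Search', 'Theming']
```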

  5. Iteration Plan

Release planning just sketches out what we think will happen, and it is done at the time when we know the least. We may not have experience with the team, technology, or feature set. A lot may change in our customer’s world between the time the release plan is written and the time we begin to implement a specific requirement. Iteration planning corrects for this: the team re-plans at the start of each iteration, breaking the next stories into tasks with the latest knowledge in hand.

  6. Big Picture

It’s important to have a big picture in mind as we develop our project. This is especially true because we will shift our focus as we get feedback from our iterations. Leveraging change is good, but without a guiding framework we may get far off track. Big picture documents address this risk.

There are four kinds: Vision, Roadmap, Personae, and Story Maps.

  7. Governance (not done in our review)

We like the purity of similar iterations that Scrum offers. But for large projects it can help to prove the architecture through working code. We want iterations to be similar, but it seems there is room for different ‘flavors’ of iteration. The concept is adapted from IBM’s Rational Unified Process. The ‘flavors’ are Inception, Elaboration, Construction, Transition, and Production (which Scott Ambler added). We don’t include Construction in our governance metric because we assume it is handled well by most teams.

Companies that must account for time to claim capitalization expenses can adjust this section as needed to reflect the governance requirements for their industry.

Category 2 – Do

The next part is the hard part: technical implementation of your project. What practices build in quality and highlight the need for mid-course correction?

  8. Stand-up Meeting

Successful Agile teams exhibit some winning behaviors in their stand-up meetings. The ABCs are Assist, Blockers, Cheer, Delta, Exit (‘takE it offline’), and Finish:

A – Assist: Do people offer or request help in the meeting (egoless programming)?

B – Blocker: Are impediments called out?

C – Cheer: Do people cheer when a team mate moves a task or story to the done column? This helps build self-directed teams.

D – Delta: Does the team look at the burn-down chart or numbers to see if they are falling behind their desired pace? Do they take action if required?

E – Exit: Do you hear ‘takE it offline’ when conversations take more than a minute?

F – Finish: Do people describe what they finished and will finish, as opposed to giving an ‘80% done’ status or other rambling time fillers?

  9. Task Board

One of the most useful tools for teams to monitor their progress is the task board. In its simplest form, it is just a set of notes pasted to the wall, or the equivalent electronic tool. Such tools are useful beyond software projects. But in its most advanced application it allows you to generate the “flow” sought by agile teams.

It is the heart of the Kanban method, and commonly used to manage the sprint in Scrum. It is also used to manage portfolio-level features and epics in Dean Leffingwell’s Scaled Agile Framework™.
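
One mechanism by which a board generates “flow” in the Kanban method is work-in-progress (WIP) limits. A minimal sketch, with illustrative columns and limits:

```python
class TaskBoard:
    """Columns of notes with work-in-progress (WIP) limits, Kanban style."""

    def __init__(self, wip_limits):
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def move(self, task, src, dst):
        """Pull a task into `dst`, refusing moves that would break its WIP limit."""
        if len(self.columns[dst]) >= self.wip_limits[dst]:
            raise RuntimeError(f"WIP limit hit in '{dst}': finish work before starting more")
        self.columns[src].remove(task)
        self.columns[dst].append(task)

board = TaskBoard({"To Do": 99, "Doing": 2, "Done": 99})
board.columns["To Do"] = ["Login", "Export", "Search"]
board.move("Login", "To Do", "Doing")
board.move("Export", "To Do", "Doing")
# board.move("Search", "To Do", "Doing")  # would raise: the limit forces flow
```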

  10. Burndown

The ideal line on a burn-down chart may not reflect the typical pattern of work. Its value is in its simplicity, and it is a reasonable approximation. However, measuring conformance to the line is *not* the goal; having zero hours left at the end of the iteration is. After conducting a statistical review of teams that strayed too far from their ideal burn-down slope and failed their sprints, and those that strayed a bit and yet succeeded, Bill found that wiggling plus or minus 15% away from the line is acceptable, and in fact expected. (There is no wiggle room on the last day, when your hours left should be zero!) The variance of successful teams indicated that one standard deviation was 22% away from the line. We tighten that to establish a guideline of 15%. If a team varies more than that, successful teams do a quick mid-sprint retrospective.
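
A minimal sketch of that check, assuming the band is measured as 15% of total scope (one reasonable reading of the rule; the sprint data is made up):

```python
def burndown_check(hours_left, total_hours, total_days, band=0.15):
    """Flag days where the burn-down strays more than `band` from the ideal line.

    `hours_left[d-1]` is the hours remaining at the end of day d. The band
    is taken here as 15% of total scope.
    """
    warnings = []
    for day, actual in enumerate(hours_left, start=1):
        ideal = total_hours * (1 - day / total_days)  # straight line to zero
        if abs(actual - ideal) > band * total_hours:
            warnings.append(f"Day {day}: {actual}h left vs ideal {ideal:.0f}h")
    if hours_left and hours_left[-1] != 0:
        warnings.append("Last day: hours left must be zero -- no wiggle room!")
    return warnings

# One day strays beyond the band; time for a quick mid-sprint retrospective.
print(burndown_check([90, 82, 70, 40, 52, 38, 30, 18, 10, 0],
                     total_hours=100, total_days=10))
```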

  11. Code Review

Research has shown that formal code inspections can cut a team’s defect injection rate by 50%. They are not free: they take about 15% extra labor. Oddly, the cost benefits of Pair Programming are similar. In pairing, one person thinks big picture while the other does the work and thinks about details, and they switch every 15 minutes. It takes two people, but the work goes almost twice as fast, and since defects are prevented early the investment pays off. Pairing helps immediately on some challenging tasks, but the full benefits are seen over the course of a couple of months: as people rotate partners they learn about the tools and the project. Note that it is not just for computer programming.

Formal inspections differ in that preparation is needed, then the review, then the fixes. This cycle can take a while and be hard to fit into a short iteration. Automated tools can also be used in this space to check for subtle errors, but they miss the gradual dissemination of team learning and the human look at the big picture. A combination of methods can be employed.

  12. Unit Test

Unit Testing engages the mind of the developer to create automated test harnesses that prevent future bugs from creeping in. It is valuable because it balances the pace of new (potentially buggy) features with a visible safety net of quality checks.
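
A minimal sketch of such a safety net, using Python’s unittest with a hypothetical apply_discount function (the function and tests are illustrative only):

```python
import unittest

def apply_discount(price, percent):
    """A small unit under test (hypothetical example function)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percent_rejected(self):
        # The safety net: a future change that drops this check fails the suite.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```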

  13. QA Automation

Test automation not only saves time, but exposes a different class of bugs than manual testing can find.

  14. Quality Engineering

There is a field of ‘Quality Assurance’ which concerns itself more with process and numerical analysis of efforts than with test execution, but we will use the QA term nonetheless. Intelligent manual testing is still important, even when automated testing exists.

  15. Continuous Integration

Are the product’s builds performed automatically, without manual steps? Is the product built at least once per day?
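
A minimal sketch of an automated build step a CI server could run on every commit. The commands are assumptions (a make-based build and a pytest suite), not AJI requirements:

```python
import subprocess
import sys

# Each step must pass for the build to stay green; the commands are
# assumed for illustration and would vary per project.
STEPS = [
    ["make", "build"],
    ["pytest", "-q"],
]

def build_and_test():
    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            print(f"CI step failed: {' '.join(step)}")
            return 1
    print("Build green: safe to integrate.")
    return 0

if __name__ == "__main__":
    sys.exit(build_and_test())
```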

  16. Done

Getting to ‘done’ is one of the factors that distinguish hyper-productive Scrum teams from mediocre teams. The team needs to define this themselves; the exercise itself produces part of the benefit. But a good example of completeness is:

  • All unit tests run and successful
  • Code has been refactored (cleaned up to reduce the chance of future bugs)
  • All code inspected or paired
  • Code has been built
  • Tests have been run

Code that is not projected to meet those criteria before the end of the sprint should not be checked in. Half-tested code sprinkled with ‘to do’ comments should not be in the code base because it may cause problems as more code is written on top. It’s too hard to pull out later.

Allow the team to define the completeness criteria. They should cover all types of work, so no extra work need be done later.

Category 3 – Check

Feedback is the key mechanism of Agile. This is true both of product feedback, and feedback on how we do our work.

There are also four lenses used to award badges to teams that have done a superior job in adopting practices and adhering to the Agile mindset and values. These are useful in the context of continuous improvement, as follow-up interviews acknowledge teams adopting more agile practices.

  17. Demo

Are demonstrations held at the end of the iterations?

Are the right mixes of stakeholders included in the feedback sessions?

Does the team collect comments on the product from the stakeholders based on the demonstration?

Is the feedback used to improve the product in future iterations, according to priority?

  18. Retrospective

One of the most powerful elements in Agile is the frequent ‘lessons learned’ session. Traditional projects used to save this for the end of the release. That is too late to be valuable: the defects caused by faulty process have already been injected, people may not remember what happened early in the process, and they may not care because they may be moving to a new project at the end of the release.

Iterative projects address this by holding more frequent lessons learned, or ‘retrospectives’ at the end of each iteration. Don’t make a big list of items. Focus on two actions. The rest will still be there in two weeks (or may solve themselves).

If there are too many changes all at once, people will forget to make them as they work on their code. Focus and Finish a Few:

  • Reflect every sprint (few weeks)
  • Take action to improve the process
  • Measure results where possible
  • Share ideas between teams

  19. Kaizen

Do team members take time to learn and keep up with their profession?

Do people collect fresh ideas from other teams and around the industry?

Does the team share what they have learned with other teams in the organization and community?

The Agile Journey Index Process

  • Interviews are scheduled and held with team members
  • Notes are made during the assessment
  • Each question consists of components that score progressively higher as they are answered. Lower-level criteria need to be met before higher criteria can be achieved; if a component has not been implemented, scoring ceases. I.e. you can’t get a 7 if you have not fulfilled the criteria for a 5 (see the sketch after this list).
  • The results of the assessment are summarised
  • Feedback and recommendations are provided
  • A follow-up assessment is performed 6 months later (if required)
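
A minimal sketch of that gated scoring rule, simplified to one criterion per level (an assumption for illustration; the handbook’s real criteria are richer):

```python
def gated_score(criteria_met):
    """Score one practice on the 1-10 scale with gated criteria.

    `criteria_met` is ordered from the lowest-level criterion (which unlocks
    a 2) to the highest (which unlocks a 10). Scoring ceases at the first
    unmet criterion, so a 7 is impossible without the criteria for a 5.
    """
    score = 1  # '1' means the practice is not present
    for met, level in zip(criteria_met, range(2, 11)):
        if not met:
            break
        score = level
    return score

# Criteria 1-4 met, criterion 5 not: the score stops at 5 even though a
# later criterion happens to be met.
print(gated_score([True, True, True, True, False, True]))  # -> 5
```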

In practice this instrument is known to score teams well on their first interview, while subsequent interviews often score similar or lower, as teams find out what each item truly means while seeking to improve. This should not be seen negatively, but rather as motivation for continual improvement.

In the scale:

  • ‘1’ means the practice is not present
  • ‘10’ means the practice fulfils all criteria and has been reviewed by a peer
  • Each numerical rating has criteria associated with it; to get a perfect 10, you need to have fulfilled all of them

The coach rates a team’s use of each practice from 1 to 10. Scoring occurs in odd numbers; even numbers are used to show some progress between two levels, or a trend of decline.

Badges are awarded for achieved levels of agility.

It is important to note that the premise of the results and subsequent recommendations is to keep communication open and two-way to ensure that both good and poor results are dealt with in a collaborative atmosphere of continuous improvement.