3. All User Stories are Readily Testable: The user story serves as a centrepiece for iterative development. It has the format "As a user persona, I want to do something so that I can achieve some testable reward". That last clause about the testable reward? That's super important and a cornerstone of agile analytics. For every user story, it should be clear how you would prototype that story, put it in front of a test subject, prompt them with a goal, and see whether they achieve that goal or not.

4. Key User Stories get Tested: And do key user stories actually get tested? I do a lot of work with teams, and we spend time writing more testable user stories. I've never met anyone who thought writing better user stories was a bad idea. But it's the teams that make a habit of testing early and often, with interactive prototypes for instance, that actually stick with the practice of making their stories testable. Beyond the obvious benefit of using evidence to find the right design early, prototype testing also creates a more focused and coherent transition to analytics in the software once it's released.

5. Experiments are Instrumented into Code: Instrumenting analytics into code is easy and affordable, and most companies do it. That said, it's the team carrying strong, user-centric hypotheses through its product pipeline that will pick the right focal points for its experiments (see the sketch following these points). For example, one project our Master of Science in Business Analytics students are working on is the US FDA's 'MedWatch' site, where users submit information about adverse reactions to drugs. Let's say we're trying to make it easier for a busy doctor to submit these reactions in order to increase the data we collect. What should we A/B test? There are a lot of 'interesting' possibilities, but without validated learning on what that doctor is thinking and wanting when they visit the site, we're unlikely to invest in A/B tests that really move the needle on performance.

6. Analytics are Part of Retrospectives: Successful teams don't demo their software; they interpret experiments. Working in short 1-2 week sprints is a common feature of agile: at the end of each sprint, teams talk about how things went, and why and how they want to modify their practice of agile. Thankfully, this is common practice. What's less common is for teams to make a habit of reviewing their experiments during those retrospectives. Ultimately, we're creatures of habit, and so a team that's not explicitly creating time to review its experiments is probably not going to get to agile analytics.

7. Decisions are Implied by Analytics: Are decisions implied by the team's analytics, or is the plan just to 'review the numbers'? The team that's practicing agile analytics already knows the implication of its observations, because those observations are tied to experiments and the experiments are tied to decisions. For example, are you really ready to kill that feature if it sees low engagement? What if a user complains and says they absolutely have to have it? Agile analytics makes the job of deciding easy.
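To make point 5 concrete, here is a minimal sketch of what instrumenting an experiment might look like, in Python for illustration. The event names, the assign_variant helper, and the report_form_layout experiment are hypothetical, not MedWatch's actual implementation; a real team would route track() into whatever analytics SDK it already uses.

```python
import hashlib
import json
import time
import uuid

def track(event_name, properties):
    """Hypothetical logging call; in practice this would go to your analytics SDK."""
    print(json.dumps({"event": event_name, "ts": time.time(), **properties}))

def assign_variant(user_id, experiment):
    """Deterministically bucket a user into variant A or B for one experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The hypothesis decides what gets instrumented: here, that a shorter form
# (variant B) helps a busy doctor complete an adverse-reaction report.
experiment = "report_form_layout"
user_id = str(uuid.uuid4())
variant = assign_variant(user_id, experiment)

track("experiment_exposure",
      {"experiment": experiment, "variant": variant, "user_id": user_id})

# ...later, when (and only if) the doctor completes the submission...
track("report_submitted",
      {"experiment": experiment, "variant": variant, "user_id": user_id})
```

The plumbing is the easy part; the hypothesis is what tells you that these two events, exposure and completed submission, are the observations worth logging.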