This is Part 2 of a 4-part series. Part 1, Why Software Requirements In The Real World Are Hard, discusses the challenges of developing requirements and what good ones might look like. This post looks at the requirements development process and its outputs on a real-world project.

TL;DR

The familiar dichotomy between agile approaches that prioritise shipping software to elicit real requirements and waterfall approaches that prioritise upfront requirements engineering is overly simplistic. In between these poles, there’s an intermediate approach for developing requirements that is easy enough to implement and better at delivering value to users and stakeholders. Features and benefits of this approach include:

Vision Coach

Vision Coach is the real-world project I’ll use as a way into this topic. It’s a platform my team built in partnership with Bayer Healthcare for patients living with, and doctors treating, an eye disease called diabetic macular edema (DME/DMO). DME affects c.21 million people with diabetes globally and is the leading cause of blindness in adults of working age. 
Bayer Healthcare provides a therapy that is one of a class of therapies that eye doctors use to improve the vision of people with retinal diseases like DME. Although DME is a sight-threatening condition, patient adherence to therapy is poor, meaning vision outcomes are often suboptimal. Addressing this problem formed the focus of the project.
For ease, I've stuck with traditional terms throughout - e.g. “requirements, elicitation, specification”. Though not perfect (isn't calling hypotheses "requirements" weird?), they have the advantage of being familiar.

Requirements approach

Debates about how to do requirements often centre on two antithetical approaches that I’ll call Analysis paralysis and Iteration worship. Analysis paralysis says you must elicit and specify requirements upfront before any coding can start, they must have a perfect set of attributes (consistency, lack of ambiguity, completeness etc), and if this takes weeks or even months of effort, so be it.
Iteration worship says the opposite - the best way to elicit requirements is to build something and test it out with users. Users don’t know what they want, or at least can’t always articulate it, and it’s not until they’re presented with working software that their true requirements emerge. Upfront specification is therefore a waste of time.
Very broadly, this describes waterfall and agile approaches to requirements development. The two are opposites, and it’s usually assumed you’re on one side or the other. So which side are you on?
Well, obviously you’re not on the side of Analysis paralysis. Spending lots of time eliciting requirements from stakeholders, making them consistent, complete, testable (and all the rest) before you start coding is futile in the face of uncertainty and change, and all it does successfully is raise the cost of failure and learning.
Which is no good if you need to fail and learn a lot, like most teams. Oh, and the fact it doesn’t work is well evidenced - the Standish Group’s Chaos survey is one source frequently wheeled out as proof. 
So that means you’re on the Iteration worship side, right? Well no, at least not as it’s been characterised (or caricatured?) here. This approach has problems too. First, it’s simply not true that you can’t say anything valuable about requirements without first shipping software to users - rapid prototyping using wireframing tools is one technique capable of eliciting useful evidence for requirements before coding starts.
Second, iterations aren’t actually that cheap - sure, they’re cheaper than delivering software waterfall-style, but they’re still expensive vs. techniques like rapid prototyping.
Third, if you genuinely spend no time defining your requirements, what you build is likely to be further away from your target, necessitating more iterations to get there.
Our approach fell somewhere in between the two - some specification up front combined with shipping working software early to elicit further requirements from users in higher fidelity experiments.

Process and hierarchy

In Part 1, I identified some key properties of a requirements development process and its outputs - e.g. it needs to be collaborative, iterative, and its outputs need to be tailored to different audiences. Going beyond this, it’s helpful to define a process and identify techniques for optimising outputs.
Figure 1 shows the process we followed. It consisted of four activities: elicitation, analysis, specification and validation.
Figure 1. Requirements development process (adapted from Wiegers & Beatty, Software Requirements, 3rd Edition). 
The process was iterative and involved moving back and forth between different activities, often in the same session, meaning faster feedback loops and better outputs. It also involved a review and approval decision point for client stakeholders, which was required before any coding could start. Beyond this, it was integrated into the broader Scrum process we used to deliver the project, which consisted of 2-week sprints, daily standups, client showcases and retrospectives at the end of the sprint, and planning at the start of the next.
Additionally, we defined a hierarchy that consisted of the levels shown in Figure 2.
Why define a hierarchy? Different people need different information captured at different levels of abstraction. On Vision Coach, client stakeholders spent time reviewing the vision, scope, user stories and high-level features, but weren’t interested in technical designs or tasks.
A delivery team also needs context for decisions, which a sensible hierarchy can provide.
Figure 2. The requirements hierarchy. 
Defining the hierarchy was the easy part. Populating it was more time-consuming, but given we weren’t in the analysis paralysis game and had a small team, we populated only as much as we needed upfront to get going. Initially this meant more work at the vision & scope levels to get the project greenlit, and then at the lower levels.
Importantly, we didn’t always populate the hierarchy top-down, a good example being a scope change, where we might document only a user story and tasks if it fitted with existing features and non-functional requirements and wasn’t sufficiently contentious or complex to call for technical designs.
Which is to say, we used the hierarchy more as a guide than an enforceable schema - it helped us structure requirements when we needed to produce them at the appropriate level(s) of abstraction.
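To make this concrete, here is a minimal sketch of how artefacts at the different levels related to one another and to their audiences. The level names follow Figure 2, but the structure and field names are illustrative assumptions, not a schema we formally defined:

```typescript
// Illustrative only: level names follow Figure 2; the fields are assumptions
// about how artefacts linked together, not a formal schema we maintained.
type Level =
  | "vision & scope"      // Level 1
  | "user story"          // Level 2
  | "feature / NFR"       // Level 3
  | "technical design"    // Level 4
  | "task";               // Level 5

interface RequirementArtefact {
  level: Level;
  title: string;
  parent?: RequirementArtefact; // e.g. a task points at its story or design
  audience: Array<"client stakeholders" | "delivery team">; // who reviews it
}

// A scope change might introduce only a story and its tasks, inheriting
// context from existing features rather than repeating it.
const story: RequirementArtefact = {
  level: "user story",
  title: "Verify phone number via SMS OTP",
  audience: ["client stakeholders", "delivery team"],
};
```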

Elicitation

Healthcare is complicated. There are lots of stakeholders, usually related in complex ways. These include patients, doctors, clinics, hospitals, payers, regulators… the list is extensive. Direct engagement with all of them is impractical, so you create representative proxies, which was our approach here.
Requirements and constraints (conditions placed on requirements) came from a large number of stakeholder groups. Here are the main ones (there were others!):
Users
Client functions
Client suppliers
Us
For the client, we created a core team at the global level to represent key client functions across the business. We elicited requirements from this group, and when we needed to speak with other stakeholders (e.g. specialists in fields like medical device regulation or data privacy), the core team facilitated the introduction.
For patients and doctors, elicitation was more complicated, as pharma companies (and their suppliers) are bound by strict regulations and internal processes for communicating with them, meaning user testing isn’t easy, quick or cheap.
Luckily, in the first instance, we had access to clinical expertise internally, and were able to rely on extensive market research and user testing with patients of a previous similar(ish) prototype. On an ongoing basis, we elicited requirements using a mixture of observation, interviews, workshops, testing with prototypes (built to differing levels of fidelity) and ad-hoc follow up.

Analysis, specification & validation

Elicitation outputs were captured by the PO and UX lead, usually as unstructured notes in the first instance, and played back to client stakeholders for review and approval. The PO then collated these notes into user stories (Level 2 in our hierarchy), each documented in a Jira ticket using an agreed template that included:
Discussion with the internal delivery team started in backlog refinement sessions, the goal of which was to refine stories, nail down acceptance criteria, and augment them with features, non-functional requirements, technical designs and tasks (Levels 3-5).
Discussion was finished off in planning, where we compiled a sprint backlog of requirements that met our Definition of Ready. Disagreements about technical designs, often due to complexity, were the cue for further design work, which we did in design sessions during sprints.
In all these sessions the PO, with support from appropriate domain experts, represented users and client stakeholders to the developers, helping to answer their questions and guide their decisions.
This is how we specified artifacts at Levels 3-5 in our hierarchy:

Example requirements

Let’s look at a thin vertical slice of our hierarchy, taken from onboarding for the patient mobile app, to see how the outputs turned out. It includes a mix of content that applies to the platform globally as well as content specific to patient app onboarding.
Vision and scope (Level 1)
We used this neat template (originally from Geoffrey Moore's Crossing the Chasm) to capture the vision and scope succinctly:
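Its general shape - paraphrased from Moore’s wording rather than reproducing our actual Vision Coach statement - runs roughly like this:

```typescript
// A sketch of Moore's elevator-pitch template from Crossing the Chasm.
// The slot names are paraphrased; the values you'd fill in are placeholders,
// not the actual Vision Coach vision and scope content.
const visionStatement = (v: {
  targetCustomer: string;
  need: string;
  productName: string;
  productCategory: string;
  keyBenefit: string;
  primaryAlternative: string;
  differentiation: string;
}) =>
  `For ${v.targetCustomer} who ${v.need}, ${v.productName} is a ${v.productCategory} ` +
  `that ${v.keyBenefit}. Unlike ${v.primaryAlternative}, our product ${v.differentiation}.`;
```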
User stories (Level 2)
The onboarding epic collected together all the user stories for onboarding, which consisted of two separate flows - sign up and login. Figure 3 shows a screenshot of a story common to both flows - SMS-based one-time password (OTP) verification. It uses the template described above, and has acceptance criteria covering both the primary and alternate goals.
Figure 3. Example user story for account verification during onboarding
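For flavour, here is a sketch of how a story like this might be structured using that template, with acceptance criteria split across primary and alternate goals. The wording is illustrative, not the actual content of the ticket in Figure 3:

```typescript
// Illustrative only -- not the actual Vision Coach ticket content.
const otpVerificationStory = {
  asA: "patient signing up for the app",
  iWant: "to verify my phone number with a one-time SMS code",
  soThat: "my account is tied to a phone number I control",
  acceptanceCriteria: {
    primary: [
      "Given I have requested a code, when I enter it correctly before it expires, then my number is verified and I continue onboarding",
    ],
    alternate: [
      "Given I enter an incorrect code, then I see an error and can retry",
      "Given my code has expired, then I can request a new one",
    ],
  },
};
```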
Features and non-functional requirements (Level 3)
Features
The onboarding feature was captured in a Jira epic as lists for the separate sign up and login flows.
Sign up:
Login:
It was supplemented by the quasi activity diagram shown in Figure 4, and linked to related user stories.
Figure 4. Patient mobile app onboarding flow. 
Non-functional requirements (NFRs)
Technical designs (Level 4)
Figure 5. UML sequence diagram showing the end-to-end authentication flow
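To give a flavour of what a Level 4 design like Figure 5 pins down, here is a minimal back-end sketch of the OTP request and verification steps. It is illustrative only: the SmsClient interface, expiry window and attempt limit are assumptions, not the Vision Coach implementation.

```typescript
import { randomInt } from "crypto";

// Hypothetical SMS provider client -- stands in for whatever gateway is used.
interface SmsClient {
  send(phoneNumber: string, message: string): Promise<void>;
}

interface PendingOtp {
  code: string;
  expiresAt: number;
  attempts: number;
}

const OTP_TTL_MS = 5 * 60 * 1000; // codes expire after 5 minutes (assumed)
const MAX_ATTEMPTS = 3;           // lockout threshold (assumed)

// Keyed by phone number; a real service would use a shared store, not memory.
const pending = new Map<string, PendingOtp>();

// Step 1: generate a one-time code and send it to the patient's phone.
export async function requestOtp(sms: SmsClient, phoneNumber: string): Promise<void> {
  const code = randomInt(0, 1_000_000).toString().padStart(6, "0");
  pending.set(phoneNumber, { code, expiresAt: Date.now() + OTP_TTL_MS, attempts: 0 });
  await sms.send(phoneNumber, `Your verification code is ${code}`);
}

// Step 2: check the submitted code, covering the alternate paths
// (expired code, wrong code, too many attempts) as well as the happy path.
export function verifyOtp(
  phoneNumber: string,
  submitted: string
): "verified" | "expired" | "invalid" | "locked" {
  const entry = pending.get(phoneNumber);
  if (!entry || Date.now() > entry.expiresAt) return "expired";
  if (entry.attempts >= MAX_ATTEMPTS) return "locked";
  entry.attempts += 1;
  if (entry.code !== submitted) return "invalid";
  pending.delete(phoneNumber);
  return "verified";
}
```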
Figure 6. Vision Coach regional deployment
Tasks (Level 5)
Implementation work was documented in sub-tasks attached to the parent ticket in Jira. Generally we took an incremental approach to implementation, starting with a minimum viable product (MVP), and layering up functionality from there. Staying with the phone number verification story, front- and back-end (FE & BE) tasks included:
Increments beyond MVP included the following tasks (FE & BE):

Challenges and solutions 

We encountered a number of challenges developing and managing requirements using the approach sketched out here. These were the main ones and some of our solutions:

Summing up

Though much of what you read online about requirements development suggests approaches are polarised between Analysis paralysis and Iteration worship – and most people are aligned to the latter – there are intermediate approaches that can yield real benefits quickly. 
For my money, the most material benefits of an intermediate approach like the one described here are higher velocity, lower cash burn and better team morale, all stemming from improved decision making due to better context. From personal experience, I have watched our team’s velocity improve by ~40% as a result of spending time thinking about and defining requirements properly.
This "objective" measure comes with some caveats - it it is calculated crudely by measuring the difference between average story points completed per sprint for a defined period pre and post introduction of better requirements development, and it fails to control for other confounding variables (e.g. personnel changes).
But directionally it’s interesting, and the fact it was also accompanied by a dramatic improvement in subjective measures - team morale and satisfaction with progress made during sprints - provides some additional evidence.
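For illustration, the calculation is no more sophisticated than the sketch below; the sprint numbers are made up, not our actual data.

```typescript
// Crude velocity comparison: average story points completed per sprint
// before vs. after the change, expressed as a percentage improvement.
// The numbers below are hypothetical.
const pointsBefore = [21, 18, 24, 20]; // completed points per sprint, pre-change
const pointsAfter = [29, 31, 27, 30];  // completed points per sprint, post-change

const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
const improvement = (avg(pointsAfter) - avg(pointsBefore)) / avg(pointsBefore);

console.log(`Velocity change: ${(improvement * 100).toFixed(0)}%`); // ~41% with these numbers
```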
In summary, the ROI on that initial investment of time and effort is material and the payback period can be as short as a single sprint, so it is definitely worth doing.
What would I do differently next time? Short of coming up with a magic way of making compliance functions totally agile and fully integrated with the development cycle, the top things I would do are:

What’s next

Part 3 focuses on tools for managing requirements. It includes an analysis and evaluation of tools we’ve used in the past, and that other modern software teams tend to know about and consider when deciding how to manage their requirements.

Bibliography

A special thanks to Karl Wiegers for his helpful review comments!