Large, non-software companies introducing Agile to their organizations tend to suffer from a cognitive dissonance of sorts: we would like to have the same look and feel across the entire company, delivering stellar-quality products, yet we want to enable high-performing, self-organizing, self-managed, and self-empowered teams to deliver (or demo) at the end of each sprint. Resolving this cognitive dissonance — and flagging and addressing the impediments slowing teams down — will benefit every layer of the organization.
To some extent, pursuit of new technologies without a clear problem-to-be-solved is natural for basic research, such as developing new materials. For applied R&D, defining the need to be met is essential, yet teams often cannot begin with a clear understanding of the user’s problem. Integrating the end customer into the team would clearly be ideal, but this is often not possible in large industrial development work (for software and non-software teams alike). So what’s a team to do to span the gap between its work and the customer market?
In some cases, it’s tempting to strive for a “superior” solution: wanting the product to do everything that everyone could possibly want. But building the most complete offering possible is costly and slow to get to market, the opposite of the business agility companies need today. As an example, an innovative next-generation digital loop carrier startup set out to bring fiber to the curb 20 years ago, yet the MVP for the first market version of its distributed hardware-software system was defined to require support for coin (public, pay) phones. Not unexpectedly, developing and testing coin phone support for that first MVP took significant time and effort. The startup ran out of funding and filed for bankruptcy before its first market offering ever left the runway. By that time, coin phones were already fading away in the US; all the time and effort spent on building coin support was not merely waste, it literally cost the startup its business.
In other cases, it’s easy to fall in love with the latest technology and start looking for a way to use it without first thinking deeply about the real need and other ways it could be met. Think about artificial intelligence (AI) and machine learning (ML) for a moment. How often do we know the underlying user problem, the user value that we’re providing, and what decision the user is trying to make with the analytics, before we dive into applying the latest ML techniques?
In both cases, whether the desire is to create advanced, complete offerings or to use advanced, sexy technologies, the core problem is the lack of a laser focus on the real end-customer need we are trying to meet and on what end customers value most (including whether they would prefer a partial solution sooner over a more complete solution later).
Try This
Thread a proposed applied research project back to user needs and value before pursuing a technical solution. In applied research, we might begin with a hypothesis derived from a question or, better yet, a series of questions (such as the “5 Whys”). A simple checklist item at project initiation can help: “What end-user need will this research address?” Modifying existing checklists, or “tailoring,” is a great way to turn a business decision process such as stage-gate modeling into a support for agility rather than an impediment.
If your team is not sure what a customer needs, versus what the customer merely wants or what would be nice to have, techniques like Kano modeling and low-fidelity prototyping can help. With analytics, if people ask for KPIs and dashboards, drill down to the decisions they are trying to make from those KPIs and dashboards, and explore how they make those decisions today. In one practical application, a decision-oriented approach combined with Kano modeling was successfully used to kick off a major asset health initiative, involving an enterprise-wide group of cross-functional roles from the end customer’s organization.
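For teams new to Kano modeling, the classification step is mechanical enough to sketch in a few lines of Python. The sketch below encodes the standard Kano evaluation table, which maps each respondent’s paired answers (how they would feel with a feature, and how they would feel without it) to a category; the feature name and survey responses shown are hypothetical:

```python
from collections import Counter

# Five-point Kano answer scale, used for both the functional question
# ("How would you feel if the product HAD this feature?") and the
# dysfunctional question ("How would you feel if it did NOT?").
ANSWERS = ("like", "expect", "neutral", "live_with", "dislike")

# Standard Kano evaluation table: rows = functional answer, columns =
# dysfunctional answer. Categories: A = attractive, O = one-dimensional
# (performance), M = must-be, I = indifferent, R = reverse,
# Q = questionable (contradictory answers).
KANO_TABLE = {
    "like":      {"like": "Q", "expect": "A", "neutral": "A", "live_with": "A", "dislike": "O"},
    "expect":    {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "live_with": {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "expect": "R", "neutral": "R", "live_with": "R", "dislike": "Q"},
}

def classify(functional, dysfunctional):
    """Map one respondent's paired answers to a Kano category."""
    assert functional in ANSWERS and dysfunctional in ANSWERS
    return KANO_TABLE[functional][dysfunctional]

def categorize_feature(responses):
    """Assign the most frequent category across all respondents."""
    counts = Counter(classify(f, d) for f, d in responses)
    return counts.most_common(1)[0][0]

# Hypothetical survey responses for one candidate feature:
# (functional answer, dysfunctional answer) pairs.
coin_phone_support = [
    ("neutral", "neutral"),
    ("live_with", "neutral"),
    ("neutral", "live_with"),
    ("expect", "neutral"),
]
print(categorize_feature(coin_phone_support))  # -> "I" (indifferent)
```

Had the digital loop carrier startup run such a survey, coin phone support might well have landed in the “indifferent” bucket and been deferred past the first MVP.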
In industrial analytics, as in Extreme Programming and many other areas, once the customer need has been identified, it can be highly beneficial to first “do the simplest thing that could possibly work” in order to elicit valuable early feedback. For example, in asset performance analytics or failure prediction, simple statistics, visualizations, statistical process control (SPC) techniques, or classification algorithms may provide useful-enough insights to guide users to good decisions, without investing effort in deep learning on GPU clusters. Another consideration with analytics is that people generally want to understand how and why the analytics arrived at its recommendations before they trust the results. This need for “explainable AI” is a challenge with advanced AI techniques and an active area of research. But why get stuck on that barrier to user acceptance when simpler, more intuitive analytics techniques may meet the real need more quickly and easily, with less computational burden?
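To make “the simplest thing” concrete, here is a minimal sketch of an SPC-style check in plain Python, assuming hypothetical bearing-temperature readings: it flags readings that drift outside Shewhart-style limits derived from known-good baseline data, and every line of it can be explained to the end user:

```python
import statistics

def control_limits(baseline, sigmas=3.0):
    """Shewhart-style limits (mean +/- 3 sigma) from in-control baseline data."""
    mean = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    return mean - sigmas * sd, mean + sigmas * sd

def out_of_control(readings, lo, hi):
    """Return (index, value) for every reading outside the control limits."""
    return [(i, x) for i, x in enumerate(readings) if not lo <= x <= hi]

# Hypothetical bearing-temperature readings (degrees C) from a healthy asset.
baseline = [70.1, 69.8, 70.4, 70.0, 69.9, 70.2, 70.3, 69.7, 70.1, 70.0]
# Recent readings showing an upward drift.
recent = [70.2, 70.5, 71.9, 73.4, 75.8]

lo, hi = control_limits(baseline)
for i, temp in out_of_control(recent, lo, hi):
    print(f"reading {i}: {temp:.1f} C outside [{lo:.1f}, {hi:.1f}] -- investigate")
```

A check like this runs anywhere, requires no training infrastructure, and is transparent enough that users can verify each flag themselves, which is often enough to earn the trust and early feedback needed to justify (or rule out) fancier models later.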
[For more from the authors on this topic, see “5 Scenarios for Addressing Agile’s Cognitive Dissonance in Large, Non-Software Companies.”]