COVID-19 marks the end of the golden age of black box algorithms. Our once-reliable forecasting models and optimization tools are buckling under the crushing uncertainty brought on by the virus, leaving us unable to predict a week out, let alone a quarter or a year. What becomes of our optimization, data, and predictive modeling assets? Do we toss out our meticulously projected LTV calculations? Will our heavy 2019 investment in customer segmentation be rendered useless, or, worse, end up alienating our customer base?
More importantly, how can we augment our data or models to salvage this investment? How can we ensure our models can survive the next shock?
Divergence in predicted outcomes
The most recent six months of economic data roughly sketch out a K-shaped recovery for businesses, industries, and the labor market alike. This aggressive bifurcation of the economy into winners and losers reveals that the need for adaptation is not evenly distributed across businesses.
Which businesses can survive another year like this? Which types of consumer behaviors are permanently altered? Are all hyper-optimized supply chains a thing of the past? How will the new patterns of work and transportation impact different industries? (Our partner agency, DAC, has helped clients answer these questions and make lasting adaptations.)
Given such pronounced diversity in reactions to the pandemic, it’s imperative that we zoom in on individual businesses to expose and analyze specific goals, assumptions, and data collection practices if we hope to preserve trust in our predictive models. We can’t just settle for generalized tweaks and blanket adjustments anymore.
Broken models and polluted data
COVID-19 is smashing fragile algorithms by presenting businesses with treacherous terrain that no AI has ever trained for. It has become essential to diagnose where the fractures are in order to formulate an adjustment plan, which requires an X-ray imaging exercise for your models.
Unearthing the assumptions that underpin a model begins with reviewing any existing supporting documentation. Yes, documentation can be a pain to produce once your projections are made, but a thorough centralized catalog of assumptions and supporting rationale facilitates easy review and analysis of model durability.
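To make this concrete, here is a minimal sketch of what one entry in such a catalog might look like in Python; every field name and value below is illustrative rather than prescriptive, and your own catalog could just as easily live in a spreadsheet or wiki.

```python
# A minimal sketch of a centralized assumption catalog entry.
# All field names and the example record are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    name: str            # short identifier for the assumption
    value: float         # the value currently baked into the model
    rationale: str       # plain-language justification
    source: str          # where the value came from (analysis, vendor, judgment)
    last_reviewed: date  # when a human last validated it

catalog = [
    Assumption(
        name="repeat_purchase_rate",
        value=0.32,
        rationale="Based on 2019 cohort behavior; may not hold post-pandemic.",
        source="2019 cohort analysis",
        last_reviewed=date(2020, 9, 1),
    ),
]
```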
We also inherit many significant, implicit assumptions when applying our old models to new data without adjustment. Do we accept the same competitor set as last year? Has device sharing within a household changed user profiles pre- and post-pandemic? Do our data points themselves mean the same thing anymore? Does a conversion signal (e.g., add to cart, view shipping costs) carry the same predictive power as before?
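As an illustration, one lightweight way to probe that last assumption is to compare a signal’s correlation with conversions before and after a cutoff date. The file name and column names in this sketch are hypothetical.

```python
# Compare a conversion signal's correlation with actual conversions
# before and after a cutoff date. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("sessions.csv", parse_dates=["date"])
cutoff = pd.Timestamp("2020-03-01")

pre = df[df["date"] < cutoff]
post = df[df["date"] >= cutoff]

corr_pre = pre["added_to_cart"].corr(pre["converted"])
corr_post = post["added_to_cart"].corr(post["converted"])

print(f"add-to-cart vs. conversion correlation: "
      f"pre={corr_pre:.2f}, post={corr_post:.2f}")
# A large gap between the two suggests the signal's meaning has shifted
# and the assumption behind it needs to be revisited.
```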
Isolating and analyzing this comprehensive list of explicit and implicit assumptions is critical to evaluating the robustness of a predictive model. Gather a multi-disciplinary jury of stakeholders and subject matter experts to scrutinize this list and settle on a final grouping of input values.
The degree to which your models need to be overhauled depends on how far your assumptions have to move to stay relevant. In some instances, assumptions will have to be dropped, suspended, or entirely rethought. For instance, customer segmentation models based on historical purchase patterns, product preferences, and even third-party demographic clusters may simply have become invalid.
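One way to put a number on how far an assumption has moved is a population stability index (PSI), a common drift measure. The sketch below compares an illustrative pre- and post-pandemic segment mix; the shares and thresholds are standard rules of thumb, not gospel.

```python
# Population stability index (PSI): a common way to quantify how far a
# distribution (here, a customer segment mix) has drifted. Shares are
# illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, eps: float = 1e-6) -> float:
    """PSI between two distributions over the same set of segments."""
    expected = np.clip(expected, eps, None)
    actual = np.clip(actual, eps, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

shares_2019 = np.array([0.40, 0.35, 0.25])  # pre-pandemic segment mix
shares_2020 = np.array([0.15, 0.30, 0.55])  # post-pandemic segment mix

print(f"PSI = {psi(shares_2019, shares_2020):.2f}")
# Rules of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 the
# segmentation likely needs to be rebuilt.
```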
Finally, limiting our field of vision to previous modeling work may miss critical factors that influence or even define forecasts. Strategic pivots in your organization in response to the pandemic might mean that your model is no longer optimizing towards the right success variable. You may not have the right set of KPIs to measure meaningful performance in this new environment, or you might not be collecting the critical data points needed to achieve your goals. If these tools are depended upon at all for strategic planning, investing resources into auditing and rebuilding your models’ assumption frameworks cannot be avoided.
Planning for resilience or efficiency? Maybe both.
During uncertain times it may, in theory, be prudent to avoid predictions altogether. But realistically we risk being laughed out of the room after trying to secure new budgets, win RFPs, or value a business without a projection in hand.
When you’re asked to resource plan for next year, you’re forecasting. Do you pack light and stay agile? Do you gamble on a particular set of assumptions? Or do you plan for all sorts of weather? As is sometimes asked, do you plan for efficiency or resilience?
This year has revealed that firmly planting your feet on a point estimate produced by a forecasting model—or letting an AI tool optimize to a fixed goal—doesn’t work anymore. And though our human judgment algorithms have been trained to view changing forecasts as a dance away from accountability (or simply forecasting backwards), we have to accept the possibility that balancing efficiency and resilience may be our only hope of still using data to plan ahead. We have to think of building modeling frameworks, not just models.
Introducing the TEAR modeling framework
Developing robust, adaptive predictive models requires a modeling framework that shifts us from defining models as assets to treating them as processes. To that end, we have outlined a four-dimensional framework to accompany any post-pandemic modeling exercise.
- Transparent: All underlying assumptions, selected by a cross-organizational group of subject matter experts and senior business leaders, are presented in a centralized and readily accessible living document. The document is co-authored by the team, with ideas captured in plain language.
- Expandable: A detailed workflow is in place to capture, purchase, and/or integrate new data into your model. The process covers constraints, methods for measuring impact on model performance, and other considerations.
- Adjustable: Assumptions can be changed easily by non-technical experts, via an interface or a simple change-request workflow, to see their impact on the forecast. Stress tests and scenarios can be produced with limited turnaround time (a minimal sketch follows this list).
- Revisited: Meetings are regularly scheduled to validate, challenge, and evaluate the assumptions in the model, as well as the model’s performance.
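To make the adjustable dimension concrete, here is a minimal sketch in which the forecast is an explicit function of its assumptions, so a new scenario is just a different set of inputs. The toy revenue model and every parameter name here are illustrative, not a recommended model.

```python
# A minimal sketch of "adjustable": the forecast is a function of an
# explicit assumptions dict, so scenarios are just alternative inputs.
# The toy revenue model and all parameter names are illustrative.

def forecast_revenue(assumptions: dict) -> float:
    """Toy forecast: monthly revenue after 12 months of compound growth."""
    base = assumptions["baseline_monthly_revenue"]
    growth = assumptions["monthly_growth_rate"]
    churn = assumptions["monthly_churn_rate"]
    return base * (1 + growth - churn) ** 12

scenarios = {
    "optimistic":  {"baseline_monthly_revenue": 1_000_000,
                    "monthly_growth_rate": 0.04, "monthly_churn_rate": 0.01},
    "baseline":    {"baseline_monthly_revenue": 1_000_000,
                    "monthly_growth_rate": 0.02, "monthly_churn_rate": 0.02},
    "pessimistic": {"baseline_monthly_revenue": 1_000_000,
                    "monthly_growth_rate": 0.00, "monthly_churn_rate": 0.04},
}

for name, a in scenarios.items():
    print(f"{name:>11}: {forecast_revenue(a):,.0f}")
# Reporting the full range, rather than one point estimate, is what lets
# a plan balance efficiency against resilience.
```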
Stop, collaborate, and transition
It’s easy to see why predictive analytics has enjoyed steady growth in recent years. Business leaders have been emboldened to invest heavily in solutions without even a fundamental grasp of the inner workings of the models. Why? There was simply no need: models seemed to forecast or optimize reliably on a more-or-less self-sufficient basis.
Now, amid rampant uncertainty, if you don’t have a functional understanding of your AI tools or predictive models, you’re in a world of trouble. There’s really only one thing for it: you need to stop what you’re doing and assemble the right multi-disciplinary team to transition into a transparent, expandable, adjustable, and frequently revisited modeling process.
Fortunately, that just happens to be one of our specialties. Let’s talk.