Big Data’s Biggest Challenge? Convincing People NOT to Trust Their Judgment

Here’s a simple rule for the second machine age we’re in now: as the amount of data goes up, the importance of human judgment should go down.

The previous statement reads like heresy, doesn’t it? Management education today is largely about educating for judgment—developing future leaders’ pattern-matching abilities, usually via exposure to a lot of case studies and other examples, so that they’ll be able to confidently navigate the business landscape. And whether or not we’re in b-school, we’re told to trust our guts and instincts, and that (especially after we gain experience) we can make accurate assessments in a blink.

This is the most harmful misconception in the business world today (maybe in the world full stop). As I’ve written here before, human intuition is real, but it’s also really faulty. Human parole boards do much worse than simple formulas at determining which prisoners should be let back on the streets. Highly trained pathologists don’t do as good a job as image analysis software at diagnosing breast cancer. Purchasing professionals do worse than a straightforward algorithm predicting which suppliers will perform well. America’s top legal scholars were outperformed by a data-driven decision rule at predicting a year’s worth of Supreme Court case votes.
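To make "simple formula" concrete, here is a minimal sketch of the kind of linear scoring rule these studies pit against expert judgment. The features, weights, and cutoff are hypothetical, invented purely for illustration; no real risk instrument is being reproduced here.

```python
# A minimal sketch of a "simple formula": a few weighted inputs and a cutoff.
# The feature names, weights, and cutoff are hypothetical, chosen only to
# illustrate the shape of such rules, not taken from any real instrument.

def recidivism_risk_score(age_at_release, prior_convictions, years_served):
    """Return a hypothetical linear risk score; higher means riskier."""
    return (
        -0.05 * age_at_release      # older releasees reoffend less, so age lowers the score
        + 0.40 * prior_convictions  # prior record is weighted heavily
        - 0.10 * years_served
    )

def recommend_parole(candidate, cutoff=0.0):
    """Grant parole only when the risk score falls below the cutoff."""
    return recidivism_risk_score(**candidate) < cutoff

# Example: the rule applies the same weights to every case, every time.
print(recommend_parole({"age_at_release": 52,
                        "prior_convictions": 1,
                        "years_served": 6}))
```

That is the whole trick: a handful of weighted inputs and a fixed cutoff, applied identically to every case, with no good days and no bad days.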

I could go on and on, but I’ll leave the final word here to psychologist Paul Meehl, who started the research on human “experts” versus algorithms almost 60 years ago. At the end of his career, he summarized, “There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one. When you are pushing over 100 investigations, predicting everything from the outcome of football games to the diagnosis of liver disease, and when you can hardly come up with a half dozen studies showing even a weak tendency in favor of the clinician, it is time to draw a practical conclusion.”

The practical conclusion is that we should turn many of our decisions, predictions, diagnoses, and judgments—both the trivial and the consequential—over to the algorithms. There's just no controversy anymore about whether doing so will give us better results.

When presented with this evidence, a contemporary expert’s typical response is something like “I know how important data and analysis are. That’s why I take them into account when I’m making my decisions.” This sounds right, but it’s actually just about 180 degrees wrong. Here again, the research is clear: When experts apply their judgment to the output of a data-driven algorithm or mathematical model (in other words, when they second-guess it), they generally do worse than the algorithm alone would. As sociologist Chris Snijders puts it, “What you usually see is [that] the judgment of the aided experts is somewhere in between the model and the unaided expert. So the experts get better if you give them the model. But still the model by itself performs better.”

Things get a lot better when we flip this sequence around and have the expert provide input to the model, instead of vice versa. When experts' subjective opinions are quantified and added to an algorithm, its quality usually goes up. Pathologists' estimates of how advanced a cancer is can be included as an input to the image-analysis software, legal scholars' forecasts of how the Supremes will vote on an upcoming case can improve the model's predictive ability, and so on. As Ian Ayres puts it in his great book Super Crunchers, "Instead of having the statistics as a servant to expert choice, the expert becomes a servant of the statistical machine."
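As a sketch of what that flip looks like in code: assume some synthetic data and a hypothetical expert_score column standing in for the quantified opinion. The expert's view becomes one more feature, and the fitted model, not the human, decides how much it counts.

```python
# Sketch of "expert as servant of the statistical machine": the expert's
# quantified opinion is appended as an input column, and the regression
# assigns it a weight. All data here is synthetic, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

objective = rng.normal(size=(n, 3))  # e.g., measurements from imaging
# A noisy-but-informative expert opinion, correlated with the first measurement.
expert_score = 0.5 * objective[:, 0] + rng.normal(scale=1.0, size=n)
# Synthetic ground truth that depends on both the measurements and the
# signal buried in the expert's opinion.
y = (objective @ np.array([1.0, 0.5, -0.3])
     + 0.3 * expert_score
     + rng.normal(scale=0.5, size=n)) > 0

# Expert-as-input: the quantified opinion is just another feature.
X_with_expert = np.column_stack([objective, expert_score])
model = LogisticRegression().fit(X_with_expert, y)
print("model + expert feature accuracy:", model.score(X_with_expert, y))

# The same model without the expert's input, for comparison.
baseline = LogisticRegression().fit(objective, y)
print("model alone accuracy:", baseline.score(objective, y))
```

The point of the sketch is structural: the expert contributes information upstream, and the model weighs that contribution against everything else, rather than the expert overriding the model's output downstream.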

Of course, this is not going to be an easy switch to make in most organizations. Most of the people making decisions today believe they're pretty good at it, certainly better than a soulless and stripped-down algorithm, and they also believe that taking away much of their decision-making authority will reduce their power and their value. The first of these two perceptions is clearly wrong; the second one is a lot less so.

So how, if at all, will this great inversion of experts and algorithms come about? How will our organizations, economies, and societies get better results by being more truly data-driven? It’s going to take transparency, time, and consequences: transparency to make clear how much worse “expert” judgment is, time to let this news diffuse and sink in, and consequences so that we care enough about bad decisions to go through the wrenching change needed to make better ones.

We’ve had all three of these in the case of parole boards. As Ayres puts it, “In the last twenty-five years, eighteen states have replaced their parole systems with sentencing guidelines. And those states that retain parole have shifted their systems to rely increasingly on [algorithmic] risk assessments of recidivism.”

Bad parole decisions matter enormously to voters, so parole boards where human judgment rules are thankfully on their way out. In the business world it will be competition, especially from truly data-driven rivals, that brings the consequences to inferior decision-makers. I don't know how quickly it'll happen, but I'm very confident that data-dominated firms are going to take market share, customers, and profits away from those that still rely too heavily on their human experts.

