WatsonPaths points to the need for some machine-to-man translation as data science advances.
As Danny Hillis, an artificial intelligence expert, put it, “The key thing that will make it work and make it acceptable to society is storytelling.” Not literal storytelling so much as an understandable audit trail that explains how an automated decision was made.
“How much of this decision is the machine and how much is human?”
Such a stance, others say, amounts to a comforting illusion – good marketing perhaps, but not necessarily good data science.
The promise of big data decision-making, after all, is that decisions based on data and analysis – more science, less gut feel and rule of thumb – will yield better outcomes.
One solution, according to Gary King, director of Harvard’s Institute for Quantitative Social Science, may be for the human creators of scoring algorithms to tune them not so much for maximum efficiency or profit as to give somewhat greater weight to the individual, reducing the risk of getting a decision wrong.
A human helper can provide that dose of nuanced data that escapes the algorithmic automaton.