Intelligence Deficit

[Image: Google]

The dream of computers accomplishing tasks once considered unachievable is a compelling one. As the world moves toward an age of rapid innovation and unhindered progress, we can expect Artificial Intelligence to play an ever greater role in serving us for the better.

Most researchers agree that a super-intelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect Artificial Intelligence to become intentionally benevolent or malevolent. But there is remarkably little talk of the limits of AI.


In particular, Machine Learning systems often have low interpretability, meaning that humans have difficulty figuring out how the systems reached their decisions. Deep neural networks may have hundreds of millions of connections, each of which contributes a small amount to the ultimate decision. As a result, these systems’ predictions tend to resist simple, clear explanation. Unlike humans, machines are not good storytellers. They cannot always give a rationale for why a particular applicant was accepted or rejected for a job, or a particular medicine was recommended. Ironically, even as we have begun to overcome Polanyi’s Paradox (the observation that we humans know more than we can tell), we are facing a kind of reverse version: machines know more than they can tell us.
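To make the point concrete, here is a minimal sketch (plain NumPy, random weights standing in for trained ones, purely illustrative) of why such a decision resists a simple rationale: the final score is the sum of tens of thousands of tiny weighted terms, none of which maps onto a human-readable rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "applicant" described by 100 features, scored by a 100 -> 256 -> 256 -> 1
# network with random weights (stand-ins for trained ones).
x = rng.normal(size=100)
W1 = rng.normal(scale=0.1, size=(256, 100))
W2 = rng.normal(scale=0.1, size=(256, 256))
W3 = rng.normal(scale=0.1, size=(1, 256))

h1 = np.maximum(W1 @ x, 0)      # first hidden layer (ReLU)
h2 = np.maximum(W2 @ h1, 0)     # second hidden layer (ReLU)
score = (W3 @ h2).item()        # final accept/reject score

n_weights = W1.size + W2.size + W3.size
print(f"{n_weights:,} weights each nudged the score; final score = {score:.3f}")
# No single connection corresponds to a rule like "rejected because of X":
# the decision is the accumulation of tens of thousands of tiny contributions.
```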

This creates risks: the machines may have hidden biases, derived not from any intent of the designer but from the data provided to train the system. For instance, if a system learns which job applicants to accept for an interview by using a data set of decisions made by human recruiters in the past, it may inadvertently learn to perpetuate their racial, gender, ethnic, or other biases. Moreover, these biases may not appear as an explicit rule but, rather, be embedded in subtle interactions among the thousands of factors considered.
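As a rough illustration of how this can happen, here is a small synthetic sketch (invented data, scikit-learn's LogisticRegression): the protected attribute is never shown to the model, yet a correlated proxy feature lets it reproduce the historical disparity baked into the labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)                 # protected attribute (never a feature)
skill = rng.normal(size=n)                         # genuinely relevant signal
proxy = group + rng.normal(scale=0.5, size=n)      # e.g. a zip-code-like feature correlated with group

# Past recruiters favored group 1 regardless of skill, so the labels themselves are biased.
past_decision = (skill + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([skill, proxy])                # group membership is excluded from the inputs
model = LogisticRegression().fit(X, past_decision)

pred = model.predict(X)
print("predicted acceptance rate, group 0:", pred[group == 0].mean())
print("predicted acceptance rate, group 1:", pred[group == 1].mean())
# The model reproduces the historical disparity through the proxy feature,
# without any explicit rule that mentions group membership.
```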

A second risk is that, unlike traditional systems built on explicit logic rules, neural network systems deal with statistical truths rather than literal truths. That can make it difficult, if not impossible, to prove with complete certainty that the system will work in all cases, especially in situations that were not represented in the training data. Lack of verifiability can be a concern in mission-critical applications, such as controlling a nuclear power plant, or when life-or-death decisions are involved.
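One way to see the difference between a statistical guarantee and a literal one is the sketch below (synthetic data, scikit-learn's MLPRegressor, illustrative only): the network looks accurate on held-out data drawn from the same region it was trained on, but that says nothing about inputs from a region it never saw.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

X_train = rng.uniform(0, 6, size=(2_000, 1))        # training region: x in [0, 6]
y_train = np.sin(X_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2_000, random_state=0)
model.fit(X_train, y_train)

X_in = rng.uniform(0, 6, size=(500, 1))              # same distribution as the training data
X_out = rng.uniform(12, 18, size=(500, 1))           # region never seen during training

err_in = np.abs(model.predict(X_in) - np.sin(X_in).ravel()).mean()
err_out = np.abs(model.predict(X_out) - np.sin(X_out).ravel()).mean()
print(f"mean error inside the training region:  {err_in:.3f}")
print(f"mean error outside the training region: {err_out:.3f}")
# Low in-distribution error is a statistical statement about data "like" the training
# set; it proves nothing about how the system behaves in situations it never saw.
```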

Third, when the ML system does make errors, as it almost inevitably will, diagnosing and correcting exactly what is going wrong can be difficult. The underlying structure that led to the solution can be unimaginably complex, and the solution may be far from optimal if the conditions under which the system was trained change.

While all these risks are very real, the appropriate benchmark is not perfection but the best available alternative. After all, we humans, too, have biases, make mistakes, and have trouble explaining truthfully how we arrived at a particular decision. The advantage of machine-based systems is that they can be improved over time and will give consistent answers when presented with the same data.


ML systems are getting quite good at the former but remain well behind us at the latter. We humans are a deeply social species; other humans, not machines, are best at tapping into social drives such as compassion, pride, solidarity, and shame in order to persuade, motivate, and inspire.