Algorithms With Minds of Their Own

How do we ensure that artificial intelligence is accountable?

Everyone wants to know: Will artificial intelligence doom mankind—or save the world? But this is the wrong question. In the near future, the biggest challenge to human control and acceptance of artificial intelligence is the technology’s complexity and opacity, not its potential to turn against us like HAL in “2001: A Space Odyssey.” This “black box” problem arises from the trait that makes artificial intelligence so powerful: its ability to learn and improve from experience without explicit instructions.

Machines learn through artificial neural networks, which are loosely modeled on the human brain. As these networks are presented with many examples of the desired behavior, they learn by adjusting the connection strengths, or “weights,” between the artificial neurons in the network. Imagine trying to figure out why a person made a particular decision by examining the connections in that person’s brain. Examining the weights of a neural network is only slightly more illuminating.
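To make the “weights” idea concrete, here is a minimal sketch (not from the article, assuming Python and NumPy) of a tiny network trained on a toy task. The point is the black-box problem itself: even after the network reproduces the desired behavior, its learned weights are just arrays of numbers that say nothing legible about why it answers the way it does.

```python
# Illustrative sketch: a tiny neural network "learns" by nudging its weights
# toward values that reproduce example behavior. Task and sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR from four labeled examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; weights start random and never acquire human-readable meaning.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: shift every weight slightly to reduce the error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))  # approaches [0, 1, 1, 0] as training succeeds
print(np.round(W1, 2))   # yet the learned weights explain nothing by inspection
```

The second printout is the black box in miniature: the behavior is right, but the weight values carry no explanation a regulator, judge, or defendant could read.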

Concerns about why a machine-learning system reaches a particular decision are greatest when the stakes are highest. For example, risk-assessment models relying on artificial intelligence are being used in criminal sentencing and bail determinations in Wisconsin and other states. Former Attorney General Eric Holder and others worry that such models disproportionately hurt racial minorities. Many of these critics believe the solution is mandated transparency, up to and including public disclosure of these systems’ weights or computer code.
