Nicole Junkermann, NJF Capital founder, presents an A-Z of Artificial Intelligence – a series of short videos focused on key areas of interest in this hugely topical and consequential field.
As AI technology continues to make breakthroughs in everything from medical diagnostics to transport, one significant problem remains:
Often, nobody knows quite how the machines have done it.
The neural networks behind much machine learning are effectively taught to programme themselves: fed large amounts of data, they work out their own rules for processing it.
This means that even the scientists and engineers who created them might not truly understand how they work.
The idea of such black-box systems operating in safety-critical fields like medicine or autonomous vehicles raises concerns over accountability and error correction.
Scientists are researching ways to create Explainable AI, which enables humans to better understand how the machines make their decisions.
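One simple family of explainability techniques probes a model from the outside rather than opening it up. The sketch below, with a hand-written stand-in for a real learned model (the `black_box` function and its loan-scoring rule are purely hypothetical), illustrates permutation importance: shuffle one input feature at a time and count how often the model's predictions change. Features whose shuffling changes many predictions are the ones the model actually relies on.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical "black box": we can query its predictions, but in a real
# trained network the internal rules would be learned and opaque.
def black_box(features):
    income, age, postcode_digit = features
    return 1 if income * 2 + age > 100 else 0

def permutation_importance(model, dataset, n_features):
    """For each feature, shuffle its values across the dataset and
    report the fraction of predictions that change."""
    baseline = [model(row) for row in dataset]
    scores = []
    for i in range(n_features):
        column = [row[i] for row in dataset]
        random.shuffle(column)  # break the link between feature i and the rest
        changed = 0
        for row, shuffled_value, base in zip(dataset, column, baseline):
            perturbed = list(row)
            perturbed[i] = shuffled_value
            if model(perturbed) != base:
                changed += 1
        scores.append(changed / len(dataset))
    return scores

# Synthetic applicants: (income, age, postcode_digit)
data = [(random.randint(10, 80), random.randint(18, 90), random.randint(0, 9))
        for _ in range(200)]
print(permutation_importance(black_box, data, 3))
```

Here the third feature (the postcode digit) plays no role in the model's rule, so its importance score comes out at zero, while income and age score higher: the probe recovers which inputs drive the decision without ever reading the model's internals.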
The US Defense Department’s research arm, the Defense Advanced Research Projects Agency (DARPA), has commissioned a major research project on Explainable AI.