Regulating AI the Right Way

The news that the House of Commons Science and Technology Committee is currently running an inquiry into the governance and regulation of AI is encouraging. And the expected white paper on AI regulation is also an important step towards realising the government’s ambition to make the UK a science and technology superpower.

The UK has all the ingredients it needs to be the de facto world leader in the development and application of AI. But we must start to think differently about the regulatory frameworks that these modern tech businesses are operating in if we are to see the kind of return that the country desperately needs.

When it comes to the subject of AI, because we all intrinsically believe that we know how we make our own decisions, we project that misunderstanding onto simplistic approaches to the regulation and explainability of AI. However, this self-analysis is easily shown to be a seductive illusion.

There is currently a lot of research aimed at allowing AIs to explain their decisions. But this misses the point. Rather, the issue is our ability to understand the explanation.

There are some elegant experiments that show that when you ask someone to explain their reasoning for a decision, it often bears little resemblance to the reality. We give explanations in natural language, which typically involves fewer than five variables and fairly strong relationships. For example, the car was going too fast, the road was wet, or the driver was tired.

However, any such simple relationships could be easily coded using non-AI computer methods. We only need AI where the relationships are complex and subtle.
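The point can be made concrete with a minimal sketch. The variables, thresholds and scoring here are entirely hypothetical, chosen only to echo the car example above; it shows the kind of few-variable, strong-relationship rule that ordinary code handles with no AI at all:

```python
# Hypothetical illustration: a decision driven by a handful of simple,
# strong relationships needs no AI -- plain rules suffice.

def crash_risk(speed_mph: float, road_wet: bool, driver_tired: bool) -> str:
    """Toy rule-based assessment using the three variables from the text.

    Thresholds and scoring are invented for illustration only.
    """
    score = 0
    if speed_mph > 70:      # the car was going too fast
        score += 1
    if road_wet:            # the road was wet
        score += 1
    if driver_tired:        # the driver was tired
        score += 1
    return "high" if score >= 2 else "low"

print(crash_risk(80, True, False))   # two factors present -> high
print(crash_risk(50, False, False))  # no factors present -> low
```

A human can read, audit and explain every branch of such a rule; it is precisely when the relationships stop being this simple that AI becomes necessary, and explanation becomes hard.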

So the actual explanation might be: “Mr Smith, through a combination of higher-order nonlinear probabilistic relationships between 140 variables, summarised in the following 20 spreadsheets, the AI rejected you.” Reducing this to “Mr Smith, your credit rating given your area is bad” is, in effect, misleading.

Furthermore, counter to intuition, explainability comes at a price. It is one more thing the AI must optimise for, and that often costs performance. So we can’t have our cake and eat it. How many more people will we let die in autonomous car accidents so the AI can explain its decisions to us? Explainable approaches to these problems are often much less accurate.

No one in the world is leading when it comes to AI regulation yet. The European Union is producing a series of simplistic proposals which, in all likelihood, will put European states much lower down the list of places where AI companies will want to develop and expand. Meanwhile, the US is charging too far the other way and may end up having to backfill its regulatory regime after the horse has bolted.

Post-Brexit, Britain now has the ability to move faster because we are no longer encumbered by our European partners. We need to take advantage of that.

But we must recognise that regulation of AI is not an armchair sport. It cannot be done by amateurs, often with social commentary thrown in. It requires a deep understanding of these kinds of problems and of how decisions are actually made in the real world. Having people in government with a scientific and mathematical background can help to alleviate some of these challenges.

Dr Mike Lynch OBE FRS FREng
