Some AI may need to be banned, says government adviser
Marc Warner, a member of the government’s AI Council and head of Faculty AI, said some versions of the technology may need to be banned if there isn’t a “good scientific justification” that they are safe, amid discussions around regulating the tech.
He told BBC News that “narrow AI” – systems used for specific tasks, such as translating text or searching for cancers in medical images – could be regulated like existing technology.
Artificial general intelligence, however, was a different matter, he argued: “These are algorithms that are aimed at being as smart or smarter than a human across a very broad domain of tasks – essentially, every task.”
“If we create objects that are as smart or smarter than us, there is nobody in the world that can give a good scientific justification of why that should be safe.”
“That doesn’t mean for certain that it’s terrible – but it does mean that there is risk, it does mean that we should approach it with caution.
“At the very least, there needs to be sort of strong limits on the amount of compute [processing power] that can be arbitrarily thrown at these things.
“There is a strong argument that at some point, we may decide that enough is enough and we’re just going to ban algorithms above a certain complexity or a certain amount of compute.
“But obviously, that is a decision that needs to be taken by governments and not by technology companies.”
Warner – who signed the Centre for AI Safety’s statement warning that AI could lead to the extinction of humanity – believes the “value” of AI comes from making sure it is safe.
He said: “Do you want cars or aeroplanes to be safe? I want both.
“My long-term bet is that actually, to get value out of the technology, you need the safety – in the same way to get value out of the aeroplane, you need the engines to work.”