Death cults such as ISIS ‘CERTAIN to get their hands on KILLER ROBOTS’

Terror organisations and rogue nations are certain to get their hands on Terminator-style killer robots in the near future, a top-ranking security chief has warned.

Alvin Wilby, vice-president of research at French defence giant Thales, told a House of Lords Committee it will not be long before evil groups are in possession of lethal artificial intelligence (AI).

Autonomous weapons, which essentially control themselves and need no human intervention to attack, are already being developed, researchers have warned, and soon enough they will fall into the wrong hands.

Speaking at the House of Lords inquiry this week, Mr Wilby said the “genie is out of the bottle” with this sort of potentially deadly technology.

He said there could be attacks carried out by “swarms” of small drones which require little assistance from humans.

Mr Wilby told the Lords Artificial Intelligence committee: “The technological challenge of scaling it up to swarms and things like that doesn’t need any inventive step.

“It’s just a question of time and scale and I think that’s an absolute certainty that we should worry about.”

The threat comes not only from these new-age weapons but also from other technology, such as smart cars, which could potentially be hacked and used to target pedestrians.
Mr Wilby continued: “If someone’s car is reprogrammed to kill pedestrians, it’s become an autonomous weapons system. That’s a credible terrorist threat.”

Ministry of Defence official Mike Stone said: “I think it’s absolutely inevitable that this is going to get into the hands of non-state actors and certainly rogue states; North Korea and Iran top the list in most people’s minds.”
Just this week, billionaire businessman Elon Musk, who has previously warned of the dangers of AI even as he develops driverless cars, said that AI poses more of a threat to world safety than Kim Jong-un’s regime.

Mr Musk tweeted on Friday: “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”