More and more influential researchers and observers are contemplating the idea that artificial intelligence will eventually be a threat to human beings.
Recently, no less an authority than Bill Gates has been thinking of applying a new tax to limit the (ab)use of artificial intelligence, while Elon Musk has been arguing with Mark Zuckerberg over just how scared we should all be about AI.
Sifting through numerous articles on the topic, one is led to believe that a truly apocalyptic future awaits us all: sophisticated machines learning from humans how to fight and shoot, mobilising around the sole purpose of exterminating the human race; potentially evolving into something more complex than themselves and, eventually, developing their own race…
Even the most cool-headed commentators have embraced the prediction that artificial intelligence will, at the very least, take jobs that would otherwise be performed by humans. Some of the more creative minds might even argue that the resulting starvation is part of a methodical plan to destroy the human race in the most discreet way possible, though I personally have not yet read anything along these lines (but would not be surprised if I did).
No matter what the reason might be, it seems that artificial intelligence will develop the intelligence required for killing just about everyone in a truly Hollywoodesque fashion.
But while these concerns might hold merit, it is important to keep them in perspective, and not lose sight of a number of unshakable truths that should inform any rational debate about AI. For instance, artificial intelligence can neither kill nor destroy anyone unless it is instructed to do so, just as an autonomous car becomes a deadly weapon if, and only if, it is instructed to plough into pedestrians. The same goes for weaponry: an autonomous, intelligent rocket would be more terrifying than a non-autonomous one only if it were instructed to target civilian buildings instead of military barracks or other rockets.
This conclusion, however, leaves nobody safely in their cocoon, because of the following assumption: at some point, malicious humans will create malicious artificial intelligences capable of threatening other humans. As a consequence, those other humans will build artificial intelligences of their own to counter such threats.
This phenomenon is not only possible, but is already slowly turning into a reality. Fortunately there are some solutions that might keep this phenomenon under control, to a certain extent at least.
One such solution would be AI certification. Certifying artificial intelligence before operating it will become essential as more and more AI is incorporated into everyday tasks. This process would be equivalent to what is already required within the European Economic Area (EEA) with the CE marking (or to the FCC Declaration of Conformity used on certain electronic devices in the US to establish conformity with health, safety and environmental protection standards). In short, if an artificial intelligence is not marked “safe”, it cannot operate; all the others can and must be destroyed.
It is essential that such certification is performed with technology that is publicly available and maintained by multiple peers in order to encourage fairness but at the same time discourage centralisation. Public availability and decentralisation may be summarised under one broad term: blockchain.
Needless to say, blockchain technology is by now mature enough to fulfil such requirements, and it is supported by a community that is increasingly aware of the importance of decentralisation and public ownership.
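To make the idea concrete, here is a minimal sketch of how such a certification registry could work in principle. This is purely illustrative and not part of any real system: the `CertificationLedger` class, its methods, and the certifier name are all hypothetical, and a real deployment would replace the in-memory hash chain below with a public, decentralised blockchain maintained by multiple peers.

```python
import hashlib
import json


class CertificationLedger:
    """Toy append-only ledger of AI model certifications (illustrative only)."""

    def __init__(self):
        self.blocks = []  # each block links to the previous one via its hash

    def _hash_block(self, block):
        # Deterministic fingerprint of a block's contents
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def certify(self, model_bytes, certifier):
        """Record that `certifier` marked this exact model artifact as safe."""
        fingerprint = hashlib.sha256(model_bytes).hexdigest()
        block = {
            "fingerprint": fingerprint,
            "certifier": certifier,
            "prev": self._hash_block(self.blocks[-1]) if self.blocks else None,
        }
        self.blocks.append(block)
        return fingerprint

    def is_certified(self, model_bytes):
        """A model may operate only if its exact bytes were certified."""
        fingerprint = hashlib.sha256(model_bytes).hexdigest()
        return any(b["fingerprint"] == fingerprint for b in self.blocks)


ledger = CertificationLedger()
model = b"model-weights-v1"  # stand-in for a serialised AI model
ledger.certify(model, "hypothetical-certification-body")
print(ledger.is_certified(model))        # certified artifact: True
print(ledger.is_certified(b"tampered"))  # any modified artifact: False
```

The key property this sketch illustrates is that certification is bound to the exact bytes of the model: changing a single byte changes the fingerprint, so a tampered or uncertified AI fails the check before it is allowed to operate.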
The growing number of attempts to place artificial intelligence on the blockchain shows the need to be prepared for the scenarios I have just introduced.
A side project I have been working on for a while is finally seeing the light.
Check it out at fitchain.io