Current State of Weaponised Machine Learning

Recently, several famous people have started speculating on apocalyptic visions of machine learning (ML) and artificial intelligence in general. Although these points of view are often entertaining, they distract us from the real adverse effects these tools already have. ML is indeed dangerous, but in a much more boring way: no somersaulting samurai killer robots. Furthermore, I would argue that most dangers of ML will be driven by malicious human actors.

Many of the recent speculations on ML and AI revolve around the idea that machines will soon start acting on their own. This may be down to marketing exaggeration, or simply to the strength of human imagination. In reality we are very far from self-aware machines; we have not even begun to understand how the brain works. On the contrary, the current state of the world suggests that the damage from ML will originate from people using it in bad faith.

Any tool, ML included, can be applied to serve different purposes, and the most destructive use of a tool is when it is weaponised. Again, the phrase "weaponised ML" conjures images from popular sci-fi movies, but that too is a distraction. Weaponised ML is already here, and it is unexciting compared with laser-shooting robots. It is already being used by state actors, and even some companies, to the detriment of specific sections of the world's population.

For example, ML is already being used to create fake personas on the internet. This is done by automating account creation with software, and ML plays a role where that software automatically solves CAPTCHAs. In this way ML has enabled mass fake accounts that divert the attention of social media users and disrupt conversation on these networks.
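To give a sense of how low the bar is, here is a minimal sketch of the kind of model that breaks distorted-text CAPTCHAs. The architecture, image size, alphabet, and character count are all my own illustrative assumptions (the training data and loop are omitted); the point is only that a small, standard convolutional network, one classification head per character position, is enough in principle.

```python
# Illustrative sketch only: a small CNN for fixed-length text CAPTCHAs.
# Assumptions (mine, not from any real system): 60x160 grayscale images,
# 5 characters per image, alphabet of a-z plus 0-9.
import torch
import torch.nn as nn

NUM_CLASSES = 36   # assumed alphabet size: a-z plus 0-9
CAPTCHA_LEN = 5    # assumed fixed number of characters per image

class CaptchaCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two standard conv/pool stages extract character shapes.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # One set of class logits per character position.
        self.head = nn.Linear(64 * 15 * 40, CAPTCHA_LEN * NUM_CLASSES)

    def forward(self, x):  # x: (batch, 1, 60, 160)
        h = self.features(x).flatten(1)
        return self.head(h).view(-1, CAPTCHA_LEN, NUM_CLASSES)

# Shape check: logits per character position for a dummy batch.
logits = CaptchaCNN()(torch.randn(8, 1, 60, 160))  # (8, 5, 36)
```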

ML’s role in disrupting social media is not limited to fake account creation. Data-based techniques have already been used to influence elections through voter suppression and fake news generation. It may be argued that data science is not ML, but I would counter that the boundary between automated A/B testing and ML is very thin.
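To make that boundary concrete: an A/B test that automatically reallocates traffic to whichever message performs best is already a simple reinforcement-learning algorithm, a multi-armed bandit. The sketch below uses epsilon-greedy selection; the variant names and click feedback are hypothetical placeholders of mine.

```python
# A minimal epsilon-greedy bandit: "automated A/B testing" that is
# already machine learning. Variant names and feedback are hypothetical.
import random

variants = ["headline_a", "headline_b", "headline_c"]  # assumed message variants
counts = {v: 0 for v in variants}
clicks = {v: 0.0 for v in variants}
EPSILON = 0.1  # fraction of traffic spent exploring

def choose_variant():
    # Explore occasionally; otherwise exploit the best-performing variant.
    if random.random() < EPSILON:
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / counts[v] if counts[v] else 0.0)

def record_outcome(variant, clicked):
    # Feedback loop: each impression updates the estimated click rate.
    counts[variant] += 1
    clicks[variant] += 1.0 if clicked else 0.0
```

Run in a loop over impressions, this converges on the most persuasive message with no human in the loop, which is exactly why the distinction from "real" ML is so thin.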

Censorship is another area where ML is being applied. In some cases censorship has legitimate uses, for example removing ISIS material from web platforms. But countries like Iran and China widely use censorship to target political movements. Recently China has started using OCR techniques to block images that contain banned subjects [1], and I am aware of Iran’s efforts to use ML as part of its filtering system for sexual content.
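Mechanically, this kind of OCR-based blocking is straightforward. Here is a minimal sketch, assuming the open-source Tesseract engine (via the pytesseract and Pillow packages) and a hypothetical blocklist of my own invention; real deployments would be far more elaborate, but the pipeline is the same: extract text from the image, then match it against banned terms.

```python
# Sketch of OCR-based image filtering. Requires the Tesseract binary
# plus the pytesseract and Pillow packages. The blocklist is a
# hypothetical placeholder.
import pytesseract
from PIL import Image

BANNED_TERMS = {"example banned phrase"}  # placeholder blocklist

def should_block(image_path: str) -> bool:
    # Run OCR on the image and check the extracted text for banned terms.
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return any(term in text for term in BANNED_TERMS)
```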

Like any other malicious use of technology, the key to neutralising it is research. We need to be actively studying how bad actors are evolving and building tools that reduce the impact of weaponised ML.

I really think we need much more effort put into understanding these tools. History has shown that when bad actors get hold of new tech they mess up the world, unless there is someone to stand up to them.

[1] https://isc.sans.edu/forums/diary/Why+Does+Emperor+Xi+Dislike+Winnie+the+Pooh+and+Scrambled+Eggs/23395/