Artificial Intelligence can be used for malicious purposes as well
The past 100 years have seen an incredible rise in technological advancement, and Artificial Intelligence is part of it. While humans strive to make their lives easier, for example by using self-driving cars or relying on Cortana, Alexa, and Siri for daily tasks, computing technologies can also be put to far worse purposes.
While others worry about machines taking over the world and destroying humanity, security researchers at IBM considered a far more likely near-future scenario and created DeepLocker – an AI-powered malware that uses evasive techniques to obfuscate its presence and bypass security software entirely.
Notorious malware such as WannaCry, Trickbot, and Zeus has devastated influential organizations, caused millions of dollars in damages, and disrupted the work of vital sectors like hospitals all over the world. While such attacks can be prevented with safety measures and adequate security software, AI-based malware could enable attacks the world has never seen before.
DeepLocker: the insights
Artificial Intelligence can be used to improve online safety by detecting and eliminating malware before it can enter the computer. Unfortunately, cybercriminals can also utilize AI-based technology to enhance malware and use it as a weapon.
IBM specialists presented DeepLocker at the Black Hat USA conference to show how bad actors could combine current advances in AI with malicious programs. The demonstration also helps researchers stay a step ahead of hackers.
As described by researchers, the new type of malware is “highly targeted and evasive.” For example, hackers can direct attacks at specific individuals based on their appearance, since photos are easily obtained from social media. The malware can monitor the victim's webcam feed and, only once facial recognition matches the online picture, execute the command to deliver the malicious payload. This means that such a sophisticated attack could target anyone.
To improve evasion and obfuscate its presence, DeepLocker can hide its payload within carrier applications – for example, video conferencing software. This approach prevents most anti-virus scanners and other security measures from triggering an alarm until the malware reaches its destination, identifying the target through geolocation, voice or facial recognition, or similar features.
DeepLocker's Deep Neural Network model defines “trigger conditions” that must be met before the malware executes. If the target is not found, the payload stays concealed inside the app, which makes reverse engineering an almost impossible task for experts.
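The core idea behind such trigger conditions can be illustrated with a harmless sketch: a key is derived from the recognition model's output, so the (here entirely benign) payload only unlocks when the observed input matches the intended target. This is a simplified assumption-laden illustration – the string attributes, the toy XOR cipher, and the function names are stand-ins invented for this example, not DeepLocker's actual implementation, which IBM describes as using a real neural network and proper encryption.

```python
import hashlib

def derive_key(attributes: str) -> bytes:
    # Stand-in for a deep neural network's output on sensor input
    # (e.g., a face embedding); here we simply hash a label string.
    return hashlib.sha256(attributes.encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher standing in for real symmetric encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Attacker" side: lock a harmless placeholder payload to one target.
secret_payload = b"benign placeholder"
locked = xor_cipher(secret_payload, derive_key("target-embedding"))

# "Victim" side: the key is only reproduced when the trigger condition
# is met, so analysts inspecting the file see only opaque ciphertext.
for observed in ("bystander-embedding", "target-embedding"):
    unlocked = xor_cipher(locked, derive_key(observed))
    print(observed, "unlocks payload:", unlocked == secret_payload)
```

The key point for defenders is that static analysis of the ciphertext reveals nothing about the payload or the intended target, which is why the researchers call reverse engineering nearly impossible.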
DeepLocker works as intended, as the experiment proves
To prove the efficiency and precision of AI-based malware, security engineers demonstrated an attack using the notorious WannaCry virus. They created a proof-of-concept scenario in which the payload was hidden inside a video conferencing program. None of the anti-virus engines or sandboxes managed to detect the malware, leading the researchers to conclude:
Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded AI model, but otherwise behave normally for all users except the intended target
What is more, tools like Social Mapper could be built into the malware, making the identification of potential targets an even easier task.
Indeed, the potential of Artificial Intelligence seems nearly limitless, but the experiment proves that security researchers still have a lot of work to do when it comes to cybersecurity. Applications should be examined more closely, and any unexpected behavior should be flagged immediately.