One of the most difficult challenges in building an effective ML system to prevent email attacks is keeping up with the rapidly changing, adversarial nature of the problem. Attackers are constantly innovating: not only launching new attack campaigns, but also tweaking the language and social engineering strategies they use to convince people to give up their login credentials, install malware, or send money to a fraudulent bank account. This article discusses one of our approaches to rapidly improving our detection engine based on new attacks discovered by our security researchers or identified as false negatives.
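The teaser above mentions data augmentation on newly discovered attack samples. The post itself does not include code, so the following is only a minimal illustrative sketch of the general idea: taking a handful of seed samples (e.g. from researcher reports or false negatives) and generating label-preserving text variants to train on. The function name, parameters, and perturbation choices here are all hypothetical, not Abnormal Security's actual pipeline.

```python
import random

def augment(text, n_variants=5, p_drop=0.1, seed=0):
    """Generate noisy variants of a seed attack sample by randomly
    dropping tokens and swapping an adjacent pair (simple,
    label-preserving perturbations). Purely illustrative."""
    rng = random.Random(seed)
    tokens = text.split()
    variants = []
    for _ in range(n_variants):
        # Randomly drop each token with probability p_drop.
        toks = [t for t in tokens if rng.random() > p_drop]
        # Swap one adjacent token pair to vary word order.
        if len(toks) > 2:
            i = rng.randrange(len(toks) - 1)
            toks[i], toks[i + 1] = toks[i + 1], toks[i]
        variants.append(" ".join(toks))
    return variants

# Hypothetical seed sample, as might come from a security researcher.
seed_sample = "urgent verify your account credentials to avoid suspension"
augmented = augment(seed_sample)
```

In a real system, augmented samples like these would be added to the training set so a quickly retrained model generalizes beyond the exact wording of the original attack.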
Read the article here.
The post Eng Blog: Stopping New Email Attacks with Data Augmentation and Rapidly-Training Models appeared first on Abnormal Security.