
Federated Learning (FL) for Future Security | NetApp

Date

November 7, 2022

Authors

Mohammad Naseri, University College London; Yufei Han, Inria Rennes; Enrico Mariconti, University College London; Yun Shen, NetApp; Gianluca Stringhini, Boston University; Emiliano De Cristofaro, University College London

CCS '22: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, November 2022

Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., to predict the adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structures, security postures, and policies, which might often be undesirable or outright impossible.

In this paper, we explore the feasibility of using Federated Learning (FL) to predict future security events. To this end, we introduce Cerberus, a system enabling collaborative training of Recurrent Neural Network (RNN) models for participating organizations. The intuition is that FL could potentially offer a middle-ground between the non-private approach where the training data is pooled at a central server and the low-utility alternative of only training local models. We instantiate Cerberus on a dataset obtained from a major security company's intrusion prevention product and evaluate it vis-à-vis utility, robustness, and privacy, as well as how participants contribute to and benefit from the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task and paves the way for deploying federated approaches to predictive security.
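To illustrate the collaborative-training setup described above, the sketch below shows a minimal federated averaging (FedAvg) loop for next-event prediction with a recurrent model in PyTorch. It is not the Cerberus implementation; all names (NUM_EVENTS, LocalEventModel, local_update, fed_avg, the toy client datasets) are hypothetical, and the data is random placeholder input standing in for each organization's private event sequences.

# Minimal FedAvg sketch for next-event prediction with an RNN.
# Illustrative only -- not the Cerberus implementation from the paper.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_EVENTS = 100            # assumed size of the security-event vocabulary
EMBED_DIM, HIDDEN_DIM = 32, 64

class LocalEventModel(nn.Module):
    """GRU that predicts the next security event from a sequence of past events."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_EVENTS, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, NUM_EVENTS)

    def forward(self, x):               # x: (batch, seq_len) of event ids
        h, _ = self.rnn(self.embed(x))
        return self.head(h[:, -1])      # logits for the next event

def local_update(global_state, dataset, epochs=1, lr=1e-3):
    """One organization's local training round, starting from the global weights."""
    model = LocalEventModel()
    model.load_state_dict(global_state)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for seqs, next_events in loader:
            opt.zero_grad()
            loss_fn(model(seqs), next_events).backward()
            opt.step()
    return model.state_dict(), len(dataset)

def fed_avg(states_and_sizes):
    """Server step: dataset-size-weighted average of client parameters (FedAvg)."""
    total = sum(n for _, n in states_and_sizes)
    avg = copy.deepcopy(states_and_sizes[0][0])
    for key in avg:
        avg[key] = sum(state[key].float() * (n / total) for state, n in states_and_sizes)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    # Each "organization" holds a private dataset of event sequences (random toy data here).
    client_datasets = [
        TensorDataset(torch.randint(0, NUM_EVENTS, (64, 10)),   # past events
                      torch.randint(0, NUM_EVENTS, (64,)))      # next event
        for _ in range(3)
    ]
    global_state = LocalEventModel().state_dict()
    for round_idx in range(5):
        updates = [local_update(global_state, ds) for ds in client_datasets]
        global_state = fed_avg(updates)

In this arrangement, only model parameters leave each organization, not the raw event data, which captures the "middle ground" the abstract refers to between pooling data centrally and training purely local models.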

Resources 

The paper can be found at: https://dl.acm.org/doi/10.1145/3548606.3560580 
