Towards Intelligible Robust Anomaly Detection by Learning Interpretable Behavioural Models
Network anomaly detection for enterprise cyber security is challenging for a number of reasons. Network traffic is voluminous and noisy, and the notion of what traffic should be considered malicious changes over time as new malware appears. To be most useful, an anomaly detection algorithm should be robust in its performance as new types of malware appear: maintaining a low false positive rate while raising alarms on traffic patterns that correspond to malicious behaviour, and providing intelligible alarms that present their reasoning to support both the analysis of the alarms and the necessary incident response.
In this paper we investigate new methods for building anomaly detectors using interpretable behavioural models which, we argue, can capture "normal" behaviour at a suitable level of abstraction to provide robustness, in addition to being inherently intelligible, since they are interpretable by the security analyst. We consider two such models: a simple Markov chain model with minimal behavioural structure, and a Finite State Automaton (FSA) with more structure, and show how these can be learned from normal network traffic alone. Our results show that the FSA performs better than common classifier methods, with results comparable to standard botnet detection methods. The results also indicate
that the additional structure in the FSA is important. The FSA shows promise for robustness, although further work (with more data) is needed to fully explore this.
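To make the Markov chain approach concrete, the following is a minimal illustrative sketch (not the paper's actual implementation) of a first-order Markov chain anomaly detector: it is fitted on sequences of discretised "normal" traffic events and scores new sequences by their average negative log transition probability, so unfamiliar behaviour receives a high score. The event symbols (`conn`, `dns`, etc.) and the smoothing scheme are assumptions for illustration only.

```python
from collections import defaultdict
import math


class MarkovChainDetector:
    """First-order Markov chain over discrete event symbols.

    Trained on normal traffic only; scores a sequence by its average
    negative log transition probability (higher = more anomalous).
    This is an illustrative sketch, not the method from the paper.
    """

    def __init__(self, smoothing=1e-6):
        self.smoothing = smoothing
        self.counts = defaultdict(lambda: defaultdict(int))  # counts[a][b]
        self.totals = defaultdict(int)                       # outgoing counts from a
        self.alphabet = set()

    def fit(self, sequences):
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                self.counts[a][b] += 1
                self.totals[a] += 1
                self.alphabet.update((a, b))
        return self

    def _prob(self, a, b):
        # Additive smoothing so transitions unseen in training
        # still receive a small, non-zero probability.
        v = len(self.alphabet) or 1
        return (self.counts[a][b] + self.smoothing) / (
            self.totals[a] + self.smoothing * v
        )

    def score(self, seq):
        if len(seq) < 2:
            return 0.0
        nll = sum(-math.log(self._prob(a, b)) for a, b in zip(seq, seq[1:]))
        return nll / (len(seq) - 1)


# Hypothetical usage: event names are invented for illustration.
normal = [["conn", "dns", "http", "close"]] * 50
det = MarkovChainDetector().fit(normal)
familiar = det.score(["conn", "dns", "http", "close"])
unfamiliar = det.score(["conn", "irc", "irc", "scan"])
print(familiar < unfamiliar)  # the unseen behaviour scores higher
```

The FSA model discussed in the paper imposes more structure than this flat transition table, which is what the results suggest matters for detection performance.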