
A Comparative Study on the Impact of Adversarial Machine Learning Attacks on Contemporary Intrusion Detection Datasets

  • Original Research
  • Published:
SN Computer Science

Abstract

Adversarial attack techniques pose a serious challenge to deep neural networks, significantly degrading their ability to perform their intended functions. Various kinds of attacks have been studied, and appropriate defense mechanisms have been proposed, in the Computer Vision and Image Processing domains. Progress in the Intrusion Detection System (IDS) domain has been comparatively limited, although it has been gaining momentum lately. One concern in the IDS domain is that most research has been carried out on old datasets. There is a need to study the properties of newer benchmark datasets and to analyze their characteristics under adversarial settings. Contemporary datasets capture modern network behaviors and attack scenarios, which help IDSs perform well in modern networks; the more realistic a dataset is, the more effective it can make an IDS model in a real environment. This paper addresses this concern by studying recent datasets in the light of adversarial perturbations. We analyze how various adversarial attack algorithms, under white-box settings, impact contemporary IDS datasets, namely UNSW-NB15, Bot-IoT, and CSE-CIC-IDS2018. The paper summarizes the study and discusses how various classification algorithms perform when an IDS model is trained on each of the chosen datasets. The results indicate that adversarial examples successfully decrease the detection capabilities of the IDS models covered in the study. We provide a conclusion based on the evaluation results and share our thoughts on directions for future work.
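
To illustrate the white-box setting described above, the sketch below trains a small Keras classifier on synthetic stand-in feature vectors and perturbs them with the Fast Gradient Sign Method (FGSM), then compares detection accuracy on clean and adversarial inputs. This is a minimal, hedged example: the synthetic data, network architecture, and epsilon value are illustrative assumptions, not the exact configuration, datasets, or attack suite evaluated in the paper.

    # Minimal white-box FGSM sketch against a tabular classifier (illustrative only).
    # The feature matrix is synthetic stand-in data; in the study the inputs would be
    # preprocessed records from UNSW-NB15, Bot-IoT, or CSE-CIC-IDS2018.
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20)).astype("float32")   # stand-in feature vectors
    y = (X[:, 0] + X[:, 1] > 0).astype("float32")        # stand-in binary labels

    # Small feed-forward classifier standing in for an IDS model.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=64, verbose=0)

    # White-box FGSM: move each input along the sign of the loss gradient.
    epsilon = 0.1
    x_t = tf.convert_to_tensor(X)
    y_t = tf.convert_to_tensor(y.reshape(-1, 1))
    with tf.GradientTape() as tape:
        tape.watch(x_t)
        loss = tf.keras.losses.binary_crossentropy(y_t, model(x_t))
    grad = tape.gradient(loss, x_t)
    X_adv = x_t + epsilon * tf.sign(grad)

    clean_acc = model.evaluate(X, y, verbose=0)[1]
    adv_acc = model.evaluate(X_adv.numpy(), y, verbose=0)[1]
    print(f"accuracy on clean inputs:       {clean_acc:.3f}")
    print(f"accuracy on adversarial inputs: {adv_acc:.3f}")

The drop from clean to adversarial accuracy in this toy setup mirrors, in miniature, the degradation in detection capability that the study measures across the three contemporary datasets.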



Funding

This study was not funded by any grant.

Author information


Corresponding author

Correspondence to Weiqing Sun.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Information Systems Security and Privacy” guest edited by Steven Furnell and Paolo Mori.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Pujari, M., Pacheco, Y., Cherukuri, B. et al. A Comparative Study on the Impact of Adversarial Machine Learning Attacks on Contemporary Intrusion Detection Datasets. SN COMPUT. SCI. 3, 412 (2022). https://doi.org/10.1007/s42979-022-01321-8


  • DOI: https://doi.org/10.1007/s42979-022-01321-8
