A data-driven attack against support vectors of SVM
Version 2: 2024-06-06, 05:43
Version 1: 2018-09-07, 10:09
conference contribution
posted on 2024-06-06, 05:43, authored by S Liu, W Zhou, J Zhang, Y Xiang, Y Wang, O De Vel
Machine learning (ML) is widely used across disciplines and real-world applications, such as information retrieval, financial systems, healthcare, biometrics and online social networks. However, the security of ML models against deliberate attacks has rarely been considered. Sophisticated adversaries can exploit specific vulnerabilities in classical ML algorithms to deceive intelligent systems. Before developing novel defence methods, it is becoming essential to perform a thorough security evaluation, including an analysis of potential attacks, so that machine learning can be applied safely in adversarial settings. In this paper, we propose, with mathematical proof, an effective attack strategy that crafts foreign support vectors to attack a classic ML algorithm, the Support Vector Machine (SVM). The new attack simultaneously minimizes the margin around the decision boundary and maximizes the hinge loss. We evaluate the attack on several real-world applications, including social spam detection, Internet traffic classification and image recognition. Experimental results highlight that the security of classifiers can be severely worsened by poisoning only a small group of support vectors.
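The abstract's attack objective (inflating the hinge loss via crafted "foreign" support vectors) can be illustrated with a minimal sketch. This is not the paper's algorithm; it only shows, for an assumed fixed linear classifier (the `w` and `b` values here are illustrative), how a point placed on the wrong side of the decision boundary incurs a large hinge loss, which is the quantity the attack seeks to maximize.

```python
import numpy as np

def hinge_loss(w, b, x, y):
    """Hinge loss of one labelled point (x, y) under the linear
    classifier f(x) = w . x + b; zero iff the point sits correctly
    classified outside the margin."""
    return max(0.0, 1.0 - y * (np.dot(w, x) + b))

# Illustrative classifier, not taken from the paper.
w = np.array([1.0, -1.0])
b = 0.0

clean = np.array([2.0, -2.0])    # correctly classified, outside the margin
poison = np.array([-2.0, 2.0])   # crafted point labelled +1 but placed
                                 # deep on the negative side of the boundary

print(hinge_loss(w, b, clean, +1))   # 0.0 -- contributes no loss
print(hinge_loss(w, b, poison, +1))  # 5.0 -- large hinge loss
```

If such a point is injected into the training set, the SVM must either absorb the large loss or move its boundary toward the poison point, which is how poisoning a few support vectors degrades the classifier.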
History
Pagination
723-734
Location
Songdo, Korea
Start date
2018-06-04
End date
2018-06-08
ISBN-13
9781450355766
Language
eng
Publication classification
E Conference publication, E1 Full written paper - refereed
Copyright notice
2018, Association for Computing Machinery.
Editor/Contributor(s)
Unknown
Title of proceedings
ASIACCS 2018 - Proceedings of the 2018 ACM Asia Conference on Computer and Communications Security
Event
Computer and Communications Security. Asia Conference (13th : 2018 : Songdo, Korea)