
USENIXSec2023

Number of papers: 3

  • Authors: Chen, Yizheng and Ding, Zhoujie and Wagner, David
  • Abstract: Machine learning methods can detect Android malware with very high accuracy. However, these classifiers have an Achilles heel, concept drift: they rapidly become out of date and ineffective, due to the evolution of malware apps and benign apps. Our research finds that, after training an Android malware classifier on one year's worth of data, the F1 score quickly dropped from 0.99 to 0.76 after 6 months of deployment on new test samples....
  • Link: Read Paper
  • Labels: static analysis, bug detection, empirical study

  • Authors: Sandoval, Gustavo and Pearce, Hammond and Nys, Teo and Karri, Ramesh and Garg, Siddharth and Dolan-Gavitt, Brendan
  • Abstract: Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we taske...
  • Link: Read Paper
  • Labels: code generation, program synthesis, empirical study

  • Authors: Zhang, Zhuo and Tao, Guanhong and Shen, Guangyu and An, Shengwei and Xu, Qiuling and Liu, Yingqi and Ye, Yapeng and Wu, Yaoxuan and Zhang, Xiangyu
  • Abstract: Deep Learning (DL) models are increasingly used in many cyber-security applications and achieve superior performance compared to traditional solutions. In this paper, we study backdoor vulnerabilities in naturally trained models used in binary analysis. These backdoors are not injected by attackers but rather products of defects in datasets and/or training processes. The attacker can exploit these vulnerabilities by injecting some small fixed input pattern (e.g., an instruction) called backdoor ...
  • Link: Read Paper
  • Labels: code model, code model security, code model training, binary code model