This repository contains the code for our ICLR 2025 paper "Glimpse: Enabling White-Box Methods to Use Proprietary Models for Zero-Shot LLM-Generated Text Detection", which borrows some code from Fast-DetectGPT.
Paper | Demo | OpenReview
We are working on the demo and will update the link soon.
| Method | ChatGPT | GPT-4 | Claude-3 Sonnet | Claude-3 Opus | Gemini-1.5 Pro | Avg. |
|---|---|---|---|---|---|---|
| Fast-DetectGPT (Open-Source: gpt-neo-2.7b) | 0.9487 | 0.8999 | 0.9260 | 0.9468 | 0.8072 | 0.9057 |
| Glimpse (Fast-DetectGPT) (Proprietary: gpt-3.5) | 0.9766 (↑54%) | 0.9411 (↑41%) | 0.9576 (↑43%) | 0.9689 (↑42%) | 0.9244 (↑61%) | 0.9537 (↑51%) |
- Python 3.12
- PyTorch 2.3.1
- Set up the environment:
pip install -r requirements.txt
(Note: the baseline methods run on a single Tesla A100 GPU with 80 GB of memory, while Glimpse runs in a CPU-only environment.)
The following folders are used in our experiments:
- ./exp_main -> experiments with the five latest LLMs as source models (main.sh).
- ./exp_langs -> experiments on six languages (langs.sh).
(Note: we share the data and results for convenient reproduction.)
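Putting the steps above together, reproduction might look like the following sketch. The script names `main.sh` and `langs.sh` come from the folder list above; their exact arguments and any required API keys are assumptions and may differ in the actual repository.

```shell
# Install dependencies (Python 3.12, PyTorch 2.3.1 as listed above)
pip install -r requirements.txt

# Reproduce the main experiments (five latest LLMs as source models);
# outputs land under ./exp_main
bash main.sh

# Reproduce the multilingual experiments (six languages);
# outputs land under ./exp_langs
bash langs.sh
```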
If you find this work useful, you can cite it with the following BibTeX entry:
@article{bao2024glimpse,
  title={Glimpse: Enabling White-Box Methods to Use Proprietary Models for Zero-Shot LLM-Generated Text Detection},
  author={Bao, Guangsheng and Zhao, Yanbin and He, Juncai and Zhang, Yue},
  year={2024}
}