Add note about using AI in "Review criteria" #1411
Conversation
The proposed text could do with some minor stylistic/grammatical improvements:
For the sake of bookkeeping, linking to prior discussion: #1297. For the record, I am in favor of author disclosure of LLM use in code, for the reasons described in that thread. I think it would be relatively straightforward to disclose "I used Copilot in VS Code for inline suggestions" or "Copilot was used to generate the first draft of this," and so on. If people are using LLMs, it should be their responsibility to know how they're using them, and I don't think it's fair to ask reviewers to review LLM code without being able to know whether it is LLM code or not. This is both for ethical reasons and to prevent JOSS from becoming a way to outsource labor: autogenerate a crappy package, and then use JOSS reviewer labor to improve it.
I agree with @sneakers-the-rat, and would go further: people should know (and thus be defensive about the fact) that Copilot and similar products can automate what would assuredly be copyright violation if a human did it. For example, Copilot reproduces substantial portions of SuiteSparse (an LGPL library) with copyright stripped and only trivial cosmetic changes (article from 2022), and the question of whether slight obfuscation is a defense against copyright infringement is the subject of ongoing class-action litigation. (Obfuscation is not a defense against copyright infringement by a human, which is why clean-room design exists.) Companies are banking on use of such products being sufficiently ubiquitous that they can obtain a legislative solution if courts rule against them. They may be successful, and yet I object on ethical grounds. Using products like Copilot to steal community work with plausible deniability is going to rack up technical debt while starving high-quality libraries of the resources to do original development and necessary maintenance. I don't think JOSS should encourage practices that harm the community JOSS was created to serve. There is a reality that many people use these products, often in lighter ways, and a blanket ban is not enforceable. But we can articulate that the products are ethically fraught, legally questionable, and threaten the goodwill of the community.
For the sake of this PR, since this is a contentious topic and reviewers/authors are asking about it, maybe something like: "JOSS does not yet have a policy regarding the use of LLMs, but authors are responsible for understanding and explaining submitted code and its provenance, and should respond in good faith to reviewer questions about LLM use as they would with any other topic."
I like this. I feel like responsibility is a key concept here.
Thank you for the discussion! Yes, I also feel that the responsibility of authors should be emphasized. It is their code, after all; AI cannot be held responsible for anything.
change the description of AI use after discussion
I have updated the text now, thanks for the suggestion, @sneakers-the-rat! Is it OK to merge?
ok by me :) (but i am not the boss of anything <3)
Since the other people in this thread did not have objections, I will merge this. This can be changed when needed. 👍
I have been asked by reviewers whether we have a policy on AI-generated code. Perhaps we should just add a short clarifying text to our docs. Here's a suggestion: