
Add note about using AI in "Review criteria" #1411

Merged: 2 commits into openjournals:main on Mar 5, 2025

Conversation

jromanowska
Contributor

I have been asked by reviewers whether we have a policy on AI-generated code. Perhaps we should just add a short clarifying text to our docs. Here's a suggestion:

@logological

The proposed text could do with some minor stylistic/grammatical improvements:

JOSS does not require authors to declare whether they have used AI to generate code. (AI tools are now often integrated in programming environments and it would be difficult for editors to check whether they have been used.) However, authors should always check their code and be able to change it and adapt it to reviewers' comments.

@sneakers-the-rat
Contributor

sneakers-the-rat commented Mar 4, 2025

for the sake of bookkeeping, linking to prior discussion - #1297

for the record i am in favor of author disclosure of LLM use in code for the reasons described in that thread. I think it would be relatively straightforward to disclose "i used copilot in vscode for inline suggestions." or "copilot was used to generate the first draft of this" and so on - if ppl are using LLMs, it should be their responsibility to know how they're using them, and I don't think it's fair to ask reviewers to review LLM code without being able to know whether it is LLM code or not. This is both for ethical reasons, and also to prevent JOSS from becoming a way to outsource labor: autogenerate a crappy package, and then use JOSS reviewer labor to improve it.

@jedbrown
Member

jedbrown commented Mar 4, 2025

I agree with @sneakers-the-rat, and would go further: people should know (and thus be wary) that the use of Copilot and similar products can automate what would assuredly be copyright violation if a human did it. For example, Copilot reproduces substantial portions of SuiteSparse (an LGPL library) with copyright notices stripped and only trivial cosmetic changes (article from 2022), and the question of whether slight obfuscation is a defense against copyright infringement is the subject of ongoing class-action litigation. (Obfuscation is not a defense against copyright infringement by a human, which is why clean-room design exists.) Companies are banking on the use of such products being sufficiently ubiquitous that they can obtain a legislative solution if courts rule against them. They may be successful, and yet I object on ethical grounds.

Using products like Copilot to steal community work with plausible deniability is going to rack up technical debt while starving the high-quality libraries of resources to do original development and necessary maintenance. I don't think JOSS should encourage practices that harm the community JOSS was created to serve. There is a reality that many people use these products, often in lighter ways, and a blanket ban is not enforceable. But we can articulate that the products are ethically fraught, legally questionable, and threaten the goodwill of the community.

@sneakers-the-rat
Contributor

for the sake of this PR, since this is a contentious topic and reviewers/authors are asking about it, maybe something like "JOSS does not have a policy re: use of LLMs yet, but authors are responsible for understanding and explaining submitted code and its provenance, and should respond in good faith to reviewer questions about LLM use as they would with any other topic."

@danielskatz
Collaborator

> for the sake of this PR, since this is a contentious topic and reviewers/authors are asking about it, maybe something like "JOSS does not have a policy re: use of LLMs yet, but authors are responsible for understanding and explaining submitted code and its provenance, and should respond in good faith to reviewer questions about LLM use as they would with any other topic."

I like this. I feel like responsibility is a key concept here.

@jromanowska
Contributor Author

Thank you for the discussion! Yes, I also feel that the responsibility of the authors should be emphasized; it is their code, after all, and AI cannot be held responsible for anything.

change the description of AI use after discussion
@jromanowska
Contributor Author

I have updated the text now, thanks for the suggestion, @sneakers-the-rat! Is it OK to merge?

@sneakers-the-rat (Contributor) left a comment


ok by me :) (but i am not the boss of anything <3)

@jromanowska
Contributor Author

Since the other people in this thread did not have objections, I will merge this. This can be changed when needed. 👍

@jromanowska jromanowska merged commit d093acb into openjournals:main Mar 5, 2025
1 check passed