What legal concerns does Mozilla/the MDN face for AI responses that lead to inaccessible code? #413
---
Is there an anticipated timeline for an answer here? Even a rough timeline would help counter the perception that closing mdn/yari#9208 and deferring to a synchronous meeting was an attempt to deflect, obfuscate, and suppress community questions.
---
It has been a month since I last asked, and I noticed a new Community Call has been posted to the Discord. I would like to reiterate my question about an anticipated timeline for an answer, and to re-emphasize how not answering it can be perceived. I feel this is within the scope of this thread, as the question was asked prior to the session.
---
@LeoMcA @caugner @Rumyra has MDN abandoned its promise to answer all community questions?
---
As pointed out in this comment in mdn/yari#9208, an LLM can and does summarize content in ways that are not representative of objective truth.
When it comes to accessibility, this can mean the difference between people being able to use web experiences and not. In the United States, this is a Civil Rights concern, and it is an issue under many international policies as well. This applies to government-related content as well as the private sector (see National Federation of the Blind v. Target Corp., Gil v. Winn-Dixie, etc.).
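To make the stakes concrete, here is a hypothetical illustration (not taken from actual MDN AI output) of the kind of suggestion an LLM might present as equivalent to accessible markup:

```html
<!-- Hypothetical LLM suggestion: looks and clicks like a button for
     sighted mouse users, but a div is not keyboard-focusable, exposes
     no role to screen readers, and never activates on Enter/Space.
     submitForm() is a placeholder handler, assumed for illustration. -->
<div class="btn" onclick="submitForm()">Submit</div>

<!-- What accessible markup calls for: a native button is focusable,
     announced as a button by assistive technology, and activates
     from the keyboard. -->
<button type="submit">Submit</button>
```

An AI summary that treats these two as interchangeable would read as plausible to most developers while excluding keyboard and screen-reader users entirely.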
Setting aside the concerns of whether Mozilla should have added this feature, and how they went about doing so, my question is: what legal responsibilities does Mozilla incur for consciously and deliberately adding a feature that will help enable and facilitate the creation of inaccessible experiences?
I view this as different from providing static, neutral documentation. The LLM's interface operates in such a way that its output can be interpreted as MDN's opinion about how the code should be understood. Furthermore, "AI" output presented within the larger context of the MDN makes that output seem objective and authoritative. To me, this represents legal and reputational risk.
While not a question I think the community call needs to answer immediately, I'd also encourage them to consider the network effects: AI-suggested inaccessible code, presented as accessible, gets produced and then re-incorporated into the training data of this and other code-generating and code-interpreting LLMs.