Why embed LLM nonsense in MDN? #409
Unanswered
MrLightningBolt asked this question in Community calls
Replies: 2 comments
-
Hi @MrLightningBolt, thanks for your feedback. We have taken down the AI Explain feature based on community feedback. As for the AI Help feature, MDN serves several different user personas: experts, senior/experienced developers, and junior developers/learners. Experts know how to use MDN and where to look to find an answer, whereas junior developers often struggle to find the information they need. AI Help was launched to support junior developers by answering their query while linking the MDN pages that were referenced to formulate that answer.
-
LLMs are totally unsuitable as technical guides. They have no knowledge in any meaningful sense; they chain text tokens together according to a statistical model, which means there is no mechanism, even in theory, by which AI Explain and the like could reliably provide correct information. The people most likely to use such a misfeature are exactly the people who lack the knowledge and experience to tell when the LLM is lying to them. It cannot, even in principle, be made safe to use.

All that being the case: why? Why make MDN irrevocably worse by deliberately embedding incorrect information that can never be corrected? What is the actual reason?
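To make the "statistical model" point concrete, here is a toy sketch of autoregressive generation. The bigram counts and tokens are invented for illustration and vastly simplified compared to a real LLM, but the generation loop is the same idea: the next token is sampled from a learned probability distribution over the previous context, with no step anywhere that checks whether the output is true.

```python
import random

# Toy bigram "language model": P(next | prev) as raw counts.
# These counts are made up for illustration; a real LLM learns
# billions of conditional probabilities from text, but generation
# is still just sampling the next token from a distribution.
BIGRAMS = {
    "<s>":        {"CSS": 3, "JavaScript": 2},
    "CSS":        {"grid": 2, "is": 3},
    "grid":       {"is": 2},
    "JavaScript": {"is": 3},
    "is":         {"deprecated": 1, "supported": 2},  # equally "fluent" either way
    "deprecated": {"</s>": 1},
    "supported":  {"</s>": 1},
}

def sample_next(prev: str) -> str:
    """Sample the next token proportionally to its bigram count."""
    dist = BIGRAMS[prev]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

def generate(max_len: int = 10) -> str:
    """Chain tokens one at a time until end-of-sequence or max_len."""
    token, out = "<s>", []
    for _ in range(max_len):
        token = sample_next(token)
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(generate())  # fluent-looking output, truth-blind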