Should Wikiversity allow editors to post ChatGPT generated content?

This debate is phrased in terms of Wikiversity and ChatGPT, but can to some extent be generalized to other wiki projects and other "generative AI" tools such as Google Gemini and Claude AI.

Key considerations: greater ease of text production, copyright risk, reliability/accuracy, reduced learning by doing, and the overwhelming of human reviewers by sheer volume.

A similar, more general question: Should Wikiversity allow editors to post LLM/generative AI generated content?

Pro

 * Using ChatGPT allows editors to produce material that they would not be able to produce themselves, or not so quickly.
 * Editors should be able to post ChatGPT content provided the output is clearly marked as originating from ChatGPT; the reader then knows to take into account an increased risk of mistakes.
 * There is no need to post raw ChatGPT content at all, for these reasons: 1) textual AI responses often serve only as starting points for an investigation; 2) GPT models rarely produce something worthwhile in a single prompt-response iteration, and painstakingly attributing all the conditions turns the output into routine information that usually is not published (better to limit oneself to a description of methodology instead); 3) specific responses of an AI may change over time, or under other uncontrolled conditions.
 * Until the arguments made by the opposition on wikipedia.org are addressed, a complete ban (rather than a nuanced/selective one) on the use of ChatGPT (and other generative AI) is draconian/overreactive.
 * A clear and laconic policy such as a complete ban on AI-generated content may increase the overall credibility of the platform among a wide audience; a tangled policy may, conversely, undermine that trust.

Con

 * The copyright status of ChatGPT output is not settled: there is an ongoing lawsuit. See Is the output of ChatGPT copyrighted?
 * This could also be interpreted as an argument for allowing such content, i.e., until such time as it is determined to be copyrighted.
 * Unless the editor is a subject-matter expert, it will often be hard for the editor to detect mistakes, as pointed out by Stack Overflow.
 * Expanding on the above, ChatGPT, having "read" much more material than most humans, seems especially adept at feigning competence that it actually lacks.
 * An editor learns something by trying to research a topic and formulate text on it, even if the editor makes mistakes or the writing has inferior style. By contrast, by submitting a query to generative AI, the editor learns very little: there is no longer learning by doing.
 * Editors using ChatGPT can insert vast amounts of material into a Wikiversity page, at a rate so fast that human reviewers will be unable to properly evaluate its quality.
 * On the other hand, editors can try to use generative AI in the content evaluation process as well.
 * Stack Overflow bans the use of generative AI, even for rewording purposes. Its operators are smart and competent, and their policy page contains ample discussion. This is inconclusive yet suggestive.
 * The Stack Overflow policy page indicates they received thousands of answers produced by generative AI. By contrast, Wikiversity does not face the problem at this scale.
 * Comment: It does not face it yet, but it may eventually. On the other hand, one may wait until the problem becomes acute before acting.
 * The Stack Overflow company decided not to ban generative AI across its whole network (it operates other Stack Exchange sites besides Stack Overflow). Thus, one would think there is something specific about Stack Overflow and its domain of software development.