As we've seen in recent months, AI-generated content posted to the wiki continues to be an issue. We currently have no formalized process for identifying AI-generated content, nor do we have specific personnel tasked with handling it. As AI models become more sophisticated, we have an increasing need for vigilance on our end. So:
Proposal: Form a team dedicated to identifying AI-generated content.
This team's duty would be simple. It would be a clearly designated group of individuals tasked with identifying AI-generated content using common, key tells (which we would establish in policy). The team would conduct the investigation, compile evidence, and submit its findings to Disc/Admins so the user in question can be revoked/banned in accordance with existing policy regarding AI-generated materials.
Benefits: Having a clear group of individuals tasked with this will make reporting AI-generated materials easier for both staff and users: they would know specifically who to contact. This will also free up staff spaces from current reporting practices that can clog chat; these reports would instead be handled in a separate space by the team in question. Reports would no longer need to be a Disc issue, and revokes/bans could be logged in a single master thread on O5 rather than in individual disc/nondisc threads. Again, this frees staff spaces from clutter.
Downsides: The innate downside right now is that there will never be 100% solid evidence that a user's content is AI-generated, at least in the case of text. As a result, any internal policy/guidelines for content identification run the risk of false positives. Likewise, I can see potential concerns about this team being moved out of general staffchat into an existing server or its own team server: the investigations would become less visible to staff as a result (even if said spaces remained public to staffers).
What I need: One, I need bodies. I need people interested in joining this hypothetical team who are willing to take on the team's potential workload (i.e., reporting, investigation, discussion). Two, I need assistance forming policy related to this team and its operations.
Policy questions: I'll admit I've never been a policy guy, so please raise any policy questions I've missed.
1. Should this team stand as its own entity, or become a subteam of an existing group (MAST, Crit, Disc, etc.)?
2. How should this team be structured? Is a simple Captain/Vice Captain structure sufficient?
3. How should this team formalize the reporting of AI-generated content, and what expectations should be set regarding investigation and the compiling of evidence?
4. How should final reports be made? Should they go directly to Disc, or simply to an Admin so the user may be revoked/banned?
5. How should finished cases be logged?
These are the questions I'm currently able to pose. Again, I'm not a policy wizard, so while I do have thoughts on all of the above, I would like to see discussion from interested staffers first. I am also willing to serve as this hypothetical team's captain/section head.
This discussion will be open for one week. There is also an ongoing thread in the public channel of the official staff Discord server.