o4 Mirror: https://scp-wiki.wikidot.com/forum/t-16976823/discussion-formation-of-an-ai-detection-team
The Problem:
In the past few months, Disciplinary has been handling the various cases involving AI-generated content. While Disciplinary has managed well so far, there are a few oversights that remain in place due to the unofficial nature of how this system works.
As it stands now, AI-generated content investigations and determinations (hereafter referred to simply as AI Cases) begin when a user flags an article as possibly containing AI-generated material. From there, Disciplinary investigates the article before determining whether or not it possesses AI-generated content.
The issue is that there is no clear requirement for how many members of Disciplinary must agree that an article is AI-generated. Unlike other Disciplinary matters, AI Cases are sometimes decided by no more than two members of Disciplinary, and in some extreme cases a single member's determination that an article appeared AI-generated resulted in the user's revocation. For example, in this case: https://05command.wikidot.com/forum/t-16959844/ai-record-candon-halcomb, the AI Content was not just obvious but blatantly stated. Still, even though that particular case was clear-cut, and while I doubt anyone on the team would abuse this loophole, the loophole exists.
Proposed Solution:
The formal creation of a dedicated team along with a strict procedure to process AI Cases.
The team as a whole will be a subteam of Disciplinary since determinations can lead to disciplinary action (including revokes and permanent bans).
The team will have a single team lead whose duty is to ensure all AI Cases reach one of two verdicts in a timely manner:
- AI Content Detected, where AI Content is detected and evident to members of the team.
- AI Content Not Detected, where AI Content is not detected or the content is determined unlikely to be AI-generated.
If AI Content is determined to be present, the offending user will be revoked on first offense and permanently banned on second offense.
Users determined to have used AI-generation who lie or fabricate evidence to the contrary will be permanently banned, regardless of whether it is a first or second offense.
Articles that are determined to have AI Content are to be summarily deleted, a process requiring three (3) staff witnesses.
Each AI Case requires a quorum of three (3) and any verdict must receive a supermajority of 75% of the votes in favor of it in order to pass. In the event a supermajority is not reached but quorum is reached, the Team Lead/Vice Captain is required to gather the entirety of the AI Detection Team to vote/weigh in on the discussion. The AI Case will remain open for another week or until all members have voted, whichever comes first. If, at this point, a supermajority is still not reached, the verdict defaults to "AI Content Not Detected".
In the event quorum cannot be reached on a given AI Case with AI Detection Team members, standard Disciplinary Team members may be asked to weigh in and vote.
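To make the voting procedure above unambiguous, here is a minimal illustrative sketch of the verdict logic in Python. All names (`resolve_verdict`, the vote representation) are hypothetical and exist only to clarify the rules; they are not part of the proposal itself.

```python
# Illustrative sketch of the proposed AI Case verdict procedure.
# votes: list of booleans, True = voted "AI Content Detected".

QUORUM = 3            # minimum votes for a valid case
SUPERMAJORITY = 0.75  # 75% of votes cast must favor a verdict

def resolve_verdict(votes):
    """Return the verdict string, or None if the case must escalate
    to the full AI Detection Team (quorum met, no supermajority).
    A case that still lacks a supermajority after escalation and one
    week defaults to "AI Content Not Detected"."""
    if len(votes) < QUORUM:
        raise ValueError("Quorum not reached; ask Disciplinary members to vote.")
    detected = sum(votes)
    if detected / len(votes) >= SUPERMAJORITY:
        return "AI Content Detected"
    if (len(votes) - detected) / len(votes) >= SUPERMAJORITY:
        return "AI Content Not Detected"
    return None  # no supermajority: escalate to the full team
```

For example, three unanimous "detected" votes pass immediately, while a 2-1 split (about 67%) falls short of the 75% threshold and escalates to the full team.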
All cases are to be logged on O5, though specific details are to be discussed on the Staff Discord Server. All AI Cases are to keep an accessible copy of the suspected material's source. This should be done by storing the source material in a code block collapsible on the AI Case's specific forum thread on O5.
A list of common AI indicators will be maintained to assist in training new members and to assist established members.
Benefits/Concerns:
In terms of benefits, this formally codifies the requirements of the team and the policies on which it acts. It removes potential loopholes and angles of abuse. Establishing a dedicated team lead also ensures that no case goes unfinished and that investigations don't linger indefinitely, causing undue stress to users suspected of AI-generation.
In terms of concerns, there will always be the worry that AI Content is difficult to conclusively prove. This proposal hopes to alleviate that concern by requiring at least three (3) individuals to agree that AI Content is present.
What Would Need to be Done:
- A team lead would need to be chosen, ideally from existing members of Disciplinary who regularly perform AI Detection.
- Existing Disciplinary members will need to decide whether they wish to officially join the proposed AI Detection Team.
This discussion will be open for one week.