My boss and I explained how you could just have AI write your article, then run it through those checking systems yourself. Keep fixing it until it beats the systems, which with AI won't take long.
In this whole article there wasn't even a single sentence on how these AI detectors can produce false positives. Cheating and plagiarism are incredibly serious accusations, and they can ruin a young person's career before it even gets started. But of course these corpo types don't give a shit. They'll keep on pretending that ChatGPT "always has a tell".
That's not even to mention that there are plenty of ways of using AI that aren't cheating. You can use it to proofread, to edit, and to critique your essay.
This is probably cooked up by the same people who conducted massive invasions of privacy during the pandemic by demanding live feeds and 360° scans of students' private rooms. The worst part is that false positives could be intentionally faked to fail or expel "undesirable" students with little or no evidence. It's utterly fucked from all sides.
I think the dumb thing here is treating the use of AI tools as cheating. We need to design tests so that candidates can't perform well using only ChatGPT or other tools, but have to apply their own skills.
There's plenty of stuff ChatGPT can't do; test candidates on those things.
Just ask GPT-4 to "generate output that is undetectable as created by AI."
We've been using variations on this phrase as a last-stage "converter" to make sure the final output is unique enough.
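A minimal sketch of what such a last-stage converter pass might look like, assuming the official `openai` Python SDK (>= 1.0). The model name, prompt wording, and function names here are illustrative placeholders, not the exact setup the commenter describes:

```python
# Sketch of a last-stage "converter" rewrite pass.
# Assumes the official openai Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; prompt wording is illustrative.

REWRITE_INSTRUCTION = (
    "Rewrite the following text so it reads naturally and is "
    "undetectable as created by AI. Preserve the original meaning."
)

def build_converter_messages(draft: str) -> list[dict]:
    """Assemble the chat messages for one converter pass."""
    return [
        {"role": "system", "content": REWRITE_INSTRUCTION},
        {"role": "user", "content": draft},
    ]

def convert(draft: str, model: str = "gpt-4") -> str:
    """Run a single rewrite pass over the draft (requires an API key)."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=build_converter_messages(draft),
    )
    return resp.choices[0].message.content
```

In the loop the earlier comment describes, you would feed the draft through `convert`, check it against the detector, and repeat until it passes.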