For example, with the phrase “My favorite tropical fruits are __,” the LLM might start completing the sentence with the tokens “mango,” “lychee,” “papaya,” or “durian,” and each token is given a probability score. When there’s a range of different tokens to choose from, SynthID can adjust the probability score of each predicted token, in cases where it won’t compromise the quality, accuracy, and creativity of the output.
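The mechanism the quote describes can be sketched as a keyed bias on token probabilities. This is a toy greenlist-style scheme, not Google's actual SynthID algorithm; the names (WATERMARK_KEY, GREENLIST_FRACTION, BIAS) and the tiny vocabulary are illustrative assumptions.

```python
import hashlib
import math
import random

# Toy watermark sketch: deterministically nudge the probability of a keyed
# "green" subset of tokens at each step. Illustrative only, not SynthID.
VOCAB = ["mango", "lychee", "papaya", "durian", "banana", "guava"]
WATERMARK_KEY = 42          # assumed shared secret
GREENLIST_FRACTION = 0.5    # fraction of the vocabulary that gets boosted
BIAS = 2.0                  # amount added to the logits of green tokens

def greenlist(prev_token: str) -> set:
    """Seed a PRNG with the previous token + key and pick the green subset."""
    seed = int(hashlib.sha256(f"{WATERMARK_KEY}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREENLIST_FRACTION)
    return set(rng.sample(VOCAB, k))

def watermarked_probs(logits: dict, prev_token: str) -> dict:
    """Boost green-token logits, then renormalize with a softmax."""
    green = greenlist(prev_token)
    adjusted = {t: l + (BIAS if t in green else 0.0) for t, l in logits.items()}
    z = sum(math.exp(v) for v in adjusted.values())
    return {t: math.exp(v) / z for t, v in adjusted.items()}

# With a flat distribution, the green half of the vocab ends up more likely,
# so generated text statistically prefers green tokens without forcing them.
probs = watermarked_probs({t: 1.0 for t in VOCAB}, prev_token="are")
```

Because the green set is derived from a secret key plus context, anyone holding the key can later test whether a text contains suspiciously many green tokens, while a reader without the key sees normal-looking output.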
So I suppose with a larger text, if all lists of things are “LLM Sorted”, it’s an indicator.
That’s probably not the only signal; if it can detect a bunch of these indicators, there’s a higher likelihood it’s LLM text.
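The “bunch of these indicators” idea can be made concrete with a simple statistic: assuming a keyed greenlist-style watermark like the one the quoted example hints at, a detector counts how many tokens fall in the green set and compares that to chance. Everything here (the key, the 50% green fraction, the z-score threshold) is an illustrative assumption, not SynthID’s actual detector.

```python
import hashlib
import math
import random

# Toy detector for a greenlist-style watermark (illustrative only).
# Assumes the generator biased a keyed "green" half of the vocabulary
# at each step, seeded by the previous token.
WATERMARK_KEY = 42     # assumed shared secret
GREEN_FRACTION = 0.5   # fraction of the vocab that was boosted

def is_green(prev_token: str, token: str, vocab: list) -> bool:
    """Recompute the green set for this context and test membership."""
    seed = int(hashlib.sha256(f"{WATERMARK_KEY}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * GREEN_FRACTION)
    return token in set(rng.sample(vocab, k))

def green_z_score(tokens: list, vocab: list) -> float:
    """z-score of the observed green-token count against the unwatermarked
    expectation (a binomial with p = GREEN_FRACTION). Large positive values
    suggest watermarked text; values near zero suggest ordinary text."""
    hits = sum(is_green(prev, tok, vocab) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std
```

The “many weak indicators” intuition shows up directly in the math: each token contributes only a coin-flip’s worth of evidence, but over a longer text the z-score grows, which is also why short snippets are hard to attribute.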
Text watermarking? How does that work?
It says ‘as a large language model’ in the beginning, and ‘sincerely’ in the end
It gives an example: