Curiously it focuses on overly descriptive phrasing, and factually incorrect statements, as signs of AI.
I don't think this is accurate. AI has a flavour or tone we all know, but it could have generated factually plausible statements (that you could not diagnose in this test) or plausible text.
I could not tell the real from fake music at all.
I support (and pay for) Kagi, but wasn't overly impressed here. At worst I think it might give people too much confidence. Wikipedia has a great guideline on spotting AI text and I think the game here should integrate and reflect its contents: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
> I could not tell the real from fake music at all.
Not sure if it's in my head (haven't done a blind test or anything), but all AI music I've heard has painfully bad drum acoustics (very clicky). It seems like the most telltale marker, although I'd love to make a "spot the AI song" game to prove myself right or wrong.
I found the music trivially distinguishable, or at least I thought it was. Maybe I just got lucky. The AI songs had lyrics that, to me, seemed like something that wouldn't be in most songs. I've heard far better AI-generated songs that I'd have a hard time distinguishing if I wasn't told.
Right. Its examples fall into categories like:
- AI slop is trivially factually wrong, and frequently overconfident.
- AI slop is verbose.
But, as you note, IRL this is not usually the case. It might have been true in the GPT-3.5 or early GPT-4 days, but things have moved on. GPT-5.1 Pro can be laconic and is rarely factually wrong.
The best way to identify AI slop text is by its use of special and nonstandard characters. A human would usually write "Gd2O3" for gadolinium oxide, whereas an AI would default to "Gd₂O₃". ChatGPT also loves to use the non-breaking hyphen (U+2011), whereas humans typically use the standard hyphen-minus character (U+002D). There's more along these lines. The issue is that the bots are too scrupulously correct in the characters they use.
As for music, it can be very tough to distinguish. Interestingly, there are some genres of music that are entirely beyond the ability of AI to replicate.
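A minimal Python sketch of that character heuristic, for anyone curious. The character set and the function name are my own illustrative picks, and this is a weak signal, not a reliable detector (plenty of humans type these characters too):

    # Rough sketch of the character heuristic described above. Treat hits
    # as weak evidence, not proof; word processors insert these characters
    # for human authors all the time.
    SUSPECT = {
        "\u2011": "non-breaking hyphen",
        "\u2014": "em dash",
        "\u2019": "curly apostrophe",
        "\u201C": "curly left double quote",
        "\u201D": "curly right double quote",
        "\u2082": "subscript two (as in Gd2O3 written Gd\u2082O\u2083)",
        "\u2083": "subscript three",
    }

    def suspect_chars(text):
        """Yield (offset, codepoint, description) for each suspect character."""
        for i, ch in enumerate(text):
            if ch in SUSPECT:
                yield i, f"U+{ord(ch):04X}", SUSPECT[ch]

    sample = "Gd\u2082O\u2083 is a rare\u2011earth oxide"
    for offset, cp, desc in suspect_chars(sample):
        print(offset, cp, desc)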
> whereas all humans typically use the standard hyphen-minus character (U+002D).
I made it a point to learn to type the em dash—only to have it stolen by the bots; it's forced me to become reacquainted with my long lost friend, the semicolon.
Hah, well, there's that.
But I was referring to the special hyphen that the AIs frequently use today, and which is a hallmark of AI generated text, as it's not on regular keyboards and difficult to access: https://en.wikipedia.org/wiki/Wikipedia:Non-breaking_hyphen
They're also fond of this apostrophe: ’
Whereas almost every human uses: '
I'm like 99.99999% sure that the use of nonstandard characters, em dashes, fancy quotes, "it's not X, it's Y", etc. was done on purpose from the very beginning, pushed for by various parties with a strong vested interest in monitoring who is using AI and how it's being used.
I kneel, Hideo Kojima. You saw this all coming: https://youtu.be/PnnP4sA80D8
> But, as you note, IRL this is not usually the case.
Except for the huge amounts of already generated slop that gets combined with SEO to pop up in search results.
Oh, if you finetune GPT-4 on an author, it assumes the style so well that people prefer it to human experts doing the same job:
> "Readers Prefer Outputs of AI Trained on Copyrighted Books over Expert Human Writers"
> Interestingly, there are some genres of music that are entirely beyond the ability of AI to replicate.
Sounds interesting, what are some of those genres?
You'll find this surprising, but Suno is completely and utterly incapable of generating ambient music like this: https://www.youtube.com/watch?v=SsA-jxQsx8I
It simply doesn't get it. This sort of thing probably wasn't in its training data.
The really interesting thing is that when I upload something like that track, and tell it to compose something similar, it usually gives me an error and refunds my credits.
Also, and this is far more mainstream, both Suno and ElevenLabs are totally incapable of generating anything like, e.g., Darkthrone's "Transylvanian Hunger." Music that is intentionally unpolished is anathema to them.
I could go on. There are lots. I think that they understand melody and harmony, but they don't understand atmosphere, just in general...
> I support (and pay for) Kagi, but wasn't overly impressed here
This website strikes me as merely a marketing gimmick.
Most likely they see AI as a competitor to search and are trying to survive by pandering to the anti-AI movement.
> I could not tell the real from fake music at all.
Perhaps this is just a sign for you to listen to more (human) music is all!
But then it wouldn't classify as slop?
> Here's why: This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation.
This sounds to me like a message is "poor fakes are generated, and everything else is genuine", which I think would be a very counterproductive message, even now.
Just because information is wrong doesn't mean it's AI generated, people can make up wrong answers too.
I will never understand why people are so obsessed with this. If you don't like it, don't engage with it. If you can't tell the difference and it's entertainment, stop worrying about it.
If veracity matters, use authoritative sources. Nothing has really changed about the skills needed for media literacy.
Correctness is a poor way to distinguish between human-authored and AI-generated content. Even if it's right, which I doubt (can humans not make wrong statements?), it doesn't do anything to help someone who doesn't know much about what they're searching.
Ironically, it seems the descriptions are AI-written?
(minor spoiler)
The text accompanying an image of a painting:
> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation.
Meindert Hobbema, The Avenue at Middelharnis (1689, National Gallery, London)
What bugs me the most about nearly everyone selling AI products is that they apparently want or need to believe in the power of LLMs for everything, not just the product, and this means that they also generate the explanatory texts and descriptions and readmes and... it makes the product itself feel of a much worse quality.
I don't mind that you're selling an AI product if it's good but at least put some humanity on the marketing side.
I was just thinking this. The Wikipedia "signs of AI writing" page that another commenter linked to (https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing) mentions that LLMs overuse the 'rule of three' (e.g. natural imperfections, consistent lighting, and realistic proportions), haha.
I feel like this is a good educational goal but a very poor execution.
We're meant to assume that correct sentences were written by humans and that AI adds glaring factual errors. I don't think it is possible at this point to tell a single human-written sentence from an AI-written sentence with no other context, and it's dangerous to pretend it is this easy.
Several of the AI images included obvious mistakes a human wouldn't have made, but some of them also just seemed like entirely plausible digital illustrations.
Oversimplifying generative AI identification risks overconfidence that makes you even easier to fool.
Loosely related anecdote: A few months ago I showed an illustration of an extinct (bizarre looking) fish to a group of children (ages 10-13ish). They immediately started yelling that it was AI. I'm glad they are learning that images can be fake, but I actually had to explain that "Yes, I know this is not a photo. This animal is long extinct and this is what we think it looked like so a person drew it. No one is trying to fool you."
Kind of reminds me of junk forensic fire science. "Slop Detective" might have been nice in 2022; now it's slop itself. Maybe this is an old link? If someone just published this in the last 90 days, they are an idiot.
There's a lot of anti-AI sentiment in the art world (not news), but real artists are now actively accused of using AI and getting kicked off Reddit or wherever. That tells me there is going to be zero market for 100% human-created art, not the other way around.
Not sure where to submit a bug report but I chose the option for kids and got this as the 'correct' message for a painting:
> Correct! Well done, detective!
> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation.
> Albert Pinkham Ryder, Seacoast in Moonlight (1890, the Phillips Collection, Washington)
The image is not photography; I guess technically it's a photograph of a painting, but still, confusing text.
Hi, dev from Kagi here. You can submit the bug report on https://kagifeedback.org.
I got tripped up by this one (sorry for spoilers):
> Bees collect pollen from flowers and make honey. They also drive tiny cars to get from flower to flower!
The explanation given is that it’s not factually correct, therefore it’s AI slop. Maybe I didn’t pay enough attention to the instructions, but aren’t humans also capable of creating text that is not factually correct, and at times doing so not out of ignorance but for artistic or humorous purposes? This example here sounds like something that would be written by a child with an active imagination, not the kind of “seems plausible but is actually false” slop that LLMs come up with.
>What is "Slop":
>Fake stuff made by computers that tries to look like it was made by real people. It's everywhere online!
Tricking people is not what makes it slop. Being low quality is what makes it slop. This is a dangerous definition as it could mean that anything AI generated could be considered slop, even if it was higher quality than regular things.
Even if AI poetry/music/movies gets really high quality, it's still gonna be slop to me.
The first music sample I got for 8th-12th graders was this charming instrumental chiptune-y piece that was so good (and real), but I can't find it again and forgot to try to Shazam it or something. Did anyone else get it / know the song?
I like the idea, but I think the game progression needs another pass from a designer.
I started on "Level 1" and got 2 things wrong (both false positives if it matters) and instead of feeling like I learned anything, I felt as though I was set up to fail because the image prompt was missing sufficient context or the text prompt was too simple to be human. Either I was dumb or the game was dumb.
Maybe I'm just too old and 8-11 year-old kids wouldn't be so easily discouraged, but I'd recommend:
1. Pick one member of the "slop syndicate" at a time.
2. Show some examples (evidence) before beginning the evaluation.
The idea is great; the actual implementation is, frankly, horrible.
First of all, there are only 27 "slop" image examples but 200 real ones - a very bad ratio. And almost all the real examples are just dated photographs, paintings, or photos of old books - there are genuinely zero (not joking) modern photos or digital artwork. Also, multiple "slop" image examples were actual screenshots of the ChatGPT interface, or clearly cropped screenshots.
Text is even worse - they somehow present it as if LLMs cannot write factually correct or simple text.
I genuinely believe that they should take this down immediately and do a major rework, because at this stage it will only do harm. It might teach the children or adults who complete this that AI can never write factually correct text or create very realistic-looking photos (good luck with Nano Banana Pro).
P.S. To see how bad it is, just scrape https://slopdetective.kagi.com/data/images/not_slop/{file} from image_001.webp to 200 and slop/image_001.webp to 027.
Also see https://slopdetective.kagi.com/data/text/slop/l3_lines.json and https://slopdetective.kagi.com/data/text/not_slop/l3_lines.j... for real vs LLM-written text.
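If you want to check the ratio yourself, here's a minimal Python sketch. It assumes those URL patterns still resolve and that the files run image_001.webp through image_200.webp (not_slop) and image_027.webp (slop); the grab helper is my own naming:

    # Minimal sketch, assuming the URL patterns quoted above still resolve.
    # Requires the third-party "requests" package.
    import requests

    BASE = "https://slopdetective.kagi.com/data/images"

    def grab(kind, count):
        for i in range(1, count + 1):
            name = f"image_{i:03d}.webp"
            r = requests.get(f"{BASE}/{kind}/{name}", timeout=30)
            if r.ok:
                with open(f"{kind}_{name}", "wb") as fh:
                    fh.write(r.content)

    grab("not_slop", 200)  # the 200 real examples
    grab("slop", 27)       # the 27 AI examples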
Clicking through 4 examples I found it hard to understand even what I was looking at. All 4 appeared to just be garbage that a human could get Canva to shit out in a couple of minutes, but the features that put them in the AI Slop bucket were things that identified the "slop", not the AI.
A better art test?
https://www.astralcodexten.com/p/ai-art-turing-test
Though maybe these are not examples of "slop" but instead good use of AI?
>Water is wet. Wetness is what water has. What makes water water is that it's wet. The wetness of water means water is wet. So water has wetness.
>This was actually AI-generated slop! Repeats 'water is wet' multiple times.
I didn't know writing "water is wet" repeatedly was enough to de-humanize you.
>In many situations, it could be argued that grass may sometimes appear to have a greenish quality, though this might not always be the case.
>This was actually AI-generated slop! Won't commit to 'grass is green' and uses uncertain words.
What? Not all grass is green.
Fun times ahead.
the whole site itself is slop... lol
hey kids, learn about ai slop by reading this guide to ai slop written by ai and full of ai slop mistakes. sheesh
We wrote the paper on how to remove slop from LLMs.
https://arxiv.org/abs/2510.15061
Also somewhat tangentially relevant video: https://www.youtube.com/watch?v=Tsp2bC0Db8o