But it doesn't look human. Read the text: it's full of pseudo-profound fluff, takes far too many words to make any point, and uses all the rhetorical devices that LLMs always spam: gratuitous lists, "it's not X, it's Y" framing, etc etc. No human ever writes this way.
A human can write that way if they're deliberately emulating a bot. I agree, however, that it's most likely genuine bot text. There's no telling how the bot was prompted, though.
People keep saying this but it's simply untrue. AI inference is profitable. OpenAI and Anthropic have 40-60% gross margins. If they stopped training and building out future capacity, they would already be raking in cash.
They're losing money now because they're making massive bets on future capacity needs. If those bets are wrong, they're going to be in very big trouble when demand levels off lower than expected. But that's not the same as demand being zero.
Those gross profit margins aren't that useful, since training at fixed capability is continually getting cheaper, so there's a treadmill effect: staying in business requires constantly training new models to avoid falling behind. If the big companies stop training models, they only have a year before someone else catches up with way less debt and puts them out of business.
Only if training new models leads to better models. If the newly trained models are just a bit cheaper but not better, most users won't switch. Then the entrenched labs can stop training so much and focus on profitable inference.
Well, that's why the labs are building these app-level products like Claude Code/Codex to lock their users in. Most of the money here is in business subscriptions, I think; how much savings would be required for businesses to switch to products that aren't better, just cheaper?
Stop this trope, please. We (1) don't really know what their margins are, and (2) because of the hard tie-in to GPU costs/maintenance, we don't know (yet) what the useful life (and therefore associated OPEX) of GPUs is.
> If they stopped training and building out future capacity they would already be raking in cash.
That's like saying "if car companies stopped researching how to make their cars more efficient, safer, more reliable they'd be more profitable"
Science fiction is as old as fiction. The Epic of Gilgamesh (c. 2000 BC) and the Ramayana (c. 500 BC) have sci-fi elements. There's nothing innovative or unique about stories that imagine a future instead of a past, present, or alternate reality.
Genres are too vague and generic to be ownable by anybody. Inspiration is not plagiarism.
The cynical part of me says that the people sharing this link with that summary are the cheaters trying to avoid getting caught, given that they are patently abusing the numbers, presumably because they didn't pay attention in math class.
The tests are 90% SENSITIVE. That means that of 100 AI cheaters, 10 won't be caught.
The paper you linked says the tests are 100% SPECIFIC. That means they will *never* flag a human-written paper as mostly AI.
> There is also the mix: if I write two pages and I used two sentences by AI (because I was tired and I couldn't find the right sentence), I may be flagged for using AI.
None of these tools are binary. They give a percentage score, a confidence score, or both.
If you include one AI sentence in a 100-sentence essay, your essay will be flagged as 1% AI and nobody will bat an eye.
It's not, but the fact that one sentence deserves a high score doesn't automatically mean the entire thing will be flagged as a false positive. Unless it's, like, two sentences in total.
No, read the paper. They're going to pass 10% of students who cheated. The 90% figure is the false negative rate, how many AI essays it says are human.
The false positive rate is 0. The tool *never* says human writing is AI.
> The false positive rate is 0. The tool never says human writing is AI.
That cannot be true as it would be easy for a human to write in the style of AI, if they choose to. Whoever is making that claim is lying, because money...
Read the paper dude. It's not an advertisement, it's an investigation. They performed an experiment including 29 human written papers. One of them got a score of 11% likely to be AI, the rest got a score of 0% likely to be AI. The tool never labeled any human writing as AI with high confidence.
> That cannot be true as it would be easy for a human to write in the style of AI, if they choose to.
Is that the nightmare scenario that everybody in this thread is freaking out about?
Students who go to great effort to deliberately try to make it look like they are cheating, they're the ones you're afraid of being falsely accused of cheating?
We're on our way to dystopia because people who go out of their way to look suspicious on purpose, arouse suspicion?
The reliability of any AI tool with potentially severe consequences for people needs to be tested using adversarial patterns. This is nothing new, yet the article in question fails to do it. They test the happy paths and find the results satisfactory.
It is very common for academic investigations to report accuracy above 95%, let alone 90%, while in the real world the same AI tools fail miserably.
So, yes, this is the nightmare scenario that I am afraid of where a simplistic "investigation" will be used to justify the use of unproven AI tools with real life consequences to people.
That's not what 90% effective means. Tests don't work that way.
Tests can be wrong in two different ways, false positive, and false negative.
The 90% figure (which people keep rounding up from 86% for some reason, so I'll use that number from now on) is the sensitivity, or the ability to avoid false negatives. If there are 100 cheaters, the test will catch 86 of them, and 14 will get away with it.

The test's false positive rate, how often it says "AI" when there isn't any AI, is 0%, or equivalently, the test's "specificity" is 100%.
> Turnitin correctly identified 28 of 30 samples in this category, or 93%. One sample was rated incorrectly as 11% AI-generated[8], and another sample was not able to be rated.
The worst that would have happened according to this test is that one student out of 30 would be suspected of AI generating a single sentence of their paper. None of the human authored essays were flagged as likely AI generated.
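For anyone fuzzy on the two error rates being thrown around above, here's a toy sketch of the arithmetic. The 86% sensitivity and 100% specificity figures are the ones quoted in this thread; the class sizes and the helper function itself are invented purely for illustration:

```python
def detector_outcomes(cheaters, honest, sensitivity, specificity):
    """Expected outcomes of a detector applied to a class.

    sensitivity = true-positive rate (fraction of cheaters flagged)
    specificity = true-negative rate (fraction of honest work cleared)
    Returns (caught, missed, falsely_accused).
    """
    caught = round(cheaters * sensitivity)                  # true positives
    missed = cheaters - caught                              # false negatives
    falsely_accused = round(honest * (1 - specificity))     # false positives
    return caught, missed, falsely_accused

# 100 cheaters and 100 honest students, with the figures from the thread:
print(detector_outcomes(100, 100, 0.86, 1.00))  # (86, 14, 0)
```

The asymmetry people keep mixing up: sensitivity only controls how many cheaters slip through, while specificity alone determines how many honest students get falsely accused.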
No rebellion or revolt has ever been successful without arms supplied by outside sponsors.
Random personal small arms that a bunch of people just happen to have at home are not enough to win a revolutionary war against a professional military.
> Random personal small arms that a bunch of people just happen to have at home are not enough to win a revolutionary war against a professional military.
They're absolutely enough to tip the scales in favor of those within that professional military who would rather support the prospective revolution. Such people will definitely exist given a widespread revolt against a violently oppressive regime.
Yes, random small arms make quite a difference in many scenarios. I can say this with zero commentary on whether society broadly should have more guns.
September 9, 2025 - Protesters storm the Nepalese parliament, ransacking it and setting it on fire. Homes of leading politicians are also torched and the politicians themselves attacked.
Soon thereafter, the prime minister resigned along with other ministers and the president dissolved the parliament and scheduled a new election.
I think that counts as a successful rebellion or revolt.
The Gen Z revolution would have gone nowhere had the Nepalese Military not launched a coup, removed the existing government from power at gunpoint, and asked the protesters who to replace them with.
So, fine, there's a third condition: when the entire military mutinies, leaving the regime with no armed defenders.
This is an absurd take. The meaning of "selling" is extremely broad; courts have found such language to apply to transactions as simple as providing an HTTP request in exchange for an HTTP response. Their lawyers must have been begging them to remove that language for the liability it represents.
For all purposes actually relevant to privacy, the updated language is more specific and just as strong.
If they were only selling data in such an 'innocent' way, couldn't they clearly state that, in addition to whatever legalese they're required to provide?
That's literally exactly what they do. This is why you should consider reading beyond headlines from time to time.
> You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content.
> (from the attached FAQ) Mozilla doesn’t sell data about you (in the way that most people think about “selling data”), and we don’t buy data about you. Since we strive for transparency, and the LEGAL definition of “sale of data” is extremely broad in some places, we’ve had to step back from making the definitive statements you know and love. We still put a lot of work into making sure that the data that we share with our partners (which we need to do to make Firefox commercially viable) is stripped of any identifying information, or shared only in the aggregate, or is put through our privacy preserving technologies (like OHTTP).
The courts have found that providing an HTTP request in exchange for an HTTP response, where both the request and the response contain valuable data, is selling data? Well, that's interesting, because I too consider it selling of data. I'm glad the courts and I can agree on something so simple and obvious.
> All of these AI outputs are both polluting the commons where they pulled all their training data AND are alienating the creators of these cultural outputs via displacement of labor and payment
No dispute on the first part, but I really wish there were numbers available somehow to address the second. Maybe it's my cultural bubble, but it sure feels like the "AI Artpocalypse" isn't coming, in part because of AI backlash in general, but more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator.
I think a similar idea might be persisting in AI programming as well, even though it seems like such a perfect use case. Anthropic released an internal survey a few weeks ago showing that the vast majority, something like 90%, of their own workers' AI usage was spent explaining and learning about things that already exist, or doing little one-off side projects that otherwise wouldn't have happened at all because of the overhead, like building little dashboards for a single dataset: stuff where the outcome isn't worth the effort of doing it yourself. For everything that actually matters and would be paid for, the premier AI coding company is using people to do it.
I guess I'm in a bubble, because it doesn't feel that way to me.
When AI tops the charts (in country music) and digital visual artists have to basically film themselves working to prove that they're actually creating their art, it's already gone pretty far. It feels like even when people care (and the great mass do not), it creates problems for real artists. Maybe they will shift to some other forms of art that aren't so easily generated, or maybe they'll all just do "clean up" on generated pieces and fake brush sequences. I'd hate for art to become just tracing the outlines of something made by something else.
Of course, one could say the same about photography where the art is entirely in choosing the place, time, and exposure. Even that has taken a hit with believable photorealistic generators. Even if you can detect a generator, it spoils the field and creates suspicion rather than wonder.
> more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator.
Look at furniture. People will pay a premium for handcrafted furniture because it becomes part of the story of the result, even when Ikea offers a basically identical piece (with their various solid-wood items) at a fraction of the price and with a much easier delivery process.
Of course, AI art also has the issue that it's effectively impossible to actually dictate details exactly like you want. I've used it for no-profit hobby things (wargames and tabletop games, for example), and getting exact details for anything (think "fantasy character profile using X extensive list of gear in Y specific visual style") takes extensive experimentation (most of which can't be generalized well since it depends on quirks of individual models and sub-models) and photoshopping different results together. If I were doing it for a paid product, just commissioning art would probably be cheaper overall compared to the person-hours involved.
Yeah, but if they, for example, use AI to do their design or marketing materials, then the public seems to dislike that. But again, no numbers; that's just how it feels to me.
After enough time, exposure and improvement of the technology I don’t think the public will know or care. There will be generations born into a world full of AI art who know no better and don’t share the same nostalgia as you or I.
What's hilarious is that, for years, the enterprise shied away from open source due to the legal considerations they were concerned about. But now... With AI, even though everyone knows that copyright material was stolen by every frontier provider, the enterprise is now like: stolen copyright that can potentially allow me to get rid of some pesky employees? Sign us up!
Yup, there's this angle that's been a 180, but I'm referring to the fact that the US Copyright Office determined that AI output isn't anyone's IP.
Which is in itself an absurdity: the culmination of the world's copyrighted content is compiled and used to spit out content that somehow belongs to no one.
I think most businesses using AI illustrations are not expecting to copyright the images themselves. The logos and words that are put on top of the AI image are the important bits to have trademarked/copyrighted.
I guess I'm looking at it from a software perspective, where code itself is the magic IP/capital/whatever that's crucial to the business, and replacing it with non-IP anyone can copy/use/sell would be a liability and weird choice.
Art is political more than it is technical. People like Banksy’s art because it’s Banksy, not because he creates accurate images of policemen and girls with balloons.
I think "cultural" is a better word there than "political."
But Banksy wasn't originally Banksy.
I would imagine that you'll see some new heavily-AI-using artists pop up and become name brands in the next decade. (One wildcard here could be if the super-wealthy art-speculation bubble ever pops.)
Flickr, etc, didn't stop new photographers from having exhibitions and being part of the regular "art world" so I expect the easy availability of slop-level generated images similarly won't change that some people will do it in a way that makes them in-demand and popular at the high end.
At the low-to-medium end there are already very few "working artists" because of a steady decline after the spread of recorded media.
Advertising is an area where working artists will be hit hard but is also a field where the "serious" art world generally doesn't consider it art in the first place.
Not often discussed is the digital nature of all this. An LLM isn't going to scale a building to illegally paint a wall: one, because it can't, but two, because the people interested in performance art like that are not bound by corporate interests. Most of this push for AI art is going to come from commercial entities doing low-effort digital stuff for money, not craft.
Musicians will keep playing live, artists will keep selling real paintings, sculptors will keep doing real sculptures etc.
The internet is going to suffer significantly for the reasons you point out. But the human aspect of art is such a huge component of creative endeavours, the final output is sometimes only a small part of it.
Mentioning people like Banksy at all is missing the point though. It makes it sound like art is about going to museums and seeing pieces (or going to non-museums where people like Banksy made a thing). I feel like, particularly in tech circles, people don’t recognize that the music, movies and TV shows they consume are also art, and that the millions of people who make those things are very legitimately threatened by this stuff.
If it were just about “the next Banksy” it would be less of a big deal. Many actors, visual artists, technical artists, etc make their living doing stock image/video and commercials so they can afford rent while keeping their skills sharp enough to do the work they really believe in (which is often unpaid or underpaid). Stock media companies and ad agencies are going to start pumping out AI content as soon as it looks passable for their uses (Coca Cola just did this with their yearly Christmas ad). Suddenly the cinematographers who can only afford a camera if it helps pay the bills shooting commercials can’t anymore.
Entire pathways to getting into arts and entertainment are drying up, and by the time the mainstream understands that it may be too late, and movie studios will be going “we can’t find any new actors or crew people. Huh. I guess it’s time to replace our people with AI too, we have no choice!”
I’d say in this context that politics concerns stated preferences, while culture labels the revealed preferences. Also makes the statement «culture eats policy for breakfast» make more sense now that I’ve thought about it this way.
I'd distinguish between physical art and digital art tbh. Physical art has already grappled with being automated away with the advent of photography, but people still buy physical art because they like the physical medium and want to support the creator. Digital art (for one off needs), however, is a trickier place since I think that's where AI is displacing. It's not making masterpieces, but if someone wanted a picture of a dwarf for a D&D campaign, they'd probably generate it instead of contracting it out.
Right, but the question then is, would it actually have been contracted out?
I've played RPGs, I know how this works: you either Google image search for a character you like and copy/paste and illegally print it, or you just leave that part of the sheet blank.
So it's analogous to the "make a one-off dashboard" type uses from that programming survey: the work that's being done with AI is work that otherwise wouldn't have been done at all.