A refreshing view of AI; I particularly enjoyed this excerpt:
> As I mentioned in my previous half-year update, OpenAI came up with a transformer-based language model called GPT-2 and refused to release the full version, fearing the horrible consequences it might have for the future of humanity. Well, it did not take long before some dude - Aaron Gokaslan - managed to replicate the full model and release it in the name of science. Obviously the world did not implode, as the model is just yet another gibberish-manufacturing machine that understands nothing of what it generates. Gary Marcus, in his crusade against AI hubris, came down on GPT-2 to show just how pathetic it is. Nevertheless, all those events eventually forced OpenAI to release their own original model, and much to nobody's surprise, the world did not implode on that day either.
As someone who has posted a GPT-2 excerpt and accidentally had people mistake it for a human comment (I thought the context would make it obvious), calling a language model ‘pathetic’ for only occasionally getting math right hardly strikes me as a reasonable complaint. Nor does disingenuously putting false words in OpenAI's mouth.
Just wait until your GPT-2-generated MBA homework gets full marks on the first try; then you'll either freeze up, start weeping, shake uncontrollably, or laugh like a madman. Automated essay scoring is already a reality, and now you get GPT-2-automated essays as well.
The HN crowd is largely an intellectual elite; imagine regular people reading what GPT-2 produces when they can't even follow what a typical grad student writes. I could use e.g. talktotransformer.com to complete a quote like "Intel CEO said that the new 10nm CPUs will...", post the result to some Reddit thread, and have it get picked up by search engines; at some point somebody would use it in serious work, or it would spread like wildfire on sites that don't check their references.
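For anyone who wants to try this kind of prompt completion locally rather than through talktotransformer.com, a minimal sketch using the Hugging Face `transformers` library (my choice here, not something the commenter used; the prompt is taken from the comment above):

```python
# Sketch: complete a prompt with the public GPT-2 checkpoint.
# Downloads the "gpt2" model weights on first run.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Intel CEO said that the new 10nm CPUs will"
out = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The pipeline returns the prompt plus the generated continuation.
print(out[0]["generated_text"])
```

The output is fluent but unconstrained by facts, which is exactly the point being made: a plausible-sounding "quote" like this costs seconds to produce.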
God forbid it takes more than a few days for decent chat bots based on these new models to appear on Reddit, run from a troll farm in Eastern Europe, China, or wherever. Or has that already happened, and we're simply unaware?
Already done. It really feels like new dark times are upon us, this time not because of a scarcity of writing, but because of automated garbage arriving at speed. Previously you had to hire writers to churn out crappy ad-driven articles; soon it will be a one-person operation.