"Hi! Thank you for your feedback. You are absolutely right. I am new to GitHub and I uploaded the project as a ZIP file initially to include the assets and binaries. I am currently extracting the source files (C++ .cpp and .h) so you can browse the 'Bit-Plane sequencing' logic directly in the repository without downloading anything. It will be ready in a few minutes! In the meantime, you can find the core logic in the src/ folder inside the ZIP."
“Here’s a friendly message that will perfectly convey what you want to say”.
A friend with a double PhD says she has to talk to ChatGPT for all sorts of advice and doesn't feel safe not doing it, "because you know I'm single and don't have a companion to spitball my ideas with". She let ChatGPT decide which route to take to get to a certain island, and she got stranded because the suggested ferry service didn't exist.
I've lost count of the interactions I've had or witnessed in which a living, breathing authority on a subject is ignored in favour of using voice recognition to ask an LLM the question first, or worse, second.
How is the getting-stranded example different from asking on a travel forum how to get somewhere, where an active and well-intentioned user who isn't familiar with your destination answers, gives you wrong instructions, and you get lost?
It's because we spent the last 50 years training people that computers are algorithmic, cold, and don't make human mistakes. Your calculator can't tell you the meaning of life, but it will never get 2 + 2 wrong.
Well, now the calculator can tell you a meaning of life, but it'll get 2 + 2 wrong 10% of the time.
cunningham's law [0] [1] increases the likelihood that at least one other person will point out the error and correct it. chances are you'll get more than one person posting.
LLMs don't do this. they give confident language output, not correct answers.
Because the vast and overwhelming majority of the time, if you ask a question into the ether that nobody has a good answer to, most people will gloss over it and not bother answering, as attested by decades of relatable memes ( https://xkcd.com/979/ ). In contrast, the chatbot is trained to always attempt an answer, and is seemingly disincentivized by its training set from just shrugging and saying "I don't know, good luck fam".
It is supposed to indicate that Microsoft cares only about money, which, to me too, seems in the same league as "microslop", i.e. mildly insulting but really not rude enough to be worth censoring.
And other insults are just words as well. It's the intention, history, connotation, etc. behind words that give them meaning. "M$" is meant as an insult, hence it's insulting. https://en.wiktionary.org/wiki/M$