In principle yes, but all metrics so far suggest they are losing money on every user interaction. There is very little network effect with these tools, so it's not like they can start cutting back on staff and feature deployment.
I don't 'like' Jira, but it gets the job done. It's so easy to onboard users and assign tasks/issues across orgs. The structure is fairly simple, and the filters with subscriptions are powerful. The Android app that I use on my work phone just works.
The growth is across the family of products (including Instagram and WhatsApp), not Facebook itself. Facebook itself is a zombie, and I don't believe they have a way to innovate out of it. I'm not going to predict the end of Meta, they have more than enough products, but agreed that it's actually quite difficult to understand who's really left.
It's less about painting a picture yourself, arguably there is little to no value there. OpenAI et al. sell the product of creating pictures in the style of their material. I see this as direct competition with Studio Ghibli's right to produce their own material with their own IP.
I agree with this. I don't know how to create artistic styles by hand or using any creative software for that matter. All the LLM tools out there gave me the "ability" and "talent" to create something "good enough" and, in some cases, pretty close to the original art.
I rarely use these tools (I'm not in marketing, game design, or any related field), but I can see the problem these tools are causing to artists, etc.
Any LLM company offering these services needs to pay the piper.
My thought is that whilst LLM providers can say "Sorry", there is little incentive to do so, and it would expose the reality that they are not very accurate, nor can their accuracy be properly measured.
That said, there clearly are use cases where, if the LLM can't reach a certain level of confidence, it should defer to the user rather than guessing.
This is actively being worked on by pretty much every major provider. It was the subject of that recent OpenAI paper on hallucinations. It's mostly caused by benchmarks that reward correct answers but don't penalize bad answers any more than simply not answering.
E.g. most current benchmarks have a scoring scheme of:

1 - correct answer
0 - no answer or incorrect answer
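To make the incentive concrete, here's a minimal sketch (my own illustration, not from the paper) of the expected score for guessing vs. abstaining under that scheme and under one that penalizes wrong answers; the 20% guess probability is just an assumed example value:

```python
# Expected benchmark score for guessing vs. abstaining under two
# scoring schemes. p is the (hypothetical) chance a guess is correct.

def expected_scores(p, correct, wrong, abstain):
    """Return (expected score if guessing, score if abstaining)."""
    guess = p * correct + (1 - p) * wrong
    return guess, abstain

# Scheme A (common today): 1 for correct, 0 for wrong or no answer.
g, a = expected_scores(p=0.2, correct=1, wrong=0, abstain=0)
print(g > a)  # True: even a 20% shot at being right beats abstaining

# Scheme B: wrong answers cost more than saying "I don't know".
g, a = expected_scores(p=0.2, correct=1, wrong=-1, abstain=0)
print(g > a)  # False: at 20% confidence, guessing now has negative value
```

Under Scheme A the model is always rewarded for guessing, no matter how unsure it is; only something like Scheme B makes abstaining the rational choice at low confidence.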
I don't think users understand the risks. I'm broadly accepting of mechanisms like this that protect end users. People's entire lives are managed through these small devices. We need much better sandboxing, almost a separate 'VM' for critical apps such as banking and messaging.
The whole notion of "vibe coding" was to accept the output regardless and prompt forward. Anything else is moving the goalposts. If you can't accept the outputs and you need in-depth knowledge of the code, then these LLMs are not ready for this task.