Hacker News | josephg's comments

No it doesn't. The people with the lowest self-perception also have the lowest actual skill. Look at the chart:

https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect#...


I guess your linking to it was a self-fulfilling prophecy.

If you read your own reference (not the picture, but where you took it from on Wikipedia) really really carefully, you might be able to tell why it so perfectly applies to you

The person with little knowledge overestimates their capability, and the person who actually knows how complicated [the thing] is usually isn't as confident they've mastered it.

Your take on that makes absolutely no sense


You’re talking about a confidence and ability gap. I have heard of the Dunning-Kruger effect. I accept all of that.

But the claim above was that having low confidence was correlated with higher skill. Ie, skill and confidence are anti-correlated. The chart does not show that. The lowest data point for confidence is the point on the left of the chart. This is also the data point corresponding to people who have the least competence. Having low confidence is not evidence that you’re secretly an expert. Confidence and competence are still positively correlated according to that chart.

The Dunning-Kruger effect is not so strong that there are scores of novices convinced they are experts in a field. But in your case, I admit the data may not tell the full story.


"Nor would a wise man, seeing that he was in a hole, go to work and blindly dig it deeper..." ( The Washington Post dated 25 October 1911 )

"Baloney Detection Kit"

https://www.youtube.com/watch?v=aNSHZG9blQQ

Best regards =3


That isn't what that shows, and the article you linked to even warns:

> In popular culture, the Dunning–Kruger effect is sometimes misunderstood as claiming that people with low intelligence are generally overconfident, instead of denoting specific overconfidence of people unskilled at particular areas.

Dunning-Kruger has also been discredited, with the suggestion that they may have been overconfident themselves:

The Dunning-Kruger Effect Is Probably Not Real (2020) https://www.mcgill.ca/oss/article/critical-thinking/dunning-...

Debunking the Dunning‑Kruger effect – the least skilled people know how much they don’t know, but everyone thinks they are better than average (2023) https://theconversation.com/debunking-the-dunning-kruger-eff...


Are you replying to the wrong comment? The person you're responding to seems to make the same point.

Self-reported studies are arguably weaker evidence, but are common in some areas for ethics reasons. In general, if errors are truly random, then they will cancel out over larger or more frequent population samples.

The study's conclusion was that the skills needed to be effective at some task are the same skills needed to correctly evaluate whether you are actually proficient at that same task.

Or, put another way, the <5% of the population who are narcissists by their nature become evasive when their egos are perceived as threatened. Thus, they often pose a challenge in a team setting, as compulsive lying or LLM turd-polishing is orthogonal to most real-world tasks.

People are not as unique as they like to believe, and spotting problems is trivial after you meet around 3000 people. Best to avoid the nonsense, and get outside to enjoy life. Have a great day =3


No idea why we all get negative karma on this thread, as I do respect a cited-source opinion even if we disagree. Do have a look around for papers rather than editorialized content in the future, and note that LLM agent output is a violation of YC usage policy. Have a great day =3

https://arxiv.org/abs/2505.02151


> Outside of being forced to use a game launcher to launch their games, what was the real crime?

To me, this was the crime. My friends and I played Mass Effect 3 multiplayer around launch, which was an EA Origin exclusive. It was a total pain! All of us needed to download and install the launcher, then buy & download the game through it. Then add each other as "EA Origin friends". The whole process was riddled with bugs at the time - including payment problems and download problems. Origin would crash sometimes. Sometimes we couldn't see each other in multiplayer, and needed to restart Origin to fix it. Sometimes another of our friends would join us - and it was always "oh god, what do I have to do to make this work??".

I really love Mass Effect 3. But the experience was traumatic enough that I never bought or played anything through EA Origin ever again. The quality of Steam is table stakes now. And there are so many good games coming out that exclusivity usually isn't enough to get you over that initial hump.

The biggest gripe I have with the Origin launcher (and to a lesser extent, the Epic launcher), other than "why does it exist at all?", is how laggy all UI actions are. Game developers can render a 3D world at 120+ fps. Why on earth does it take multiple seconds for the UI to respond to a button press sometimes? It's completely inexcusable. The Blizzard launcher is (IMO) the best launcher by this metric. You can tell competent people made it, because everything responds instantly. (The EA launcher might be good now, I wouldn't know. I mostly only play games that release on Steam.)


They have ~5000 employees.

Most game companies are a tiny fraction of that size. Even most AAA games are made by teams of hundreds. Not teams of thousands.


Epic Games does way more than just purely making games.

They also have their own Steam competitor (Epic Games Store) and, more importantly, they develop and support Unreal Engine used by tons of other game dev companies.

If you want an apples to apples comparison (i.e., other big live-service game companies) in terms of the employee count, you got:

Mihoyo (Genshin Impact, Honkai Star Rail) - ~5,000-6,000

Riot Games (League of Legends, Valorant) - 4,500

Roblox - 3,500


What about Valve itself? They have ~350 employees. They make Steam, SteamOS, Steam Deck, Steam Machine, Steam Frame, the Source engine, and run four actively successful live service games: CS2, Dota2, TF2, Deadlock.

Last I heard, Valve makes use of a lot of contractors, however. So the number of people working on their projects is a bit higher than their employee count suggests. Anyone's guess how many, though.

I know they're sponsoring a bunch of ARM and Linux projects as well.


Every studio uses a surprisingly large amount of contractors including Epic Games, Riot, etc.

The small size of Valve is simultaneously mind-boggling but also not, given its very intentional independence. I would have to imagine that they must contract out or have partners at least for their hardware relationships, if not for their massively multiplayer online games. At just 350 people, that's enough annual revenue to make everyone there a millionaire several times over. Simultaneously plausible and mind-boggling.

It's well-known that most of the work on SteamOS is done by vendors on behalf of Valve (both individual kernel authors and agencies like Igalia).

They contract out all the time; they've admitted to it in lots of interviews. So I think, through the amount of contracting, they're able to keep their core hires down.

Yeah but Valve is not publicly traded, so that comparison is of course totally unfair! /s

Having skilled and happy employees that aren't constantly changing and do not spend all of their time on ways to fuck over customers and chase trends is simply impossible. Releasing a piece of hardware and leaving it open for customers to do with what they want? Linux? Not hiring people the second line goes up and then immediately firing them when line stagnates? Preposterous.


The game store doesn't need a lot of employees. A few years ago it was reported that Valve only needed about 70 employees to run Steam while it generated billions of dollars in Steam fees (a 30% cut per sale). It's basically free money for Valve. I bet the situation is similar for the App Store and Google Play.

Though Unreal Engine does indeed need quite a few developers. Additionally, using UE is much cheaper (a 5% royalty on gross revenue above 1 million USD) than using Steam (30% on every game, with no revenue threshold). So they not only need more developers than Valve, they also earn less money.
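
To make the fee gap concrete, here's a quick back-of-the-envelope sketch (my own illustration, assuming the 5% royalty applies only to gross revenue above the $1M threshold, which matches Epic's published terms; real contracts vary):

    // Illustrative only: comparing the fee rates quoted above.
    function steamFee(gross: number): number {
      return gross * 0.30                           // 30% of everything
    }
    function unrealFee(gross: number): number {
      return Math.max(0, gross - 1_000_000) * 0.05  // 5% above the first $1M
    }
    console.log(steamFee(10_000_000))   // 3,000,000
    console.log(unrealFee(10_000_000))  //   450,000

On a $10M game, the store takes more than six times what the engine does.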


Steam doesn’t really attempt to gatekeep submitted content the same way that Apple or Google do, so I would expect those companies to have much larger supporting teams, mostly in non-development roles. Steam support has also historically been kind of a joke (not sure if it’s improved in the last 5 years), though I don’t know if Google/Apple provide a better experience.

You know what contractors are?!

Yes. Do you have any evidence they use contractors for Steam?

Can you do basic math on the number of support tickets Valve is handling to answer that question for yourself?

if not:

https://www.valvesoftware.com/en/jobs?job_id=113#:~:text=The...


> Can you do basic math on the number of support tickets Valve is handling to answer that question for yourself?

Can you? Do you even know that number?


The Epic Games Store is likely the main culprit, as it really has not succeeded despite spending tons on free game giveaways.

Mihoyo literally prints money with predatory gacha

Riot has had several layoffs in recent years

Roblox loses tons of money every year


The biggest competitor to Unreal engine, Unity, once had ~8000 employees. And Unity doesn't even make games.

(Not saying this is justified, of course. I think Unity is pretty much doomed.)


They don't make games, but Unity does operate worldwide and has a LOT of support for ads (their main money maker, unless something recent happened).

That globalization is a big reason many tech companies swell. When you need a team to work in and around every region's laws and regulations, you get big quickly.

But also, Unity has slimmed down and scaled back a lot of initiatives.


> I won’t be using the money I’m withdrawing for any illegal activities.

My guess is that this is so they can ban any drug dealers from their site without consequence. "They violated our terms of service your honour!"


In my opinion, 4 is the best size. 7-10 is horrible - meetings and conversations use up so much time.

You want to break a team of 10 in half if you can. Not always easy. But if you can manage it, do it.


Whereabouts are you located?

Smack in the center of Europe (southern Germany). I get >100ms pings.

Heh I’m sorry to hear that. The whole internet is that slow for us here in Australia.

I'm aware. I'm worried we'll get an Aussie customer at work and I'll have to fix their access to our systems...

Granted, we already have US/EU/Asia as distinct regions. AUS would just make failover even worse.


The benefit of using a CRDT for this is that you get better merge semantics. Rebase and merge become the same thing. Commits can’t somehow conflict with themselves. You can have the system handle two non-conflicting changes on the same line of code if you want. You can keep the system in a conflicted state and add more changes if you want to. Or undo just a single commit from a long time ago. And you can put non-text data in a CRDT and have all the same merge and branching functionality.

> CRDT picks 2

They don’t have to.

The CRDT library knows that the value is in conflict, and it decides what to do about it. Most CRDTs are built for realtime collaborative editing, where picking an answer is an acceptable choice. But the CRDT can instead add conflict markers and make the user decide.

Conflicts are harder for a CRDT library to deal with - because you need to keep merging and growing a conflict range, and do that in a way that converges no matter the order you visit the operations in. But it’s a very tractable problem - someone’s just gotta figure out the semantics of conflicts in a consistent way and code it up. And put a decent UI on top.
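
To illustrate the "surface the conflict instead of picking a winner" idea, here's a minimal sketch of a multi-value register in TypeScript. All names are hypothetical, not any particular library's API (though e.g. Automerge exposes conflicting values in a similar spirit):

    // A multi-value register: instead of silently picking a winner
    // (last-write-wins), merge keeps every value that isn't causally
    // dominated. More than one survivor = a conflict for the user.
    type VV = Record<string, number> // version vector: agent -> counter

    const dominates = (a: VV, b: VV): boolean =>
      Object.keys(b).every(k => (a[k] ?? 0) >= b[k]) &&
      Object.keys(a).some(k => (a[k] ?? 0) > (b[k] ?? 0))

    class MVRegister<T> {
      entries: { value: T, vv: VV }[] = []
      constructor(readonly agent: string) {}

      set(value: T) {
        // A write observes (and therefore replaces) everything it can see.
        const vv: VV = {}
        for (const e of this.entries)
          for (const k of Object.keys(e.vv)) vv[k] = Math.max(vv[k] ?? 0, e.vv[k])
        vv[this.agent] = (vv[this.agent] ?? 0) + 1
        this.entries = [{ value, vv }]
      }

      merge(other: MVRegister<T>) {
        // Merge can never fail. It may leave multiple entries behind.
        const all = [...this.entries, ...other.entries]
        const live = all.filter(x => !all.some(y => dominates(y.vv, x.vv)))
        const seen = new Set<string>()
        this.entries = live.filter(e => {
          const key = JSON.stringify(Object.entries(e.vv).sort())
          return seen.has(key) ? false : (seen.add(key), true)
        })
      }

      get isConflicted() { return this.entries.length > 1 }
    }

    const a = new MVRegister<string>('alice')
    const b = new MVRegister<string>('bob')
    a.set('red'); b.set('blue')  // concurrent writes
    a.merge(b)
    console.log(a.isConflicted)  // true - both values survive for the user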


I bet you can make a small, beautiful implementation of this algorithm in most languages. Most algorithms - even ones that take generations of researchers to figure out - end up tiny in practice if you put the work in to understand them properly and program them in a beautiful way. Transformers are the same. Genius idea. But a very small amount of code to implement.

This is an implementation of FugueMax (Weidner and Kleppmann) done using a bunch of tricks from Yjs (Jahns). There’s generations of ideas here, by lots of incredibly smart people. And it turns out you can code the whole thing up in 250 lines of readable typescript. Again with no dependencies.

https://github.com/josephg/crdt-from-scratch/blob/master/crd...
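
If you want a feel for what those 250 lines are doing, here's roughly the per-character shape that Yjs-style list CRDTs keep. This is a hypothetical simplification, leaving out the integrate/tiebreaking logic that makes FugueMax FugueMax:

    // Each inserted character is an item that remembers who created it
    // and what was to its left and right at insertion time. Deletes
    // just set a tombstone; items are never physically removed.
    type Id = [agent: string, seq: number]

    interface Item {
      id: Id
      originLeft: Id | null   // item to our left when we were inserted
      originRight: Id | null  // item to our right when we were inserted
      deleted: boolean        // tombstone flag
      content: string         // a single character
    }

    // The document is a flat array of items in converged order; the
    // visible text is just the non-deleted content.
    const getContent = (items: Item[]): string =>
      items.filter(i => !i.deleted).map(i => i.content).join('')

All the cleverness is in deciding, deterministically on every replica, where a concurrent insert lands between its originLeft and originRight - that's the part the linked code implements.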


I'm not familiar with CRDTs, but the code does look pretty nice. I've actually been thinking of streaming my development myself, but just the terminal, without camera or microphone. (So I think I want to wait until I'm doing something that will look pretty in the terminal.)

CRDTs should be able to give you better merge and rebase behaviour. They essentially make rebase and merge commits the same thing - just different views on a commit, and potentially different ways to present the conflict. CRDTs also behave better when commits get merged multiple times in complex graphs - you don’t run into the problem of commits conflicting with themselves.

You should also be able to roll back a single commit or a chain of commits in a CRDT pretty easily. It’s the same as the undo problem in collaborative editors - you just apply the inverse of the operation right after the change. And this would work with conflicts - say commits X and Y+Z conflict and you’re in a conflicted state: you could just roll back commit Y, which is the problem, while keeping X and Z. And at no point do you need to resolve the conflict first.

All this requires good tooling. But in general, CRDTs can store a superset of the data stored by git. And as a result, they can do all the same things and some new tricks.
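
A sketch of that inverse-operation idea (hypothetical op types, not any particular library's API; a real CRDT addresses characters by stable IDs rather than integer positions, which is what makes this safe under concurrency):

    // Undo-as-inverse: to roll back a change you don't rewind history,
    // you append the inverse operation and let normal merging handle it.
    type Op =
      | { type: 'insert', pos: number, content: string }
      | { type: 'delete', pos: number, content: string } // keeps what it removed

    const invert = (op: Op): Op =>
      op.type === 'insert'
        ? { type: 'delete', pos: op.pos, content: op.content }
        : { type: 'insert', pos: op.pos, content: op.content }

    // "Roll back commit Y" = append the inverse of each of Y's ops, in
    // reverse order. X and Z (and any unresolved conflicts) stay put.
    const undoCommit = (ops: Op[]): Op[] => [...ops].reverse().map(invert)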


This is the key point. Once your data structure carries the full edit history, instead of reconstructing it from DAG traversal, rebase and merge become different views of the same operation rather than fundamentally different operations with different failure modes.

The weave approach moves ordering into the data itself. That's the same insight that matters in any system that needs deterministic ordering across independent participants: put the truth in the structure, not in the topology of how it was assembled.


AFAIK Pijul already does that, though.

In theory, maybe. In practice… last write wins (LWW) is a CRDT operator, so replace every mention of CRDT with LWW and the issues will be more obvious.

Really though, the problem with merges is not conflicts, it’s when the merged code is wrong but was correct on both sides before the merge. At least a conflict draws your attention.

When I had several large (smart but young) teams merging left and right, this would come up, and they never checked merged code.

Multiply by 100x for AI slop these days. And I see people merge away when the AI altered tests to suit the broken code.


> In practice… last write wins (LWW) is a CRDT operator, so replace every mention of CRDT with LWW and the issues will be more obvious.

Yeah. A lot of people are also confused by the twin meanings of the word "conflict". The "C" in CRDT stands for "Conflict (free)", but that really means "failure free". Ie, given any two concurrent operations, there is a well defined "merge" of the two operations. The merge operation can't fail.

The second meaning is "conflict" as in "git commit conflict", where a merge gets marked as requiring human intervention.

Once you define the terms correctly, it's possible to write a CRDT-with-commit-conflicts. Just define "conflict markers" which are sometimes emitted when merging. Then merging can be defined to always succeed, sometimes emitting conflict markers along the way.
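
A hypothetical sketch of that shape (names invented for illustration): merge takes a common base and two sides, always returns a result, and sometimes that result contains a conflict marker rather than a guess:

    // Merge never fails - at worst it emits a conflict marker.
    type Merged<T> =
      | { kind: 'clean', value: T }
      | { kind: 'conflict', ours: T, theirs: T } // the "marker"

    function merge3<T>(base: T, ours: T, theirs: T): Merged<T> {
      if (ours === theirs) return { kind: 'clean', value: ours }
      if (ours === base)   return { kind: 'clean', value: theirs }
      if (theirs === base) return { kind: 'clean', value: ours }
      // Both sides changed: don't fail, don't guess. Record the conflict
      // and keep going. (A real CRDT would also order ours/theirs
      // deterministically, e.g. by agent id, so every replica renders
      // the same conflict.)
      return { kind: 'conflict', ours, theirs }
    }

    merge3('x', 'y', 'x') // clean: 'y'
    merge3('x', 'y', 'z') // conflict: the user resolves it, whenever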

> Really though, the problem with merges is not conflicts, it’s when the merged code is wrong but was correct on both sides before the merge.

CRDTs have strictly more information about what's going on than Git does. At worst, we should be able to remake git on top of CRDTs. At best, we can improve the conflict semantics.


> CRDTs have strictly more information about what's going on than Git does. At worst, we should be able to remake git on top of CRDTs. At best, we can improve the conflict semantics.

That is a worthwhile goal, but remember that code is just a notation for some operation; it's not the operation itself (conducted by a processor). Just like a map is a description of a place, not the place itself. So semantics exists outside of the code, and you can't solve semantic issues with CRDTs.

As code is formal and structured, a version control conflict is a signal, not a nuisance. It may be crude, but it's like a canary in a mine. It lets you know that someone has modified stuff you've worked on in your patch. And then it's up to you to resolve the probable semantic conflicts.

But even if you don't have conflicts, you should check your code after a synchronization, as things you rely on may have changed since your last one.


Being able to customize the chunking/diffing process with something analogous to an LSP would greatly improve this. In my experience, a particularly horribly handled case is when e.g. two branches add two distinct methods/functions in the same file location (especially if there is some boilerplate, so that the two blocks share more than a few lines).

A language-aware merge could instead produce:

    <<<<<<<
    function foo() { ... }
    =======
    function bar() { ... }
    >>>>>>>


If you haven't heard of it yet, Mergiraf uses tree-sitter grammars to resolve merges using syntax-aware logic and has a pretty good success rate for my work.
