Hacker News | jacquesm's comments

AI is extremely good at producing well-formatted bullshit. You need to be constantly on guard against output that sounds and looks right but is ultimately just noise, and you can waste a ton of time on this. OpenAI's offering shows particularly poorly in this respect: it keeps circling back to its own comfort zone to show off some piece of code or some concept it knows a lot about while avoiding the actual question. It's really good at jumping to wrong conclusions (and making them sound like some kind of profound insight). But the few times it is on the money make up for all of that noise. Even so, I could do without the wasted time and the endless back-and-forths correcting the same stuff over and over again; it is extremely tedious.

AI does not give you knowledge. It magnifies both intelligence and stupidity, with zero bias toward either. If you are of above-average intelligence, you may be able to do a little bit more than before, assuming you were trained before AI came along. And if you are not so smart, you will be able to make larger messes.

The problem, and I think the article indirectly points at this, is that the next generation to come along won't learn to think for themselves first. So on average they will end up on the 'B' track instead of developing their intelligence. I see this happening with the kids my kids hang out with. They don't want to understand anything because the AI can do that for them, or so they believe. They don't see that if you never learn to think about smaller problems, the larger ones will be completely out of reach.


Maybe the solution is an AI that acts as an instructor instead of just trying to solve everything itself. I do this with my kids: when they ask me how to do something, I will give them hints, but not outright do it all for them. The article's author mentioned in the first part that this is how they would instruct too.

I recently heard of a professor who said to the class: you can use an AI to solve the assignments. However, I'll see whether you really understand the material on the final exam.

Students are given student-level problems not because someone wants the result, but so they can learn how solving problems works. Solving those easy problems with an LLM does not help anyone.

Precisely. The first 10 rungs of the ladder will be removed, but we still expect you to be able to get to the roof. The AI won't get you there and you won't have the knowledge you'd normally gain on those first 10 rungs to help you move past #10.

People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

This isn't guaranteed to play out, but it should be the default expectation until we actually see greatly diminishing outputs at the frontier of science, engineering, etc.


I think that's too easy an analogy, though.

Calculators are deterministically correct given the right input. They do not require expert judgement on whether an answer they gave is reasonable or not.

As someone who uses LLMs all day for coding, and who regularly bumps against the boundaries of what they're capable of, that's very much not the case. The only reason I can use them effectively is because I know what good software looks like and when to drop down to more explicit instructions.


> Calculators are deterministically correct

Calculators are deterministic, but they are not necessarily correct. Consider 32-bit integer arithmetic:

  30000000 * 1000 / 1000
  30000000 / 1000 * 1000
Mathematically, the two expressions are identical, and computationally the results are deterministic. Yet the computer will produce different results, because the first multiplication overflows a 32-bit integer. There are many other cases where the expected result differs from what a computer calculates.
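The overflow can be sketched in Python, which has arbitrary-precision integers, by masking to 32 bits. The `wrap32` helper is hypothetical and assumes two's-complement wraparound (real C signed overflow is undefined behavior, so a compiler is free to do something else entirely):

```python
def wrap32(n):
    """Simulate 32-bit two's-complement wraparound (an assumption;
    signed overflow in C is actually undefined behavior)."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

a = 30_000_000
multiply_first = wrap32(wrap32(a * 1000) // 1000)  # intermediate overflows
divide_first = wrap32(wrap32(a // 1000) * 1000)    # stays within 32 bits

print(multiply_first, divide_first)  # only the second is 30000000
```

Note that Python's `//` floors toward negative infinity while C truncates toward zero, so the exact garbage value differs slightly from what a C program would print, but the point stands: the two orderings disagree.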

A good calculator will, however, do this correctly (as in: the way anyone would expect). Small cheap calculators resort to confusing syntax, but if you pay $30 for a decent handheld calculator, or use something decent like WolframAlpha on your phone/laptop/desktop, you won't run into precision issues for reasonable numbers.

He's not talking about order of operations, he's talking about floating-point error, which will accumulate in different ways in each case, because floating point is an imperfect representation of real numbers.

Yep, the specific example wasn't important. I chose an example involving order of operations and an integer overflow simply because it would be easy to discuss. (I have been out of the field for nearly 20 years now.) Your example of floating-point error is another. I also encountered artifacts from the approximations used for transcendental functions.

Choosing a "better" language was not always an option, at least at the time. I was working with grad students who were managing huge datasets, sometimes for large simulations and sometimes from large surveys. They were using C. Some of the faculty may have used Fortran. C exposes you to the vagaries of the hardware, and I'm fairly certain Fortran does as well. They weren't going to use a calculator for those tasks, nor an interpreted language. Even if they had wanted to choose another language, the choice was limited by the machines they used. I've long since forgotten what the high-performance cluster was running, but it wasn't Linux and it wasn't on Intel. They may have been able to license something like Mathematica for it, but that wasn't the type of computation they were doing.


I didn't consider it an order of operations issue. Order of operations doesn't matter in the above example unless you have bad precision. What I was trying to say is that good calculators have plenty of precision.

But floating-point errors manifest in different ways. Most people only care about 2 to 4 decimals, which even the cheapest calculators handle well across a good number of consecutive ordinary computations. Anyone who cares about better precision will choose a better calculator. So floating-point error is remediable.

Good languages with proper number towers will deal with both cases in equal terms.
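Both points can be sketched in a few lines of Python: the same three terms summed in different orders give different floating-point results, while an exact rational type, as a number tower provides, is immune to the reordering:

```python
from fractions import Fraction

xs = [1e16, 1.0, -1e16]
print((xs[0] + xs[1]) + xs[2])  # 0.0: the 1.0 is absorbed by rounding
print((xs[0] + xs[2]) + xs[1])  # 1.0: reordering changes the answer

# With exact rationals (one level of a "number tower"), order no longer matters:
ys = [Fraction(10**16), Fraction(1), Fraction(-10**16)]
print((ys[0] + ys[1]) + ys[2])  # 1, in either order
```

A double has about 16 significant decimal digits, so adding 1.0 to 1e16 is lost to rounding; the `Fraction` type trades speed for exactness.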

Determinism just means you don't have to use statistics to approach the right answer. It's not some silver bullet that magically makes things understandable and it's not true that if it's missing from a system you can't possibly understand it.

That's not what I mean.

If I use a calculator to find a logarithm, and I know what a logarithm is, then the answer the calculator gives me is perfectly useful and 100% substitutable for what I would have found if I'd calculated the logarithm myself.

If I use Claude to "build a login page", it will definitely build me a login page. But there's a very real chance that what it generated contains a security issue. If I'm an experienced engineer I can take a quick look and validate whether it does or whether it doesn't, but if I'm not, I've introduced real risk to my application.


Those two tasks are just very different. In one world you have provided a complete specification, such as 1 + 1, for which the calculator responds with some answer, and both you and the machine have a decidable procedure for judging answers. In the other world you have engaged in a declaration for which there are many right and wrong answers, and thus even the boundaries of error are in question.

It's equivalent to asking your friend to pick you up, and they arrive in a big vs small car. Maybe you needed a big car because you were going to move furniture, or maybe you don't care, oops either way.


Yes. That is the point I was making.

Calculators provide a deterministic solution to a well-defined task. LLMs don't.


Furthermore, it is possible to build a precise mathematical formula to produce a desired solution

It is not possible to be nearly as precise when describing a desired solution to an LLM, because natural languages are simply not capable of that level of precision... which is the entire reason programming languages exist in the first place.


If you hand a broken calculator to someone who knows how to do math, and they enter 123 + 765 and get an answer of 6789, they should instantly know something is wrong. Hand that calculator to someone who never understood what the tool actually did but just accepted whatever answer appeared, and they would likely think the answer was totally reasonable.

Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.


One time when I was a kid I was playing with my older sister's graphing calculator. I had accidentally pressed the base button and was now in hex mode. I did some benign calculation like 10 + 10 and got 14. I believed it!

I went to school the next day and told my teacher that the calculator says that 10+10 is 14, so why does she say it's 20?

So she showed me on her calculator. She pressed the hex button and explained why it was 14.

I think a major problem with people's usage of LLMs is that they stop at 10+10=14. They don't question it or ask someone (even the LLM) to explain the answer.
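The mechanics of the surprise are easy to reproduce; in Python, rendering decimal twenty in base 16 gives exactly the "14" from the story:

```python
# 10 + 10 is twenty; displayed in base 16, twenty reads as "14".
total = 10 + 10
print(format(total, "x"))  # "14"
print(int("14", 16))       # 20: reading "14" back as hex recovers twenty
```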


Totally on a tangent here, but what kind of calculator would have a hex mode where the inputs are still decimal and only the output is hex..?

I probably got the actual numbers wrong in telling the story. But I do remember seeing a shift key on her calculator that would let you input abcde.

> Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.

We had the same problem in the early days of calculators. Using a slide rule, you had to track the order of magnitude in your head; this habit let you spot a large class of errors (things that weren't even close to correct).

When calculators came on the scene, people who never used a slide rule would confidently accept answers that were wildly incorrect (example: a mole of ideal gas at STP is 22.4 liters. If you typo it as 2204, you get an answer that's off by roughly two orders of magnitude, say 0.0454 when it should be 4.46. Easy to spot if you know roughly what the answer should look like, but easy to miss if you don't).
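The slide-rule habit amounts to a quick order-of-magnitude check. A sketch with illustrative numbers (the 100 liters is assumed; the 22.4 and the 2204 typo are from the example above):

```python
import math

V = 100.0            # liters of gas (assumed for illustration)
molar_volume = 22.4  # L/mol at STP
typo = 2204          # the fat-fingered constant from the example

correct = V / molar_volume  # ~4.46 mol
wrong = V / typo            # ~0.0454 mol

# A slide-rule user tracks the exponent mentally; the typo shifts it by ~2.
print(round(math.log10(correct / wrong)))  # 2 orders of magnitude off
```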


The calculator analogy is wrong for the same reason. Knowing and internalizing arithmetic, algebra, and the shape of curves, etc. are mathematical rungs to get to higher mathematics and becoming a mathematician or physicist. You can't plug-and-chug your way there with a calculator and no understanding.

The people who make the calculator analogy are already victims of the missing rung problem and they aren't even able to comprehend what they're lacking. That's where the future of LLM overuse will take us.


> People would have said the same about graphing calculators or calculators before that.

As it happens, we generally don't let people use calculators while learning arithmetic. We make children spend years using pencil and paper to do what a calculator could in seconds.


This is why I don't understand the calculator analogy. Letting beginners use LLMs is like giving kids calculators in 1st grade and telling Timmy he never needs to learn 2 + 2. That's not how education works today.

I think this is exactly why calculators are a great analogy, and a hint toward how we should probably treat LLMs.

Unfortunately there are many posters here who believe we should, in fact, let children use calculators and not bother with learning arithmetic. It's foolishness, but that argument does get made. So I wouldn't be surprised if people also think we should let students use LLMs.

> People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

Well, we still make people calculate manually for many years, and we still make people listen to lectures instead of just reading.

But will we still make people go through years of manual coding? I guess in the future we will have to force them, at least if we want to keep people competent, just like the other things you mentioned. Currently you do that on the job; in the future people won't do that on the job, so they will be expected to do it as part of their education.


What do people mean exactly when they bring up “Socrates saying things about writing”? Phaedrus?

> “Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; [275a] and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

> "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

Sounds to me like he was spot on.


But did this grind humanity to a halt?

Yes - specific faculties atrophied - I wouldn't dispute it. But the (most) relevant faculties for human flourishing change as a function of our tools and institutions.


Someone brought up Socrates upthread:

> People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

If the conclusion now becomes “actually, Socrates was correct but it wasn’t that bad”, then why bring up Socrates in the first place?


> The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

In a sense, I think you are right. We are currently going through a period of transition that values some skills and devalues others. The people who see huge productivity gains because they don't have to do the meaningless grunt work are enthusiastic about that. The people who did not come up with the tool are quick to point out pitfalls.

The thing is, the naysayers aren't wrong since the path we choose to follow will determine the outcome of using the technology. Using it to sift through papers to figure out what is worth reading in depth is useful. Using it to help us understand difficult points in a paper is useful. On the other hand, using it as a replacement for reading the papers is counterproductive. It is replacing what the author said with what a machine "thinks" an author said. That may get rid of unnecessary verbosity, but it is almost certainly stripping away necessary details as well.

My university days were spent studying astrophysics. It was long ago, but the struggles with technology handling data were similar. There were debates between older faculty, who were fine with computers as long as researchers were there to supervise the analysis every step of the way, and newer faculty, who needed computers to take raw data to reduced results without human intervention. The reason was, as always, productivity. People could not handle the massive amounts of data being generated by the new generation of sensors or systematic large-scale surveys if they had to intervene at any step of the way. At a basic level, you couldn't figure out whether it was a garbage-in, garbage-out type scenario because no one had the time to look at the inputs. (I mean no time in an absolute sense. There was too much data.) At a deeper level, you couldn't even tell if the data processing steps were valid unless there was something obviously wrong with the data. Sure, the code looked fine. If the code did what we expected of it, mathematically, it would be fine. But there were occasions where I had to point out that the computer wasn't working how they thought it was.

It was a debate in which both sides were right. You couldn't make scientific progress at a useful pace without sticking computers in the middle and letting them take over the grunt work. On the other hand, the machine cannot be used as a replacement for the grunt work of understanding, whether that involves reading papers or analyzing the code from the perspective of a computer scientist (rather than a mathematician).


We notably teach people how to do arithmetic by hand before we hand them calculators.

We still expect high school students to learn to use graph paper before they use their TI-83, grade school students to do arithmetic by hand before using a calculator. This is essentially the post's point, that LLMs are a useful tool only after you have learned to do the work without them.

In college, we could only start using those tools once we understood the principles behind them.

Socrates does not say this about the written word. Plato has Socrates say it about writing in the early sections of the Phaedrus, but it is not Socrates' opinion, nor the final conclusion he arrives at.

And yes, yes, you can pull up the quote or ask your AI, but they will be wrong. The quote is from Socrates reciting a "myth", as is pretty typical in a middle-to-late dialogue like this.

But here, alas, we can recognize the utter absurdity: this just points out why writing can be bad, as Socrates does pose. Because you get guys 2000 years in the future using you and misquoting you for their dumb cause! No more logos, only endless stochastic doxa. Truly a future of sophists!


But AI might actually get you there in terms of superior pedagogy: personal Q&A that most individuals couldn't have afforded before.

There are a lot of people in academia who are great at thinking about complex algorithms but couldn't write maintainable code if their life depended on it. There are ways to acquire those skills that don't go through the junior-developer route. The same goes for debugging and profiling skills.

But we might see a lot more specialization as a result


Do they need to write maintainable code? I think probably not; it's the research and discovering the new method that is important.

They can't write maintainable code because they don't have real-world experience of getting their hands dirty in a company. The only way to get startup experience is to build a startup or work for one.

Duh, the only way to get startup experience is indeed to get startup experience.

My point is that getting into the weeds of writing CRUD software is not the only way to gain the ability to write complex algorithms, debug complex issues, or do performance optimization. It's only common because the stuff you make on the journey used to be economically valuable.


> write complex algorithms, or to debug complex issues, or do performance optimization

That's the stuff that AI is eating. The stuff I'm talking about (scaling orgs, maintaining a project long term, deciding what features to build or not build, etc.) is very hard for AI.


AI is only eating some of that, though. For instance, everyone who does performance work knows that perhaps the most important part of optimization is constructing the right benchmark. This is already the thing that makes intractable problems tractable. That effect is now exacerbated (AI can optimize anything given a benchmark), but AI isn't making great progress at constructing the benchmark itself.
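To make "given a benchmark" concrete, here is a minimal sketch of the kind of harness meant, using Python's `timeit`. The two string-building strategies are hypothetical stand-ins for whatever is actually being optimized:

```python
import timeit

def concat_loop(n):
    # Naive repeated concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n):
    # Idiomatic single join over a generator.
    return "".join(str(i) for i in range(n))

# The benchmark is the contract: same input, same correctness check,
# measured the same way. An optimizer (human or AI) targets this number.
assert concat_loop(1000) == concat_join(1000)
for name, fn in [("loop", concat_loop), ("join", concat_join)]:
    t = timeit.timeit(lambda: fn(1000), number=200)
    print(f"{name}: {t:.4f}s")
```

The hard part the comment points at is everything outside the timing loop: choosing representative inputs and the correctness check that defines "same result".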

I don't know if I'd call it "hard for AI" so much as "untrodden ground".

Agents might be better at it than people are, given the right structure.


What. Are you saying maintainable code is specifically related to startups? I can accept companies as an answer (although there are other places to cut your teeth), but startups is a weird carveout.

Writing maintainable code is learned by building large codebases. Working in an existing codebase doesn't teach you it, so most people working at large companies do not build the skill, since they rarely start large new projects. Some do, but most don't. At a startup, though, you basically have to build a big new codebase.

That's a good analogy, but I think we've already gone from 0 to 10 rungs over the last couple of years. If we assume that the models or harnesses keep improving, more and more rungs will be removed. The vast majority of programmers aren't doing novel, groundbreaking work.

The ridiculous amount of focus on this one individual vs the complete lack of attention for the thousands of Iranians already dead is very disturbing.

And apparently they killed more during the mission to retrieve this guy

> striking Iranian military-aged males believed to be a threat who got "within three kilometers," according to a correspondent with the US Air & Space Forces Magazine, who said he had been briefed on the operation.


There was apparently a large multi hour firefight, which oh so conveniently no one is covering


No, it does not show either. You completely failed to understand my quite brief comment and just jumped to delivering your talking point.


> win this war ASAP

How can we define what this even means? I don’t think any of the naive initial aims were ever attainable - and the entire impulsive and irresponsible adventure has spiraled down into what looks like impatient and petty spite: smashing the toys because the big baby didn’t get the present he wanted.


We don't have to define it because we've won every day since the war started. Glorious leader said so

I know who my enemies are. They make it very clear.


However twisted it sounds, I do believe that the outcome is beneficial to the US. China and the EU are weakened by this mess big time. The US will bear the costs in the future.

The EU does not expect to be paid back for Ukraine; Ukrainians are paying with their lives and by being a lab for testing weapons and strategies. I'm just flabbergasted by pro-Russian Europeans thinking it'll be great to live under Putin. It's not like it hasn't been the case already.


EU aid for Ukraine (yes, this is a ChatGPT overview):

Ukraine Facility (2024–2027): €33 billion of €50 billion is loans → 66%

MFA+ for 2023: €18 billion, all loans

Exceptional MFA backed by Russian-asset revenues (2024 package): €18.1 billion disbursed so far, loan

New EU package for 2026–2027: €90 billion, loan

Adding those together gives about €159.1 billion in loans out of €176.1 billion, or roughly 90% loans.

Oh, and before you ask: two out of three are not low-interest loans (only the one where the interest would otherwise go to Russia is). If you're in France or Germany, your mortgage is cheaper than what Ukraine is paying the EU to defend itself from Russia "with EU help". And yes, one might even point out that this means Trump is correct when he says the US has given more to Ukraine than the EU: given more, loaned (much) less. EU help consists 90% of allowing these loans in the first place.
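Taking the quoted figures at face value (they come from a ChatGPT overview, so treat them with appropriate suspicion), the arithmetic of the summary line is at least internally consistent:

```python
# Figures as quoted in the overview above, in billions of euros.
loans = 33 + 18 + 18.1 + 90   # loan portions of the four packages
totals = 50 + 18 + 18.1 + 90  # total aid, including the grant share

print(loans, totals, round(loans / totals, 2))  # ~159.1, ~176.1, 0.9
```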


I just start AppImages from the command line and put them in /home/$USER/bin; that seems to take care of most of the annoying edge cases. Snaps are ridiculously hostile, abusing the mount system and polluting all kinds of places where they have no business going; I've completely purged the whole snap subsystem from my machine. Flatpak I've managed to avoid so far.

I unpack them because the app runs faster, there are no container/filesystem problems, and I use apps where I want access to the files inside the app. KiCad in particular has a load of component files that I always want to copy from /usr/lib into each project so that the project is fully self-contained, in a nicer way than KiCad does it itself.

FreeCAD has a problem in that it uses Python, and python3 defaults to shitting turds (__pycache__ directories) everywhere it's executed. There is a way to tell Python not to do that, but that mechanism is not possible when the app is an AppImage. It is possible if the AppImage is unpacked.
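For reference, the mechanism alluded to is Python's `-B` flag, or equivalently the `PYTHONDONTWRITEBYTECODE` environment variable, which suppress `__pycache__` creation. A quick check that spawns a child interpreter:

```python
import subprocess
import sys

# Run a child interpreter with -B and confirm bytecode writing is off.
out = subprocess.run(
    [sys.executable, "-B", "-c", "import sys; print(sys.dont_write_bytecode)"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # True
```

Setting `PYTHONDONTWRITEBYTECODE=1` in the environment has the same effect as the flag.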

It's a simple but totally manual process to unpack and integrate into the desktop fully, so that file types autolaunch the app and the file manager displays the icon. I started to write a script to automate it, but there is just enough variation in the way AppImages are constructed that it's annoying to script, even though it's pretty easy to do manually.


I came across a top-tier compliance auditor doing the same thing recently. I tried to talk to them about it, and rather than approaching this from a constructive point of view, they wanted to know the name of the company that got certified so they could decertify it, and essentially asked me to break my NDA. That wasn't going to happen; I wanted to have a far more structural conversation about this and how they probably ended up missing some major items (such as having non-technical auditors). They weren't interested. They were not at all interested in improving their processes, only in protecting their reputation.

I'm seriously disgusted about this because this was one of the very few auditors that we held in pretty high esteem.

Pay-to-play is all too common, and I think that there is a baked in conflict of interest in the whole model.


Have you considered whistleblowing?

Yes. But I'm not working at either company, and I'm 99.9% sure it would lead to absolutely nothing other than a lot of misery for myself. The NDAs I sign have some pretty stiff penalties attached. I was actually hoping to see my trust in the auditing company confirmed, and I'm still more than a little bit annoyed that they did not respond in a more constructive way.

My response however is a simple one: I used to steer (a lot of) business their way and I have stopped doing that.


Similar boat. I've seen the same shenanigans played by actors who really should know better, with everything from military secrets to medical data, absolutely YOLOing it with an audit mill. I have it on good authority that there are superuser credentials floating around for their production systems that they've lost track of.

And no, I won’t whistleblow either, as it would mostly be me that would face repercussions, and I am unafraid to say that I am a coward.

We choose the battles we fight, and I’d like to believe that ultimately, entropy will defeat them without me lifting a finger.


Wouldn't it require a huge leap of faith for them to admit the audit was improper in order to have that discussion? Who's to say you aren't recording?

I've already established that it was improper. It's up to them to make the most of that knowledge and then to determine whether this is a singleton or an example of a class with more representation. In that sense it is free to them; I'm under absolutely no obligation to provide them with a service. But I'm willing to expend the time and effort required to get them to make the most of it. What I'm not going to do is allow them to play the blame game or shoot the messenger.

I didn't mean it as a criticism, I think giving them the opportunity to improve and refusing to offer a scapegoat were both standup things to do. I'm just wondering if they were ever in a position to take that opportunity.

Hard to tell. But given that it was their legal department contacting me I think you know the answer to that one.

I once called out fraud (blatant lying in investor updates) at a VC-backed startup where I was a technical co-founder. I emailed all the investors and presented all the evidence to them. They decided not to rock the boat and kept my charlatan co-founder. So I left. Now the company is slowly bleeding to death.

> Now, the company is slowly bleeding to death.

There are thousands of companies where the shady practices are rewarded, the companies thrive and make money for the investors. So the investors are incentivized to reward this behavior just on the chance that they are rewarded back.

Whistleblowing sinks those chances, and the investors and VCs know it. It doesn't just take away the money; it even takes away the plausible deniability. They put a lot of effort into absolutely punishing any whistleblower to discourage the rest. Anything for a dollar, and that is probably all you'll ever need to know about almost every VC out there. Beyond the witty "I'm rich so I'm smart" blog posts and tweets, they're very much just the "anything for a dollar" type of people.


To be fair, I’m not sure blatant lying in investor updates alone constitutes fraud. There needs to be harm (or the intent thereof) AFAIK. The other party needs to be using that information to make a decision. If you give me a dollar and then later I tell you I’m actually Beyonce, is that fraud? Or am I just a lying sonofabitch?

If I give you a dollar and you say it’s being spent wisely, Beyonce loves the product, you’re about to land Taylor Swift as pro bono public ambassador… yeah that’s fraud.

It's encouraging future investment on a false pretext. I'd say that's fraud.

Lying in investor updates was merely the tip of the iceberg. There was lots more: fabricating customer traction pre-investment, paying oneself back pay for months spent twiddling thumbs pre-investment (before I was involved), etc.

My lesson from the whole kerfuffle was that investors (at least the ones I’d dealt with) prefer hustle over integrity and execution abilities.


It's auditing; nobody who is good at doing anything goes into auditing, unfortunately, it's one of those jobs. I haven't interacted with any auditor who actually understood everything they were auditing. Some are better than others, but the average is worse than in almost any other job description I have dealt with.

If you care about this stuff, you need to in-house auditing and do your own audits with people who care. Then get certified by an external auditor for the paper.

If you're at a size where you can't afford that, you can start very lightweight by doing spec-driven development with the help of AI. It's better than nothing.

But the important part is you, as a company, should inherently care.

If you rely on an auditor feedback loop to get compliant you've already lost.


This function exists in every publicly traded company, and is called internal audit.

It has the potential to be incredibly impactful, but often devolves into box ticking (like many compliance functions).

And it's really hard to find technical people to do the work, as it's generally perceived as a cost centre, so it tends not to get budget.


Nobody really tries to get technical people to do the work.

Like, cool, it's a great idea and would potentially produce positive results if done well, but the roles pay half what engineering roles do, and the interviews are stacked towards compliance frameworks.

There's very little ability to fix a large public company when HR is involved


Maybe it should be treated like on-call duty, with the load spread between existing engineers on some kind of schedule, perhaps with some extra comp as an incentive, because it's boring and takes more effort/time in the "easy case" compared to pager duty.

Speaking as a technical (data) person currently working in internal audit for a not quite public company, it's not entirely uncommon.

I do agree that the pay isn't great, but it's the fact that it's considered a cost centre that's been the issue for me.


Everything except for sales tends to be seen as a cost centre. It's ridiculous.

To be honest, I would even go further: if you think certification equals security, you are even more lost.

So many controls are dubious, sometimes even actively harmful for some set-ups/situations.

And even more so, it's also perfectly feasible to pass the gates with a burning pile of trash.


And they do not track the industry at all, at best they'll help you win the war of five years ago.

Imagine my face when I had to take periodic backups of stateless, immutable read-only filesystem, non-root containers for "compliance".

Maybe that's just a good moment to review your _policy_. About half of our compute is exactly that, and we just don't have to do this sort of backup; that'd be silly.

We don't deal with the military though, only fintech (prime brokers, major banks, funds) and some government. Plenty of certifications (we have someone on site all year round), no silliness.


That's hilarious :)

Good morning to you too...


But companies don't care. They don't want compliance for feel goods, they want compliance because their partners require it. They do the minimum amount required to check the box

Caring about security and caring about some of the arbitrary hoops you have to jump through for some of these compliance regimes don’t always overlap as much as you’d expect.

I’ve been at companies where we cared deeply about security, but certain compliance things felt like gimmicks on the side. We absolutely wanted to do the minimum required to check that box so we could get back to the real work.


You should check out the banking industry sometime if you'd like to interact with a competent auditor.

Compliance gets taken quite seriously in an industry where one of your principal regulatory bodies has the power to unilaterally absorb your business and defenestrate your entire leadership team in the middle of the night.


They could. But they don't.

I've seen this up close. The regulatory bodies as a rule are understaffed, overworked and underpaid. I'm sure they'd love to do a much better job but the reality is that there are just too many ways to give them busywork allowing the real crap to go unnoticed until it is (much) too late.


Because they’re put there as a box ticking exercise without ever being given the power or resources to be able to do damage or negatively impact the bottom line of the big rule breakers. It’s just supposed to maintain the appearance of doing something without ever supporting these activities for real. For the most part they are a true Potemkin village. If the risk is diffuse (just some average Joe suckers will lose money) I wouldn’t hold my breath that anyone is controlling for real.

I hate to say this but I suspect you are right.

The industry is paid to provide a fig leaf for shady practices. Everyone knows what's going on, no one is going to do anything about it unless governments step in and give regulators more resources and more teeth, and "errors" lead to prosecutions and jail time.

None of those are likely.

This is the industry that missed Enron, WorldCom, Wirecard, Lehman, and many others.


> Wirecard

Don't get me started. That hasn't even properly ended yet, the fall-out is continuing to today.


I suspect many AI startups will be on that list in 2-5 years.

That's the problem: the cost doesn't really go down. You can only operate nuclear if you guarantee the prices a decade ahead. That's just not realistic, and the end result is that you'll end up subsidizing every kWh produced, and then you still have to factor in decommissioning costs. Nuclear is fantastic technology, but we can do so much better.

Sweden built a lot of nuclear reactors and they have been amazingly profitable for us. Unfortunately, many were dismantled before their end of life and we are now stuck with high energy prices.

There is no natural law that says nuclear must be expensive. Correctly managed, it is an excellent power source.


How many trillions in subsidies should we hand out to "try one more time" with nuclear power when renewables and storage are already the cheapest energy source in human history?

What “trillions”?

Achieving putative cost reduction from Nth-of-a-kind plants would involve large subsidies of large numbers of plants. The cost could well be in 13 digits.
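For scale, a quick sanity check with purely hypothetical numbers (both the plant count and the per-plant subsidy below are assumptions for illustration, not sourced figures) shows how easily a large build-out program lands in 13-digit territory:

```python
# Hypothetical subsidy program: 50 plants at $20B of subsidy each.
# Both figures are illustrative assumptions, not sourced data.
plants = 50
subsidy_per_plant = 20e9  # dollars

total = plants * subsidy_per_plant
print(f"${total:,.0f}")      # $1,000,000,000,000
print(len(str(int(total))))  # 13 digits
```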

No, what they should do instead is decentralize energy generation to the point that we're in cockroach mode. And if that means that transportation of goods gets priority over transportation of people then so be it until we've figured that one out.

The sooner we get this over with the better. Install as much solar and wind as we can, get to the point where we have a glut, and then back that up with decentralized storage.


> cockroach mode

What does this mean?


Get decentralized to the point that no single point of failure will result in wholesale outages: resilient as cockroaches. You can't do that if you have interconnects that have to work for society to work. The centralized electrical grid was a great idea and it got us very far. But it is just too fragile. Much better if you can have many (millions) of points of generation, storage and consumption and a far more opportunistic level of interconnect.

> decentralized to the point that no single point of failure will result in wholesale outages

This is a good goal. But it needs to be more rigorously defined. Autarky can be done. But then you need to accept North Korean living standards.

> Much better if you can have many (millions) of points of generation, storage and consumption and a far more opportunistic level of interconnect

Again, to a degree. You can't decentrally power a modern city. So that means either no more cities, which is expensive, or ruinously-expensive power in cities, which again, in practice, means de-industrialisation.


> But then you need to accept North Korean living standards.

I'm not sure that's true.

> Again, to a degree. You can't decentrally power a modern city.

I'm not sure that that is true either, but it will take a lot more work than to do this for less densely populated areas. In general I'm not sure if 'modern cities' are long term sustainable.


> not sure that's true

To be clear, I'm not either. But decentralisation requires sacrificing economies of scale. And total autarky is a proven failure. Between that and complete integration is probably a more-independent equilibrium for Europe. But it will require paying a price.

> In general I'm not sure if 'modern cities' are long term sustainable

Sure. Maybe. Until then, the economies that field them will call the shots. (Based on everything I've read, cities are far more sustainable than dispersed living.)


> But it will require paying a price.

I don't doubt that it requires paying a price. The only relevant question is whether that price is substantially lower or substantially higher than continuing on our current track. I'm open to being convinced that it is higher, but I strongly believe that it is lower, because with increased fragility you're playing dice and one day they'll come up in a way that hurts you. The more people there are in those baskets, the harder it will hurt.

As for the future of cities: the internet has given us one thing: independence from having to go to cities to work. Combine that with the ridiculous energy expense on commuting and it seems like a complete no-brainer that we should just stop doing that. COVID has already shown us that this is far more possible than we ever thought it was.


> relevant question is whether that price is substantially lower or substantially higher than continuing on our current track

It's higher than prevailing prices. And it gets higher the more autarkic and decentralised the system needs to be.

> with increased fragility you're playing the dice and one day they'll come up in a way that hurts you

Agree. It looks like insurance pricing. How much extra are your citizens willing to pay every year to reduce supply disruptions?
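The insurance framing can be made concrete. With hypothetical numbers (the outage probability, damage per event, and premium below are all assumptions), the risk-neutral break-even is simply expected annual loss versus annual premium:

```python
# Risk-neutral view: paying for resilience is worth it when the annual
# premium is below the expected annual loss it avoids.
# All numbers are hypothetical, for illustration only.
p_outage = 0.02        # chance of a major supply disruption per year (assumed)
cost_per_event = 5000  # EUR of damage per household per event (assumed)
premium = 150          # extra EUR/year for a more resilient supply (assumed)

expected_loss = p_outage * cost_per_event  # EUR 100/year
worth_it = premium <= expected_loss        # False at these numbers
print(expected_loss, worth_it)
```

Of course, risk-averse citizens routinely pay more than the expected loss, which is exactly why insurance exists as a business.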


> It's higher than prevailing prices. And it gets higher the more autarkic and decentralised the system needs to be.

I don't actually think that that is true. If I look at the cost per kWh + network costs + various subsidies, you could probably supply a house for a lifetime if you took the energy consumption costs for that same lifetime and spent them up front on decentralized generation + storage.

It's all about the density, not so much about the cost, and as the density goes up so do the complications and the costs. But if you have enough ground (which really isn't all that much) it is perfectly doable today, and you'll probably be in the black in a surprisingly low number of years. The higher the cost of oil, the higher the cost of gas, and the higher the cost of gas, the higher the cost per kWh (this may vary depending on where you live).
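A back-of-envelope payback sketch (every number below is an assumption for illustration, not a sourced figure) shows how the "in the black in a surprisingly low number of years" claim pencils out:

```python
# Rough payback estimate for household PV + storage.
# All inputs are illustrative assumptions.
annual_kwh = 4000       # household consumption per year (assumed)
grid_price = 0.35       # EUR/kWh, incl. network costs and taxes (assumed)
system_cost = 15000     # EUR installed, PV + battery (assumed)
self_sufficiency = 0.8  # fraction of demand covered on-site (assumed)

annual_savings = annual_kwh * grid_price * self_sufficiency  # EUR 1120/year
payback_years = system_cost / annual_savings
print(f"payback: {payback_years:.1f} years")  # payback: 13.4 years
```

At these numbers the system pays for itself well within its expected lifetime; cheaper hardware or pricier grid power shortens the payback further.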

> How much extra are your citizens willing to pay every year to reduce supply disruptions?

That's a very good question. Probably not much until it starts to happen regularly, so I would expect that problem to solve itself over time. Energy has been a hot topic for the last decade and with every price shock it is getting easier to convince people that if they had more autonomy they would be less affected. Solar + heatpumps have exploded in Europe in the last decade and that trend has not stopped, in spite of a reduction in net metering. Ironically, the biggest stumbling blocks are the governments that want to tax energy but see no way of doing this if it is generated and consumed on the spot.


> This is a good goal. But it needs to be more rigorously defined. Autarky can be done. But then you need to accept North Korean living standards.

I'm not sure that's the diss you think it is. They still live better than most societies did at the beginning of the 20th century.

And their current standard of living would also be lifted if not for economic sanctions. The reality is that North Korea is generally a very resource poor geographic location, which ultimately limits your development without trade.


With centralized electrical generation, you get massive economies of scale. It would be very costly to duplicate generation when you can extend lines from a current grid so cheaply. The efficiency of large power plants also results in a reduced carbon footprint. Duplication would mean paying much more for a decentralized grid, while producing less electricity from inputs at higher cost.

Centralization of power distribution is a national security risk in every country.

The only problem is that we have to convince the centralized power industries to give up their complete control of our local and global economies.

I have been thinking about this for decades, as the path forward has been obvious for that long. Those in control just keep doubling down.

It appears that they would rather destroy our ecosystems, and risk economic collapse, instead of just adjusting their investment strategies.


Precisely. But if it isn't clear now then the only way it will become clear will be through catastrophe.

Then catastrophe it is!

But seriously, that appears to be the trajectory.


Unfortunately, agreed.

I once joked to some friends that the Mennonites would be the only people that would get through the next energy crisis without so much as blinking.


As they don't depend on fossil fuels to power their businesses or lighting at home.

This sounds nice and all, but also very very hard in Scandinavia. Not impossible though, there's at least one guy who's done it!

https://h2roadtrip.com/mr-hydrogen-sweden-lives-almost-a-dec...

Rather extreme, but technically possible!


I did it in Canada where the winters and latitude are very comparable to Scandinavia.

That very much depends on where in Canada you are! Canada is huge, and parts of it are further north than even Northern Sweden! But from what I understand most Canadians live in the southern end of the country, which is comparable to Germany. Stockholm is at ~60N for your reference

Northern Ontario (St. Joseph's Island).

According to Google, that's 46N, which puts you south of Paris (which is at 48N)!

That's not comparable to Scandinavia at all!

This video does a good job of illustrating just how much further north Europe is than most people think

https://youtube.com/shorts/C7-t_Ya6gI4?si=3EnxpFce59-VZb8B


Oh come off it. I'm north of Paris right now (and south of Sweden) and it is absolutely nothing like Northern Ontario, which has a 3 month growing season and winter temperatures below -40 on some days. Paris is smack in the sweet spot for the Atlantic conveyor.

If you want to become self sufficient in electricity, the number of sun hours matters more than anything else.

Low temperatures just mean you need more insulation, and possibly geothermal if it gets too cold for a regular heat pump. But try to generate enough power with a solar panel when you get 3 consecutive months of almost no sun at all!

The temperature isn't even the issue, the darkness is.


Low temperature even makes PV slightly more efficient.

YC backing. That's all it takes. Take an existing idea that has legs (preferably one you find in Europe or Asia), then take it to the US, apply to YC and say you already have validation, see 'startup x'.

> Adding to the awkwardness: Sim.ai was actually a Delve customer, Karabeg told TechCrunch. Both startups were grads of the startup accelerator Y Combinator, and Y Combinator alumni frequently buy each other’s products. So while Sim.ai paid Delve, Delve did not do the same for Sim.ai.

So it’s not all it takes.

<s>Cheating</s> sorry hustling and <s>bullshitting</s> sorry storytelling are more important.


It's a special level of disgusting, that's for sure. And I thought Installmonetizer was pretty bad, this one goes well beyond that.

In this case, though, morality is their product. So they go down hard.
