Use AI! It'll make you a 10x engineer! (cost-wise) /s
I've recently had the displeasure of Opus 4.6 hallucinating an API. It would have been great if that API had existed, but it did not. It then looped, trying to make the tests pass, until I manually terminated it. In my case, I used up about $12 of usage in 30 minutes. My guess is that it went mostly to the (pretty verbose) thinking tokens.
But it's not just Anthropic. I had the same issue with Gemini 3.1 Pro.
You can’t compare those numbers because the population in 1993 and today comprises different groups who are materially different in terms of fertility rate. Last year, the fertility rate for women with German citizenship was 1.23.
The other major change is that in 1990 came the reunification of East and West Germany. Fertility rates in East Germany were low before reunification and collapsed right after it, recovering only from the early 1990s to the late 2000s. So the 1993 aggregate average is artificially low. In neighboring France, the fertility rate in 1993 was 1.7.
Certainly not as much as the U.S. or Canada, but not as little as Tokyo or the Netherlands. I found various cities easy to navigate without one, but they seemed oddly car-centric as well. Berlin somehow manages to be both very car-centric and very public-transportation-oriented.
"Wouldn't the real cause of the depressed birthrates be the requirement to own a car in order to have children?"
Yes. The one-time setup costs for "properly" raising kids are probably around $30k. All the kids' stuff is extra expensive (in the West), the child seats require a large car (in the West), and there's social stigma against kids sharing a room (in the West), so you also need a larger apartment.
Can confirm, the "stuff" costs basically nothing in the scheme of things, and nearly all of it can be had used. A bunch of it isn't really all that necessary, either. Clothes and toys can all be had for very little, without even much time investment; folks are drowning in this stuff, and lots of it just gets thrown away.
The real money goes to:
1) Healthcare (in the US).
2) Childcare or foregone wages.
3) School/housing location (same thing: either tuition, or spending 20+% more for the same amount and quality of house in a nicer school district [plus the ongoing cost of servicing the extra mortgage]; you can skip this, but if you can at all afford it, you won't feel like it's optional)
4) Space. Larger housing and larger cars. You can kind of skip this (a larger car is less optional if you have more than three kids), but at significant cost to QOL.
“the apex of internal combustion engines means they’re doing more than just propelling a vehicle around roads and highways. They’d also be air purifiers, removing CO2 as they go.”
which sounds amazing, but is followed by zero facts or information on how that might work.
They are putting Carbon Capture into some concept cars and a race car [1]. Some more info in the system at [2]. I don't see any benefit for this technology being in a car vs at a stationary large scale facility.
This is annoying marketing bs that the motor press has been touting for decades with every new generation of efficient engines. An ICE (that's not running on H2) always generates CO2.
If it did the reverse, it would be a Fischer-Tropsch reactor. /s
At operating temperatures, a modern ICE w/ a 3-way catalytic converter driving at highway speeds through the right environment (L.A. on a bad smog day) could easily have NOx and VOC levels at the tailpipe that are lower than what's going into the intake manifold.
I think the instability is mostly due to the CEO running away at the same time as a forced Azure migration where the VP of engineering ran away. There’s only so much stability you can expect from a ship that’s missing 2 captains.
I mean the fish rots from the head, but at the end of the day that rot translates into an engineering culture that doesn't value craftsmanship and quality. Every github product I've used reeks from sloppiness and poor architecture.
That's not to say they don't have people who can build good things. They built the standard for code distribution after all. But you can't help but recognize so much of it is duct taped together to ship instead of crafted and architected with intent behind major decisions that allow the small shit to just work. If you've ever worked on a similar project that evolved that way, you know the feeling.
Same here. While LLMs sometimes work surprisingly well, I also encounter edge cases where they fail surprisingly badly multiple times per day. My guess is that other people just don't bother to check what the AI says, which would cause them not to notice omission errors.
Like when I was trying to find a physical store again with ChatGPT Pro 5.4 and asked it to prepare a list of candidates, but the shop just wasn't in the list, despite GPT claiming it to be exhaustive. When I then found it manually and asked GPT for advice on how I could improve my prompting in the future, it went full "aggressively agreeable" on me with "Excellent question! Now I can see exactly why my searches missed XY - this is a perfect learning opportunity. Here's what went wrong and what was missing: ..." and then 4 sections with 4 subsections each.
It's great to see the AI reflect on how it failed. But it's also kind of painful if you know that it'll forget all of this the moment the text is sent to me and that it will never ever learn from this mistake and do better in the future.
"I also encounter edge cases where they fail surprisingly badly multiple times per day. "
If 80% of the time they 10x my output, and the other 20% I can say "well they failed, I guess this one I have to do manually" - that's still an absolutely massive productivity boost.
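Whether that nets out to a massive boost depends on the overhead of the failed attempts; here's a back-of-the-envelope sketch (the function and all the numbers are illustrative assumptions, not measurements):

```python
def net_speedup(success_rate, speedup_on_success, failure_overhead):
    """Expected overall speedup versus doing everything manually.

    success_rate: fraction of tasks where the AI helps (e.g. 0.8)
    speedup_on_success: factor on those tasks (10x -> time / 10)
    failure_overhead: time wasted on a failed AI attempt before
        falling back to manual work, as a fraction of the manual
        task time (e.g. 0.3 = 30% wasted)
    """
    time_with_ai = (success_rate / speedup_on_success
                    + (1 - success_rate) * (1 + failure_overhead))
    return 1 / time_with_ai

# 80% of tasks 10x faster, 20% done manually after a failed attempt
# that wasted 30% of the task's time:
print(round(net_speedup(0.8, 10, 0.3), 2))  # -> 2.94
```

Even under those generous assumptions, the overall gain is capped Amdahl's-law style by the manual fraction: well short of 10x, though still a real multiplier.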
But they don’t 10x my output - they write some code for a problem I/you have already thought about. The hard part isn’t writing the code, it never has been. It’s always been solving and breaking down the problem.
>It's great to see the AI reflect on how it failed. But it's also kind of painful...
Keep in mind that the AI is not reflecting, and it has no idea it made a mistake. It's just generating statistically-likely text for "apology" * "ai did not find results".
You can rent access to nearly real-time custom satellite targeting for <$3k per image. That means while you're correct that not all countries can afford it, most can.
Planet Labs PBC, a leading provider of high resolution images taken from space, said Friday it would hold back for 96 hours images of Gulf states targeted by Iranian drone attacks.
It did not say if it had acted at the request of US authorities.
> Planet Labs PBC, a leading provider of high resolution images taken from space, said Friday it would hold back for 96 hours images of Gulf states targeted by Iranian drone attacks.
To get a naval fix, you usually define an "area of uncertainty" around the last confirmed location of the ship. The area is usually a circle whose radius is the maximum distance the ship/group could have traveled at full speed since that fix.
So, you don't exactly "know" where the ship is, but you can draw a hypothetical geofence around where it's likely to be, and scan that area.
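That worst-case bound is straightforward to compute; a minimal sketch (the function name and the example numbers are my own illustration):

```python
import math

def uncertainty_circle(hours_since_fix, max_speed_knots):
    """Worst-case circle a ship could occupy after its last
    confirmed position: the radius is the maximum distance it
    could have covered at full speed in any direction."""
    radius_nm = max_speed_knots * hours_since_fix  # nautical miles
    area_sq_nm = math.pi * radius_nm ** 2          # square nm to scan
    return radius_nm, area_sq_nm

# e.g. a 30-knot ship, 4 hours after the last confirmed fix:
radius, area = uncertainty_circle(4, 30)  # radius = 120 nm
```

Note the area grows with the square of the time since the last fix, which is why stale fixes quickly become useless for targeting.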
So the satellite can know where the ship is, because it knows where it isn't? Then it's a simple matter of subtracting the isn't from the is, or the is from the isn't (whichever is greater)?
I’m a happy user of both Kagi Search and Assistant. I would totally support a “even more than Pro” subscription for Assistant, but then it should also have its own proper API if I pay for it like a proper product.
That said, I wonder how things would work if you subscribe to the assistant but not to search? Because to me, the deep search integration is precisely what makes this specific AI assistant better than, for example, ChatGPT.