I think it depends on how the car ran over the children (or anyone, for that matter). If it was an unavoidable freak accident, then you can't really blame the car.
But if the car literally didn't see a person in front of it (where a human reasonably would 99.9999% of the time) because its cameras malfunctioned or the LLM read it as something else, then those cars should not be on the road.
I actually disagree; if it can be shown to be safer overall, then I don't think it should be required to be a strict superset of human abilities.
AI will always behave differently from human cognition, so we should expect it to have different illusions and different failure cases. That doesn't mean worse: if they perform better on average than a legally driven human car, then I don't think there's a justification to exclude them.
That said, they will probably have to be _significantly_ better than human drivers in order to survive the media and public perception, so this might be irrelevant in the end.
To play devil's advocate: say that on average it was 10x or 100x safer than human drivers, but, for whatever reason, there was a 1-in-1,000,000 chance that the self-driving car would plow into traffic at a red light because some black-box instruction told it that was the thing to do, very likely injuring or killing you. But overall it was better in all other cases. Would you take that risk?