Hacker News | aggie's comments

I've long expected Waymo's approach to prevail simply because - aside from whether vision-only proves good enough by some standard - it will be easy to lobby for regulations that favor the more conservative approach.

But I also don't think we can take anything from what Waymo claims about the feasibility of vision-only.


Waymo has posted videos of accidents they've avoided purely because their lidar picked up on a pedestrian before their cameras saw anything.

A favorite of mine: https://x.com/dmitri_dolgov/status/1900219562437861685


They're very public about the data that makes them look good, but they went to court to keep their safety data from the public. (https://techcrunch.com/2022/02/22/waymo-to-keep-robotaxi-saf...)


That lawsuit was about trade secrets shared with DMV. And DMV advised them to file a restraining order against a third party seeking redacted info.


It was actually about all kinds of information about the safety of their cars, how the company handles collisions, how they decide to make human drivers take over, details of injuries caused by their cars, etc.

The DMV required them to submit their safety data before allowing the vehicles on the road. Waymo claimed that the data the DMV needed about the public safety of their cars, and the emails they exchanged about it with regulators, were entirely "trade secrets," in order to keep them hidden from a public that understandably felt it should be able to access that information, since those cars are going to be on their streets.


I think past experience shows that the US prefers a wait-and-see approach - owing in part, I think, to its federal structure, where states compete for companies' good graces and money, so if State A bans something, State B will allow it and gain an advantage in that area.


Moreover, why draw a hard line on vision only when existing technology is available to augment it? It's not like they have to develop 3 novel technologies.


How does #3 square with Anthropic's literal warehouse full of books we've seen from the copyright case? Did OpenAI scan more books? Or did they take a shadier route of training on digital books despite copyright issues, but end up with a deeper library?


I have no idea, but I suspect there's a difference between using books to train an LLM and be able to reproduce text/writing styles, and being able to actually recall knowledge in said books.


I think they bought the books after they were caught pirating them and lost that case (for the piracy itself, not for training on copyrighted material).


Why do I see so many critiques of the abundance movement based on factional motives and ad hominem, and so little critique of the arguments they make?

If progressives want to expand government, government needs to be effective. Abundance liberalism is about learning from the mistakes of the past and making government better able to achieve the goals both liberals and progressives want.

If we're talking about factional infighting, leftists are "afraid" the neoliberals will offer a compelling path forward instead of pivoting to socialism, so they resist arguments that would otherwise be beneficial to their political goals.


The value of AI is easy to see personally, concretely. But there is always a gap between concrete value in your hands and how that plays out in larger systems. The ability to work remotely could intuitively project to the outsourcing of almost all knowledge work to cheaper labor markets, and yet that has only happened at the margins. The world is complex and complicated; reserve a measure of doubt.


This is a great point.

In-person work has higher bandwidth and lower latency than remote work, so for certain roles it makes sense you wouldn't want to farm it out to remote workers. The quality of the work can degrade in subtle ways that some people find hard to work with.

Similarly, handing a task to a human versus an LLM probably comes with a context penalty that's hard to reason about upfront. You basically make your best guess at what kind of system prompt an LLM needs to do a task, as well as the ongoing context stream. But these are still relatively static unless you have some complex evaluation pipeline that can improve the context in production very quickly.

So I think human workers will probably be able to find new context much faster when tasks change, at least for the time being. Customer service seems to be the frontline example. Many customer service tasks can be handled by an LLM, but there are probably lots of edge cases at the margins where a human simply outperforms because they can gather context faster. This is my best guess as to why Klarna reversed their decision to go all-in on LLMs earlier this year.


> In-person work has higher bandwidth and lower latency than remote work, so for certain roles it makes sense you wouldn't want to farm it out to remote workers

This is just not true. Especially if your team exists within an organization that builds worldwide solutions and interacts with the rest of the world.

Remote can be so much faster and more efficient because it's decentralized by nature and it can make everyone's workflows as optimized as possible.

Just because companies push for "onsite" work to justify their downtown real estate doesn't mean it's more productive.


That is like saying concurrent programming is far superior to sequential programming, so let's stop doing sequential programming completely. Many tasks are just easier to do in a centralized environment.


I didn't propose forcing remote work. I simply countered faulty arguments in favor of onsite.

Your analogy with programming makes little sense and distracts. It's better to discuss the actual point.


It's far from simple. There is no clear delineation between food and non-food offered by saying Doritos are a science experiment. Is an ear of corn real food? It has of course been significantly modified by selective breeding. And your grandma ate GMO corn.


It’s extremely simple.

Only buy things with one ingredient. There, now you’re not eating the science experiment.


You're just doing the science experiment yourself when you cook it? Your view is simple, just wrong.


No, I’m eating what mammals have been eating for a few hundred thousand years, not what a single one of them concocted in a lab in the last 50 years, which has led to skyrocketing illnesses that basically didn’t exist before. Diabetes, heart disease, hypertension and on and on.


Actually it is really simple: there’s no “Doritos plant.” Corn is turned into cornmeal, dried, mixed with oils and salt, then fried in more oil, then sprinkled with chemical flavorings and more salt. Roast some corn cobs and that’s real food.


At what point in the process does it stop being food? Is cornmeal with oil and salt still food, even though there's not a "cornmeal with oil and salt" plant? I think the point of the parent comment is that it's not clear that there's a single place to draw the line between "food" and "not food" in a way that everyone will immediately agree on.


Of course, if you take the semantic approach then anything edible would be “food”. Foods are not simply “good” or “bad”; they’re “better than…” or “worse than…”

As a side observation, and not particularly referring to your reply, I see that people tend to put food (or nutrition, if you want) almost in the same category as politics and religion. Many people I know who I consider very open-minded will get very defensive whenever I tell them that the best thing to do is to reduce the intake of proteins of animal origin and increase plant consumption. That goes to show that years of propaganda have worked very well.


Okay, but what's the alternative to the "semantic approach"? People keep alluding to it in this thread, but no one seems to actually be explaining it.

I'd argue that the main part of the confusion here is that people don't seem to agree on what "food" means, and I'll personally admit that I don't have any confidence in my ability to define it in a clear way that would satisfy most people here.

I don't have any particular strong feelings against what you're saying about animal proteins and plant consumption (despite knowing that I fall far short of what anyone on either side of that argument would consider a healthy diet), but that's because it's at least clear to me what you're arguing. The issue to me is that saying something edible is "food" or "not food" implies a binary that's really hard for me to wrap my head around, and it really doesn't seem like anyone is willing to define that in a way that I can understand; as soon as I try to ask questions to understand, it feels to me like the people using this stricter definition of "food" are the ones who get defensive.

I want to keep an open mind, since I'm well aware of my lack of knowledge when it comes to nutrition, but it's hard not to feel confused when someone argues that there's a strict boundary where processing something makes it no longer food, yet no one seems to be willing to elaborate on what that boundary is. From my perspective, it seems like some people might have a more nuanced understanding of what they consider food, but because it's inconsistent with the previous idea I had of what "food" is, I don't have any way of understanding what their understanding is. If there are people treating nutrition the same way as politics and religion, it's the ones who make bold claims without further explanation and reject any disagreement as the result of illogical forces.


Have you read Michael Pollan’s books?

It’s actually trivially simple to define what is food, and what you should be eating. Every vegetable and fruit on the planet. Grains, rice, nuts. Chicken, meat, seafood, etc. Put simply: it grows.

This is what all mammals have been eating for hundreds of thousands of years. We know it works. They have one ingredient. They have existed for at least as long as humans.

Then it is trivially simple to define what is not food, and what no mammal should ever consume (though realistically we will just minimize it as much as possible): Coke, fairy floss, MSG, HFCS. These things were created in a lab. They have very minimal nutritional value, and often a host of negatives. They have only been concocted in the last century, and a massive number of severe health problems have come along with them. These things have many ingredients. They did not exist 100 years ago.

Then there is an enormous grey zone in the middle that people will argue about till the end of time.

It’s not worth your time. Just eat what is clearly food and pretend the rest doesn’t exist. The fact that Twizzlers and KFC exist has no impact on my life, and I am much healthier for it.


There are plenty of plants that aren't safe to eat though, right? I don't think anyone would advocate going out and eating random grass or wood from trees. Given that, it seems like you'd have to extend the definition to be something that's both edible and grows.

At least in my experience, most people equate the idea of being edible with being food, which seems to me like what the parent comment way up the thread was talking about with redefining what "food" means being wacky; from a purely linguistic perspective, it's kind of wild to redefine a term to be a subset of the former usage but still require the old definition in order to state the new one. Essentially, you're trying to argue that what a lot of people have considered to be synonyms their entire lives should be distinct terms, with one of them remaining the same while redefining the other, and I don't think that will make sense to a lot of people.

It would be like telling people that the word "quick" should only refer to speedy things that existed before 100 years ago, and anything else is "speedy" but not "quick". Even if you had a compelling argument for why this was better, it would still be kind of wacky.


Yes, exactly. The world has changed around us. Only 200 years ago it was impossible to move faster than a horse, or go higher than climbing a tree. So “quick” and “high” were well known for hundreds of thousands of years across all languages. Now we have invented race cars and spaceships that go a lot faster and higher, so those words don’t mean what they used to mean.

Similarly, we have now invented new lab concoctions that companies want to call “food” because it helps them make money, but it is a very, very different thing than what “food” meant 200 years ago.

Go to an uncontacted tribe and give them fizzy black liquid. No way in hell they’ll drink it, because that ain’t food, and it wasn’t for a few hundred thousand years.


> At what point in the process does it stop being food?

Just because you cannot see a clearly defined boundary does not mean it is not there. A corn cob is food; a processed piece of corn mixed with <insert flavoursome/colourful/preserving chemical>, sealed in a plastic bag with a six-month shelf life, is not.


What you're saying sounds reasonable, but it completely avoids the question I asked about what point in the process it stopped being food by only mentioning the first and last steps. I'd argue that in order for a boundary to be clear, there needs to be specific step in the discrete process where the output of that step isn't considered food anymore. I honestly can't tell if you're saying that I personally can't see the boundary but you are able to, or if you're using "you" generically when you say "you cannot see", but if you mean the former, I don't think it's particularly helpful to assert that you know what it is without explaining further, and if you mean the latter, I don't think I agree with you on what constitutes a boundary being "clear".


Someone better tell the Mexicans about tostadas not being food


40% of Mexico's population is obese.


If you're implying the marginal cost of a 6-hour AV errand is almost zero, I think you're describing a prosperous future.

This is also easily managed with congestion pricing.


I've seen it discussed on Twitter for a while, mostly on the right. As best I can tell it is related to a general trend toward distrust of big agriculture, big pharma, big institutions in general as seed oils are painted as a product of that whereas animal fats are a "retvrn" type of diet (or from another POV, a Michael Pollan diet). Funny to see the political factions shifting around and the horseshoe connecting left and right on nutrition.


Most people do not have elite resumes and most people are not hiring people with elite resumes. There's plenty of uncertainty in hiring in general, and that being the case with bootcamps isn't much different than a typical resume with a 4-year degree.


I agree with the position that most people are not coming from "elite" schools, as someone who hires in the Midwest. I still much prefer someone from a four-year school (and, as the other poster mentioned, internships) to a bootcamp. Of five bootcamp graduates I've hired, one was at a useful starting skill level, compared to probably 90% (don't have a count for this one) at "base useful" skill out of college.


I used Stanford as an example, but plenty of companies focus on CS grads from big state schools like Purdue, Michigan, Ohio State, etc that have similar resumes. In my experience, graduates from 4-year CS programs with some internship experience vastly outperform bootcamp grads as a group. I have hired and worked with some outstanding bootcamp grads, but you would never know that they stood out before actually interviewing them since most bootcamps have standard resume templates they tell their grads to use. In an era of 200+ applicants/day for every junior engineering role, you need to be able to tell that someone probably has what it takes to succeed after a 30 second resume scan.


As a hiring manager, I can say that the quality of candidates from any sort of degree program versus any sort of bootcamp is really chalk and cheese. It's not a question of only hiring people with elite resumes; it's the difference between someone who has had to think deeply about a subject and learn how to solve hard problems versus someone who has had some rote knowledge about a specific tech stack drilled into them by repetition, and the minute they get outside their (narrow) zone they really struggle.

That's not to say I never hire bootcamp candidates, it is that going to a bootcamp is not really a positive in my assessment.


You would be wrong. The difference between a code camper (3-6 months) and a bachelor's in compsci is the difference between a paramedic and a doctor. If all you need is a CRUD dev who can make use of a JS framework and a CSS library then it might be sufficient, but the rigorous fundamentals underpinning a compsci degree (discrete maths, data structures, algorithms, etc.) make for a far more knowledgeable and solid engineer.


Unless EMG signal processing has had some breakthrough in the past 10 years, it is not a very precise interaction mode. I worked in a lab developing it for quadriplegics to use with the muscle on their temple (we tested above the thumb as well). You can get rough 2-axis control with some practice, but that's with an adhesive EMG pad. Can a wrist band get a clean signal?

For typing, I'd expect you need to combine with eye tracking. So you're back to the Vision Pro UI.

On its own, EMG makes a good button, I'd expect. Maybe 1-axis control.
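The "EMG as a button" idea maps onto a very standard pipeline: full-wave rectify the raw signal, smooth it into an amplitude envelope, and threshold. A minimal sketch in Python/NumPy (the sampling rate, window size, and threshold here are made-up illustrative values, not parameters from any real wristband or lab system):

```python
import numpy as np

def emg_button(signal, fs=1000, window_ms=100, threshold=0.2):
    """Classic EMG 'button': rectify, moving-average envelope, threshold."""
    rectified = np.abs(signal - np.mean(signal))   # remove DC offset, rectify
    win = max(1, int(fs * window_ms / 1000))       # envelope window in samples
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    return envelope > threshold                    # boolean "pressed" per sample

# Synthetic demo: 1 s of quiet baseline, then 1 s of simulated muscle burst.
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(0, 0.02, 1000),   # resting noise
                        rng.normal(0, 0.5, 1000)])   # active contraction
pressed = emg_button(trace)
print(f"baseline: {pressed[:1000].mean():.2f}, burst: {pressed[1000:].mean():.2f}")
```

Getting from this one-bit output to clean proportional 2-axis control is exactly where electrode contact quality and crosstalk between adjacent muscles start to dominate.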


Thanks for the reality check. Wait some more, use voice for now, is what I hear... Although a decade is a long time in signal processing and Meta has dumped a boatload of cash into this.

No 6DoF either?

Sorry for using you as the 'say something wrong, get corrected' research method, but kudos for jumping in. ;}


Microplastics seem bad. The evidence of adverse effects seems marginal. But in the bigger picture, plastics provide enormous wealth to the world. We should look for ways to mitigate any adverse environmental effects, of course, but it's not at all obvious to me that the ROI of doing so in my personal life is positive.


