Hacker News | sethev's comments

> They clearly did something crazy at corona

They acquired Cerner, which had ~30k employees.


Cool to be part of history. I used to go into that office at the Innovations campus.

Saw someone with a license plate that said MPAGES, ha.


Program generation from a spec meant something vastly different in 2007 than it does now. People can generate, and are generating, programs from underspecified prompts. Trying to be systematic about how prompts work is a worthwhile area to explore.


I don't see how it's different. You could always describe what you want to a team lead or consultant and pay them to build it.

That's still the best way to turn a spec into a program and comes with all the downsides it entails.


Sure, but Joel isn't saying that's impossible or that people who do that are crackpots. In fact, he was an advocate of writing specs ahead of time [1] - for people.

At the time "generating a program from a spec" was an idea floating around that you could come up with a "spec language" that was easier than regular programming languages but somehow still had the same power and could be compiled directly into a program. That's the crackpot idea that Joel is referencing - but that's not what a spec language used with an LLM is doing.

[1]: https://www.joelonsoftware.com/2000/10/02/painless-functiona...


This is an excellent observation and puts into words something I have barely scratched the surface of. Along with specifications, formal verification is another domain that received the "just automate it" treatment in the before times.

And because formal verification with LLMs is an active area of open research, I have some hope that the old idea of automated formal verification is starting to take shape. There is a lot to talk about here, but I'll leave a link to the 1968 NATO Software Engineering Conference [1] for those who are interested in where these thoughts originated. It goes deeply into the subject of "specification languages" and other related concepts. My understanding is that the historical split between computing science and software engineering has its roots in this 1968 conference.

[1]: http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PD...


I've built an AI compiler that has my take on this: https://github.com/jfilby/intentcode


Why don't you simply point your agent to your Jira tickets? It's easier than paying another third party for their magic LLM incantation loop.


So is that what CodeSpeak does? It formalizes the vocab/structure of prompts?


Isn't that how it always goes? First it's just the crackpots. Then it's a fringe. Soon it's the way things have always been done.


Might look like it, but it might also just be survivorship bias. A lot of crackpot ideas hit the wall instead of becoming successes. We only notice the successes and may think of them as the default, not the exception.


I was commenting from that perspective: basically, anything we consider today to be “the way it’s done” was once something only crazy people did. I think it was maybe pg who said something like: if you’re only working on safe things you’ll never have a breakthrough, because if breakthroughs came from safe ideas there would be more of them. I’m not saying every crazy idea changes the world, but if you want to change the world you need a crazy idea.


It leans on tree-sitter for language handling, so I wonder if they're actually Concrete Syntax Trees.


LLMs were trained on stuff that people wrote. I get that there are "tells", but I don't really think people are as good at identifying AI-generated text as they think they are...


I wouldn't have picked this article as AI until I got an agent to do some writing for me and read a bunch of its output to figure out if I could stand behind it. Now I see the tells everywhere; "It's not this. It's that." is particularly common and I can't unsee it. (FWIW I rewrote most of the writing it generated, but it did help me figure out my structure and narrative.)

The problem with AI-generated posts, I think, is that you feel you can't trust the content once you know it's AI. It could be partly hallucinated or misrepresented.


Yeah, but "it's not X. It's Y" is a common idiom that LLMs picked up from people. That's the point I was making. And it's starting to feel like every post has at least one comment claiming that it was AI generated.


Good chunks of the article don't trigger this for me, but I would bet money on the final paragraph involving AI:

> That's not a technical argument. It's a values argument. And it's one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it's the best technology. But because it's the one technology that already belongs to you.


Is there research showing if, and under what conditions, LLM output is detected accurately? What are the false positive and false negative rates?


You don't have to be good at identifying AI generated text to detect low-effort slop.


Contractions


And gain experience


In every field where competence can be objectively measured, experience does not endlessly correlate with competence. There's always a growth phase, but then there's a curve of age vs. competence that reaches a peak, followed by a steady decline. Chess, for instance, is primarily a mental game, yet the decline comes as early as one's mid-thirties for world-class players.

I'm fully willing to accept that for a field where scenarios are fuzzier and intuition more important, it may well be that peak on the bell curve comes somewhat later. But I think it's essentially inconceivable that one is near, or even remotely near, their peak, in their 80s, in anything.


>There's always a growth phase but then there's a bell curve of age vs competence

A bell curve tracks the distribution of a single random variable. You're mixing statistical metaphors.


That’s true, but it’s not always good—Americans have stark examples of the risks of octogenarian leaders whose experience leads them astray by discounting how much the world has changed since they were young.

I think of mental faculties and experience as two separate overlapping curves where there’s a sweet spot in the middle where both are high but either one being low can become a big problem.

They also just don’t have the same energy they used to so even if they have a good idea they’ll be less effective at motivating people to embrace it, and the younger people behind them are going to be acting with more thought to succession politics.


Biden's surely a poster child for the value of experience and connections in the Presidency. Whatever you think of him (and I would certainly agree that he should never have considered a second term), he was quite successful in furthering his agenda while in office.


Yes, I agree that he used his experience well for many things (and had competent staff he could trust to get things done), but I will say he made a huge mistake continuing to back Israel's actions in Gaza to an extent which I don't think someone too young to remember the Six-Day War would have done. I think you could also make a solid argument that earlier in his career he would have had more energy to put into getting a few of the close votes in Congress over the line.


But dude was 86, how many people in nursing homes would you trust to run a country?


Probably one in a thousand.

But as one whom the Ayatollah has sworn to eliminate, I can still state that man was sharp and brilliant and extremely well spoken. His worldview was internally consistent. He had vision and experience and knew how to motivate people. He was a one in ten million leader.

I give him that praise and more, even recognising that his stated mission was to exterminate myself and my children.


> But as one whom the Ayatollah has sworn to eliminate

What does this mean — did you stick him with the bill at a restaurant or something?


Even if you don't recognize the last name, you can just click on the user info and put two and two together.


He's doing what's called "Hasbara". Regular people call it "lying".


I love this: basically pointing out that racists call it "Hasbara" and regular people call it "lying".

I don't agree it applies in this situation, but it's nice to see someone point out that regular people don't give a special Jewish name to something that already has a common name/definition, and that the common name communicates the intended concept better, so the purpose of using the special word is to convey something other than the basic understanding.


Please quote which part of my comment you feel is a lie.


It's... an amusing exchange, especially as a response to an actual acknowledgement of the other party, the thing people there neither have nor want.


And as we can see with Biden and Trump, 86 is past the optimal compromise between experience and cognition.


>Contained conflagration, short targeted exchanges, probability of contamination low, material possibility of nuclear escalation.

That's describing something that's not a world war, though. The Russian invasion of Ukraine is already far worse than what you're describing as WW3. (setting aside nuclear escalation)


Folks at Jack's level are just as susceptible to flawed reasoning and trend following as anyone else. Sometimes it feels like more so, possibly because they have so much buffer to absorb the consequences of bad ideas (see how Twitter ended up). A person living paycheck to paycheck has less leeway to veer too far away from reality.

All of this to say: I suspect a lot of 10k-person companies made up of white-collar workers could significantly cut their staff and still survive. By the time you get to that size, there's a large middle-management layer that is constantly looking for reasons to grow its 30-person org to 40, and that will be overbooked whether it has 20 people or 100.


I don't know why but it makes me smile that he did this experiment by having a grad student type the questions for chatgpt and copy the results.


He's not. But he just dismissed a question at a conference, which then somehow got turned into a whole article and a front-page story on HN.


The registry did come up, as soon as they had enough information for it to be useful. They were looking for a specific child, starting just from the images that her abuser was sharing on the internet (in which he intentionally tried to hide identifying details).

The registry is just a big list of names and addresses.

