Hacker News | coderaptor's comments

The output can be explicitly constrained to a formal syntax (see outlines.dev).

For many use cases, this is enough to solve otherwise hard problems acceptably well.
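The core trick behind constrained generation is simple: at each decoding step, mask out any token that would take the output outside the target syntax. Here's a minimal toy sketch of that idea (the approach behind libraries like outlines.dev); the "model" is a hypothetical stand-in with fixed preferences, not a real LLM.

```python
import re

# Toy vocabulary; a real tokenizer would have tens of thousands of entries.
VOCAB = ["cat", "{", "7", "3", "}", "dog", "9"]

def fake_next_token_scores(prefix):
    # Hypothetical stand-in for LLM next-token scores:
    # just prefers earlier vocab entries, regardless of prefix.
    return {tok: len(VOCAB) - i for i, tok in enumerate(VOCAB)}

def generate_digits(max_tokens=4):
    """Greedy decoding, but only tokens that keep the output a
    string of digits (the formal syntax we constrain to) are allowed."""
    out = ""
    for _ in range(max_tokens):
        scores = fake_next_token_scores(out)
        # The mask: drop any token that would violate the syntax.
        allowed = {t: s for t, s in scores.items()
                   if re.fullmatch(r"[0-9]+", out + t)}
        if not allowed:
            break
        out += max(allowed, key=allowed.get)
    return out

print(generate_digits())  # → "7777": "cat" and "{" score higher but are masked
```

The model still picks its favorite token at every step; the constraint just removes illegal options before the pick, which is why the output is guaranteed to parse even when the model itself has no notion of the grammar.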


I haven’t heard a definition of “reasoning” or “thinking” that proves humans aren’t doing exactly that same probabilistic regurgitation.

I don’t think it’s possible to prove; feels like a philosophical question.


I won't define reasoning, just call out one aspect.

We have the ability to follow a chain of reasoning, say "that didn't work out", backtrack, and consider another. ChatGPT seems to get tangled up when its first (very good) attempt goes south.

This is definitely a barrier that can be crossed by computers. AlphaZero is better than we are at it. But it is a thing we do which we clearly don't simply do with the probabilistic regurgitation method that ChatGPT uses.

That said, the human brain combines a bunch of different areas that seem to work in different ways. Our ability to engage in this kind of reason, for example, is known to mostly happen in the left frontal cortex. So it seems likely that AGI will also need to combine different modules that work in different ways.

On that note, when you add tools to ChatGPT, it suddenly can do a lot more than it did before. If those tools include the right feedback loops, the ability to store/restore context, and so on, what could it then do? This isn't just a question of putting the right capabilities in a box. They have to work together for a goal. But I'm sure that we haven't achieved the limit of what can be achieved.


These are things we can teach children to do when they don't do them at first, so I don't see why we can't teach this behavior to AI. Maybe we should teach LLMs to play games, or to do those two-column proofs they teach in US high school geometry, or something like that — to learn some formal structure within which they can think about the world.


Instead of going back, you can construct a tree of different reasoning chains with an LLM, then take a vote or synthesise them; see Tree of Thought prompting.
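The branching-and-voting idea can be sketched in a few lines. This is a toy skeleton of Tree-of-Thought-style search, where `propose` and `score` are purely hypothetical stand-ins for LLM calls (here: random steps and a sum-based rating):

```python
import random

random.seed(0)  # make the toy run repeatable

def propose(state, n=3):
    # Stand-in for "ask the LLM for n candidate next reasoning steps".
    return [state + [random.randint(1, 10)] for _ in range(n)]

def score(state):
    # Stand-in for "ask the LLM to rate this partial reasoning chain".
    return sum(state)

def tree_of_thought(depth=3, beam=2):
    frontier = [[]]  # start from the empty chain of thought
    for _ in range(depth):
        # Branch: expand every surviving chain into several candidates.
        candidates = [c for s in frontier for c in propose(s)]
        # "Vote"/synthesise: keep only the highest-scored branches.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]
```

The key difference from plain chain-of-thought is structural: bad branches are pruned at every level instead of the model having to notice mid-generation that its one chain went wrong.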


It feels like humans do do a similar regurgitation as part of a reasoning process, but if you play around with LLMs and ask them mathematical questions beyond the absolute basics, it doesn’t take long before they trip up and reveal a total lack of ‘understanding’ as we would usually understand it.

I think we’re easily fooled by the fact that these models have mastered the art of talking like an expert. Within any domain you choose, they’ve mastered the form. But it only takes a small amount of real expertise (or even basic knowledge) to immediately spot that it’s all gobbledygook, and I strongly suspect that when it isn’t, it’s just down to luck (and the fact that almost any question you can ask has been asked before and is in the training data).

Given the amount of data being swallowed, it’s hard to believe that the probabilistic regurgitation you describe is ever going to lead to anything like ‘reasoning’ purely through scaling. You’re right that asking what reasoning is may be a philosophical question, but you don’t need to go very far to empirically verify that these models absolutely do not have it.


On the other hand, it seems rather intuitive that we have a logic-based component? It's the underpinning of science. We have to be taught to recognize when we've stumbled upon something that needs to be tested. But we can be taught that. And once we learn to recognize it, we intuitively do so in action. ChatGPT can do this in a rudimentary way as well: it says a program should work a certain way. Then it writes it. Then it runs it. Then, when the answer doesn't come out as expected (at this point, probably just error cases), it goes back and changes it.

It seems similar to what we do, if on a more basic level. At any rate, it seems like a fairly straightforward 1-2 punch that, even if not truly intelligent, would let it break through its current barriers.
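That write/run/check loop is easy to express as a skeleton. In this sketch, `llm_write_code` is a hypothetical stand-in for a model call that returns a buggy program on its first try and a corrected one once it gets failure feedback; the point is the loop structure, not the stub itself:

```python
def llm_write_code(task, feedback=None):
    # Hypothetical stand-in for an LLM call. First attempt is buggy;
    # after feedback it "fixes" itself (canned here for illustration).
    if feedback is None:
        return "def add(a, b): return a - b"   # first try: wrong operator
    return "def add(a, b): return a + b"       # revised after feedback

def run_and_check(src):
    # Actually execute the generated code and test it — this is the
    # "run it and see" step the comment describes.
    ns = {}
    exec(src, ns)
    return ns["add"](2, 3) == 5

def solve(task, max_attempts=3):
    feedback = None
    for _ in range(max_attempts):
        src = llm_write_code(task, feedback)
        if run_and_check(src):
            return src                         # expectation met: done
        feedback = "test failed"               # feed the failure back
    return None

print(solve("write an add function"))  # → the corrected source, on attempt 2
```

The execution environment acts as the external check the model lacks on its own: the model proposes, reality disposes, and the failure signal drives the retry.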


LLMs can be trained on all the math books in the world, starting from the easiest to the most advanced, they can regurgitate them almost perfectly, yet they won't apply the concepts in those books to their actions. I'd count the ability to learn new concepts and methods, then being able to use them as "reasoning".


Aren't there quite a few examples of LLMs giving out-of-distribution answers to stated problems? I think there are two issues with LLMs and reasoning:

1. They are single-pass and static - you "fake" short-term memory by re-feeding the questions along with their answers.

2. They have no real goal to achieve - one that the model would split into sub-goals, plan to achieve, estimate the returns of each, etc.

As for 2., I think this is the main point of e.g. LeCun: LLMs in themselves are simply single-modality world models, and they lack the other components needed to make them true agents capable of reasoning.
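Point 1 — faking short-term memory by re-feeding — is worth seeing concretely. The model call is stateless; "memory" is just the whole transcript being re-sent every turn. Here `llm` is a hypothetical stub that merely reports how many user turns it was shown:

```python
def llm(prompt):
    # Stand-in for a stateless model call: it sees ONLY this prompt,
    # so any "memory" must already be inside it.
    return f"(answer given {prompt.count('User:')} user turns of context)"

history = []
for question in ["What is backtracking?", "Can LLMs do it?"]:
    history.append(f"User: {question}")
    prompt = "\n".join(history)        # the entire transcript, every call
    answer = llm(prompt)
    history.append(f"Assistant: {answer}")

print(history[-1])  # the second answer saw both user turns, via re-feeding
```

Nothing persists between calls except what the caller chooses to paste back in, which is why context-window limits translate directly into "memory" limits.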


It's those kinds of examples that make it hard to carve out a clean measure of success.

Based on those kinds of results an LLM should, in theory, be able to plan, analyze and suggest improvements, without the need for human intervention.

You will see rudimentary success for this as well - however, when you push the tool further, it will stop being... "logical".

I'd refine the point to saying that you will get some low hanging fruit in terms of syntactic prediction and semantic analysis.

But when you lean on semantic ability, the model is no longer leaning on its syntactic data set, and it fails to generalize.


It’s possible to prove.

Use an LLM to do a real world task that you should be able to achieve by reasoning.


> Use an LLM to do a real world task that you should be able to achieve by reasoning.

Such as explaining the logical fallacies in this argument and the one above?


Take anything, see how far you get before you have to really grapple with hallucination.

Once that happens, your mitigation strategy will end up being the proof.


I mean I know you're joking but yes, it would be able to do that.


It’s called painting with too wide a brush.

A lot of us are simply interested in promoting and continuing to evolve and understand that well-reasoned ideology. We feel its application can improve the human condition.


But it seems to be doing it imperatively. I’d expect something like ‘nginxConf = pkgs.file "nginx.conf" "contents"’ instead of ‘nginxConf = pkgs.writeText "nginx.conf" "contents"’.

Not saying the system doesn’t apply this declaratively, but I find it difficult to intuit the above is checking for a state and applying changes only if necessary.


One distinction in Nix vs Docker is that Nix has a DAG structure, as opposed to the singly linked list structure of Docker's layers.

The "writeText" function produces a derivation (basically an atomic build recipe) that produces that file. The crux of nix is that you make deterministic derivations, and then you can always refer to the results of a derivation from the hash of the derivation and its inputs.

What nix adds is glue logic to chain these derivations together in a way that preserves reproducibility of the individual imperative, but deterministic, components.

Unless you are using something like recursive-nix, you can completely evaluate the nix expression without building any of the derivations.
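The "refer to results by the hash of the derivation and its inputs" idea can be illustrated outside of Nix. This is a Python sketch of content addressing, not Nix's actual hashing scheme — the recipe strings and the truncated hash are purely illustrative:

```python
import hashlib
import json

def drv_hash(recipe, input_hashes):
    # A derivation's identity is the hash of its build recipe plus the
    # hashes of everything it depends on — so identical inputs always
    # name the same output, before anything is ever built.
    blob = json.dumps({"recipe": recipe, "inputs": sorted(input_hashes)})
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

# A writeText-style leaf derivation with no dependencies...
conf = drv_hash('writeText nginx.conf "contents"', [])
# ...and a derivation that depends on it, forming an edge of the DAG.
nginx = drv_hash("build nginx with conf", [conf])

# Determinism: recomputing from the same inputs gives the same address,
# so an existing store entry can be reused instead of rebuilt.
assert nginx == drv_hash("build nginx with conf", [conf])
# Changing ANY input changes every downstream address.
conf2 = drv_hash('writeText nginx.conf "other contents"', [])
assert drv_hash("build nginx with conf", [conf2]) != nginx
```

This is also why the whole expression can be evaluated without building anything: the addresses of all outputs fall out of the recipe graph alone.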


Also relevant to note that although Nix builds individual derivations imperatively (call this compiler, write this file, rename this directory), it completely controls all the inputs to that imperative process.

This is fundamentally different from a Dockerfile or Ansible script which have no idea what the "starting point" of the target environment is and are pretty much just mindlessly imposing mutations on top of whatever happens to already be there.


These are not the same things. You are not necessarily capable of protecting yourself from their use by another party.

If these magnets are used incorrectly, why should the responsibility for that not be the user's?


The user I was responding to didn't make that distinction.

As far as your question, I've spent enough time with corporate legal to know many of these protections are industry driven to scaffold up a legal framework to protect themselves.

That's what "self regulation" actually looks like in material reality. I wouldn't be surprised if that's what was done here.

It's producer protection - doing, say, $100 million in sales on a legally grey product makes attorneys nervous.


Pick a project you’re passionate about and find a technology you’re curious to learn. Then figure out how to make it happen. You can chart your own course (and from what I’ve seen from most bootcamp project results during interviews, you probably should).

I’m a self-taught dropout - I learned how to do this to the detriment of other endeavors including my assigned coursework (which happened to be for another degree).

It sounds like you’re not doing this because you’re driven to. That’s fine, but you’ll probably struggle doing it on your own. I’m guessing those self-taught folks were just more inclined to the field.


I was a software developer before, but in an obscure language and I was taught on the job, no CS degree, and there are plenty of gaps in my knowledge.

I had a little side project going, I didn't want to spend money on the software normally used and started figuring out how to write my own, kept me out of trouble in the evenings, but then I got made redundant. I'd pretty much lost all interest in my career field, and getting a new job in the same field would have meant moving to another country. I had enough.

So I just threw myself into learning and developing software that I could publish, and tried to make money. Unfortunately that did not work out, but I did publish apps, and even got some fans, though it wasn't paying the bills. However, a published, fairly complex app is a good reference when applying for jobs. I eventually landed a job doing something much more up to date and interesting.

I still kept a new passion for writing random apps on the side. I wrote one to interface with software my employer makes, to do something I was interested in, then started seeing it could be really useful to them and started pushing it to them. Now I work on that full time with a team.


> Pick a project you’re passionate about and find a technology you’re curious to learn

This is the absolute best way to learn anything in my eyes, as you build it you will be forced to face problems and forced to find solutions for those problems yourself.


The hard part is coming up with a project. But you should have one in mind before you learn programming - otherwise, what's the point?


If you can't come up with (and be driven by) a project on your own, find someone (a friend, a prospective cofounder, a colleague) who does have an idea, then work with them on making that idea come to life.


I think the project is the easy part, especially if making money isn't the goal. I have dozens or hundreds that I will never get to, and sometimes come up with multiple new ones a day.


Blame the fools who believe the misinformation (or better yet, take responsibility for publishing truth in a more compelling form). It takes two to tango, and one of these tango dancers was raised in a country where they were given a right to dance.


Would you give a couple examples?


The Legislature is still ultimately capable of changing the law the Court rules on.


The legislature is ultimately not capable of anything. Passing legislation requires clearing enormous barriers: the House, and the Senate (usually by a wide margin), and the President, and then not having the Supreme Court just sweep it away.

In some cases, it takes only a single Congressman to prevent a law from being passed. If the minority party is dead set against a bill -- if only for political reasons -- it often requires absolute unanimity in the other party to pass it, an unreasonably high bar.

Passing legislation is nearly always nontrivial. They are only barely capable of passing even the most basic, crucial, mandatory law appropriating funds for the executive branch -- and that's only possible because the filibuster does not apply to appropriations bills. Real legislation is sometimes bundled into appropriations bills precisely to piggyback on that exception.

Theoretically, the legislature can do lots of stuff. Pragmatically, you can't simply say "well, the legislature should act". There is an enormous thumb on the scale in favor of the status quo.


And the Court can say the new law is unconstitutional on a flimsy pretext, as they’ve done with critical components of the Voting Rights Act.


Did you consider the impact on those responsible for your long term care?


Brain dead is dead. [1] There is no long term care. They turn off the machines pumping air in your chest and that's it.

I think you are confusing brain death with vegetative state caused by severe brain damage.

1: https://www.nhs.uk/conditions/brain-death/

