
There's also another thing. AI may not need to be superhuman, it may be close-but-not-quite human and yet be more effective than us - simply because we carry a huge baggage of stuff that a mind we build won't have.

Trust me, if I were to be wired directly to the Internet and had some well-defined goals, I'd be much more effective at it than any of us here - possibly any of us here combined. Because as a human, I have to deal with stupid shit like social considerations, random anxiety attacks, the drive to mate, the drive of curiosity, etc. Focus is a powerful force.



What about consciousness or intelligence implies that it would be 'pure' in the sense that you describe? Wouldn't a fully conscious being have a great deal of complexity that might render it equivalent to the roommate example? Couldn't it get offended after crawling the internet and reading that a lot of people didn't like it very much?

The idea that 'intelligence' is somehow an isolatable and trainable property ignores all examples of intelligence that currently exist. Intelligence is complex, multifaceted, and arises primarily as an interdependent phenomenon.


It doesn't ignore those examples. The idea pretty much comes from the definition of intelligence used in AI, which (while still messy at times) is more precise than common usage of the word.

In particular, intelligence is a powerful optimization process - it's an agent's ability to figure out how to make the world it lives in look more like it wants. Values, on the other hand, describe what the agent wants. Hence the orthogonality thesis, which follows pretty directly from this definition. 'idlewords touches on it, but only to try to bash it in a pretty dumb way - the argument is essentially like saying "2D space doesn't exist, because the only piece of paper I ever saw had two dots on it, and those dots were on top of each other".

You could argue that for evolution the orthogonality thesis doesn't hold - that maybe our intelligence is intertwined with our values. But that's because evolution is a dynamic system (a very stupid dynamic system). Thus it doesn't get to explore the whole phase space[0] at will, but follows a trajectory through it. It may be so that all trajectories starting from the initial conditions on our planet end up tightly grouped around human-like intelligence and values. But not being able to randomize your way "out there" doesn't make the phase space itself disappear, nor does it imply that it is inaccessible for us now.

--

[0] - https://en.wikipedia.org/wiki/Phase_space
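The optimizer/values split can be sketched in a few lines of Python (a toy of my own, not anything from the thread): the same search procedure stands in for "intelligence", and swapping the objective function stands in for changing "values" without touching capability.

```python
def hill_climb(value, state, steps=1000, step_size=0.01):
    """Greedy optimizer: nudge the state toward higher value."""
    for _ in range(steps):
        for delta in (step_size, -step_size):
            if value(state + delta) > value(state):
                state += delta
                break
    return state

# Two arbitrary, unrelated value functions.
maximize_x = lambda x: x                  # "wants" x as large as possible
prefer_three = lambda x: -(x - 3.0) ** 2  # "wants" x near 3

print(hill_climb(maximize_x, 0.0))   # climbs upward every step
print(hill_climb(prefer_three, 0.0)) # converges toward 3
```

Same `hill_climb`, opposite behaviors - which is all the orthogonality thesis claims: how well an agent optimizes and what it optimizes for are separate axes.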


The same could be said of flight, but now we have machines whose sole purpose is flight.

"Pure" is a bit of an extreme word, but clearly designed things are purer to particular goals than biological systems typically are.

This is partly due to the essence of what design means and partly because machines don't have to do all the things biology does, such as maintain a metabolism that continually turns over their structural parts, reproduce, find and process their own fuel, etc.


Not to mention freedom from flagging motivation and tiredness. That being said, I'm sure we can make an AI that thinks much faster than a human. I also think we can build an AI that keeps more than the roughly seven items humans can hold in working memory.


Just the fact that a machine mind will be able to interface with regular digital algorithms directly will give it a huge advantage over us.

Imagine if we had calculus modules built into our brains. Now add modules for every branch of math and physics, languages, etc. The "dumb" AI and algorithms of today will be the mental accelerators of the superintelligences of tomorrow.
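As a toy of what such a "module" buys you (my own sketch, nothing from the thread): even today's mundane exact-arithmetic routines do, tirelessly and without error, computations that unaided mental arithmetic can't.

```python
from fractions import Fraction

def series_sum_exact(n):
    """Sum 1/1 + 1/2 + ... + 1/n exactly -- trivial for a machine,
    hopeless for unaided human mental arithmetic at large n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

print(series_sum_exact(4))  # 25/12
```

A mind that can invoke such routines as directly as we recall a memorized fact gets their speed and precision "for free".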



