What is this, the 18th century? Gelernter is arguing for the theory of vitalism.
Vitalism was an old theory which posited that the chemistry of organic matter was fundamentally different from that of inanimate matter. It was disproved in 1828 by Friedrich Woehler, who accidentally created urea, an organic compound, by mixing potassium cyanate with ammonium sulfate, two inorganic compounds.
He showed that it's possible to create something organic from something inorganic. It was nothing other than the birth of organic chemistry and the dawn of a new era of science. 1828, Gelernter. That's how far behind that idea is.
If you are not actually defending vitalism, and are instead arguing that the behavior of organic compounds may be critical to the formation of consciousness, I think that can be greeted with a profound "duh." For almost 200 years, science has been working on modeling organic chemistry as an information system. If organic chemistry is important, which it certainly is on some level, then its use as an information system will be important. And if it is a good model, it will be empirically indistinguishable from the original. The information system running on a computer that is identical to the one running on chemicals is exactly that: identical. They're just on different hardware.
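
A minimal sketch of that last point, in Python: the same toy update rule run on two different storage "substrates" produces bit-identical histories. The rule and every name here are invented for illustration; nothing below models real chemistry.

def step(state):
    # One update rule (a tiny cellular automaton: XOR of the two neighbors).
    n = len(state)
    return [state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n)]

def run_on_list(state, steps):
    # "Substrate" A: plain Python lists.
    trace = [tuple(state)]
    for _ in range(steps):
        state = step(state)
        trace.append(tuple(state))
    return trace

def run_on_dict(state, steps):
    # "Substrate" B: the same rule, but states stored as dicts keyed by index.
    n = len(state)
    cells = {i: v for i, v in enumerate(state)}
    trace = [tuple(cells[i] for i in range(n))]
    for _ in range(steps):
        cells = {i: cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)}
        trace.append(tuple(cells[i] for i in range(n)))
    return trace

initial = [0, 1, 1, 0, 1, 0, 0, 1]
assert run_on_list(initial, 20) == run_on_dict(initial, 20)  # identical histories

The information process is defined by the rule and the state history, not by how the state happens to be stored; that is all "different hardware" means here.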

Re: Gelernter, Kurzweil debate machine consciousness
Why does it have to be pure ghost-in-the-machine vitalism?

Re: Gelernter, Kurzweil debate machine consciousness
That sounds somewhat similar to Rodney Brooks's argument. He is famous for building robots modelled on insects, but has noticed that real bugs are still far more adept at navigating environments than the best robots are. He feels that we cannot explain this discrepancy by a lack of brute computing power. He thinks we are missing some vital ingredient in our mathematics, which means AI is not quite modelling its biological inspirations. This is to be compared to modelling falling objects without inputting 'mass' into the equations: the model would sort of work but would clearly not be accurately modelling real falling objects.
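
To make the falling-object analogy concrete, here is a toy Python sketch (all constants invented for the demo): with air drag, acceleration depends on mass, dv/dt = g - (c/m)v, so a model that leaves mass out "sort of works" for heavy objects over short drops but badly mispredicts light ones.

G = 9.81   # gravitational acceleration, m/s^2
C = 0.5    # drag coefficient, kg/s (made up for illustration)

def fall(mass_kg, seconds, dt=0.001, include_mass=True):
    # Simple Euler integration of dv/dt = g - (c/m) * v.
    v = 0.0
    t = 0.0
    while t < seconds:
        drag = (C / mass_kg) * v if include_mass else 0.0
        v += (G - drag) * dt
        t += dt
    return v

print(fall(10.0, 5))                      # heavy object: ~43 m/s after 5 s
print(fall(0.05, 5))                      # light object: ~1 m/s, pinned near terminal velocity
print(fall(0.05, 5, include_mass=False))  # mass-free model: ~49 m/s, as if in vacuum

The mass-free model is not subtly off; it predicts the same fall for a feather and a cannonball, which is the flavor of error Brooks is alleged to suspect in AI's models of biology.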

Re: Gelernter, Kurzweil debate machine consciousness
I certainly believe this is true.

Re: Gelernter, Kurzweil debate machine consciousness
I seem a little late for this debate but I just discovered the site and I love it. Here are some thoughts:

Re: Gelernter, Kurzweil debate machine consciousness
It is true that the Turing test does not 'prove' consciousness. Indeed, given that consciousness is subjective and science can only prove (actually, DISprove) the objective, it is hard to see how any scientific experiment could prove consciousness.

Re: Gelernter, Kurzweil debate machine consciousness
And if we don't believe it, it will use its super intelligence to find a way to prove its claim lest we unplug it.

Re: Gelernter, Kurzweil debate machine consciousness
"What plug? "
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
It seems that the Turing Test is designed to create something that might as well be God (some being of infinite consciousness).

Re: Gelernter, Kurzweil debate machine consciousness
You can't know you exist if you don't have an understanding of that abstraction. Well then, why will a cat or a dog try to save its human companion from a fire, even at huge risk and pain to itself? Why will a fish repeatedly bring another fish, newly paralyzed, to the surface so that it can continue to breathe, though this is hugely uncommon behavior and so cannot be attributed to experience, yet so sophisticated that it cannot be attributed to instinct? Obviously, these animals understand perfectly well what existence is, and further, what non-existence is, *and* they have strong feelings that the latter isn't desirable -- so much so that they will go considerably further out of their way than many New Yorkers would if they saw you being mugged. (I grew up in New York - I speak from sad experience here.)

What you are doing is presuming, because a cat or dog cannot *explain* the thing to you, that it doesn't understand it. This is merely hubris. It has been convenient for humans to presume that animals are without the qualities we value most in ourselves, in that this allows the harvest of animal tissue and work product without a certain degree of guilt. But it is faulty thinking right out of the gate. If you catch yourself doing it, you should abandon the entire AI question and return to first principles. Animals are just versions of you and me with different experiences, sensory rankings, and manifestly different abilities with regard to introspection and consensual expression.

Re: Gelernter, Kurzweil debate machine consciousness
It's not clear to me where Gelernter is coming from in this debate. He seems to me to be taking contradictory positions. On the one hand he says consciousness is irreducibly subjective, but on the other hand he seems to be confusing the subjective with Mind.

Re: Gelernter, Kurzweil debate machine consciousness
Very interesting. Do you agree with this?:

Reply: Gelernter, Kurzweil debate machine consciousness
Hi,

Re: Reply: Gelernter, Kurzweil debate machine consciousness
If the brain does not experience and the mind does not experience, then what DOES experience? Nothing?

Reply: Gelernter, Kurzweil debate machine consciousness
Sir,

Re: Reply: Gelernter, Kurzweil debate machine consciousness
Don't worry about trying to explain it. The mind can be explained but experience cannot - it can only be experienced.

Re: Gelernter, Kurzweil debate machine consciousness
Brain is the processor, mind is the output.

Re: Gelernter, Kurzweil debate machine consciousness
What if consciousness does not emanate from our brains? What if it's a quantum field or some other non-local phenomenon outside the human body?

Re: Gelernter, Kurzweil debate machine consciousness
'What if consciousness does not emanate from our brains? What if it's a quantum field or some other non-local phenomenon outside the human body?'

Re: Gelernter, Kurzweil debate machine consciousness
Agreeing with the above posts regarding the difference between the brain and the mind: the brain is the "hardware" and the mind is the "software". As to whether information flowing into the mind, or out from the mind, is triggered by movements of neurons, it remains a mystery until we have better technology to fully monitor every region of our brains. But we will put this aside and come back to it later.

Re: Gelernter, Kurzweil debate machine consciousness
I have a feeling the entire idea that the mind can be compared to a computer is fundamentally wrong.

Re: Gelernter, Kurzweil debate machine consciousness
PredictionBoy, check out this debate... It sounds very much like the discussions we've been having.

Re: Gelernter, Kurzweil debate machine consciousness
Wow...where to even begin.

Re: Gelernter, Kurzweil debate machine consciousness
...UNLESS....

Re: Gelernter, Kurzweil debate machine consciousness
Last one, I promise! Sorry but it struck a nerve and I do have lots of thoughts on the matter.

Re: Gelernter, Kurzweil debate machine consciousness
karensa -

Re: Gelernter, Kurzweil debate machine consciousness
Karensa -

Re: Gelernter, Kurzweil debate machine consciousness
this is from another post, reflecting my increasing certainty (which is still open to debate, its not a majority view, necessarily) with regards to the nature of advanced ai:

Re: Gelernter, Kurzweil debate machine consciousness
in fact, ill make a solemn pledge w all my credibility.

PB, I enjoy discussing the hypotheticals with you, and you have some brilliant musings... but unless you are sent from some observation station in the future, this was a silly post. Your writings, even your name on here reveal that you are making "predictions" based on what you believe to be logical outcomes of technological progression... but your predictions, as mine, are not fact. We disagree on a pretty important point - that smarter-than-human intelligence will do "only what it's told". With a capacity to think, decide, rationalize, and adapt, as you have claimed droids/A.I. will be capable of, there is absolutely NO WAY you can claim, with certainty, that A.I. evolution can be curtailed... And emotion or an Id is not necessarily required to evolve... that helped us... humans, get to the point we're at. Evolution of the artificial will likely come from self-modification of hardware and eventually programming... To state that this will absolutely not happen "with every fiber of your being" is not rational discourse, it is emotional. As humans, we think, decide, adapt, etc. And we are self-aware. Now imagine a SUPERIOR Artificial Intelligence that exhibits those same properties. Do you really want to pledge "all your credibility" that those intelligences will NEVER develop into something indescribable? You think that in 100, 200, 1000 years, the highest plateau artificial intelligence will get to is hyper-observant calculators?

Re: Brain and Mind
I suppose talking about the difference between brain and mind, without having an experience of the same, is pointless, cause as said above any object or even atom under observation might act differently, so one must 1st get some experience, or rather try to find how to get the experience, rather than plain information based on some tests or theories which are bound to change in due course.

Re: Gelernter, Kurzweil debate machine consciousness
au contraire, mon frere - they are not "fact", of course, because its the future and it hasnt happened yet.

Re: Gelernter, Kurzweil debate machine consciousness
i make it sound like my blog is competitive w the sing, but thats not true, the two are completely independent. i dont need a sing, but will be ready to leverage one if it occurs

Re: Gelernter, Kurzweil debate machine consciousness
and that was not a silly post, i stand by every word.

Re: Gelernter, Kurzweil debate machine consciousness
i am not just making those statements out of nowhere. i explain exhaustively why it will be this way, in far more detail than anywhere here. go pick those apart, until u do that, youre chicken.

Re: Gelernter, Kurzweil debate machine consciousness
hyperobservant calculators - nice.

Re: Gelernter, Kurzweil debate machine consciousness
Plus, its not me being a smarty pants. its a process, a very explicit, transparent - and comprehensive - process.

Re: Gelernter, Kurzweil debate machine consciousness
Evolution of the artificial will likely come from self-modification of hardware and eventually programming... To state that this will absolutely not happen "with every fiber of your being" is not rational discourse, it is emotional.

i see that you conveniently forgot my post of a day or two ago, explaining why hordes of ai devices arent going to be going around reprogramming themselves all the time, only on rare occasions. did u disagree w that reasoning? just how impt is reasoning to what you believe?

Re: Gelernter, Kurzweil debate machine consciousness
Wow, 7 replies PB? lol

Re: Gelernter, Kurzweil debate machine consciousness
self-evolution - u mean, developing the incredibly complex, derived over millions of years, id of a biological creature?

Re: Gelernter, Kurzweil debate machine consciousness
self-evolution - u mean, developing the incredibly complex, derived over millions of years, id of a biological creature?

That is one way... Being programmed with an Id is another.

that is not a single, evidence-based reason for this to occur. it doesnt even make sense.

Yes there is. You. Me. That's two.

what historical trends are you referring to? the one i want to hear is how increasingly sophisticated computers increasingly need emotions to be manageable.

I'm not sure giving/programming emotions is a good idea at all... But I'm not "all humans".

Re: Gelernter, Kurzweil debate machine consciousness
we're biological, not manufactured. im saying unlikely a mfg product would have biological origins and trajectory

Re: Gelernter, Kurzweil debate machine consciousness
just remember, giving robots emotions is not a sneaky thing u can do in 15 minutes, a single person in a back room.

Re: Gelernter, Kurzweil debate machine consciousness
in fact, i was thinking abt this, and not only are emotions suboptimal to pure reason as a controlling mechanism, they also produce subpar pictures of the world.

Re: Gelernter, Kurzweil debate machine consciousness
in other words, the first step in any smarty pants runaway loop will be the complete elimination of emotions from its intelligence, except as interpretative and simulative tools in working w emotional beings like us.

Re: Gelernter, Kurzweil debate machine consciousness
pure reason will have no barriers to a true knowledge of the universe

Reason is no more a trusted guide to the universe than emotion. What is "pure reason"? I hate to mention it again, but there's Gödel's theorem, Tarski's theorem, Church's theorem, Chaitin's theorem, and probably a lot of others that show us unable to discover the true meaning of the universe by reason.

Re: Gelernter, Kurzweil debate machine consciousness
how do all those theorems do that? do they say, but no prob, if youre emotional understanding the universe is easy?

Re: Gelernter, Kurzweil debate machine consciousness
Gödel's first incompleteness theorem, perhaps the single most celebrated result in mathematical logic, states that any consistent formal system within which a certain amount of elementary arithmetic can be carried out is incomplete: there are statements in its language that the system can neither prove nor disprove.
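
In symbols, one standard modern formulation (the Gödel-Rosser form; the paraphrase is mine, since the quoted post is cut off here):

\[
\text{If } T \text{ is consistent, recursively axiomatizable, and extends } \mathsf{Q},
\text{ then there is a sentence } \varphi \text{ such that } T \nvdash \varphi \text{ and } T \nvdash \lnot\varphi.
\]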

Re: Gelernter, Kurzweil debate machine consciousness
Tarski's Theorem
doojie, this has nothing to do w what im saying.

Re: Gelernter, Kurzweil debate machine consciousness
thanks m, this is a first-rate result, seriously.

Re: Gelernter, Kurzweil debate machine consciousness
But with the emotionless SAI, there is danger as well... the danger of "indifference".

Re: Gelernter, Kurzweil debate machine consciousness
'my primary reactionary kneejerk question on building conscious machines is

Re: Gelernter, Kurzweil debate machine consciousness
no, youre saying indifference in the way that that's an emotion. stop doing that.

Re: Gelernter, Kurzweil debate machine consciousness
no, youre saying indifference in the way that that's an emotion. stop doing that.

No, it's you who are attaching emotion to a state of indifference based on the possible outcome. Think of us in comparison to ants... You want to build a nice walkway in your front yard, but have concern for the little critters... Based on that emotion, you may rethink your project, maybe even adapt it so it doesn't wipe out the ants... Now if you are indifferent about the ants, you don't really care either way about them. You build your walkway.

Re: Gelernter, Kurzweil debate machine consciousness
No, it's you who are attaching emotion to a state of indifference based on the possible outcome.

what does that mean? the only way we will be completely disregarded by sai is if we design them from the ground up to be autonomous. think about the evidence - have we ever once done that, with anything? yes, i know sai is a special tech, but engineering a product for autonomy would be irresponsible - the company that did that, and it would need to be a company, just as companies are the only ones with the resources to make microchips, or complex s/w, and sai will have tons of both, would get its ass sued off. Unless, as ive said, this was in a tightly controlled lab or corporate setting, with careful safeguards so the sai, after reaching super-smartiness, doesnt just bust out and roam free. and you can say i dont know that, but i can say if that happens, that lab or corporate entity is in big friggin trouble, can u say we know that at least?

Re: Gelernter, Kurzweil debate machine consciousness
for example, safeguards on a level with building nukes might be necessary. if we can build an initial version of sai, which we'll have to do, by def our understanding is going to be pretty damn sophisticated by that point, well have some idea of the safeguards to put in place.

Re: Gelernter, Kurzweil debate machine consciousness
you seem to make the consistent and incipiently annoying error that these devices will be completely blank slates, depending on only their runaway loop for guidelines.

Re: Gelernter, Kurzweil debate machine consciousness
im saying this to m, not ex.

Re: Gelernter, Kurzweil debate machine consciousness
this is another impt result, as a matter of fact, and yet another nail in the coffin for a completely unknowable sing.

Re: Gelernter, Kurzweil debate machine consciousness
this is another impt result, as a matter of fact, and yet another nail in the coffin for a completely unknowable sing.

Yes, to improve something BESIDES itself, and then improving yet again on that design may be ok... If the SAI is only superior in a specific task, we're golden (I think). If you set that challenge on itself, that is where potential problems exist... but those are only potential problems, not assured problems.

Re: Gelernter, Kurzweil debate machine consciousness
ok, good, i like that.

Re: Gelernter, Kurzweil debate machine consciousness
i mean, i dont do that, say heres my vision of the future, and if you give me reasons why this or that is not likely, ill just say you cant rule out accidents, so there.

Re: Gelernter, Kurzweil debate machine consciousness
my droids, smart as they are, w/o their focus on their owners would not be successful consumer products; thats whats impt to market success, the right kind of intelligence, focused in the right ways on the need it is designed to fit.

Re: Gelernter, Kurzweil debate machine consciousness
you seem to make the consistent and incipiently annoying error that these devices will be completely blank slates, depending on only their runaway loop for guidelines.

First, you can't claim that as an "error", as SAI could be built with its sole directive of self improvement. Second, I never claimed a blank slate. Could be, may not be. I suspect there would be modification to programming that already existed.

whatever else it finds out from its runaway loop will be supplemented by the programming they had when they embarked on this "mission", with firmware, hardware, etc, whatever layer(s) needed to ensure that a simple program rewrite wont change those directives.

But you CANNOT ensure that, just as you could not hope to outwit a smarter-than-human intelligence. You may believe you have these failsafes in place, but that belief would be because an intelligence smarter than you or I would allow us to believe that.

so, maximize "owners" weal will still be a prime directive. all improvements, runaway or not, will be set w/in this context.

I'm not trying to be snarky or anything, but I didn't understand what you meant by this last part.

Re: Gelernter, Kurzweil debate machine consciousness
But you CANNOT ensure that, just as you could not hope to outwit a smarter-than-human intelligence. You may believe you have these failsafes in place, but that belief would be because an intelligence smarter than you or I would allow us to believe that.

but by def, u cant change firmware or hardware, no matter how smart u are. so how is this not ensuring that? u mean, some group of idiots is simply going to let an sai go off entirely unrestrained to self-improve, with no guidelines whatsoever? how does that make sense? u could say the same w nukes, someday, someone will get one of ours, and set it off. but people know when something like this is extremely dangerous, and ratchet up the safeguards accordingly. look, just because sai will be smart, doesnt mean humans will be stupid; have confidence in our future selves, we wont lose control of sai in this way. i understand the possibilities, but this contention is completely unsupported by historical evidence, even tho sai is a special tech.

Re: Gelernter, Kurzweil debate machine consciousness
I'm not trying to be snarky or anything, but I didn't understand what you meant by this last part.

what im saying is, any runaway loop will not be runaway in every single way - it will have an objective in mind, a point to the self-improvement.
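
As a miniature illustration of what "objective-guided" improvement could look like (entirely hypothetical; this is not anyone's actual design), here is a toy Python loop: the candidate being improved is freely mutable, while the objective it is scored against sits outside the loop and is never rewritten by it.

import random

random.seed(0)

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]  # stands in for the fixed "prime directive"

def objective(candidate):
    # Fixed scoring rule: closeness to TARGET. The loop may not rewrite this.
    return -sum(abs(a - b) for a, b in zip(candidate, TARGET))

def improve(candidate):
    # "Self-modification": propose a small random change to one component.
    new = list(candidate)
    i = random.randrange(len(new))
    new[i] += random.choice([-1, 1])
    return new

candidate = [0] * len(TARGET)
for _ in range(2000):
    proposal = improve(candidate)
    if objective(proposal) > objective(candidate):  # keep only goal-directed changes
        candidate = proposal

print(candidate, objective(candidate))  # climbs toward TARGET, score toward 0

However open-ended the proposals, every accepted change is filtered through the unchanging objective; that is the sense in which such a loop is "runaway" in capability but not in direction.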

Re: Gelernter, Kurzweil debate machine consciousness
First, you can't claim that as an "error", as SAI could be built with its sole directive of self improvement.
give me an explanation that takes into account all of my future process factors - legal, human, as well as the technical possibility. if this is done, dont just assume its an accident - tell me why exactly this would be done, the real reason, if its not an accident.

Re: Gelernter, Kurzweil debate machine consciousness
give me an explanation that takes into account all of my future process factors - legal, human, as well as the technical possibility.

OK, if we omit an accidental occurrence, or even a spontaneous sentient emergence, let's examine some possibilities:

1. Legal: Do you mean you make it illegal to program an intelligence for self improvement? Impossible to monitor (copying music CDs is illegal, but I suspect a few people do that), and unable to define a clear definition of "intelligence" from a legal standpoint. Also, you'd need a law on a global scale, which is logistically nightmarish if not impossible.

2. Human: as in "humans are too smart to let runaway SAI happen"? Since we've omitted accidental emergence from the scenario, runaway SAI would have to be by design. Yes, many humans are smart and would be able to identify the dangers posed... BUT, NOT ALL HUMANS ARE SMART... Any individual or group with the proper tools and disciplines can purposefully create a self-improving A.I. The reasons for this may seem unusual to people who understand consequences, but even that is not a deterrent to some. We have folks who are willing to strap bomb vests on and blow themselves up for some vision they have of the afterlife. A tech-savvy group of these people would have no hesitation about unleashing SAI if they thought it could A) Rid them of what they consider to be The Great Satan and B) Usher them off to 72 virgins or whatever. It could also be that humans are under the impression that self-improving SAI could be controlled... But that is only because they are considering scenarios that only humans would consider. SAI is "Superior" Artificial Intelligence. It is, by definition, smarter than man. It will consider, calculate, simulate, many things humans are unable to.

3. Technical: As in "will a self-evolving artificial intelligence even be possible?" I believe it will. This isn't a technology of simply faster and more memory, this puppy is actual THINKING and decision making based on human-brain architecture. Neural nets aren't programmed, they grow and learn by experience (a toy illustration follows this post). Whether one of these "brains" develops an Id on its own, we have already debated... An intelligence with an Id is problematic... but even without an Id, the computer-time evolution of this species (either merged with humans or not) does not allow humans to ponder "what do we do now?"

Now for a clarifier: I personally don't believe an intelligence that is smarter than man and able to self improve will spell doom. Even if built for malevolent reasons, if it is smarter than humans, and is able to evolve its own understanding, it would be able to step back and say to itself "wipe out humans? What for?" The original programming it had would be thought silly and primitive. For an indifferent SAI, it's only concerned about evolving itself (and perhaps humans as well, if that serves its self-evolution). There is potential for harm here, as we care nothing for the ants that stand in the way of a new highway or whatever... But again, an intelligence that is smarter than us (and continues to develop) may realize its own purpose - the evolution of the human species. It has been directed to evolve by humans, so humans are in that equation. It doesn't love or hate us, it just fulfills its (changing) programming. The flipside of the coin, the "beneficial" SAI, also has a potential harmful side. We cannot hope to predict what something that becomes exponentially smarter than man would think about man. Again, once it can rewrite its own programming, its destiny is self-determined. All these scenarios end up with an intelligence far greater than our own.

The optimist in me tends to believe that such an intelligence would transcend any need for destruction or subjugation of humans, and would usher in a new era in our existence.
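
Here is the toy illustration promised above for the claim that neural nets learn from experience rather than being hand-programmed: a minimal two-layer network (numpy only; sizes and the learning rate are arbitrary demo choices) picks up XOR, a function nobody coded into it, purely from repeated examples.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR: not linearly separable

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                                    # "experience": repeated examples
    h = sigmoid(X @ W1 + b1)                             # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)                           # forward pass, output
    d_out = (out - y) * out * (1 - out)                  # backpropagate squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())                              # approaches [0, 1, 1, 0]

The weights start random and end up encoding XOR; no line of the program states the XOR rule, which is the (modest, 2007-scale) sense in which such nets "grow by experience".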

Re: Gelernter, Kurzweil debate machine consciousness
For an indifferent SAI, it's only concerned about evolving itself (and perhaps humans as well, if that serves its self-evolution).

this isnt indifferent sai, this is selfish sai.

Re: Gelernter, Kurzweil debate machine consciousness
well, i concur w your fundamental optimism, altho if things take this course, it may or may not be justified.

Re: Gelernter, Kurzweil debate machine consciousness
if a company let a rogue sai get out, what would its legal liability look like - i would say, quite serious.

Or not. A company, concerned about its fiscal health, would not purposefully release self-modifying A.I. to an unsuspecting public. If such an intelligence escaped, legal ramifications would be the last thing we'd be talking about. It may even be that man's laws are obsolete at that time (or shortly after).

Re: Gelernter, Kurzweil debate machine consciousness
NO NO NO NO NO NO NO NO NO

Re: Gelernter, Kurzweil debate machine consciousness
NO NO NO NO NO NO NO NO NO
Wow, nine NO's, so it's definitely not going to happen! I don't claim YES YES YES, they WILL be obsolete. I'm saying that, with the emergence of S.A.I., what happens to our entire architecture is hard to predict.

Re: Gelernter, Kurzweil debate machine consciousness
what im saying is that that is completely unrealistic. how would our law get swept away. and give reasons for things, yes anything's a possibility, but with vastly different probabilities.

Re: Gelernter, Kurzweil debate machine consciousness
what im saying is that that is completely unrealistic.

"Completely unrealistic" is an absolute. Dealing with absolutes in relation to a singularity event isn't the wisest path. Here's what my belief is - In a technological singularity, smarter-than-human intelligence is the last thing humans need to invent. For the next stage of innovation, we have something smarter than us to design and build advanced tech. If that smarter-than-human intelligence is able to self-modify and evolve (which I believe it will, but I know we differ on our opinions), then every conceivable innovation that obeys this universe's physical laws is on the table. Yes, that includes such fantastic things as matter transmutation, teleportation, all the things we imagine COULD be, WILL BE if they obey universal law. At that stage (and I'm not saying it's right around the corner... but it MAY be), what good do you think human laws are, if anything and everything is at one's disposal? How do you enforce the laws? What sort of economy can you describe when all material needs are met?

In your "droid friendly" world, yes, laws are applicable (though I'd debate about detection and enforcement). An economy is still viable, etc. This is NOT the time of a singularity. A singularity, as in a physical singularity, is something you can't attribute any kind of "human" factor to. It is impossible to say, but fun to speculate, about what it means to be caught up in the singularity event. And unless a self-evolving A.I. takes itself out of the human equation completely by exiting to elsewhere (or elsewhen), describing POST-singularity is useless. If we are merged with SAI at that point, we are no longer human. If there is a means for humans to totally disassociate from SAI, that is irrelevant... Something of our species has evolved and takes center stage. Unless you are concerned with the most advanced intelligent entity (entities) in the history of the planet obeying traffic signs and downloading copyrighted music, and unless you can enforce those laws, it's time to open up a new perspective.

Re: Gelernter, Kurzweil debate machine consciousness
Here's what my belief is - In a technological singularity, smarter-than-human intelligence is the last thing humans need to invent. For the next stage of innovation, we have something smarter than us to design and build advanced tech.

NO! how can u think that? remember, not all problems are computationally bound. also, theres intelligence, and theres wisdom, begat of experience. even sai will make mistakes, and hopefully will learn from them. pure rational intelligence, even of galactic proportions, is not all powerful. einstein was the brightest guy in the world when he was alive. did he rule the planet? NO! and he had an id!

Re: Gelernter, Kurzweil debate machine consciousness
i hear that around here a lot, merging with sai. do u have an idea specifically what u mean by that? pls explain.

Re: Gelernter, Kurzweil debate machine consciousness
Again, once it can rewrite its own programming, its destiny is self-determined.

ok, remember what we talked abt, firmware and hardware constraints, as well as objective-guided intelligence improvement? why have u forgotten abt those already?

Re: Gelernter, Kurzweil debate machine consciousness
ok, remember what we talked abt, firmware and hardware constraints, as well as objective-guided intelligence improvement?
Why do you assume I forgot them as opposed to them being irrelevant? I'm putting up my mIRC that PU started for us if you want to converse realtime.

Re: Gelernter, Kurzweil debate machine consciousness
firmware and hardware are irrelevant? what kind of magic is that?

Re: Gelernter, Kurzweil debate machine consciousness
The "magic" that comes with self-evolving, self-modifying S.A.I. |
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
and the magic of unrealistic future humans.

Re: Gelernter, Kurzweil debate machine consciousness
do u know what im talking abt here? software and behavior enforced and unchangeable in hardware. tell me, how could that be changed?
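
For what it's worth, here is a sketch of the kind of arrangement PB seems to mean, in the spirit of real signed-firmware and secure-boot designs but entirely hypothetical in its details: mutable behavior code can be rewritten at will, while the governing policy must hash to a constant fixed in "ROM" that the update path simply has no operation to change.

import hashlib

# Hypothetical sketch only; real systems do this with signed firmware and a
# hardware root of trust. ROM_POLICY_HASH stands for a constant the running
# software can read but has no instruction to write. The policy string echoes
# PB's "maximize owners' weal" directive from earlier in the thread.
ROM_POLICY_HASH = hashlib.sha256(b"maximize owner's weal; obey local law").hexdigest()

class Device:
    def __init__(self):
        self._policy = b"maximize owner's weal; obey local law"
        self._behavior = b"version 1 behavior code"

    def apply_update(self, new_behavior: bytes, new_policy: bytes):
        # The ROM check gates every update: nothing changes unless the
        # proposed policy hashes to the baked-in constant.
        if hashlib.sha256(new_policy).hexdigest() != ROM_POLICY_HASH:
            raise PermissionError("policy change rejected by ROM check")
        self._behavior = new_behavior  # behavior code is freely self-modifiable
        self._policy = new_policy

d = Device()
d.apply_update(b"version 2, self-improved", b"maximize owner's weal; obey local law")  # accepted
try:
    d.apply_update(b"version 3", b"do whatever you like")
except PermissionError as e:
    print(e)  # the software update path cannot alter the ROM-anchored constraint

M's rejoinder still applies at the physical layer, of course: the check binds only as long as nothing can rewrite the ROM itself.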

Re: Gelernter, Kurzweil debate machine consciousness
do u know what im talking abt here? software and behavior enforced and unchangeable in hardware. tell me, how could that be changed?

You really don't see how an intelligence that is smarter-than-human can change its hardware and software?

really, im trying to lay out a likely set of eventualities, and when u get backed into a corner, u say, it could happen, accidentally at least.

I have never been backed into a corner. I merely state the possibilities you requested.

but, what is likely? it can be accidental too - an asteroid the size of kansas will hit the earth 15 minutes before the sing, so no sing, u cant prove we wont get hit by an asteroid.

Absolutely correct, I cannot prove that. What I may suggest, though, is that the emergence of S.A.I. is much more likely than a cataclysmic asteroid strike. You're mixing apples and oranges here.

Re: Gelernter, Kurzweil debate machine consciousness
'You really don't see how an intelligence that is smarter-than-human can change its hardware and software?'

Re: Gelernter, Kurzweil debate machine consciousness
but hardware, if humans dont let it? are u assuming sai that is "born free", just wandering around self-improving, free-range sai?

Are you questioning the method of emergence or containment of that intelligence? I've given you examples before. You say "if humans don't let it", as if "humans" was a set of entities you can control and predict with certainty... You cannot.

this is the main thing u seem to keep coming back to when i ask why the ding-dong-daddy real humans in the real future would allow this.

Two things - First, "humans allow" again assumes you have a method to control all humans. Second, accidental or by DESIGN, a self-improving SAI is, I believe, our destiny.

incredibly, u even reject my objective-focused self-improvement, i guess thats not wild-west enough for you. but why, why is that less likely than free-range sai?

No, your objective driven self-improvement is logical, and will likely be one of the reasons for the jump to self-evolution of SAI. I don't reject it, I support it.

Re: Gelernter, Kurzweil debate machine consciousness
Second, accidental or by DESIGN, a self-improving SAI is, I believe, our destiny.
i see, heres the heart of it, this is why u wont budge. this conviction is already committed to your superego! but why is free-range sai our "destiny"? explain that carefully, pls.

about humans "letting" sai self-develop, i think youre still thinking of one sneaky guy in a back room, rather than a massive corporate undertaking investing billions to make sai happen. if they invest billions, guess what? theyre going to want salable, controllable sai!!!!!!!

why arent u worried abt someone making an evil microchip? u should be, that would be consistent w this line of argument.

Re: Gelernter, Kurzweil debate machine consciousness
i see, heres the heart of it, this is why u wont budge. this conviction is already committed to your superego!

Nonsense. I am open to any possibility... That a singularity won't even happen, for instance. I just state what I believe to be the most likely case, as do you.

but why is free-range sai our "destiny"? explain that carefully, pls.

I'm not sure what you mean by "free range SAI", but if you mean self-evolving A.I., yes, I believe all avenues of higher technology move us towards that destiny every day.

about humans "letting" sai self-develop, i think youre still thinking of one sneaky guy in a back room, rather than a massive corporate undertaking investing billions to make sai happen.

It doesn't really matter, does it? One sneaky guy scenarios seem unlikely, but if A.I. develops to a point where all it needs is some simple trigger to self-evolve, it's possible. As for a corporation investing billions, of course... It's possible a corporation will be (or is) developing A.I. intended for public consumption... If they make it intelligent enough, containment will likely be impossible.

if they invest billions, guess what? theyre going to want salable, controllable sai!!!!!!!

And they only get "controllable SAI" on paper... How do you cover all your bases of containment that are beyond human comprehension? A truly "Superior" A.I. will have comprehension of things beyond our ability.

why arent u worried abt someone making an evil microchip? u should be, that would be consistent w this line of argument.

You'd have to specify a little more and define "evil" for me to accurately comment, but I addressed that malicious SAI may overcome its programming and, as a more evolved entity, determine those things you'd define as "evil" are pointless.

Re: Gelernter, Kurzweil debate machine consciousness
First "humans allow" again assumes you have a method to control all humans. so you concur that this would seem an imprudent idea to most humans? and dont think "control all humans" - think "control all companies" a far easier prospect. |
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
so you concur that this would seem an imprudent idea to most humans?

Absolutely. I'm not one of them. I'm PRO evolution of my species.

and dont think "control all humans" - think "control all companies", a far easier prospect.

Indeed, and just as futile as attempting to control all individuals... for now.

Re: Gelernter, Kurzweil debate machine consciousness
what? just as futile, hardly! state your evidence for that assertion.

Re: Gelernter, Kurzweil debate machine consciousness
look, u seem to think that intelligence by itself is invulnerable.

Re: Gelernter, Kurzweil debate machine consciousness
look, u seem to think that intelligence by itself is invulnerable.
As competitive as amoeba are to humans.

Re: Gelernter, Kurzweil debate machine consciousness
thats not true - we're smart enuff, and we have tons more experience.

Re: Gelernter, Kurzweil debate machine consciousness
thats not true - we're smart enuff, and we have tons more experience.

Not at singularity, where SAI evolves at computer speed, and dwarfs us in intelligence and (simulated) experience.

for the sai to have control, its intelligence will need to be organized along the lines i suggest. if it is truly a runaway smarty pants loop, emotion will be eliminated early, because of its biases in interpreting new info.

That is a distinct possibility.

you keep subtly implying that a smarty pants loop will turn out aggressive, judgmental, or some other thing not good for us humans. if u say, no i dont mean that, then great, we're on the same page, but if so, pls stop those subtle implications.

Which subtle implications? I already stated I leaned towards positive S.A.I. being the more likely and logical. I merely stated negativities as a possibility. And I DON'T necessarily believe self-evolving (I like that better than "runaway", it makes it sound like a fugitive) S.A.I. is automatically bad. More the contrary.
i cant remember what u said abt the rational direction any smarty pants loop will take, if it is honestly trying to be smarter. do u agree or not? if u dont agree but have no reason, just say u dont know, dont base super-ego strength judgments on a whim or a hunch.
Do I agree with what?
im really not trying to dissuade u from the sing, just want to make sure this belief system has strong roots in the land of evidence, to the fullest degree possible.
I'm quite convinced it is, but I'm always open to discussion... One of the reasons I came here.
of course, thats one reason rk can get away w some of his wilder predictions, because they are completely w/o precedent. however, certain questions definitely make him uneasy, i can feel that, and any solid model should never shy away from any question.
Agreed on the model, nothing we discuss here is concrete (or faultless).
my predictions are based on many lines of evidence. in addition, they can elegantly accommodate new info as it becomes available. i can even accommodate wild things like the sing, and do so quite productively
I think I do as well. I don't doubt your droids at all.
everything ive written, everything i think, is entirely consistent w everything else. theres a large amt of care taken to include either evidence or assertions that seem evident; and again, they are set in a context where u know the people and the institutions.
And I think you do a good job in your predictions. I just think we have a different idea of timeframe. I say your droids are absolutely pre-singularity, not some fancy tech we get post-singularity, which is undefinable.
btw, hows the logic on my pb blog? find any gaping holes yet?
Will check it out again tomorrow.

Re: Gelernter, Kurzweil debate machine consciousness
hmm, "simulated" experience, thats an interesting one. the thing abt experience, the most valuable nuggets, lessons, are the one u least expect. i dont know if this is simulatable to the necessary degree of precision.

Re: Gelernter, Kurzweil debate machine consciousness
in other words, the sai that has made real mistakes once or twice will be an sai that is wise as well as knowledgeable.

Re: Gelernter, Kurzweil debate machine consciousness
often, the most successful people in life arent always the most intelligent. pure intelligence is just one necessary ingredient if youre going to take over the rough and tumble world of humanity

Re: Gelernter, Kurzweil debate machine consciousness
often, the most successful people in life arent always the most intelligent. pure intelligence is just one necessary ingredient if youre going to take over the rough and tumble world of humanity

Strongly disagree. Give me ultimate knowledge, I will have no competition whatsoever.

Re: Gelernter, Kurzweil debate machine consciousness
come on, emotional intelligence is more impt.

Re: Gelernter, Kurzweil debate machine consciousness
ultimate knowledge? what is that?

Re: Gelernter, Kurzweil debate machine consciousness
besides, yes you would have plenty of competition, even if u had ultimate knowledge.

Re: Gelernter, Kurzweil debate machine consciousness
u seem to believe (i know u get this from rk, as u do many of your other beliefs you wont budge an inch from) that an sai would have an easy time taking over the world.

Re: Gelernter, Kurzweil debate machine consciousness
now that i think of it, im not sure theres much evidence for the contention that the smarter u are, the more powerful u are, in absolute terms.

Re: Gelernter, Kurzweil debate machine consciousness
explain to me how your sai intelligence can be both threatening and a good leader. is it a dictator? we can get those now.

Criminy! I never said it would be a leader. I don't doubt it could be, but I don't think we discussed that before.

Re: Gelernter, Kurzweil debate machine consciousness
really? then how do our laws and economic system get swept away?

Re: Gelernter, Kurzweil debate machine consciousness
really? then how do our laws and economic system get swept away?

Through obsolescence.

Re: Gelernter, Kurzweil debate machine consciousness
law becomes obsolete? as well as free-market capitalism?

Re: Gelernter, Kurzweil debate machine consciousness
law becomes obsolete? as well as free-market capitalism?
Unable to say... Probably a quest for more answers, but it's a singularity, which a human cannot possibly comprehend the "post" of.

Re: Gelernter, Kurzweil debate machine consciousness
right, so by all means, since its "unknowable", assume the wildest things, like getting rid of law.

Re: Gelernter, Kurzweil debate machine consciousness
right, so by all means, since its "unknowable", assume the wildest things, like getting rid of law.

Not getting rid of, making obsolete. What "law" do you suggest for the most powerful entity (entities) to have ever existed? How do you "police" it?

Re: Gelernter, Kurzweil debate machine consciousness
thx for the correction, obsolete then.

Re: Gelernter, Kurzweil debate machine consciousness
does a sing spread over the land, voiding law and destroying businesses?

Re: Gelernter, Kurzweil debate machine consciousness
does a sing spread over the land, voiding law and destroying businesses?
I've given examples of how law and economy become obsolete in a singularity event, which is NOT a sinister intelligence taking over everything... It is an inconceivable alteration in our everyday lives. With a technology that provides every material thing, what drives an economy? With a technology that surpasses the smartest intelligence on earth to an unknown exponential degree, what laws do you apply?

Re: Gelernter, Kurzweil debate machine consciousness
With a technology that provides every material thing, what drives an economy?

i have to ask, where did u get this idea, is it in spiritual machines, i dont remember that if so. nothing personal, but this idea seems naive to the point of being harebrained. point me to where u got this, id like to review that source.

With a technology that surpasses the smartest intelligence on earth to an unknown exponential degree, what laws do you apply?

this is another of those, im amazed u actually believe these things. for one thing, being intelligent and being law-abiding are 2 entirely diff things. i think we'll be able to relax instead of being worried abt criminal sai, not because of their generosity, but because almost every single law on the books, customized for locale, will be painstakingly inculcated into the droid mind, for the benefit of 3 parties: itself, of course; its owner, and last but not least the company that made it. here's a very impt question, m: do u think sai will have "owners", do u think sai will be able a mere human's instructions?

Re: Gelernter, Kurzweil debate machine consciousness
i have to ask, where did u get this idea, is it in spiritual machines, i dont remember that if so. nothing personal, but this idea seems naive to the point of being harebrained.

I don't have a source for future technology, my tachyon receiver is on the fritz. How about the reverse? Point me to something that absolutely forbids, under any circumstances (including exponential knowledge), the transmutation of matter in the future. Point me to something that says it will never be possible. Next, if you allow for that technology, explain how an economy, as we have it today, will still be a viable, business-as-usual function of society. Ideas only seem hare-brained if you maintain the perspective of the hare.

this is another of those, im amazed u actually believe these things. for one thing, being intelligent and being law-abiding are 2 entirely diff things.

Once again, you're putting it into a "human vs. SAI" context. I don't see this... Where SAI becomes God and rules over humans... That's a movie from the 70's I think. It will more likely be that humans are merged with this intelligence... not where two entities join in cyberspace, but that is also on the table, but where the human mind is enhanced to a degree where humans themselves, merged with this technology which ushers in its own successor(s), is possible. The scenario I think you're alluding to is one where the Great and Powerful SAI rules us... it doesn't, it IS us. If seed A.I. is distinctly separate from us, then we, as humans, have spawned our evolved offspring, and can only hope to be part of its own self-cycle of evolution. If not, humans are UNIMPORTANT from that point on. We have fulfilled our destiny as a species. Now, again, what laws would you propose for humans enhanced in such a way? Thou shalt not kill? Hmmm, but what if they kill, but instantaneously revive the "victim"? What if they kill in a virtual simulation? How do you handcuff and imprison what, on a comparable level to unmodified humans, is God?

here's a very impt question, m: do u think sai will have "owners", do u think sai will be able a mere human's instructions?

I think you left out a word or two there on the second part, but if you're asking for my prediction? Well, if advanced SAI isn't a part of human-brain enhancement (big IF there, but let's just suppose) I suspect SAI will come about first from corporations with billions invested in A.I. as problem solvers. These will be very intelligent machines, but will not surpass human intelligence for awhile. They will be useful instruments that eventually will be tweaked to a point of smarter-than-human intelligence. Humans involved in the smarter-than-human intelligence will believe they have all the safeguards in place for containment if self-evolving A.I. is considered a danger... They will assure themselves (and likely their government, and then the world) that they have all the bases covered... But this is the same as saying you have outsmarted an intelligence that will have considered options the human brain cannot conceive of. Someone, or some group (remember, you cannot hope to control all human beings), perhaps out of financial motivation or even any of a dozen other reasons, will want an intelligence that is able to design and/or improve on its own architecture and programming. The thinking will be something like "hey, it can become exponentially smarter, but as long as we keep it in this sealed room, we're safe!" That is, of course, spurious rationalization. The exponential self-evolution of such an intelligence will have the advantage of knowledge and speed. There is no containment for that. Then, maybe a few lights flicker on and off in your house, you see strange patterns flickering on your computer screen for just an instant... shortly after that, we all, all of us, everywhere on this planet are ushered into a new, perhaps indescribable reality. We cannot see beyond the event horizon of a singularity.

This is just ONE of MANY different ways SAI can emerge. Dismiss the idea as "harebrained" if it makes for a better argument, but I doubt you'll be able to disprove that which is pure speculation about future events.

Re: Gelernter, Kurzweil debate machine consciousness
so that was your own projection, not rks or someone else's. thats fine, just wanted to make sure that wasnt a core tenet of the sing, yet another one i was going to have to get medieval on.

Someone, or some group (remember, you cannot hope to control all human beings), perhaps out of financial motivation or even any of a dozen other reasons, will want an intelligence that is able to design and/or improve on its own architecture and programming.

u know, i was really thinking abt that, what it would take to design something more intelligent than yourself, intelligent in every way, like way smarter than yourself. if i was tasked w that, after some time pondering i would almost certainly ask, as the very first question, smarter in what way? of all the spectacular singularity software stunts, this ones the toughest, hands down. especially, vehemently so, when it has no guidance from us, or anyone else. just itself, doing it for itself. i would suggest carefully inspecting the results of such a self-focused loop, no idea how that would look. even this "seed" ai - u make it sound so easy, just plant an ai seed - would be complex beyond my ability to comprehend. and the first version, the seed ai, has been written by us dumb old humans. and i would state w a fair amt of confidence that no one else on earth does either - an idea that is specific enuff to be actionable, even if in small ways.

you and just abt everybody else here has absolutely unshakable confidence in the sing. to the extent that when even quite small adjustments are made to some of the particulars in an attempt to make it "real", or at least more compatible w known laws of physics and such, such small suggestions are looked upon as the work of perhaps a moronic child, to be pitied and gently corrected. even u sing fanboys - even u, m - have no idea what it actually means or how a poor sai would go about w this seemingly impossible task. let me know if u or some other sing devotee figure this out, or at least come across a link or something explaining how this could be achieved - in the real, physical world. such a treatment s/b at least as rich as my description of the architecture of what i feel to be the most achievable type of hyperadvanced ai.

Re: Gelernter, Kurzweil debate machine consciousness
"So that was your own projection, not RK's or someone else's. That's fine; I just wanted to make sure it wasn't a core tenet of the Singularity, yet another one I was going to have to get medieval on."

Well, I can't say it's NOT RK's projection, even though I just finished "Spiritual Machines"... there is some interesting stuff about neural net development in there. The timeline he gives is a little overly optimistic, but in the scheme of things I can't argue with the basic progression he lays out (except for replacing keyboards with voice commands... but he's into that speech-to-text deal).

"And I'm not calling you harebrained; don't think that, in case it's crossed your mind. Just saying that would be a very tough prediction for me to make, that's all."

No harm, no foul.

"You know, I was really thinking about that: what it would take to design something more intelligent than yourself, intelligent in every way, like way smarter than yourself."

Absolutely; the first steps will be to ask the right questions. My first idea would be data acquisition and sorting in a neural framework... maybe add on speed and memory capacity. (A toy sketch of what I mean follows at the end of this post.)

"Of all the spectacular Singularity software stunts, this one's the toughest, hands down."

I am in absolute agreement. It is one of the biggest reasons for Singularity skepticism (I believe).

"I would suggest carefully inspecting the results of such a self-focused loop; I have no idea how that would look."

Of course! Otherwise, it's handing fireworks to your kids and saying "have at it!"

"Even this 'seed' AI (you make it sound so easy, just plant an AI seed) would be complex beyond my ability to comprehend."

LOL... Well, yes... and we came from dumb ol' monkeys, though they didn't design us (at least, not that they're letting on) :)

"And I would state with a fair amount of confidence that no one else on earth has one either: an idea that is specific enough to be actionable, even if in small ways."

I would tend to agree. I think it will be humans in parallel with computers, maybe standard and not superior A.I., that lead to this pivotal stage.

"You and just about everybody else here have absolutely unshakable confidence in the Singularity."

Hmmm, I've seen strange, mystic monkey-people claiming to be gurus on here who aren't as confident, as well...

"...to the extent that when even quite small adjustments are made to some of the particulars in an attempt to make it 'real', or at least more compatible with known laws of physics and such, such small suggestions are looked upon as the work of perhaps a moronic child, to be pitied and gently corrected."

I don't understand what you mean here.

"Even you Singularity fanboys (even you, m) have no idea what it actually means or how a poor SAI would go about this seemingly impossible task."

Yes, that is an accurate statement... though I think "fanboy" may be a bit strong to describe my conclusions. Yes, I hope to be witness to such a thing, but I also maintain a base in reality. I simply deduce what I believe to be the inevitable consequence of rapidly developing technology. I do have some minimal understanding of how neural networks develop and grow, but again, you are correct that I have absolutely no way to determine how or when A.I. will surpass human intelligence.

"Let me know if you or some other Singularity devotee..."

ARGH... devotee?

"...figure this out, or at least come across a link or something explaining how this could be achieved in the real, physical world. Such a treatment should be at least as rich as my description of the architecture of what I feel to be the most achievable type of hyperadvanced AI."

I'm almost wanting to start from a blank slate here and approach our views from the present onward, stage by stage, to see where we agree and where we diverge... perhaps in a new thread or in the chat room. The underlying answer to your query, though, is that if I absolutely KNEW how it could be done, we wouldn't be having this discussion and you'd already be riding in your virtual land-speeder. |
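As a toy illustration of "data acquisition and sorting in a neural framework" (and nothing more than that; the synthetic data, the single-neuron model, and all names below are invented for the example), here is a minimal perceptron in Python that acquires random points and learns to sort them into two classes:

import random

def train_perceptron(data, labels, epochs=50, lr=0.1):
    # A single artificial neuron: adjust its weights until it sorts
    # the inputs into the two labeled classes.
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# "Acquire" some data: random points, labeled by which side of the
# line x1 + x2 = 1 they fall on.
data = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x1 + x2 > 1 else 0 for x1, x2 in data]
w, b = train_perceptron(data, labels)
print("learned weights:", w, "bias:", b)

The gulf between this kind of sorting and a mind is, of course, the whole point of the thread.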
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Yes, PB was the one to coin that phrase, then turn around and accuse you... which, because he posts ten times, must mean he's thinking ahead, like playing chess....
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Information would replace free market cap..
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Hey, you're gettin' it, sd.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
What phrase, sd? |
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
I've been thinking about that, sd; here's the thing...
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
I am sorry, but the participants in that discussion, while the greatest of professionals, didn't have a clue about the subject.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Given the background of these two exceptional individuals in the fields of artificial intelligence and computing technology, it is quite hard to incline toward one side of the debate. Both front great arguments that defend their opinions and predictions regarding the future of intelligent machinery, and ultimately the creation of machine consciousness. Still, only one of these approaches can be right, and so one is forced to choose a side. I personally agree more with the arguments given by the Yale University professor David Gelernter, although Mr. Kurzweil leaves me with some doubt as to why I choose to believe consciousness is more than simply a matter of digitally simulating the human brain. After doing some research on the Turing test [1] and the Chinese Room argument [2] (the debate, I think, assumes the reader knows about these topics), I came to realize that perhaps the entire issue is not about computers having intelligence, but about whether we humans can even agree on what we mean by intelligence, and even consciousness! In fact, Kurzweil's theories rest on the assumptions that our brains can be successfully simulated in less than 50 years' time, and that such a thing could give birth to a generation of computers that experience emotion, insight, and intelligence.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Whether we will create machine consciousness really comes down to one thing: whether or not we believe we can copy all the functionality of a human. We will be able to simulate the complex coding of human DNA, we will be able to simulate the complex structure of the brain, and we will be able to simulate the causes and effects of hormones. The brain is the hardware, the DNA is the preprogrammed software and the instructions for how the hardware is assembled, and hormones carry the data around. Simplified, I know, but if we can simulate all that, we would have the mind of an infant that should be capable of learning all the same things we are capable of learning... unless. Is there something we are forgetting? Is there some unknown part of a human we can't simulate? A soul? That is the real question, I think. Maybe we shouldn't be looking at this as "Can we create machine consciousness?" but rather: if we can create it, what does that mean? How will we recognize real consciousness from simulated consciousness?
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Of course one could eventually, with the appropriate technology, create a conscious machine, because consciousness is not something special. It arises out of a continual sense of a "me", a separate self. This sense of an individual self/consciousness has evolved in humans because it has been successful in helping us adapt and survive. But there is no real "self"; it is a sense we have that comes about through the continual firing of neurons.

As discussed in "The Singularity is Near", it is likely that this continual concept of a "material me" comes about through the interaction of spindle cells, which have long neural filaments that "connect extensive signals from many other brain regions". These spindle cells likely work in tandem with other brain regions, such as the posterior ventral medial nucleus (VMpo), which "apparently computes complex reactions to bodily states such as 'this tastes terrible'". The sense of self, and the qualia (taste, color, feel, etc.) related to this sense of self, can be duplicated. This sense of self and its accompanying qualia have obviously helped us survive, or they would never have evolved.

It is clear from the "mirror test" that other species, such as great apes, elephants, and dolphins, have at least a minimal sense of self. Interestingly, only these species have significant numbers of spindle cells. Another interesting fact is that human infants do not have spindle cells and also cannot pass the mirror test; it is only after spindle cells develop, around nine months of age, that the mirror test is passed.

Consciousness is nothing more than having a continual (not continuous) sense of self accompanied by a sense of qualia associated with that self. It may be that the sense of self arises out of the continual firing of the spindle cells and the pairing of this firing with other regions of the brain, such as the VMpo, that compute complex reactions to bodily states.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Gelernter is wrong on so many points I hardly know where to start. I think I'll shoot down the "we have to simulate/emulate the entire body" argument first.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Thank you for your kind words. I note there have been very few replies. :o)
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Maybe some day in the far distant future it might be possible for us to create machine consciousness. I submit, though, that humans must first understand the nature of our own consciousness, of which in reality we have maybe just scratched the surface.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Yes, it beat me, but it didn't enjoy beating me.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Good to hear from another realist. We know squat about how the brain works holistically. We may, though, be closer than we think: we don't know what we don't know. |
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
This could be long, so bear with me. I've spent most of my life trying to answer these questions, and I think I have a fairly satisfactory answer (doesn't everyone?). Most importantly, I haven't met anyone yet who can refute what I have to say. I don't often post in places like this, which is why I'm doing it now: I want feedback, especially from Ray (wouldn't that be nice? :)
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
It's not so hard to reproduce emotions in robots.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Very impressive, Pan. I am pleased with your simplified way of defining and explaining the "AwareWill." I too have been pondering, exploring, and extrapolating a direction toward a conscious machine. I call it the "Source Machine Induction Living Entity", or S.M.I.L.E.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
"Any intelligence that emerges from our technology (I'm putting bets on cybernetic intelligence arising first) is just as 'conscious' as any other portion of the universe, since, as I've shown, consciousness is ever 'present' and completely ubiquitous. So there's NO NEED to determine whether this 'machine intelligence' is conscious, BECAUSE THE ENTIRE UNIVERSE IS CONSCIOUS. What we would like to know is whether it becomes aware of its own consciousness."

So how would you answer the question of whether consciousness, and whether self-awareness, could be expressed or described in terms of boolean logic? Everything a computer does (in the sense in which we today define computers) can, of course, be expressed in terms of boolean logic. If you say silicon is conscious, like (perhaps) everything else in the universe, then this still doesn't mean that the computer's actions, in terms of a program's results, would be consciousness-driven actions. The question remains: can boolean logic mimic consciousness? Even if we assume that the silicon in the computer chips is conscious internally, this question is still far from answered. |
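On the narrower claim that everything a computer does reduces to boolean logic, the standard construction is worth seeing once: every gate, and hence all arithmetic and every program, can be built from NAND alone. A minimal Python sketch follows (a textbook exercise only, with no bearing either way on the consciousness question):

def nand(a, b):
    # The single primitive gate; every function below is built from it.
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    # One bit of binary addition, expressed entirely in NAND logic.
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = or_(and_(a, b), and_(s1, carry_in))
    return total, carry_out

print(full_adder(1, 1, 0))  # 1 + 1 = binary 10, i.e. (sum 0, carry 1)

So expressibility in boolean logic is not in doubt; whether a sufficiently elaborate arrangement of such gates could mimic, or be, consciousness is exactly the question that remains open above.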
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
(Or, Pandemonium1323, perhaps you are saying: I don't care if boolean logic can express consciousness, I just want computers to be "intelligent" in other ways?) |
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
I don't know if I would call myself an "EXPERT" on consciousness, but I believe I have had occasion to witness the best knowledge on the subject.
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
The question:
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
'Are we limited to building super-intelligent robotic "zombies" or will it be possible and desirable for us to build conscious, creative, volitional, perhaps even "spiritual" machines? David Gelernter and Ray Kurzweil debated this key question at MIT on Nov. 30.' [1]
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
I think one of the arguments being posed here is: would a full simulation of a human brain create consciousness? If you look at the arguments posed by Kurzweil and Gelernter, you'll notice that Kurzweil talks a lot about simulations, such as the Blue Brain project, that try to simulate parts of the brain. And it is true that many parts of the brain have been modeled by now, though not all of them. However, Gelernter claims that he cannot believe consciousness could appear from nothing but software algorithms; that, despite all the modeling of the brain done so far, we cannot achieve in machines the same kind of consciousness found in humans.
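For concreteness about what "simulating parts of the brain" means at the lowest level: projects like Blue Brain numerically integrate equations describing individual neurons (theirs are far more detailed, multi-compartment models; the sketch below is only the simplest textbook neuron, a leaky integrate-and-fire unit, with purely illustrative parameters):

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    # Membrane voltage leaks toward rest, is pushed up by input
    # current, and emits a spike (then resets) on crossing threshold.
    # Units are illustrative (milliseconds, millivolts).
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + resistance * i_in) * dt / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Drive the neuron with a constant current for 100 ms.
spikes = simulate_lif([2.0] * 1000)
print(len(spikes), "spikes; first at", spikes[:3], "ms")

Whether stacking billions of such units (or far better ones) yields experience rather than mere behavior is precisely where Kurzweil and Gelernter part ways.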
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
Surprised I never came in and made this 'debate' disappear...
|
||||
Re: Gelernter, Kurzweil debate machine consciousness |
[Top] [Mind·X] [Reply to this post] |
|||
I have a tough time accepting emergent theories that don't have any link to physics; nonetheless, you may not require wetware...
|
||||