What kind of system of 'coding' of semantic information does the brain use?
As we reverse-engineer the brain, we continue to gather neurophysiological data: neural assemblies over here are involved in cognition about faces, and neural assemblies over there are involved in cognition about tools or artifacts. But what semantic coding relates physical neurons to symbolic concepts? Daniel Dennett responds to Edge publisher/editor John Brockman's request that futurists pose "hard-edge" questions that "render visible the deeper meanings of our lives, redefine who and what we are."
Originally published January 2002 at Edge. Published on KurzweilAI.net January 21, 2002. Read Ray Kurzweil's Edge question here.
My question now is actually a version of the question I was asking myself in the first year, and I must confess that I've had very little time to address it properly in the intervening years, since I've been preoccupied with other, more tractable issues. I've been mulling it over in the back of my mind, though, and I do hope to return to it in earnest in 2002.
What kind of system of "coding" of semantic information does the brain use? We have many tantalizing clues but no established model that comes close to exhibiting the molar behavior that is apparently being seen in the brain. In particular, we see plenty of evidence of a degree of semantic localization -- neural assemblies over here are involved in cognition about faces and neural assemblies over there are involved in cognition about tools or artifacts, etc. -- and yet we also have evidence (unless we are misinterpreting it) that shows the importance of "spreading activation," in which neighboring regions are somehow enlisted to assist with currently active cognitive projects. But how could a region that specializes in, say, faces contribute at all to a task involving, say, food, or transportation, or . . . ? Do neurons have two (or more) modes of operation -- specialized, "home territory" mode, in which their topic plays a key role, and generalized, "helping hand" mode, in which they work on other regions' topics?
Alternatively, is the semantic specialization we have observed an illusion -- are these regions only circumstantially implicated in these characteristic topics because of some as-yet-unanalyzed generalized but idiosyncratic competence that happens to be invoked usually when those topics are at issue? (The mathematician's phone rings whenever the topic is budgets, but he knows nothing about money; he's just good at arithmetic.) Or, to consider another alternative, is "spreading activation" mainly just noisy leakage, playing no contributing role in the transformation of content? Or is it just "political" support, contributing no content but helping to keep competing projects suppressed for a while? And finally, the properly philosophical question: what's wrong with these questions and what would better questions be?
Copyright © 2002 by Edge Foundation, Inc.
www.edge.org
Mind·X Discussion About This Article:
semantics in the brain
"How does the brain code semantic information?" [DD]
Well, this seems to be the only serious question among all the corners and edges posed on this site.
There could be a couple of answers. The question is whether it is useful to address semantics (meaning, knowledge, quasi-thesauristic representation, spin-glass calculus) by means of matter and stuff.
A sign has nothing to do with its meaning. The word "rain" is not connected to rain in any sense - besides its usage for the respective phenomenon.
The brain is a semiosic machine. Any ergodic "machine" which creates order and organization, processes and functions, is inevitably a semiosic machine. What makes the brain different from other complex systems is its built-in mechanisms for progressive abstraction. Progressive abstraction is not a question of matter (see D. Marr's 20-year-old distinction between physical realization, algorithmic representation and computational framework, which resembles in a stunning way C. Peirce's distinction between iconic, indexical and symbolic signs).
(That is only the tip of the argument, but it should suffice nonetheless.)
Back to the question: why ask about the matter? Why not ask for the logical conditions of the possibility (of semantics)?
Probably this points in the direction DD asked about in his final remark.
sincerely
Klaus
Re: ?!
From Adams' article:
"Consciousness is not susceptible to scientific investigation. It has no mass, takes up no space, and is not detectable by the senses, even when extended by instruments. These characteristics disqualify it as a potential object of scientific study. "
This is all you need to know from Adams' article to realise that, like a lot of people, he gives up hope before he even starts! I would say that I agree with 99% of what Adams says - the bulk of this article is concerned with the debunking of cognitive science and 'information' approaches to the brain, and the nonsense of the idea of consciousness as an 'emergent property' of information systems. He also states the obvious about consciousness - it's an irreducible, like the experience of the colour red, which doesn't lend itself well to definition, being purely semantic and only appreciable by somebody else who has experienced 'red' or consciousness.
But his approach to the search for the Neural Correlate of Consciousness is just plain defeatist, and it demands questions be asked that don't need to be. If there is no 'explanation' currently connecting neural events with subjective mental events, this is TOTALLY irrelevant and is merely a reflection of the poor state of brain science or a lack of 'understanding' - science never gives 'understanding' anyway, just explanations.
Do we ever ask why all matter has gravitational fields? Not often - we take it as a given correlate that matter has gravitational fields. We don't seem to need to know the reason WHY. Then Einstein comes along and says that matter travels along space-time geodesics - that's why matter has gravitational fields. Great. But why does matter travel along space-time geodesics? Err - don't know; because it does, that's why, which is why the maths works. In other words, correlation is good enough - science never gives total 'understanding' anyway, just a picture - physics is syntactical, after all.
"Your views that complex things cannot "emerge" from software because they are based on binary information (01010101 as you said) sounds very naive to me and suggests that you've never actually developed any software. "
I've been in software for 12 years and have developed reams of it. Then again, I never said what you said I said either. What I said, or what I meant to say, is that you can't generate anything other than mathematical objects from mathematical objects, and as consciousness is clearly not a mathematical object it's hard to see how it's an 'emergent property' of a software system (I think you'll find Adams would agree with me on this).
"Your statement that "Once the designers' handbook has been read for a computer there is no ambiguity about function" just reinforces my view. Anyone who's developed any complex software knows that even if you document a system fully there are always quirks and weird behaviours that emerge. If that wasn't the case then software would be completely predictable and NEVER CRASH...."
Utterly irrelevant - bugs don't change the definitions of the designer's functions one iota. They just change the behaviour of the program. Just because a program intended to copy a file doesn't copy files well doesn't change the fact that that was what it was intended to do - in fact that's why software gets modified: to get to an ever-closer relationship to the definition of its function.
"But the selection process has very clear goals. It may be a blind designer, but it is a designer none the less. Read The Selfish Gene by Dawkins, The Meme Machine by Blackmore and The Dependent Gene by David Moore. "
I've read parts of them and I'm aware of the fundamental assertions. I think they are what Karl Popper might, in an uncharitable moment, describe as pseudoscience. They are interesting, but as predictive tools they are pretty hopeless. As a model the selfish gene is so endlessly malleable as to be almost worthless. They possibly have a bit more scientific validity than the Bible but aren't as well written (not that I'd hold the Bible up as a great source of data, you understand).
I think the fact is that animals and plants simply do what they do - there is no 'reason' for it, or at least there doesn't NEED to be a reason for it, unlike a machine - and biological systems change like all other physical systems. Why they change is really only of interest on a case-by-case basis: the need for 'selfish gene' type theories just doesn't exist, and they are incapable, as their remit is so wide, of any great predictions.
This is all you need to know...?
Hi John,
Don't know about you, but I'm finding this entertaining 8)
As for Adams, if you agree with 99% then you're definitely a little ahead of me... but at least it added to our debate.
It didn't seem to me that he was suggesting the NCC approach should stop, just that there was room for a pluralist approach to this complex problem.
I get the feeling from your language that you don't agree with this...but I could be wrong.
As for the WHY question. I've heard your answer described as the "weak entropy" approach. It's like that because it is and if things were different then it would be different, but they aren't, so it isn't....so there!
> you can't generate anything other than mathematical objects from mathematical objects, and as consciousness is clearly not a mathematical object it's hard to see how it's an 'emergent property' of a software system
Well I find this sentence very circular and amusing...
1. from what I've seen mathematical objects can create patterns, predictions, meaning and even understanding just to name a few other things...(isn't a red pixel on your monitor just a representation of a mathematical object stored in the video memory of your computer? And isn't my perception of it just a representation of that representation?)
2. as for consciousness not being a mathematical object... a) you claim nobody knows what consciousness is, so how can you make this claim? b) could you define exactly what you mean by a "mathematical object"? This appears to be a relatively contentious, or at least fluid, area of debate.
3. "emergent properties" do not ONLY emerge from mathematical objects (depending on how you've defined them)
4. I never claimed that consciousness was a software system, just used it as an analogy.
However, have you ever played with any software based on neural networks (just as one example)? They do appear to learn to recognise patterns, they can handle fuzzy representations, and from inside their data structures the weightings that represent their "acquired knowledge" don't make any obvious sense to a human.
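To make that concrete, here is a toy sketch (my own illustration, assuming only NumPy; the network shape, data and learning rate are arbitrary choices): a two-layer net trained by plain gradient descent to separate two fuzzy pattern classes. It ends up classifying noisy inputs it never saw, yet the weight matrices holding its "acquired knowledge" are just arrays of numbers with no obvious human-readable meaning.

```python
# A toy two-layer network: it learns to recognise fuzzy patterns,
# but its learned weights are opaque real numbers.
import numpy as np

rng = np.random.default_rng(0)

# Two prototype patterns; training examples are noisy copies of each.
protos = np.array([[1., 1., 0., 0.], [0., 0., 1., 1.]])
X = np.vstack([p + rng.normal(0, 0.2, size=(50, 4)) for p in protos])
y = np.repeat([0, 1], 50)

# Network: 4 inputs -> 5 hidden units (tanh) -> 1 sigmoid output.
W1 = rng.normal(0, 0.5, (4, 5))
W2 = rng.normal(0, 0.5, (5, 1))

for _ in range(2000):                      # plain gradient descent
    h = np.tanh(X @ W1)                    # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2)))      # output probability
    err = out - y.reshape(-1, 1)           # dLoss/dlogit for cross-entropy
    W2 -= 0.05 * h.T @ err / len(X)
    W1 -= 0.05 * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)

# It handles a fuzzy input it never saw...
test = protos[1] + rng.normal(0, 0.3, 4)
p = 1 / (1 + np.exp(-(np.tanh(test @ W1) @ W2)))
print("class-1 probability for a noisy class-1 input:", float(p))
# ...yet the "acquired knowledge" is just this inscrutable matrix:
print(W1)
```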
> in fact that's why software gets modified, to get to an ever-closer relationship to the definition of its function.
Which can be seen as a type of evolution involving heredity, variation and selection.
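For whatever the analogy is worth, that loop is easy to caricature in a few lines of Python (a toy only; the "spec", fitness function and mutation scheme are made up for illustration): copies of a program are the heredity, bit-flips the variation, and "keep it if it matches the spec better" the selection.

```python
# A caricature of "software modification as evolution":
# heredity (copying), variation (mutation), selection (keep the fitter).
import random

random.seed(1)
SPEC = [1, 0, 1, 1, 0, 0, 1, 0]                  # the "definition of its function"

def fitness(prog):
    # How closely the program's behaviour matches the spec.
    return sum(a == b for a, b in zip(prog, SPEC))

prog = [random.randint(0, 1) for _ in SPEC]      # initial buggy release
generation = 0
while fitness(prog) < len(SPEC):
    child = list(prog)                           # heredity
    child[random.randrange(len(child))] ^= 1     # variation: flip one bit
    if fitness(child) >= fitness(prog):          # selection
        prog = child
    generation += 1

print(f"matched the spec after {generation} revisions: {prog}")
```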
> the need for 'selfish gene' type theories just doesn't exist, and they are incapable, as their remit is so wide, of any great predictions.
This just seems such a limiting belief that I don't really know how to address it?!
Anyway, let's cut to the chase. Can you say in 30 words or less what you think this all means?
Now that I know more about your belief systems I'd like to understand how they affect your big picture view.
Mediated Realities, AI, nanotech, the Singularity: what's your vision of our future?
roBman
Re: This is all you need to know...?
"1. from what I've seen mathematical objects can create patterns, predictions, meaning and even understanding just to name a few other things...(isn't a red pixel on your monitor just a representation of a mathematical object stored in the video memory of your computer? And isn't my perception of it just a representation of that representation?)"
You're falling into the oldest strong-AI trap in the book here. When is a duck not a duck? Answer: when it's a painting of a duck. A pixel on your computer screen is not a mathematical object. It can represent a digit if you want it to, of course, as long as you have agreed a 'communication standard' with whoever you wish to transfer your thoughts to. But the pixel is matter, and the combination of 'communicative matter' plus the 'communication standard' can be used as tools to realise the mathematical object residing in someone's brain. In other words, reading and writing, to keep it simple.
Mathematical objects can create patterns - they can do this because patterns themselves are mathematical objects. They can't 'create' predictions, but they can certainly be useful, when used in representative models, for making a damn good guess at what's going to happen. And you'll have to give me a concrete example of how mathematics can lead to real understanding, other than explaining one symbolic axiom by decomposition into another. But one thing mathematical objects, which ultimately reside in people's heads, can't do is combine to create matter or natural phenomena, such as consciousness or subjective mental states.
Your claim that I claim nobody knows what consciousness 'is' is wrong - I said it was difficult to define (though not impossible), but as we've already established, that's the nature of subjective mental effects - you can't describe the experience of the colour red to a blind man. In fact, as it's a purely semantic event, I would stick my neck out and say consciousness is the only thing every human being really does 'know and understand'. It's a bit like the US Supreme Court justice asked to define pornography: "I can't define it, but I sure as hell know it when I see it."
I've played with neural nets - they don't recognise anything. They're associative memory systems - 'mathematical objects' if you like. We should remember with neural nets that the analogy is of the brain; talk to some guys these days and it appears everybody's trying to squeeze brains into neural nets.
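For the record, "associative memory system" here means roughly the following, sketched as a classic Hopfield-style network (my choice of toy model, not anything from the thread): patterns are stored in a weight matrix by a Hebbian outer-product rule, and a corrupted cue settles back to the nearest stored pattern. No 'recognition' in any rich sense, just relaxation over stored correlations.

```python
# A minimal Hopfield-style associative memory: store two bipolar
# patterns Hebbian-style, then recall one from a corrupted cue.
import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# Hebbian outer-product storage rule; no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
cue[[1, 2]] *= -1                      # corrupt two of the eight units

state = cue.astype(float)
for _ in range(10):                    # synchronous updates until stable
    new = np.sign(W @ state)
    new[new == 0] = 1
    if np.array_equal(new, state):
        break
    state = new

print("cue     :", cue)
print("recalled:", state.astype(int))  # settles back to patterns[0]
```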
"Which can be seen as a type of evolution involving heredity, variation and selection. "
Kind of, I suppose, if you're really that desperate for a pointless analogy! But in some ways of course not true: a gene can spontaneously change under no impetus whatsoever and 'wipe out' (to use Dawkins' behavioural terminology) all the genes around it under its own steam, no external 'design factors' involved. Life, as they say, is complicated - and the simplest approach is to accept that. No boxed solutions required.
A more eloquent answer
I just found a quote that sums up my point far more eloquently than I did:
"Or suppose we put a spade in the earth, a softer medium; our deepest dig will heave to view only another surface, this one crumbly perhaps, or with its clay compacted by the brutality of the blade. We can dig and delve like the most industrious duck; we can poke and pry: we shall find nothing but surface. Surfaces are unreal. They have only one side--their "out" side--and as far as our world is concerned, outside goes on forever. So if we feel lonely cooped up in our consciousness--a prisoner "inside"--we can take cool comfort from the fact that outside we are simply surface, and have plenty of company. If you like, consciousness, either real or implied, is the other, missing side of surface."
William Gass, "The Face of the City," Harper's Magazine, March 1986, p. 37.
roBman
Re: What kind of system of 'coding' of semantic information does the brain use?
The following definition of information is from this web site:
Information
A sequence of data that is meaningful in a process, such as the DNA code of an organism or the bits in a computer program. Information is contrasted with noise, which is a random sequence. However, neither noise nor information is predictable. Noise is inherently unpredictable but carries no information. Information is also unpredictable - that is, we cannot predict future information from past information. If we can fully predict future data from past data, then that future data stops being information.
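One way to make the predictability criterion concrete (my illustration of the definition, not part of the site's text): measure how well each next symbol of a stream can be guessed from what has been seen so far. A fully repetitive stream becomes perfectly predictable, so by this definition its future data stops being information; a random stream stays unpredictable but carries no information either.

```python
# Predict each next bit from the frequencies of what followed the
# previous bit so far; a crude test of the "predictability" criterion.
import random
from collections import Counter, defaultdict

def prediction_accuracy(seq):
    follows = defaultdict(Counter)   # what has followed each symbol so far
    correct = 0
    for prev, nxt in zip(seq, seq[1:]):
        if follows[prev]:
            guess = follows[prev].most_common(1)[0][0]
            correct += (guess == nxt)
        follows[prev][nxt] += 1
    return correct / (len(seq) - 1)

random.seed(0)
repetitive = [0, 1] * 500                              # fully predictable
noise = [random.randint(0, 1) for _ in range(1000)]    # unpredictable

print("repetitive:", prediction_accuracy(repetitive))  # -> ~1.0
print("noise     :", prediction_accuracy(noise))       # -> ~0.5
```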
This definition is incomplete, and that is why we have such a dim understanding of the connections between matter, physical forces, information and knowledge. It describes only the end result of the basic information processes that all physical information systems use.
The complete physical explanation that describes how all information systems work and what they all have in common is as follows:
Physical forms can be changed by external forces. These changes remain until the physical form is changed again. Physical forms can pattern external physical forces (the scattering of photons by any physical surface is an example) which can then pattern or change another physical form.
You see this causal chain in every physical information system you examine. If a photographer takes a picture of you, the contents of the room you are in, including you as an occupant, pattern the photons scattering around the room; when the camera makes its image it is taking a small sampling of those patterned photons.
When the film is developed, or when the data image is uploaded to a computer, the image that results was caused by the patterning of the photons in the room where it was captured. The picture itself now patterns new photons that you interact with to view the image made by the camera.
As the photons bounce off the image they are patterned by the physical structure of the image. If you are using a computerized image, the special hardware in the computer converts its binary representation of the image into pixel patterning on your monitor that you perceive as an image.
There is a part of your brain devoted to recognizing faces. This set of neural structures provides the ability to recognize particular faces. As the photons from the image pattern your eyes, and then your visual cortex, the patterning of that part of the brain causes a neural recognition to happen, or not.
The complete causal chain, then: patterned photons in the original room cause changes in the patterning of the chemicals of your film or the sensor in your digital camera. These changes remain in the film or in the digital format of the camera's memory until they are changed. The image formed by the picture, or the changes on the computer monitor, can now pattern the sensors in your eyes and then pattern the recognition centers of your brain that relate to faces.
This same chain is in all information systems. As you read this document, the words are being patterned by my brain and placed out here for you to see, through the process of me patterning the inner structure of my computer by typing on the keys. The specialized hardware there converts the changes at the keyboard into inner representations of those keys. The computer then passes these text patterns on to the web site where I am entering the information to be viewed. You can decode the sentences because they are just the patterning of your computer monitor in a form that your brain can decode as sentence syntax.
At all parts of this process one kind of physical force patterns a physical structure or form that can then pattern yet another physical structure or form. So the missing, dynamic, element is now there for information systems.
Patterned physical forces can pattern new physical forms, which can then pattern new physical forces. This causal chain is in all information systems and is the core process that nature uses to make all information systems possible in the first place.
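The claimed invariant, a patterned force writing a form and the form later patterning a new force, can be caricatured as code (an illustrative toy; the names `Form`, `imprint` and `emit` are my own inventions):

```python
# A toy of the force -> form -> force chain described above.
# A "force" is a transient pattern (a tuple); a "form" is persistent
# state that a force can imprint and that can later shape a new force.

class Form:
    """Persistent physical state: film, RAM, a synapse, a printed page."""
    def __init__(self):
        self.state = None

    def imprint(self, force):
        # An external patterned force changes the form; the change remains.
        self.state = tuple(force)

    def emit(self):
        # The form patterns a *new* force (e.g. scatters fresh photons).
        return tuple(self.state)

scene = (1, 0, 1, 1)                              # patterned light leaving the room
film = Form()
film.imprint(scene)                               # camera: force patterns form
light_off_print = film.emit()                     # print: form patterns a new force
retina = Form()
retina.imprint(light_off_print)                   # eye: force patterns form

print(retina.state == scene)  # True: the pattern survived the whole chain
```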
With this understood, it now becomes possible to examine semantic content and how it relates to the physical structures of the brain. As you read this, the words are being decoded by the language centers of your brain. This process is not simple. The words themselves have trained associations with other content of your brain.
The recognition and understanding of words and their meanings can be taught to a computer quite easily using neural net analogues, or even other kinds of methods used in AI. But the process of decoding a full sentence and gaining 'semantic meaning' from the words involves both recognition of the meanings of each word and recognition of the overall meaning of the words together.
For example, imagine a black cat. Decoding the last phrase caused you to form a mental image of a black cat. This image actually exists as a patterning of your visual cortex and can be saved as a memory. You can then recognize that the image is, indeed, of a black cat, and that is what the sentence asked you to do. Now, if we just say 'pink elephant', you do the same thing. You form an image of a pink elephant in your visual cortex. The patterning of your visual cortex can now be stored as a memory.
Imagine that you are talking to yourself. Through the same decoding process you can form inner patterning of your visual cortex and memory. This implies that just using language, internally, allows you to form any kind of patterning that hearing or reading language can cause.
What happens when you recall the black cat or the pink elephant at a later date? The patterning of your neural net memories will then pattern the same parts of your brain that the language centers patterned. You can then examine the images because they are now patterning your sensory memory where they originated in the first place.
If you think in terms of neural nets it is easy to see the source of semantic meaning. As you decode the sentence 'A black dog', you will find that the first word, 'A', causes your recognition of the word to happen. You also recognize it as a letter. The next pattern that forms is 'A black', which of course is a compound recognition of the words 'A' and 'black'. Then you add in the last word, 'A black dog', and the entire compound recognition of a particular kind of thing happens.
As you can see, hierarchies of meaning can be represented by neural nets organized in the right way. You can most certainly learn individual words, but you have more than enough neural nets to learn the meanings of complex patterns of words as well.
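A toy sketch of that hierarchy (my own illustrative encoding, not a claim about actual cortical wiring): word-level units fire on individual tokens, and a compound unit fires once all of its constituents have fired, so 'A black dog' is recognized at successively larger scales.

```python
# Toy hierarchy of recognitions: word units feed compound units,
# echoing the "A" -> "A black" -> "A black dog" build-up above.
WORDS = {"a", "black", "dog"}
COMPOUNDS = {
    "A black": {"a", "black"},
    "A black dog": {"a", "black", "dog"},
}

def recognise(sentence):
    active, fired = set(), set()         # word units and compound units that have fired
    for token in sentence.lower().split():
        if token in WORDS:
            active.add(token)
            print(f"word unit fired: {token!r}")
        # A compound unit fires once all of its constituents are active.
        for name, parts in COMPOUNDS.items():
            if name not in fired and parts <= active:
                fired.add(name)
                print(f"compound recognition: {name!r}")

recognise("A black dog")
```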
If you recall a black dog at a later date you will probably just recall it as a mental image. You can then apply language to report to an external observer that you just had a mental image of a black dog.
The different parts of the brain all work in parallel and that means that you can have your inner thoughts happening while you are currently experiencing new external events. You can even mix and match the events happening internally with the ones happening externally.
As you recall any of the things I have used as examples, a pink elephant let's say, it is pulled into your short term memory; but your current short term memory is very large and complex, so you can quite easily have a host of other things happening in it as you read this. This means that, since the original image of a pink elephant was stored, your recall of it now will have it pulled into your current short term memory buffers, where it can get stored again with alterations caused by new experiences.
Your overall semantic understanding of this inner memory will be modified by what is happening now as you think about it. This point is proven when you realize that as you recall memories you change them. If asked to describe an accident at the scene you will write down one version of the accident, but if asked to write it down again at a later date you will write something quite different, since the later experiences of thinking about the accident have caused changes in your original memories of it!
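That reconsolidation point can be caricatured numerically (an illustrative toy only; the blend weights are arbitrary): if each recall re-stores the memory blended with whatever context happens to be active at the time, repeated recalls drift away from the original trace.

```python
# Toy reconsolidation: each recall re-stores the memory mixed with
# the current context, so repeated recalls drift from the original.
import random

random.seed(2)
original = [0.0] * 8                  # the "accident as it happened"
memory = list(original)

for recall in range(1, 6):
    context = [random.gauss(0, 1) for _ in memory]   # what's on your mind now
    memory = [0.9 * m + 0.1 * c for m, c in zip(memory, context)]
    drift = sum((m - o) ** 2 for m, o in zip(memory, original)) ** 0.5
    print(f"after recall {recall}: drift from original = {drift:.3f}")
```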
So semantic memory is dependent on a neural net hierarchy, and that hierarchy is tied to the speech centers of the brain and to the sensory systems as well as to memory. This is possible to track because all information systems behave in the same way.