Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0381.html

What kind of system of 'coding' of semantic information does the brain use?
by Daniel Dennett

As we reverse-engineer the brain, we continue to gather neurophysical information: neural assemblies over here are involved in cognition about faces and neural assemblies over there are involved in cognition about tools or artifacts. But what semantic coding relates physical neurons to symbolic concepts? Daniel Dennett responds to Edge publisher/editor John Brockman's request to futurists to pose "hard-edge" questions that "render visible the deeper meanings of our lives, redefine who and what we are."


Originally published January 2002 at Edge. Published on KurzweilAI.net January 21, 2002. Read Ray Kurzweil's Edge question here.

My question now is actually a version of the question I was asking myself in the first year, and I must confess that I've had very little time to address it properly in the intervening years, since I've been preoccupied with other, more tractable issues. I've been mulling it over in the back of my mind, though, and I do hope to return to it in earnest in 2002.

What kind of system of "coding" of semantic information does the brain use? We have many tantalizing clues but no established model that comes close to exhibiting the molar behavior that is apparently being seen in the brain. In particular, we see plenty of evidence of a degree of semantic localization -- neural assemblies over here are involved in cognition about faces and neural assemblies over there are involved in cognition about tools or artifacts, etc -- and yet we also have evidence (unless we are misinterpreting it) that shows the importance of "spreading activation," in which neighboring regions are somehow enlisted to assist with currently active cognitive projects. But how could a region that specializes in, say, faces contribute at all to a task involving, say, food, or transportation or . . . . ? Do neurons have two (or more) modes of operation -- specialized, "home territory" mode, in which their topic plays a key role, and generalized, "helping hand" mode, in which they work on other regions' topics?
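
To make the competing readings of "spreading activation" concrete, here is a minimal toy sketch in Python. The network, edge weights, decay factor and pulse count are all invented for illustration - this is not a model drawn from neuroscience or from Dennett's text - but it shows activation leaking outward from a "face" assembly into its neighbours:

    # A toy semantic network. Node names, edge weights, the decay factor and
    # the pulse count are all illustrative assumptions, not data from the brain.
    semantic_net = {
        "face":     {"person": 0.8, "eyes": 0.6},
        "person":   {"face": 0.8, "tool": 0.2},
        "tool":     {"artifact": 0.9, "hand": 0.5},
        "artifact": {"tool": 0.9},
        "eyes":     {"face": 0.6},
        "hand":     {"tool": 0.5},
    }

    def spread(source, pulses=3, decay=0.5):
        """Propagate activation outward from `source`, attenuating each hop."""
        activation = {node: 0.0 for node in semantic_net}
        activation[source] = 1.0
        frontier = {source}
        for _ in range(pulses):
            nxt = set()
            for node in frontier:
                for neighbour, weight in semantic_net[node].items():
                    delta = activation[node] * weight * decay
                    if delta > 0.01:              # ignore negligible leakage
                        activation[neighbour] += delta
                        nxt.add(neighbour)
            frontier = nxt
        return activation

    # "face" strongly recruits "person" and "eyes"; "tool" receives only a
    # faint, indirect trickle.
    print(spread("face"))

On this toy picture, the question becomes: is the faint activation that reaches "tool" a contentful helping hand, or mere noisy leakage?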

Alternatively, is the semantic specialization we have observed an illusion -- are these regions only circumstantially implicated in these characteristic topics because of some as-yet-unanalyzed generalized but idiosyncratic competence that happens to be invoked usually when those topics are at issue? (The mathematician's phone rings whenever the topic is budgets, but he knows nothing about money; he's just good at arithmetic.) Or, to consider another alternative, is "spreading activation" mainly just noisy leakage, playing no contributing role in the transformation of content? Or is it just "political" support, contributing no content but helping to keep competing projects suppressed for a while? And finally, the properly philosophical question: what's wrong with these questions and what would better questions be?

Copyright © 2002 by Edge Foundation, Inc.



www.edge.org

Mind·X Discussion About This Article:

semantics in the brain
posted on 01/23/2002 5:00 PM by kwa@prospero


> DD: how does the brain code semantic information?

Well, this seems to be the only serious question around all the corners and edges placed here on this site.

There could be a couple of answers. The question is whether it is useful to address semantics (meaning, knowledge, quasi-thesauristic representation, spin-glass calculus) by means of matter and stuff.
A sign has nothing to do with its meaning. The word "rain" is not connected to rain in any sense - besides its usage for the respective phenomenon.

The brain is a semiosic machine. Any ergodic "machine" which creates order and organization, processes and functions, is inevitably a semiosic machine. What makes the brain different from other complex systems is its built-in mechanisms for progressive abstraction. Progressive abstraction is not a question of matter (see D. Marr's 20-year-old distinction of physical realization, algorithmic representation and computational framework, which resembles in a stunning way C. Peirce's distinction of iconic, indexical and symbolic signs).

(that is only the tip of the argument, but it should satisfy nonetheless)

Back to the question: Why ask about the matter? Why not ask about the logical conditions of the possibility (of semantics)?

Probably this points in the direction DD asked to be answered in his final remark.

sincerely
Klaus

Can the NCC approach work?
posted on 01/30/2002 10:49 PM by roBman@InfoBank.com.au


The question of whether we can really work out how meaning is generated within the brain by studying the matter - the Neural Correlates of Consciousness (NCC) approach - is a very important one.

If we take an analogous approach to understanding a simpler system, it can give some insights.

For example, could we really understand the way meaning is generated and manipulated within a computer by studying it from the outside?

Assume for a minute that we didn't know how computers work, and we just created a device like MRI or PET scanning to observe how the bits inside the CPU and hard drive etc. interact. From this, would we really be able to extrapolate how bytecode instructions manipulate symbolic representations? And if we could, would we really be able to work out how the grammar and syntax of these symbols interact to create meaning and the emergent property we call an Operating System? Without a good grounding in how graphical user interfaces have developed and the cultural norms that allow them to operate (and interact with people), I think it may be nearly impossible.

I'm not suggesting that this approach is useless, as it's obviously giving us some very useful insights. But is this really the best, or even only way to work out how we really think?

I think actually using the GUI and noticing what happens when we click on things would be a much faster way to get there.

And if it's that complex for a relatively simple system like today's computers, then how much more complex will it be for the brain?

Ray Kurzweil says he has a theory on how this can be worked out by reverse engineering the brain. I wonder if this includes an experiential reverse engineering from the inside as well, or if it's just an NCC approach?

As for the other comment about "what we would do with this knowledge" and "does anyone understand the hyper-enormous responsibility of this type of research"?

Personally I'm trying to grapple with these issues and I am concerned (as I've stated before) that not enough effort is going into developing an ethical framework for this type of development.

At least Mind-X is a forum for the beginnings of that debate.


roBman

Re: Can the NCC approach work?
posted on 01/31/2002 8:13 AM by john.b.davey@btinternet.com


Is this a justification for behaviourism? Didn't behaviourism die with the ark about 40 years ago?

isms can change too
posted on 01/31/2002 8:24 AM by roBman@InfoBank.com.au


Hi John,

You seem to be very quick to label things and use that label to limit the thing's meaning.

The behaviourism of 40 years ago was based on the zeitgeist of 40 years ago (but I assume you don't believe in that either 8) ).

Times change, our view of reality and overall context ebb and flow.

Saying this is an ism that was abandoned 40 years ago doesn't seem like a constructive criticism to me.

Do you see actual flaws in the analogy?

And more importantly, can you prove your contrary materialist view?


roBman

Re: isms can change too
posted on 01/31/2002 1:41 PM by john.b.davey@btinternet.com


I'm sorry roBman, but I think it is behaviourism. And the trouble with behaviourism was that it was extremely restrictive, ignoring entirely the fascinating world between the ears. We wouldn't look at the stars and be content with a description of them as light sources: we'd like to know where the hell light comes from - and the same with people and their brains and minds.

Your example highlights a key difference between implicit systems provided by nature and explicit systems provided by technology. Once the designers' handbook has been read for a computer there is no ambiguity about function. The same cannot be said for a brain, which wasn't designed for any particular function by a grand designer; it was (it seems) the product of 'random cosmic change', for want of a better description.

Definitions of reality change in a lot of areas, but rarely so in science. That method hasn't changed much in 300 years, and it's that method that I advocate in investigating the brain. It really isn't terribly controversial. And scientists need to do it, as they need to provide the theoretical basis for the relationship between the brain's objective working mechanisms and the subjective mental effects they produce - to start understanding exactly how it is that brains cause minds and mentality. You're right that engineers can't 'reverse engineer' the brain - how could they? They don't have the slightest idea how it works. Not that anybody else does either, come to that - which is why a lot more work is needed to stop the confusion.

?!
posted on 02/01/2002 5:36 AM by roBman@InfoBank.com.au


Hi John,

mmm... I don't think you read the link I posted about Bill Adams' work on developing a Psychology of Consciousness - http://home.earthlink.net/~adamswa/intro.htm. At least give it a skim read.

Also I think your view of the world might be expanded a little by reading some of J. J. Gibson's ideas on the distinction between sensory modalities and sensory perceptual systems - http://www.ksu.edu/psych/farris/gibson/files/modality.html


Your view that complex things cannot "emerge" from software because it is based on binary information (01010101, as you said) sounds very naive to me and suggests that you've never actually developed any software.

Your statement that "Once the designers' handbook has been read for a computer there is no ambiguity about function" just reinforces my view. Anyone who's developed any complex software knows that even if you document a system fully there are always quirks and weird behaviours that emerge. If that wasn't the case then software would be completely predictable and NEVER CRASH....

NOTE: This is not intended as a personal criticism...just an observation.

Your comment that my analogy IS Behaviourism which misses "the fascinating world between the ears" shows that you missed my point completely. When you translate this analogy back to the world of the subjective experience, "clicking on things" doesn't mean "watch what other people do". It means explore the rich internal world of subjective representations you have within your own mind. Specifically, focus on a rigorous study of "the fascinating world between the ears"!


And your comment that the scientific method hasn't changed much in 300 years is kind of amusing. 300 years is only a drop in the ocean compared to how long subjective experience has likely been around (10,000+ years by some accounts). Again, I would point you to Bill Adams' work - http://home.earthlink.net/~adamswa/intro.htm

But the point I have to take the strongest issue with is "The same cannot be said for a brain, which wasn't designed for any particular function by a grand designer, it was ( it seems ) the product of 'random cosmic change' for want of a better description."

I think there is a VERY strong case that the brain (and mind) are the result of gene/meme co-evolution. Evolution is absolutely a "grand designer". While many people may think this is a process of "random changes", it isn't. The variation component may be (although that's not really true either). But the selection process has very clear goals. It may be a blind designer, but it is a designer none the less. Read The Selfish Gene by Dawkins, The Meme Machine by Blackmore and The Dependent Gene by David Moore.


roBman

Re: ?!
posted on 02/01/2002 8:10 AM by john.b.davey@btinternet.com


From Adams' article :

"Consciousness is not susceptible to scientific investigation. It has no mass, takes up no space, and is not detectable by the senses, even when extended by instruments. These characteristics disqualify it as a potential object of scientific study. "

This is all you need to know from Adams' article to realise that, like a lot of people, he gives up hope before he even starts! I would say that I agree with 99% of what Adams says - the bulk of the article is concerned with debunking cognitive science and 'information' approaches to the brain, and the nonsense of the idea of consciousness as an 'emergent property' of information systems. He also states the obvious about consciousness - it's an irreducible, like the experience of the colour red, which doesn't lend itself well to definition, being purely semantic and only appreciable by somebody else who has experienced 'red' or consciousness.

But his approach to the search for the Neural Correlate of Consciousness is just plain defeatist, and demands that questions be asked that don't need to be. If there is no 'explanation' currently connecting neural events with subjective mental events, this is TOTALLY irrelevant and is merely a reflection of the poor state of brain science or a lack of 'understanding' - science never gives 'understanding' anyway, just explanations.

Do we ever ask why all matter has gravitational fields? Not often - we take it as a given correlate that matter has gravitational fields. We don't seem to need to know the reason WHY. Then Einstein comes along and says that matter travels along space-time geodesics - that's why matter has gravitational fields. Great. But why does matter travel along space-time geodesics? Err - don't know, because it does, that's why, which is why the maths works. In other words, correlation is good enough - science never gives total 'understanding' anyway, just a picture - physics is syntactical, after all.

"Your views that complex things cannot "emerge" from software because they are based on binary information (01010101 as you said) sounds very naive to me and suggests that you've never actually developed any software. "

I've been in software for 12 years and have developed reams of it. Then again, I never said what you said I said, either. What I said, or what I meant to say, is that you can't generate anything other than mathematical objects from mathematical objects, and as consciousness is clearly not a mathematical object, it's hard to see how it's an 'emergent property' of a software system (I think you'll find Adams would agree with me on this).

"Your statement that "Once the designers' handbook has been read for a computer there is no ambiguity about function" just reinforces my view. Anyone who's developed any complex software knows that even if you document a system fully there are always quirks and weird behaviours that emerge. If that wasn't the case then software would be completely predictable and NEVER CRASH...."

Utterly irrelevant - bugs don't change the definitions of the designer's functions one iota. They just change the behaviour of the program. Just because a program intended to copy a file doesn't copy files well doesn't change the fact that that was what it was intended to do - in fact, that's why software gets modified: to get an ever-closer relationship to the definition of its function.

"But the selection process has very clear goals. It may be a blind designer, but it is a designer none the less. Read The Selfish Gene by Dawkins, The Meme Machine by Blackmore and The Dependent Gene by David Moore. "

I've read parts of them and I'm aware of the fundamental assertions. I think they are what Karl Popper might, in an uncharitable moment, describe as pseudoscience. They are interesting, but as predictive tools they are pretty hopeless. As a model the selfish gene is so endlessly malleable as to be almost worthless. They possibly have a bit more scientific validity than the bible but aren't as well written (not that I'd hold the bible up as a great source of data, you understand).

I think the fact is that animals and plants simply do what they do - there is no 'reason' for it, or at least there doesn't NEED to be a reason for it, unlike a machine - and that biological systems change like all other physical systems. Why they change is really only of interest on a case-by-case basis: the need for 'selfish gene' type theories just doesn't exist, and they are incapable, as their remit is so wide, of any great predictions.




This is all you need to know...?
posted on 02/01/2002 10:22 AM by roBman@InfoBank.com.au


Hi John,

Don't know about you, but I'm finding this entertaining 8)

As for Adams, if you agree with 99% then you're definitely a little ahead of me... but at least it added to our debate.

It didn't seem to me that he was suggesting the NCC approach should stop, just that there was room for a pluralist approach to this complex problem.

I get the feeling from your language that you don't agree with this...but I could be wrong.

As for the WHY question: I've heard your answer described as the "weak entropy" approach. It's like that because it is, and if things were different then it would be different, but they aren't, so it isn't... so there!

> you can't generate anything other than mathematical objects from mathematical objects, and as consciousness is clearly not a mathematical object, it's hard to see how it's an 'emergent property' of a software system

Well, I find this sentence very circular and amusing...

1. from what I've seen mathematical objects can create patterns, predictions, meaning and even understanding just to name a few other things...(isn't a red pixel on your monitor just a representation of a mathematical object stored in the video memory of your computer? And isn't my perception of it just a representation of that representation?)

2. As for consciousness not being a mathematical object... a) you claim nobody knows what consciousness is, so how can you make this claim? b) could you define exactly what you mean by a "mathematical object"? This appears in itself to be a relatively contentious, or at least fluid, area of debate.

3. "emergent properties" do not ONLY emerge from mathematical objects (depending on how you've defined them)

4. I never claimed that consciousness was a software system, just used it as an analogy.

However, have you ever played with any software based on neural networks (just as one example)? They do appear to learn to recognise patterns, they can handle fuzzy representations, and from inside their data structures the weightings that represent their "acquired knowledge" don't make any obvious sense to a human.
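
A minimal sketch of what I mean, assuming nothing beyond a toy two-layer network trained on XOR (the architecture, random seed, learning rate and epoch count are arbitrary choices, and it needs only numpy):

    import numpy as np

    # A toy two-layer network trained on XOR with hand-rolled backpropagation.
    # Architecture, seed, learning rate and epoch count are all illustrative.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)             # hidden layer
        out = sigmoid(h @ W2 + b2)           # output
        g_out = (out - y) * out * (1 - out)  # gradient at the output
        g_h = g_out @ W2.T * h * (1 - h)     # gradient at the hidden layer
        W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(axis=0)
        W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(axis=0)

    print(out.round(2).ravel())  # close to [0 1 1 0]: the net has learned XOR
    print(W1)                    # yet the weights carry no obvious human meaning

The printout of W1 is the point: the trained weights do the job, but nothing in them reads as "XOR" to a human inspector.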


> in fact that's why software gets modified, to get to an ever-closer relationship to the definition of its function.

Which can be seen as a type of evolution involving heredity, variation and selection.
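
A toy sketch of that loop - candidate "programs" (here just strings) are copied (heredity), mutated (variation), and scored against a target spec (selection); the target string, population size and mutation rate are invented for illustration:

    import random

    # Heredity, variation and selection evolving a "program" (here just a
    # string) toward a target specification. All parameters are illustrative.
    random.seed(0)
    TARGET = "copy the file"
    CHARS = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):  # variation
        return "".join(random.choice(CHARS) if random.random() < rate else c
                       for c in candidate)

    population = ["".join(random.choice(CHARS) for _ in TARGET)
                  for _ in range(100)]
    for generation in range(1000):
        best = max(population, key=fitness)
        if best == TARGET:
            break
        parents = sorted(population, key=fitness, reverse=True)[:20]       # selection
        population = [mutate(random.choice(parents)) for _ in range(100)]  # heredity

    print(generation, repr(best))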

> the need for 'selfish gene' type theories jsut doesn't exist, and they are incapable, as their remit is so wide , of any great predictions.

This just seems such a limiting belief that I don't really know how to address it?!


Anyway, let's cut to the chase. Can you say in 30 words or less what you think this all means?

Now that I know more about your belief systems I'd like to understand how they affect your big picture view.

Mediated Realities, AI, nanotech, the Singularity - what's your vision of our future?


roBman

Re: This is all you need to know...?
posted on 02/01/2002 12:14 PM by john.b.davey@btinternet.com


"1. from what I've seen mathematical objects can create patterns, predictions, meaning and even understanding just to name a few other things...(isn't a red pixel on your monitor just a representation of a mathematical object stored in the video memory of your computer? And isn't my perception of it just a representation of that representation?)"

You're falling into the oldest strong AI trap in the book here. When is a duck not a duck? Answer: when it's a painting of a duck. A pixel on your computer screen is not a mathematical object. It can represent a digit if you want it to, of course, as long as you have agreed a 'communicating standard' with whoever you wish to transfer your thoughts to. But the pixel is matter, and the combination of 'communicative matter' plus the 'communication standard' can be used as a tool to realise the mathematical object residing in someone's brain. In other words, reading and writing, to keep it simple.

Mathematical objects can create patterns - they can do this because patterns themselves are mathematical objects. They can't 'create' predictions, but they can certainly be useful, when used in representative models, for having a damn good guess at what's going to happen. And you'll have to give me a concrete example of how mathematics can lead to real understanding, other than explaining one symbolic axiom by decomposition into another. But one thing mathematical objects, which ultimately reside in people's heads, can't do is combine to create matter or natural phenomena, such as consciousness or subjective mental states.


Your claim that I claim nobody knows what consciousness 'is' is wrong - I said it was difficult to define (though not impossible), but as we've already established, that's the nature of subjective mental effects - you can't describe the experience of the colour red to a blind man. In fact, as it's a purely semantic event, I would stick my neck out and say consciousness is the only thing every human being really does 'know and understand'. It's a bit like the US DOJ official asked to define pornography: "I can't define it, but I sure as hell know it when I see it".

I've played with neural nets - they don't recognise anything. They're associative memory systems - 'mathematical objects' if you like. We should remember that with neural nets the analogy is of the brain; talk to some guys these days and it appears that everybody's trying to squeeze brains into neural nets.

"Which can be seen as a type of evolution involving heredity, variation and selection. "

Kind of, I suppose, if you're really that desperate for a pointless analogy! But in some ways of course not true: genes can spontaneously change under no impetus whatsoever and 'wipe out' (to use Dawkins' behavioural terminology) all the genes around them under their own steam, no external 'design factors' involved. Life, as they say, is complicated - and the simplest approach is to accept that. No boxed solutions required.



Re: This is all you need to know...?
posted on 02/16/2002 8:12 PM by stephenhanneke@hotmail.com


I've been strolling along the discussion here, but I can't help but feel we've taken a wrong turn somewhere.
Why exactly is it that we can't concretely define and study consciousness? There have been movements lately along the boundaries between quantum physics and neuroscience which hold great promise for a complete understanding of the matter.

As the theory runs, consciousness seems to influence the collapse of a quantum mechanical wave function because consciousness itself is a collapsing wavefunction. In this view, Descartes' "I AM" is a moment of certainty in the otherwise chaotic and unpredictable universe.
Evolution was able to harness these moments of certainty and use them to build a powerful computer. This, then, is why intelligence and consciousness find themselves lumped together.

This theory may not be entirely complete, as it is built upon the shaky ground of the hidden variables interpretation of quantum mechanics. However, I think that with some work it may be adapted to fit the mold of several other interpretations.

Hameroff can explain it far better than I, so I'd suggest checking out his essay in the 'Big Thinkers' section of the site.

-Steve

Re: This is all you need to know...?
posted on 02/17/2002 10:31 AM by tomaz@techemail.com


> consciousness seems to influence the collapse of a quantum mechanical wave function because consciousness itself is a collapsing wavefunction

Consciousness has nothing to do with the collapse of a quantum wave function. The substrate on which consciousness runs can - like any other object made of particles.

Consciousness is an emergent property of the brain's computation.

- Thomas

Re: This is all you need to know...?
posted on 02/17/2002 11:33 AM by stephenhanneke@hotmail.com


> Consciousness is an emergent property of the brain's computation

But perhaps on a fundamental level, that computation is of the quantum variety.

Even if the Orchestrated Objective Reduction hypothesis is false (though I'm not saying it is), the question still remains as to what connections exist between collapsing wavefunctions and consciousness. This is the meeting ground of objective and subjective reality, and the answer to consciousness must be stated in this context.

-Steve

Re: This is all you need to know...?
posted on 02/17/2002 12:08 PM by tomaz@techemail.com


> But perhaps on a fundamental level, that computation is of the quantum variety.

Perhaps. But it looks more like it isn't.

> the question still remains as to what connections exist between collapsing wave functions and consciousness

By the DECOHERENCE interpretation, it has none. An electron can as easily collapse the wave function as a "human consciousness" can.

- Thomas

Re: This is all you need to know...?
posted on 02/18/2002 12:58 PM by john.b.davey@btinternet.com


> Why exactly is it that we can't concretely define and study consciousness?

For the same reason you can't explain the colour red to a blind man.

Re: This is all you need to know...?
posted on 02/18/2002 3:50 PM by tomaz@techemail.com


John!

You speak as if you were unconscious. :)

- Thomas

Re: This is all you need to know...?
posted on 02/18/2002 7:09 PM by stephenhanneke@hotmail.com


Now I think we're getting ahead of ourselves.

Before we can explain what it is like to experience anything, color or otherwise, wouldn't we need to explain what experience is? And perhaps then we could move on to subcategories.

That is, unless you've thought of a better strategy?

remember:
have not does not mean cannot.
-Steve

complex systems arising from software
posted on 05/06/2003 5:17 PM by ChaCha


Perl is a great example. I doubt that its creator, Larry Wall, would have been able to foresee all the possibilities in the language when he first developed it. The "do what I mean, not what I say" property of Perl is absolutely amazing. Oftentimes these sorts of idiosyncrasies arise from systems that are slightly faulty from the perspective of the problem they were designed to solve. But those minor glitches and idiosyncrasies of the language end up creating infinite possibilities in other contexts.

Re: Can the NCC approach work?
posted on 02/01/2003 12:56 AM by Stan Rambin


What about analog vs. boolean? Is the transmission of information controlled in a 0,1 bit set, or can the various degrees of transmission contain semantic information? Is memory a route of connecting neurons that have increased their transmission quality over repeated use? If so, then a method of quality, as well as location and quantity, is involved in memory.
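
One way to make this suggestion concrete is a toy Hebbian sketch, assuming a simple "fire together, wire together" rule; the pattern, learning rate, decay and threshold are invented for illustration, not a claim about real neurons:

    import numpy as np

    # "Transmission quality improves with repeated use": co-active units
    # strengthen their mutual connection, so the memory lives in graded
    # (analog) weights rather than a 0/1 bit set.
    pattern = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)
    W = np.zeros((8, 8))

    for _ in range(50):                        # repeated use of the same route
        W += 0.1 * np.outer(pattern, pattern)  # Hebb: fire together, wire together
        W *= 0.99                              # slow decay of unused strength
    np.fill_diagonal(W, 0)                     # no self-connections

    cue = pattern.copy()
    cue[:2] = 0                                # degrade the cue
    drive = W @ cue                            # graded transmission, not bits
    recalled = (drive > 0.5 * drive.max()).astype(float)
    print(pattern)
    print(recalled)                            # the strengthened route restores the pattern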

Stan Rambin

Re: Can the NCC approach work?
posted on 02/01/2003 8:28 AM by Thomas


I doubt it.

Even if it were so, there is still the question of how many bits are needed for a complete description of what may be going on.

- Thomas

Re: Can the NCC approach work?
posted on 02/01/2003 12:39 PM by Grant


Ones and zeros contain semantic information when they are used as words in statements such as 1+1=2 and 1-1=0. Come to think of it, they also contain semantic information as parts of a computation in a computer. Machine language has the same kinds of rules and grammar as any other language. The ones and zeros only have meaning to the computer when used according to those rules. Outside the rules they only constitute noise.
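
To see this concretely: the very same four bytes take on a different meaning under each set of rules, and none outside them. A minimal sketch (the byte values are arbitrary):

    import struct

    # The same four bytes, read under three different sets of rules. Only
    # the interpretation rule assigns them meaning.
    raw = bytes([0x42, 0x48, 0x65, 0x6C])

    print(struct.unpack(">I", raw)[0])  # rule 1: a 32-bit unsigned integer
    print(struct.unpack(">f", raw)[0])  # rule 2: an IEEE-754 float (about 50.1)
    print(raw.decode("ascii"))          # rule 3: ASCII text -> 'BHel'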

Grant

A more eloquent answer
posted on 01/30/2002 10:58 PM by roBman@InfoBank.com.au


I just found a quote that sums up my point far more eloquently than I did:

"Or suppose we put a spade in the earth, a softer medium; our deepest dig will heave to view only another surface, this one crumbly perhaps, or with its clay compacted by the brutality of the blade. We can dig and delve like the most industrious duck; we can poke and pry: we shall find nothing but surface. Surfaces are unreal. They have only one side--their "out" side--and as far as our world is concerned, outside goes on forever. So if we feel lonely cooped up in our consciousness--a prisoner "inside"--we can take cool comfort from the fact that outside we are simply surface, and have plenty of company. If you like, consciousness, either real or implied, is the other, missing side of surface."

William Gass, "The Face of the City," Harpers Magazine, March 1986, p. 37.



roBman

Re: What kind of system of 'coding' of semantic information does the brain use?
posted on 01/30/2002 9:39 PM by darkstar@mail.ru


Okay, so what would you say if someone walks in tomorrow and says: "I have the answers"? What next? What are you going to do with it?
Does anyone understand the hyper-enormous responsibility of this type of research, or not? :-\

The nuclear bomb is a children's toy compared to this.

Re: What kind of system of 'coding' of semantic information does the brain use?
posted on 11/04/2002 9:19 AM by denik


This system is VERY simple...

Knowing it, it's possible to create any information system, and artificial intelligence too.

Re: What kind of system of 'coding' of semantic information does the brain use?
posted on 04/30/2003 2:39 PM by Progon


The following definition of information is from this web site:


Information
A sequence of data that is meaningful in a process, such as the DNA code of an organism or the bits in a computer program. Information is contrasted with noise, which is a random sequence. However, neither noise nor information is predictable. Noise is inherently unpredictable but carries no information. Information is also unpredictable - that is, we cannot predict future information from past information. If we can fully predict future data from past data, then that future data stops being information.


This definition is incomplete and that is why we have such a dim understanding of the connections between matter, physical forces and information and knowledge. This definition describes the end result of the basic information processes that all physical information systems use.

The complete physical explanation that describes how all information systems work and what they all have in common is as follows:

Physical forms can be changed by external forces. These changes remain until the physical form is changed again. Physical forms can pattern external physical forces (the scattering of photons by any physical surface is an example), which can then pattern or change another physical form.

If you examine any physical information system you see this causal chain in it. If a photographer takes a picture of you, the contents of the room you are in - including you as an occupant - pattern the photons in the room by scattering them. When the camera makes its image of the room it is taking a small sampling of those patterned photons.

When the film is developed, or when the data image is uploaded to a computer, the image that results was caused by the patterning of the photons in the room the image was captured in. The picture itself now patterns new photons that you interact with to view the image made by the camera.

As the photons bounce off the image they are patterned by the physical structure of the image. If you are using a computerized image, the hardware in the computer converts its binary representation of the image into pixel patterning on your computer monitor, which you perceive as an image.

There is a part of your brain that is devoted to recognizing faces. This set of neural structures provides the ability to recognize particular faces. As the photons from the image pattern your eyes, and then your visual cortex, the patterning of that part of the brain causes a neural recognition to happen, or not.

The complete causal chain: the patterned photons in the original room cause changes in the patterning of the chemicals of your film or the sensor in your digital camera. These changes remain in the film or in the digital format of the camera's memory until they are changed. The image formed by the picture, or the changes on the computer monitor, can now pattern the sensors in your eyes and then pattern the recognition centers of your brain that relate to faces.

This same chain is in all information systems. As you read this document, the words have been patterned by my brain and placed out here for you to see, through the process of me patterning the inner structure of my computer by typing on the keys. The specialized hardware there converts the keystrokes into inner representations of those keys. The computer then passes these text patterns on to the web site where I am entering the information to be viewed. You can decode the sentences because they are just the patterning of your computer monitor in a form that your brain can decode as sentence syntax.

At every step of this process one kind of physical force patterns a physical structure or form, which can then pattern yet another physical structure or form. So the missing, dynamic element is now there for information systems.

Patterned physical forces can pattern new physical forms, which can then pattern new physical forces. This causal chain is in all information systems and is the core process that nature uses to make all information systems possible in the first place.

With this understood, it now becomes possible to examine semantic content and how it relates to the physical structures of the brain. As you read this, the words are being decoded by the language centers of your brain. This process is not simple. The words themselves have trained associations with other content of your brain.

The recognition and understanding of words and their meanings can be taught to a computer quite easily using neural net analogues, or even other kinds of methods used in AI. But the process of decoding a full sentence and gaining 'semantic meaning' from the words involves both recognizing the meaning of each word and recognizing the overall meaning of the words together.

For example, imagine a black cat. Decoding that last phrase caused you to form a mental image of a black cat. This image actually exists as a patterning of your visual cortex and can be saved as a memory. You can then recognize that the image is indeed of a black cat, and that is what the sentence asked you to do. Now, if we just say 'pink elephant', you do the same thing. You form an image of a pink elephant in your visual cortex. The patterning of your visual cortex can now be stored as a memory.

Imagine that you are talking to yourself. Through the same decoding process you can form inner patterning of your visual cortex and memory. This implies that just using language, internally, allows you to form any kind of patterning that hearing or reading language can cause.

What happens when you recall the black cat or the pink elephant at a later date? The patterning of your neural net memories will then pattern the same parts of your brain that the language centers patterned. You can then examine the images because they are now patterning your sensory memory where they originated in the first place.

If you think in terms of neural nets it is easy to see the source of semantic meaning. As you decode the phrase 'a black dog' you will find that the first word, 'a', causes your recognition of the word to happen. (You also recognize it as a letter.) The next pattern that forms is 'a black', which of course is a compound recognition of the words 'a' and 'black'. Then you add in the last word - 'a black dog' - and the entire compound recognition of a particular kind of thing happens.

As you can see, hierarchies of meaning can be represented by neural nets by having them organized in the right way. You can most certainly learn individual words but you have more than enough neural nets to learn the meanings of complex patterns of words as well.
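
A toy sketch of this compound recognition (the lexicon and the composition step are invented for illustration, not a model of the cortex):

    # Word-level recognitions feed a growing compound recognition, so the
    # phrase "a black dog" is built up hierarchically.
    lexicon = {
        "a":     "article",
        "black": "colour",
        "dog":   "animal",
    }

    def recognise(phrase):
        compound = []
        for word in phrase.lower().split():
            compound.append((word, lexicon.get(word, "unknown")))
            print("recognised so far:", compound)  # each step is a richer compound
        return compound

    recognise("A black dog")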

If you recall a black dog at a later date you will probably just recall it as a mental image. You can then apply language to report to an external observer that you just had a mental image of a black dog.

The different parts of the brain all work in parallel and that means that you can have your inner thoughts happening while you are currently experiencing new external events. You can even mix and match the events happening internally with the ones happening externally.

As you recall any of the things I have used as examples - a pink elephant, let's say - it is pulled into your short term memory. But your current short term memory is very large and complex, so you can quite easily have a host of other things happening in it as you read this. This means that since the original image of a pink elephant was stored, your recall of it now pulls it into your current short term memory buffers, where it can get stored with alterations caused by new experiences.

Your overall semantic understanding of this inner memory will be modified by what is happening now as you think about it. This point is proven when you realize that as you recall memories you change them. If asked to describe an accident at the scene you will write down one version of the accident, but if asked to write it down again at a later date you will write something totally different, since the later experiences of thinking about the accident have caused changes in your original memories of it!

So semantic memory is dependent on a neural net hierarchy, and that hierarchy is tied to the speech centers of the brain and to the sensory systems as well as memory. This is possible to track because all information systems behave in the same way.

Re: What kind of system of 'coding' of semantic information does the brain use?
posted on 06/11/2003 8:44 AM by iggitcom


Read "The Brain Is A Wonderful Thing" at http://www.enticypress.com to find out.