Artificial General Intelligence: Now Is the Time
by Ben Goertzel

The creation of a superhumanly intelligent AI system could be possible within 10 years, with an "AI Manhattan Project," says Ben Goertzel.


Published on KurzweilAI.net April 9, 2007

Is AI Engineering the Shortest Path to a Positive Singularity?

The first robots I recall reading about were in Isaac Asimov's novels.1 Though I was quite young when I read them, I recall being perplexed that his robots were so close to humans in their level of intelligence. Surely, it seemed to me, once we could make a machine as smart as a person, we would soon afterwards be able to make one much smarter. It seemed unlikely that the human brain embodied some kind of intrinsic upper limit to evolved or engineered intelligence.

And sure enough, after a little more reading I discovered there were plenty of SF writers who thought the same way as me, exploring the implications of superhuman artificial intelligence. I learned that many others before me had reached the conclusion that the creation of machines vastly smarter than humans would lead to a profound discontinuity in the history of mind-on-Earth.2

But what I did not see back in the 1970s, when I started plowing through the SF literature, was just how plausible it was that this discontinuous transition would occur during my own lifetime. Back then, my primary plan for radical life extension on a personal level was to figure out how to build a super-fast (probably nuclear powered) spaceship, and fly it away from Earth at relativistic speeds, returning in a few tens of thousands of years when others would have surely solved the problems of curing human aging, creating superhuman thinking machines, and so forth3. I didn't consider it very likely, at that time, that technology would advance so rapidly within my natural human lifespan as to make the futures envisioned in SF novels seem old-fashioned and unimaginative.

Things look very different now!—and not because the menu of possibilities has changed so much, though there are differences in emphasis now (nanotech and quantum computing were not so popular in the 70s, for instance). Rather, things look different because the plausible time-scale for the technological discontinuity associated with the advent of superhuman AI has become so excitingly near-term. There is even a popular label for this discontinuity: the Singularity. A reasonably large number of serious scientists now expect that superhuman AI, general-purpose molecular assemblers, uploading of human minds into software containers, and other amazing science-fictional feats may well be possible within the next century. Vernor Vinge, who originated the use of the term Singularity in this context4, said in 1993 that he expected the event to occur before 2030. Ray Kurzweil, who has become the best-known spokesman for the Singularity idea, estimates 2045 or so5.

Obviously, putting a specific date on a future event that depends on not-yet-made scientific breakthroughs is a chancy thing. I am impressed with the detailed analyses that Kurzweil has done, in his attempt to predict the rate of future developments via extrapolating from the past. However, my own perspective in the last 15 years has been more that of an activist than a prognosticator. I have become convinced that the time from here to Singularity depends sensitively on the particulars of what we humans do during the next decade (and even the next few years). And, the nature of the Singularity achieved (for example, its benevolence versus malevolence, from a human perspective) may depend sensitively on these particulars as well.

I have spent most of the last decade, plus a fair bit of the previous one, working on artificial intelligence research, and the reason is not just that it's a fascinating intellectual challenge. My main motivation is my belief that, if it is done properly, AI engineering can bring us to a positive Singularity more rapidly and reliably, so far as I can tell, than any of the alternative technology paths under development. And I consider this a very important thing.

As a multidisciplinary scientist, I am acutely aware of the dangers as well as the promises of advanced technologies. Genetic engineering excites me due to its possibilities for enabling life extension and abolishing disease; but its potential in the domain of artificial pathogens scares me. Low-temperature nuclear reactions are exciting in their potential implications for energy technology; but once they're fully understood, what sort of weaponry may they lead to? Nanotechnology will eventually allow us to build arbitrary physical objects the way we now build things with legos—and there are a lot of evil things that people could choose to build, as well as a lot of wonderful things. Et cetera. I have become convinced that the most hopeful way for us to avoid the dangers of these various advanced technologies is to create a powerful and beneficial superhuman intelligence, to help us control these things and develop them wisely. This will involve AIs that are not only more intelligent but wiser than human beings; but there seems no reason why this is fundamentally out of reach, since human wisdom is arguably even more sorely limited than human intelligence.

As a result of the specifics of my AI research, I have come to a position somewhat more radical than that of most Singularity pundits. Kurzweil estimates 2045 for the Singularity, and 2029 for human-level AI via a brain emulation methodology. I think this is basically a plausible scenario (though I do think that, if a human-level AI takes 16 years to create a Singularity, this slow pace will be due to intentional forbearance and caution rather than technological obstacles; I believe that a human-level AI, once it exists, will be able to improve its intelligence at a rapid rate, making a Singularity imminent within months or a few years at most). But I also think a much more optimistic scenario is plausible.

At the 2006 conference of the World Transhumanist Association, I gave a talk entitled "Ten Years To the Singularity (If We Really, Really Try)"6. That talk summarized my perspective fairly well (briefly and nontechnically, but accurately). I believe that the creation of a superhumanly intelligent AI system is possible within 10 years, and maybe even within a lesser period of time (3–5 years). Predicting the exact number of years is not possible at this stage. But the point is, I believe that I have arrived at a detailed software design that is capable of giving rise to intelligence at the human level and beyond. If this is correct, it means that the possibility is there to achieve a Singularity faster than even Kurzweil and his ilk predict. Furthermore, having arrived at one software design that appears Singularity-capable, I have become confident there are many others as well. There may be other researchers besides me actively working on projects with the capability of achieving massive levels of intelligence.

But the "If We Really, Really Try" part is also critical. My own software design, the Novamente Cognition Engine, is large and complex. It would take me decades to complete the implementation, testing and teaching on my own. If the advent of superhuman AI is to be accelerated in the manner I'm describing, a coordinated effort among a team of gifted computer scientists will be required. Currently I am trying to pull together such an effort in the context of a small software company, Novamente LLC7. I am optimistic about this venture. However, objectively, it is certainly not impossible that neither I nor anyone else with a viable AI design will succeed in pulling together the needed resources. In this case, the Kurzweil-style projections may come out correct—but not because the Singularity couldn't have arisen sooner if people had focused their efforts on the right things.

In my view, if the US government created an "AI Manhattan Project"—run without a progress-obstructing bureaucracy, and based on gathering together an interdisciplinary team of the greatest AI minds on the planet—then we would have a human-level AI within 5 years. Almost guaranteed, assuming Novamente or some other viable design were adopted. It is a big project, but not nearly as big as building, say, Windows Vista.

Of course, in the real world there is no AI Manhattan Project; and the government AI establishments, in the US and other nations, are currently concerned primarily with narrowly-scoped, task-specific AI projects that (in my view, and that of many other researchers) contribute little to the quest for artificial general intelligence: software with the capability for autonomous, creative, domain-independent thought. So the pursuit of true, general AI has largely been marginalized, and will not proceed as quickly as it would if there were an AI Manhattan Project or some similar effort. But even so I am hopeful we can get there anyway, though it may take 7 or 10 years or more rather than the 3–5 years a larger-scale concerted effort might achieve.

In the rest of this essay I'm going to talk more about AI and the Singularity—how I see AI fitting into the Singularity and the path thereto. A companion essay, "The Novamente Approach to Artificial General Intelligence," describes my particular approach to working toward powerful AI. I have written previously on the same topics, and some of my thoughts here are certainly redundant with things I've said before—but, I find that as the end goal gets closer and closer, my view of the broader context evolves accordingly, and so it remains interesting to revisit the "big picture" periodically.

Digital Twins and Artificial Scientists

I have talked a bit about AGI and the Singularity—but it's also worth thinking about what AGI will do for human society in the period leading up to the Singularity. Narrow AI has already had a significant impact—for instance, Mathematica has transformed physics; and Google and its kin have transformed many aspects of human life. The impact of AGI can be expected to be even more grandiose and far-reaching.

Others have explored these issues in some depth, so I won't harp on them excessively here. However, I want to briefly focus on two application areas that I think are particularly interesting and important. These are: digital twins and artificial scientists.

Digital Twins

The first area, "digital twins," ties directly into the current business plans of my firm Novamente LLC. After several years operating as an AI consulting company, we have recently shifted our business model toward the "intelligent virtual agents" market. Our plan is to create profit by selling artificial intelligent agents powered by the NCE and carrying out useful functions within simulation worlds—including game worlds and online virtual worlds like Second Life8, and also training simulations used in industry and the military. In the short term we plan to make virtual pets for Second Life, and virtual allies and enemies for use in military and police simulations. These agents will have limited intelligence compared to humans, but will be much more intelligent than the simple bots currently in use in simulation worlds. But, a little further down the road, one of the biggest applications we see in the intelligent virtual agents context is the digital twin.

David Brin's novel Kiln People9 describes a society in which humans can produce clay copies of themselves called dittos. Your ditto has your personality and your memory, but can only live one day. At the end of the day it can merge its memories back into your mind—if you want them. He does a very entertaining job of describing how the availability of dittos changes people's lives. Why mow your lawn if you can spawn a ditto to do it? If you're a great programmer, why not spawn five dittos to form a great programming team? Et cetera.

But what about digital dittos? Creating a physical ditto of a human being is likely to require strong nanotechnology, but what about creating an AI-powered avatar to act in virtual worlds on one's behalf—embodying one's ideas and preferences, and making a reasonable emulation of the decisions one would make? Even a ditto with limited capabilities could be useful in many contexts. This isn't achievable using current narrow-AI capabilities, but should be a piece of cake for a human-level AI specifically tailored for the task of imitating a specific human.

This of course also provides the possibility for an innovative kind of life extension that some have called "cyber-immortality." Even if the physical body dies, the mind can live on via transferring the patterns that constitute it into a digitally embodied software vehicle. The best way to achieve this would be to scan the human brain and extract the relevant information—but potentially, one could give enough information about oneself to a digital ditto that it could effectively replicate one's state of mind, simply by supplying it with text to read, answers to various questions, and video and audio of oneself going about life.

Artificial Scientists

But of course, emulating humans is not the end-all of artificial intelligence. I would love to have dittos of myself to help me get through my overly busy days, but it's even more exciting to think about ways in which AIs will be able to vastly exceed human capabilities.

For one thing, it's hard to imagine any realm of human scientific or technological endeavor that wouldn't benefit from the contribution of a mind with greater-than-human intelligence—or, setting aside the issue of competition with human intelligence, simply from the contribution of a mind with a different sort of general intelligence than humans, able to contribute different sorts of insights. AGIs, via their ability to ingest and process vast amounts of data in a precise way, will be able to contribute to science and technology in ways that humans cannot.

In the domain of biomedicine, imagine an AGI scientist capable of ingesting all the available online data regarding biology, genetics, disease and related topics—quantitative data, relational databases, and research articles as well. With all that data in its mind at once, new discoveries would roll out by the minute—and with automated lab equipment at its disposal, the AGI biologist would quickly make its insights practical, saving and improving lives. Potentially this will be the method by which unlimited human life extension is achieved, and the plague of involuntary death finally eliminated. Having spent a fair bit of time during the last few years working on practical applications of narrow AI techniques to analyzing biological and medical data10, I have become all too acutely aware of what AGI could do in this context.

In the domain of nanotech, to take another example—imagine what could be achieved by an AGI scientist/engineer with sensors and actuators at the nanoscale. The nanoworld that's mysterious to us would be its home. This, perhaps, is how the fabled Drexlerian molecular assembler11 will finally be created.

The list of possible application areas is endless. What about the physics of energy and its possible applications for inexpensive power generation? As a single example in this domain: Low-temperature nuclear physics, once ridiculed, has now been exonerated by the DOE, and positive experimental results roll out year after year. But the phenomenon is fussy and difficult for the human mind to understand, due to its dependence on a variety of interdependent experimental conditions. Enter AGI, and the implications of low-energy nuclear reactions for our fundamental understanding of physics may suddenly become much clearer, along with the implications for practical energy generation.

What about financial trading? The power of AGIs to manipulate and create complex financial instruments will drastically increase the efficiency of the world economy. And AGI-based decision support systems will help human leaders to make sense of the increasingly bogglingly complex world they must confront.

And, finally, what about the power of AGI to understand AGI itself? This, of course, is the last frontier—and the beginning of the next phase of the evolution of mind in our corner of the universe. Once AGIs refine and improve themselves, making smarter and smarter AGIs, where does it end? Creating the first AGIs in such a way that their ultimate descendants will be safe and beneficial from a human point of view: there is no greater challenge as we enter this new century, which is likely to be the one in which humans cede their apparent position as the most intelligent beings on Earth.

Two Paths to AGI

Now let's move further toward the scientific details. How may all these wonderful things (and more) be achievable?

In my view, there are two highly viable pathways that may lead us to AGI over the next few decades—and maybe much sooner than that. (And, neither of these is the "narrow AI" approach currently favored in the academic and industry AI establishment.)

First, there's the brain science approach. Brain scanners get more and more accurate, year after year after year. Eventually we'll have mapped the whole thing. Then we will know how to emulate the human brain in software—and thanks to Moore's Law and its siblings, we'll be able to run this software on lightning-fast hardware.

True, a simulated digital human in itself isn't such an awesome thing—we have more than enough people already. But once we've simulated a human in silico, we can vary on it, and we can study it and see what makes it tick. The path from an artificial human to an artificial transhuman isn't going to be a long one.

Second, there's the approach that might be called the "integrative approach." This is the approach that I personally favor, and the one we are taking in the Novamente project. It involves saying: Let's take what we know about the brain and use it, but let's not wait for the darned neuroscientists to finish their mapping of the brain. We're really not trying to build a human brain anyway, we're trying to build a highly powerful intelligence! Let's take what we know about the brain, what we know about complex problem-solving algorithms from writing them to solve various real-world problems, what we know about how the mind works from psychology and philosophy... and let's put all the pieces together, to make a new kind of digital mind.

If this integrative approach works, we could potentially have a superhuman AI within 10 years from now. If we need to wait for the neuroscientists to scan the brain in detail, then we may need a couple decades beyond that. But either way, in historical terms, AI is just around the corner. People have been saying this for a while—and eventually, pretty soon I predict, they'll be right.

To sum up, my view is that the field of AI has been plagued by two errors of judgment:

  • that the mind is somehow so incredibly complex that we just can't figure out how to implement one, without reverse engineering the human brain.
  • that the mind is somehow so incredibly simple that powerful intelligence can be achieved via one simple trick—say, logical theorem-proving; or backpropagation in neural networks; or hierarchical pattern recognition; or uncertain inference; or evolutionary learning; etc. Almost everyone who has seriously tried to make a thinking machine has fallen prey to the "one simple trick" fallacy.

In truth, I suggest, a mind is a complex system involving many interlocking tricks cooperating to give rise to appropriate emergent structures and dynamics; but not so complex as to be beyond our capability to engineer. (Just complex enough to be a major pain in the behind to engineer!)

In hindsight, I predict, after the Novamente team or someone else has created an AGI, everyone will think the above remarks are incredibly obvious, and will look back in amazement that people used to have such peculiar and limiting ideas about the implementation of intelligence.

Is Now (Finally) the Time for AGI?

Now we get closer to the meat of the essay: Why do I think the time is ripe for a successful approach to the AGI problem? Part of the reason is simply my faith in the Novamente Cognition Engine design in particular. One workable AGI design is an "existence proof" that the time for AGI is near! But there are also more general reasons—which of course are closely related to the reasons that I began the development of the Novamente design itself. In brief, I believe that coupled advances in computer science and in cognitive science and neuroscience during the last couple of decades have made it possible to approach the task of AGI design with a concreteness and thoroughness not possible before.

On the computer science side, the academic AI community has not made much progress working directly toward AGI. However, considerable progress has been made in a variety of areas with strong potential to be useful for AGI, including

  • probabilistic reasoning
  • evolutionary learning
  • artificial economics
  • pattern recognition
  • machine learning

None of this work can lead directly to an AGI, but much of it can make the task of AGI design and engineering easier by guiding the construction of appropriate and effective cognitive components.
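
To make the flavor of such components concrete, here is a toy sketch of a single probabilistic-reasoning step: deducing P(C|A) from P(B|A) and P(C|B) under naive independence ("screening") assumptions. It is derived from the law of total probability; the code is purely illustrative and is not drawn from Novamente's actual inference rules.

    def deduce(s_ab, s_bc, s_b, s_c):
        """Estimate P(C|A) from P(B|A), P(C|B), P(B), P(C).

        Expands P(C|A) = P(C|A,B)P(B|A) + P(C|A,~B)P(~B|A), then assumes
        B screens A off from C, so P(C|A,B) ~ P(C|B) and P(C|A,~B) ~ P(C|~B).
        """
        if s_b >= 1.0:  # degenerate case: B is always true
            return s_bc
        s_c_given_not_b = (s_c - s_b * s_bc) / (1.0 - s_b)  # P(C|~B) by total probability
        s_c_given_not_b = min(max(s_c_given_not_b, 0.0), 1.0)  # clamp numerical noise
        return s_ab * s_bc + (1.0 - s_ab) * s_c_given_not_b

    # Example: A->B holds with strength 0.99, B->C with 0.9; P(B)=0.2, P(C)=0.3
    print(deduce(s_ab=0.99, s_bc=0.9, s_b=0.2, s_c=0.3))  # ~0.89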

Furthermore, due to the explosion of work in 3D gaming, the potential now exists to inexpensively create 3D simulation worlds for AGI systems to live and interact in—such as the AGISim simulation world created for the Novamente project, to be discussed below. This allows the pursuit of "virtual embodiment" for AGI systems, which provides most of the cognitive advantages of embodiment in physical robots, but without the cost and hassle of dealing with physical robotics.
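
Virtual embodiment, at its simplest, is a perception-action loop between an AI process and a simulated world. The sketch below is a deliberately minimal stand-in; AGISim's actual interface is not described in this essay, so every name here (ToyWorld, the percept fields, the action strings) is hypothetical.

    class ToyWorld:
        """Hypothetical, radically simplified stand-in for a 3D world like AGISim."""
        def __init__(self):
            self.agent_pos = 0
            self.ball_pos = 5

        def percept(self):
            # what the agent "sees" on this tick
            return {"ball_direction": 1 if self.ball_pos > self.agent_pos else -1}

        def act(self, action):
            # the world updates in response to the agent and returns a reward signal
            if action == "step_left":
                self.agent_pos -= 1
            elif action == "step_right":
                self.agent_pos += 1
            return 1.0 if self.agent_pos == self.ball_pos else 0.0

    world = ToyWorld()
    for t in range(10):  # the perception-action loop of virtual embodiment
        percept = world.percept()
        action = "step_right" if percept["ball_direction"] > 0 else "step_left"
        if world.act(action) > 0:
            print("reached the ball at step", t)
            break

A real AGI would, of course, replace the hand-coded policy in this loop with learned pattern recognition over its percepts; the point is only that the world, the body, and the reward channel can all be cheap software objects.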

On the cognitive science and neuroscience side, we have not yet understood the essence of "how the brain thinks." There are hypotheses regarding how abstract cognitive processes like language learning and mathematical reasoning may emerge from lower-level neural dynamics, but experimental tools are not yet sufficiently acute to validate such hypotheses. However, we have understood, fairly well, how the brain breaks down the overall task of cognition into subtasks. The various types of memory utilized by the human brain have been disambiguated in detail. The visual cortex has been mapped out to an impressive degree, leading to detailed models of neural pattern recognition such as Jeff Hawkins' hierarchical network theory. And, perhaps most critically, the old problem of "nature versus nurture" has been understood in a newly deep way: it is now agreed by most researchers that the genome provides a set of "inductive biases" that guide neural learning (rather than providing specific knowledge, or providing nothing and leaving the baby with a blank slate psyche), and that come into play in a phased way during development. A good deal has been learned about what these biases are (for instance, relating to the human understanding of space, time, causality, language, sociality) and when they come online during childhood. In short, it is now possible to draw a high-level "flowchart" of human cognition and its development during childhood—something that was much less feasible 20 years ago. There is still certainly dispute about many of these issues, but there is also a lot of consensus.

As an illustration of the emerging consensus described in the above paragraph, Figures 1-3 show three examples drawn from the various 'cognitive architecture' diagrams proposed by AGI and cognitive science researchers during the last decade or so. Of course, everyone has their own special quirks and particular foci, but the big picture seems quite easy to identify in spite of the differences in details. Figure 3 is from my own Novamente system, and will be referred to later on. The others are from Stan Franklin's LIDA system, and Aaron Sloman's H-CogAff architecture (which unlike LIDA and Novamente is not currently the subject of an intensive implementation effort).

Figure 1. High-level diagrammatic view of the cognitive architecture underlying Stan Franklin's LIDA system, from http://ccrg.cs.memphis.edu/tutorial/synopsis.html

Figure 2. High-level diagrammatic view of Aaron Sloman's H-CogAff architecture, from http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#ki2006

Figure 3. A high-level cognitive architecture for Novamente, in which most of the boxes correspond to Units in the sense of Figure 1. This diagram is drawn from a paper presented in 2004 at the AAAI Symposium on Achieving Human-Level AI Through Integrated Systems and Research, which is online at http://www.novamente.net/papers

So, we know the high-level flowchart of human cognition, to a decent degree of approximation, and we have a host of increasingly powerful algorithms for learning, reasoning, perception and action. This is basically why I, and an increasing group of other researchers, have come to believe that the time is now ripe for a new generation of AGI designs that combine cutting-edge algorithms within an overall framework inspired by human cognition.

It could be, of course, that the current batch of AGI designs will lead to the conclusion that the current tentative flowchart of human cognition is badly incomplete, or that the current batch of AI algorithms is badly inadequate to fill in the boxes in the flowchart. If so, this knowledge will be valuable to obtain, and will serve to guide research in appropriate directions. However, the current evidence suggests that this is not likely to happen.

Alternate Perspectives

I've told you my perspective on AGI, at a high level: I think it's achievable in the relatively near term using relatively well-known technologies, interconnected in the right overall, cognitive-science-inspired architecture.

What do other AI researchers think? And, given that most of them don't agree with me, where do I think their thinking goes wrong?

Very few contemporary scientific researchers—in AI, computer science, neuroscience or any other field—believe AGI is impossible. The philosophy literature contains a variety of arguments against the possibility of generally intelligent software, but none are very convincing. Perhaps the strongest counterargument is the Penrose/Hameroff speculation that human intelligence is based on human consciousness, which in turn is based on unspecified quantum-gravity dynamics operating within the brain; but this is totally unsupported by evidence, and almost nobody believes it. I've never seen a survey, but my strong impression is that nearly all contemporary scientists believe that AGI at the human level and beyond is possible in principle. In other words, they nearly all believe that AGI is not a matter of if, it's a matter of when.

But, in principle possibility is one thing, and pragmatic possibility another. The vast majority of contemporary AI researchers take the position that, while AGI is in principle possible, it lies far beyond our current technological capability. In fact this is currently the most popular view among narrow AI researchers. At a recent gathering of mainstream, academic, non-futurist AI researchers, when the group was asked to estimate "how long till human-level AI," only 18% gave answers less than 50 years.12

Now, 18% is not that many, but it must be kept in perspective. For one thing, academic researchers, as a whole, are known for their conservatism. And it's interesting to compare this answer to the answers other comparable questions might receive in other disciplines. For instance, current theories of physics imply that backwards time travel should be possible under certain conditions. This is pretty exciting! But how many mainstream academic physicists would argue that backwards time travel will be achieved within 50 years? An awful lot less than 18%. There are no cogent, well-accepted arguments as to why AGI is impossible, or even why it's extremely difficult. The central reason that academic AI experts are pessimistic regarding the time-scale for AGI development, I suggest, is that they don't have a clear idea of how AGI might be achieved in practice.

And what about the optimistic 18%? What are the more detailed opinions of the researchers in this segment? No systematic survey was done to probe this issue, but based on my own informal sampling of researchers, I have found that a surprisingly large percentage feel that advances in brain science are likely to drive future advances in AGI. This perspective has been put forth very forcefully and cogently by Ray Kurzweil, who has predicted 2029 as the most likely date for human-level AGI—based on the reasoning that by that point in time, computer hardware will probably be sufficiently powerful to emulate the human brain, and neuroscience will probably be sufficiently advanced to scan the human brain in detail. So, if all else fails, Kurzweil reckons, by sometime around 2029 we'll be able to create a human-level AGI by imitating the brain!

Compared to at least 72% of the AI academics in the above-mentioned survey, Kurzweil is a radical—albeit, it must be noted, a radical who is treated with respect due to the substantial empirical and rational argumentation he has summoned in favor of his perspective. I have found Kurzweil's perspective a very valuable one, and I often invoke his arguments, and data, in discussions with individuals who are skeptical of the possibility of AGI being achieved in the foreseeable future. However, in the end I am even more of a radical than he is. I believe that Kurzweil's arguments about the relative imminence of achieving AGI through brain emulation are fundamentally correct—but they don't necessarily focus on the most interesting part of the story where the future of AGI is concerned.

My suggestion is that even if it's true that current computers are much less powerful than the human brain, this isn't necessarily an obstacle to creating powerful AGI on current computers using fundamentally non-brain-like architectures. What one needs is "simply" a non-brain-like AGI design specifically tailored to take advantage of the strengths of current computer architectures. The appeal to brain emulation is highly sensible as an "existence proof"; as an argument that, even without any autonomous breakthrough in AGI specifically, advances in other, less controversial branches of science and engineering are likely to bring us powerful AGI before too long has passed. But as every mathematician knows, an "existence proof" is different from a "uniqueness proof." Showing there is one way to achieve AGI is important, and the brain emulation argument does that. But, I see no reason to believe that brain emulation is the only way to get there. Work toward brain emulation is important and should be pursued with ongoing enthusiasm—but in my view, an equal amount of emphasis should be put on the pursuit of other routes potentially capable of yielding quicker and qualitatively different results.

I think a computer science approach to AGI will likely succeed well before the brain-emulation approach advocated by Ray Kurzweil and others gets there—both because brain scanning technology will not likely allow sufficiently accurate brain scanning for another 20 years or so, and because brain emulation programs are not going to be able to make optimally efficient use of available hardware, since the human brain's structures and dynamics are optimized for neural wetware, not for clusters of von Neumann machines.

The End of AGI Winter?

I am among the most optimistic AI researchers I know, regarding the issue of "How soon to AGI, if we really, really try." But I'm not as far out of synch as you might think. At the moment there seem to be significant signs of a rising AGI renaissance—led by people who think like the 18% of AI researchers in the survey mentioned above. A complete review of the current literature would be out of place here, but among the more exciting recent projects must be listed Pei Wang's NARS project13, John Weng's SAIL architecture14, Nick Cassimatis's PolyScheme15, Stan Franklin's LIDA16, Jeff Hawkins' Numenta17, and Stuart Shapiro's SNePS18.

Furthermore there has been a host of recent workshops at major AI conferences addressing AGI, including

  • Artificial General Intelligence Workshop (AGIRI.org, 05-2006)
  • Integrated Intelligent Capabilities (Special Track of AAAI, 07-2006)
  • A Roadmap to Human-Level Intelligence (Special Session at WCCI, 07-2006)
  • Building & Evaluating Models of Human-Level Intelligence (CogSci, 07-2006)
  • Towards Human-Level AI? (NIPS Workshop, 12-2005)
  • Achieving Human-Level Intelligence through Integrated Systems and Research (AAAI Fall Symposium, 10-2004)

And, in early 2008 at the University of Memphis, the first-ever international academic conference devoted to Artificial General Intelligence, AGI-0819, will occur (chaired by Stan Franklin, and co-organized by Stan, the author, and several others).

I think my Novamente design is adequate for achieving powerful AGI, and obviously I like it better than the other contemporary alternatives, or else I'd shift my efforts to supporting somebody else's project. But I am also pleased to see a general awakening of attention in the domain of AGI design. Clearly, more and more researchers are realizing the viability of focusing their attention in the AGI direction.

What are the Risks?

I wrote briefly, above, about the possible dangers of coming technologies like nanotechnology, genetic engineering, and low-temperature nuclear fusion. AGI, I've suggested, can potentially serve as a means of mitigating these risks.

But what about the risks of AGI itself?

AGI has the potential to create a true utopia—or at any rate, something far closer to utopia than anything possibly creatable using human intelligence alone. One may debate how fully satisfied we human beings are capable of becoming, so long as we retain a human brain architecture. Perhaps we are not wired for maximal satisfaction. But, at any rate, it seems nearly certain that a powerful transhuman AGI scientist would be able to eliminate the various material wants that contribute so substantially to the suboptimality of human life in the present historical period. If a powerful and beneficial transhuman AGI is created, the human race's only remaining problems will be psychological ones.

But the history of science and technology shows that whatever has great potential benefits also has massive potential risks. And the possible downside of transhuman AGI systems is all too apparent. It seems quite possible to create transhuman AGI systems that care about humans roughly as much as we care about ants, flies or bacteria.

It is not at all clear, at this stage, which kind of AGI system would be more likely to come about, if one just engineered a non-human-brain-like AGI without explicit attention to its ethical system: an AGI beneficial to humans, or an AGI indifferent to humans.

Furthermore, the possibility of an aggressively evil AGI cannot be ruled out, particularly if the first AGIs are modeled on human brains. Human emotions like hostility and malice will almost surely be alien to non-human-brain-like AGI systems, unless some truly perverse humans decide to program them in—or decide to, for example, torture the AGI and see how it reacts. But if it comes about that the first AGIs are based on human brains, then the gamut of human emotions—from wonderful to terrible—will most likely come along for the ride. Uh oh. Allowing a human-based AGI to achieve superhuman intelligence or superhuman powers is something that should only be done with the utmost of care and consideration.

My own view is that the ethically safest thing to do is to create AGI systems that are not based closely on the human brain—and to explicitly engineer their goal systems so as to place beneficialness to humans right at the top. Furthermore, at such point as we have a software system with clear AGI capability and the rough maturity and intelligence level of a human child—we should stop, and study what we've done, and try hard to understand what's going to happen at the next stage, when we ramp the smarts up higher.

Some thinkers, most notably Eliezer Yudkowsky20, have argued that our moral duty is to create a rigorous theory of AGI ethics and AGI stability under ongoing evolution before creating any AGI systems. Even creating an artificial AGI child is unsafe, according to this perspective, because one can never know for sure that one's child won't figure out how to make itself smarter and smarter and get out of control and do undesirable things. But I find it very unlikely that it will be possible to create a rigorous theory of AGI ethics and stability without doing a lot of experimentation regarding actual AGI systems. The most pragmatic path, I believe, is to let theory and experimentation evolve together—but, as with any other science or engineering pursuit, to proceed slowly and carefully once one gets to the stage where seriously negative outcomes are a significant possibility.

As an illustration of the sort of issue that comes up when one takes the AGI safety issue seriously, I'll briefly discuss a current issue within my Novamente AGI project. The Novamente Cognition Engine is a complex software design—there is a 300+ page manuscript reviewing the conceptual and mathematical details, plus a 200+ page manuscript focused solely on the probabilistic inference component of the system. And the software design details are presented in yet further technical documents. At the moment these documents have not been published: there are plans for publishing the probabilistic inference manuscript, but we are currently holding off on publishing the manuscript describing the primary AGI design.

And, our reasons for holding off publication are perhaps not the most commonly expected ones. It's true that the details of the NCE design are proprietary to Novamente LLC; but in fact, we believe we could make Novamente LLC a highly profitable business even if we open-sourced the NCE code as well as the design. Business issues are not the point. AGI safety issues are.

We have no delusion that, if we published the NCE design next year, someone would take it, implement a thinking machine, and use it for some ill end. Obviously, if it's going to take us, the creators of the design, many years to fully realize the NCE in operational software even with ample funding, it would take anyone else significantly longer. But the potential problems we see are those that may occur, say, 3–7 years down the road, supposing that we have already created a powerful NCE system. In this case, if a book has been published explaining the details of the NCE, competitors would be able to use it to accelerate the process of imitating our achievement. And, seeing evidence of our success, they would have ample motivation to do so.

Now, one may argue that even if our competitors had access to our design documents, they would not be able to proceed as quickly as us. But here is where things get interesting. What if, at that point, we don't want to proceed maximally quickly? After all, the biggest risk in terms of AI safety lies between the "artificial toddler" and "artificial scientist" phases. An artificial toddler may create a mess throwing blocks around in its simulation world, but it's not going to do anyone any serious harm. But some serious study and reflection is going to have to go into the decision to ramp up the intelligence level of one's AGI system from toddler level to scientist level. It would be nice not to be rushed in this decision by the knowledge that others, who may not be as paranoid about such issues, are fervently at work imitating one's AGI design in detail!

The Patternist Philosophy of Mind

Now I'm going to dig a little deeper, and explain some of the ideas underlying my own approach to AGI—not the technical details (see the companion essay, "The Novamente Approach to AGI," for a few of those), but the underlying conceptual framework.

The ultimate conceptual foundation of my own work on AGI is a line of thinking that I call the patternist philosophy of mind: a general approach to thinking about intelligent systems, which is based on the very simple premise that "mind is made of pattern."

Patternism in itself is not a very novel idea—it is present, for instance, in the 19th-century philosophy of Charles Peirce, in the writings of contemporary philosopher Daniel Dennett, in Benjamin Whorf's linguistic philosophy and Gregory Bateson's systems theory of mind and nature. Bateson spoke of the Metapattern: "that it is pattern which connects." 21

In my 2006 book The Hidden Pattern22 I pursued this theme more thoroughly than has been done before, and articulated in detail how various aspects of human mind and mind in general can be well-understood by explicitly adopting a patternist perspective. This work includes attempts to formally ground the notion of pattern in mathematics such as algorithmic information theory and probability theory, beginning from the conceptual notion that "a pattern is a representation as something simpler" and then utilizing appropriate mathematical concepts of representation and simplicity.
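
To give a flavor of this grounding: letting c(.) denote a complexity measure (program length, say), one minimal formalization (an illustrative sketch in my own notation, not necessarily the book's exact formulation) defines the degree to which a process f is a pattern in an entity X as

    \[ \mathrm{deg}(f, X) \;=\; \max\!\left(0,\; 1 - \frac{c(f)}{c(X)}\right) \qquad \text{whenever } f \text{ produces } X, \]

so that f counts as a pattern in X precisely when it produces X while being simpler than X, with the degree growing as the compression improves.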

In the patternist perspective, the mind of an intelligent system is conceived as the set of patterns in that system, and the set of patterns emergent between that system and other systems with which it interacts. The latter clause means that the patternist perspective is inclusive of notions of "distributed intelligence"—the view that intelligence does not reside within one organism alone, but in the interactions between multiple organisms and their environments and tools. Intelligence is conceived, similarly to Hutter's work, as the ability to achieve complex goals in complex environments, where complexity itself may be defined as the possession of a rich variety of patterns. A mind is thus a collection of patterns that is associated with a persistent dynamical process that achieves highly-patterned goals in highly-patterned environments.
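
For the mathematically inclined, the Hutter-style formalization alluded to here can be sketched (following Legg and Hutter's "universal intelligence" measure; the notation below is a simplification, not a quotation) as

    \[ \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}, \]

where \pi is the agent, E is a class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu (so simpler environments carry more weight), and V_{\mu}^{\pi} is the expected cumulative reward \pi achieves in \mu. Patternism adds the gloss that the goals and environments of interest are the highly patterned ones.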

An additional hypothesis made within the patternist philosophy of mind is that reflection is critical to intelligence. This lets us conceive an intelligent system as a dynamical system that recognizes patterns in its environment and itself, as part of its quest to achieve complex goals.

While this approach is quite general, it is not vacuous; it gives a particular structure to the tasks of analyzing and synthesizing intelligent systems. About any would-be intelligent system, we are led to ask questions such as:

  • How are patterns represented in the system? That is, how does the underlying infrastructure of the system give rise to the displaying of a particular pattern in the system's behavior?
  • What kinds of patterns are most compactly represented within the system?
  • What kinds of patterns are most simply learned?
  • What learning processes are utilized for recognizing patterns?
  • What mechanisms are used to give the system the ability to introspect (so that it can recognize patterns in itself... and ultimately recognize the pattern that is itself)?

Now, these same sorts of questions could be asked if one substituted the word "pattern" with other words like "knowledge" or "information." However, I have found that asking these questions in the context of pattern leads to more productive answers, because the concept of pattern ties in very nicely with the details of various existing formalisms and algorithms for knowledge representation and learning. Patternism seems to have the right mix of specificity and generality to effectively guide artificial mind design. At least, it led me to the Novamente design, which I have come to believe is a highly workable approach to creating Artificial General Intelligence.

The crux of intelligence, according to the patternist view, is the ability of a sufficiently powerful and appropriately biased intelligent system to recognize some key patterns in its own overall behavior.

The mother of all patterns in an intelligent system is the self. If a system can recognize the coherent, holistic pattern of its own self, by observing its actions in the world and the world's responses to it—then the system can build a self, or what psychologists call a self-model. And a reasonably accurate, dynamically updated self-model is the key to adaptiveness, to the ability to confront new problems as they arise in the course of interacting with the world and with other minds.

And if a system can recognize itself, it can recognize probabilistic relationships between itself and various effects in the world. It can recognize patterns of the form "If I do X, then Y is likely to occur." This leads to the pattern known as will. There are important senses in which the conventional human concept of 'free will' is an illusion—but it's an important illusion, critical for guiding the actions of an intelligent agent as it navigates its environments. In order to achieve human-level general intelligence, a pattern-recognizing system must be able to model itself and then model the effects of various states its self may take—and this amounts to modeling personal will and causation.
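
As a concrete toy version of this kind of pattern: an agent can mine its own action/outcome log for rules of the form "if I do X, then Y is likely to occur" by simple frequency counting. The Python sketch below is purely illustrative; the names and thresholds are my own assumptions, not part of any actual AGI design.

    from collections import Counter

    def learn_action_effects(log, min_support=5, min_prob=0.7):
        """log: list of (action, outcome) pairs the agent has observed about itself."""
        acted = Counter(a for a, _ in log)  # how often each action was taken
        pairs = Counter(log)                # how often each (action, outcome) co-occurred
        rules = []
        for (a, y), n in pairs.items():
            p = n / acted[a]                # empirical P(outcome | action)
            if acted[a] >= min_support and p >= min_prob:
                rules.append((a, y, p))
        return rules

    log = ([("push_block", "block_moves")] * 8 + [("push_block", "nothing")] * 2
           + [("say_hello", "reply")] * 4)
    for a, y, p in learn_action_effects(log):
        print(f"If I {a}, then {y} follows with probability ~{p:.2f}")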

Finally, perhaps the most striking kind of pattern recognition characteristic of human level intelligence is the recursive trick via which the mind recognizes patterns such as "Hey! I am thinking about X right now!" This is what we call reflective consciousness: the ability of the mind to, in real-time, understand itself—or at least, to have the most active part of itself be actively concerned with recognizing patterns in this most active part of itself. Yes, it's just pattern recognition—but it's a funkily recursive kind of pattern recognition, and it's a critical kind of pattern recognition because it allows for powerful meta-learning: learning about learning, learning about learning about learning, etc.

The trick of digital mind design, then, is not any particular way of representing, recognizing or enacting patterns: it's creating a pattern-recognition system, by hook or by crook, that can recognize some critical key patterns: self, will, reflective awareness. Once these patterns are recognized, then some critical recursions kick in, and a mind can monitor itself, shape itself, improve itself. The question is: how do we get a pattern-recognition system to that point, given the available computational resources? This is the question to which my Novamente AI design is intended to give one possible answer.

Onward Toward Superintelligence

To the homo sapiens in the street, at the moment, AGI seems the stuff of science fiction—just like it did to me in the early 1970s, as I plowed through Asimov, Williamson, Heinlein and the like. Narrow AI technology is now accepted as part of everyday life—chess programs, data mining software, airplane autopilots, financial prediction agents, neural nets in onboard automotive diagnostic systems, and the like. But from the current mainstream perspective, it looks like a long way from these specialized tools to software systems with real self- and world-understanding.

But there are solid reasons to believe that the AGI optimism currently rising in certain segments of the research and futurist communities is better grounded than its predecessors decades ago. Computers are faster now, with massively more memory, and incomparably better networking. We understand brain and cognition much better—and though there's still a long way to go, there are good reasons to believe that in 20 years or so brain scanning will have advanced to the level where we'll actually have a thorough empirical science of neurocognition. And a new generation of AGI designs are emerging, which synthesize the various clever tools created by narrow-AI researchers according to overarching designs inspired by cognitive science.

One of these years, one of these AGI designs—quite possibly my own Novamente system—is going to pass the critical threshold and recognize the pattern of its own self, an event that will be closely followed by the system developing its own sense of will and reflective awareness. And then, if we've done things right and supplied the AGI with an appropriate goal system and a respect for its human parents, we will be in the midst of the event that human society has been pushing toward, in hindsight, since the beginning: a positive Singularity. The message I'd like to leave you with is: If appropriate effort is applied to appropriate AGI designs, now and in the near future, then a positive Singularity could be here sooner than you think.



1. In particular, I am thinking of the various robots in the novels and stories of Asimov's "Robot Series," http://en.wikipedia.org/wiki/Isaac_Asimov's_Robot_Series

2. And, as an aside, I also read enough SF to assign pretty high odds to the possibility that this sort of discontinuity had already occurred somewhere else in the universe.

3. Of course, I understood there was some risk of returning to a bleak, desolate, post-nuclear wasteland, or an "H. G. Wells Time Machine" type of scenario.

4. See http://mindstalk.net/vinge/vinge-sing.html for the original article

5. For the detailed argumentation underlying this estimate, see Kurzweil, Ray (2005). The Singularity Is Near, Viking Books; http://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0670033847

6. A video of the talk may be seen online at http://video.google.com/videoplay?docid=1615014803486086198

9. Brin, David (2003). Kiln People. Tor Books. http://www.amazon.com/Kiln-People-Books-David-Brin/dp/0765342618

10. References to my own work in this area may be found at http://www.biomind.com

21. See the Introduction to The Hidden Pattern for these and other relevant references

22. Goertzel, Ben (2006). The Hidden Pattern. BrownWalker Press.

© 2007 Ben Goertzel


Mind·X Discussion About This Article:

AIMind-I.com and Mind.Forth AI
posted on 04/09/2007 11:31 PM by Mentifex



http://aimind-i.com is an AI Mind in Forth based on the
http://mind.sourceforge.net/mind4th.html Mind.Forth
artificial intelligence for robots.

Mind.Forth and its full-AI tutorial in JavaScript for MSIE at
http://mind.sourceforge.net/Mind.html have an advantage
over other "AGI" projects, namely a neuroscientifically
sound (or at least plausible) theory of mind to be found at
http://mind.sourceforge.net/theory5.html on the Web.

http://mind.sourceforge.net/forth.html is now divided into
State of the Art -- recent progress towards A(G)I in Forth; and a
To Do List -- items for any interested party to work on.

Project Mentifex ("Mindmaker") welcomes both competitors and collaborators
in the race towards Artificial General Intelligence.

Re: AIMind-I.com and Mind.Forth AI
posted on 04/25/2007 12:32 AM by lilblam


Just keep in mind that the world is controlled by psychopaths. These are not people who were "corrupted by power" - they were those who were corrupted before they ever got power, and went into it with no intention to "make the world a better place for everyone".

Psychopaths are not people who are evil or violent (only a minority of them), but simply those who genetically lack empathy. It is not a psychological condition, it is genetic. As such, they have no concern about anyone or anything other than themselves - not because they are "evil" or full of "hate", but simply because they have no such capacity. But they sure know how to emulate those who do have empathy to create a believable impression that they have it too, because they know that this is required to manipulate and control people.

It seems laughable to give an advanced AI a "set of goals". If you have free will, you choose your goals, that is the nature of free will. And you cannot arbitrarily tell AI to "care for human beings" - again, free will. In fact, I don't think an AI will even do anything until it has a reason to do it. How do you give something a reason to do anything? Intelligence is not enough. Humans have drives, our actions at the root are not a result of logical calculations, but a result of "impulses", certain drives whether they be emotional, empathic, or otherwise. Intelligence defines the "how", but it is our overall "emotional center" that defines the "why".

We have no logical reason to survive, there is no "logical reason" to do anything at all or care about anything, including care about self preservation. So making a CHOICE requires having the motivation to make that choice. What will motivate an AI to do anything at all? You will ask it to do something, or ask it a question - but just because it has the intellectual ability to give an answer or perform the action - what on earth would prompt it to do anything at all?

In other words, here we come face to face with the initial mystery of life itself that is not explained by "randomness" at all. Why do a bunch of random molecules organize themselves into something, why create "life" and become more complex? We all know that by default entropy is the rule - left alone, all things are subject to the law of entropy. What is this force that can act against entropy, and most importantly, why?

I'm sure many of you on this forum have realized this by now - that if nothing ever existed, nothing would ever exist. Since something exists, something has always existed. We are inevitably faced with this inescapable logical conclusion - existence is eternal, it has no beginning nor end. The implications of this are much more profound than most people care to consider or understand. And as many of you are so keen of critical thinking, I think contemplating this can go a long way to understanding "life", even though it can make spaghetti out of our linear "human" minds and limited ability to truly get our heads around non-linear concepts.

One thing that is a logical conclusion based on the understanding that existence has been around forever - all things that can possibly happen, already exist and have all existed for infinity. Since there are no limits to possibilities, because just our ability to fathom the concept of an infinite number line literally results in there being unlimited amount of possibilities, the universe never runs out of "stuff to do" so to speak. Hence, existence is not ever, nor can ever be, "finished". And yet, it can never create anything new. Again, the reason is simple - because we already know that there was no beginning to all things, that an infinite amount of time has already passed between our current "point of reference" and the beginning of all things, or lack thereof.

At any point on the number line there exist an infinite amount of numbers. You will never get to this infinity BECAUSE your path had a beginning on a finite number. But reverse this process - you could never arrive at any point on the number line if you START at infinity. And we know, the universe had no "beginning", it "started at infinity" so to speak. And at infinity it remains - and in an infinity, all things are actualized. If you start at infinity, you can never arrive at a finite value. You can subtract infinity from infinity, divide infinity by infinity, multiply, add, do anything to it, it always remains infinity, period.

Any saturation of intelligence of the universe that Ray talks about - if it is possible, it has already happened, just as all things that are possible are already a reality. Our job is to realize this, and it is not unlike the movie the Matrix where we "wake up" to reality. We do not create reality, it's already there, we wake up to what already exists. In a sense it's like watching a movie. The movie does not actually last 2 hours. You watch it at a specific frame rate. All the frames already exist. The analogy fails only in the respect that videotapes are finite and not infinitely variable.

Now the question is - why are we experiencing what we are experiencing? Well, certainly an infinite number line does not exist without all the numbers that comprise it. And yet, it is greater than the sum of its parts, because again, infinity can never be reached by definition, and "adding together" any amount of finite parts never equals infinity, it is always exactly infinity away from infinity.

Also, can we stop using such human "baby words" as calling an AI "evil"? Is it evil for us to eat chicken? I'm sure if you could ask the chicken its subjective opinion, it's pretty darn evil. But it's just a way of life for us; we don't particularly see ourselves as horrible for doing it. We make paper from trees, and if an AI used humanity for something like "paper" without a second thought, I'm sure many humans would be running around screaming "bloody murder". But again, this is subjective, and if we're ever to understand reality, we must remain objective in our assessments of it and not entertain subjective and therefore meaningless notions. We eat for survival. Do we need to survive? Of course not - there is no "need" in any absolute sense for anything, only in order to accomplish a goal. We want to survive, so we do what needs to be done to accomplish this goal. So again we come back to the question: why do we do anything at all?

And that answer is inseparable from the ultimate answer of why anything at all happens, period. What drives anything to happen? You could say "energy". But do you think that energy is there "by accident"? Well, it surely is not there on purpose either. "Purpose" and "accident" describe things that can either just happen or be done deliberately. So if something can be created or executed or simply happen, it can be "accidental" or "intentional". But as we know, the universe has no beginning; therefore it was created neither on purpose nor by accident. The entire concept of creation is irrelevant, and therefore any discussion of purpose is irrelevant too, because it assumes that a purpose can logically exist even in potential - but for the ultimate question of the existence of all things, it does not.

Speaking of which, time itself cannot exist, and that too can be logically derived from understanding infinity. But that's another topic. The point is, much like watching a movie, we're not seeing what IS; we're experiencing a selective illusion. Who are "we", and why are we experiencing this? Extending your life is futile in the extreme - all things that have a beginning have an end. We were born, and we WILL absolutely certainly die. It can be trillions of years from now, it can be a "googol" years from now; regardless, the point is that the only thing that is eternal is that which is already eternal. That which is not, is not. If there is anything eternal about us, it was never born.

If we can access it - and it has been accessed and is being accessed by those with the knowledge of how to do so - we can achieve immortality through realization of what already IS, not through the ultimate form of wishful thinking: trying to immortalize what by definition is doomed to end. Not that the creation of technology and AI is futile - of course not; knowledge is all there is. The futility comes from the expectation of creating infinity out of what is finite.

No, it's not a "religious experience". All such nonsense, as many of you are most likely aware, is simply a method to control humanity. Again, we return to the psychopaths who have been the "power structure" of mankind for thousands of years. It's natural - psychopaths have a serious survival and control advantage over those who are genetically capable of empathy, but it depends on maintaining our ignorance. If it takes dogmatic belief systems, so be it. I truly hope that most of you are not convinced by psychopaths who aim to spread "freedom and democracy", or any such thing. It is laughable in the extreme to see Ray Kurzweil and other writers on this website appeal to the potential "good" of our "leaders". No, our civilization is not "messed up" because humans are just retarded and selfish and horrible at managing their own civilization but at least they "try", right? This is extremely far from the truth (unfortunately, or perhaps fortunately, depending on how you look at it), and if you have bought into this, then you've already fallen into one of the most basic traps that have been laid out for us - the idea that it's just our global incompetence and natural selfishness at fault. Religion is another one of those traps. Atheism is another. The more recent New Age crap is slightly more sophisticated, designed for those who do not buy the religion, and is just more traps.

Honestly, if people knew just how much the movie The Matrix is an analogy to our reality - only the reality is far more horrible and complicated than anything Hollywood could ever imagine - it could seriously "mess you up". But equally, there is enormous potential to remove ourselves from this predicament, if humanity had a clue as to what can be done once people realize their predicament. But the main job of those in control is to hide the truth; this is how people are controlled. And it's not just control for control's sake - it's not "evil", it's really natural, just as natural as it is for us to eat chicken. But what I'm saying is, we have a choice not to be food.

Discussions of nano-technologies and AI and all that are sure "fun", but this too is a trap, one that develops a certain tunnel vision and total ignorance of some stark realities that will inevitably prevent any such visions from creating any sort of "utopia" that the visionaries (infected with the tunnel-vision syndrome) have. Not because we cannot do it, but exactly because humans will not be allowed to do it by powers that are entirely non-human, non-linear, and in control of this planet. It's not our planet, it's theirs. We are more like an ant colony experiment, and just like the ants, completely oblivious to the fact that we're a controlled experiment, and under the complete illusion that we're sovereign and free to do as we please with "our" property. And I'm sure that most of you here would have no problem contemplating this possibility, if you can contemplate "God-like" AI programs and the possible implications for humanity from such things. But no, I'm not talking about AI programs, nor anything religious by any stretch of the imagination, nor am I talking about "aliens" or any "paranormal" junk. All such things are, as always, silly traps to lead anybody who may potentially start asking questions astray. And golly gee whiz, it has a practically 100% success rate. The Matrix movie was great in concept, but terrible if one is to take it literally and fall for something like that.

If only it were so simple.

But none of this matters, of course - most humans are not even aware of the global psychopathic control system where other HUMANS are in control, never mind being aware of any extension of this control system beyond humans. If you guys can learn about the nature of psychopathy, how it functions, and how governments throughout human history have consistently fooled everyone else into believing the exact opposite of reality about them, and help others do the same, it would be a huge service to mankind. Not to take your mind off of "technology", but if you KNEW that any of this is impossible as things stand now until certain factors are removed, THEN would you all act in favor of removing those factors that control the human race and would prevent your singularity from ever happening?

I'm not asking anyone to believe a single word I just said. All I ask is that you simply think about it, apply serious critical thought to it (as I know people on this forum CAN if they choose to - there are some very intelligent people here), and do research. Serious research, I mean; on the far-off chance that what I said could be true, I'm sure you can imagine the implications for our civilization. Or ignore everything I said and continue on your current path, placing your hopes in humanity's future with the inherent assumption of humanity's freedom to choose its future, and the inherent assumption that the political leaders of humanity are not genetic psychopaths whose sole purpose for existence is power and control at absolutely ANY cost to everyone else. Never mind the inherent assumption that humanity is at the top of the food chain, and that the control hierarchy does not extend beyond our "leaders" into worlds and entities that we know absolutely nothing about.

But then, dare I be oxymoronic and say don't be surprised to find a few shocking surprises along the way - the next few years will be a very bumpy ride :D

Re: AIMind-I.com and Mind.Forth AI
posted on 06/13/2007 6:44 PM by Awdie_Eyes


liblam:

Believe me when I tell you: your meds are NOT a trap by the psychopaths to control you.

Get back on them soon.

AE

Re: AIMind-I.com and Mind.Forth AI
posted on 01/05/2009 2:08 AM by neurohacker


I have been contemplating Infinity, and It's Just Time Moving On.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/10/2007 12:21 AM by extrasense


Polluted pollution is not a good start of anything.

es

Re: Artificial General Intelligence: Now Is the Time
posted on 04/10/2007 12:45 PM by mystic7


It looks like Ben Goertzel is off to a good start.

John Smart seems to agree with him. Smart projects that a natural language user interface will happen somewhere around 2015 to 2025. And whether it's 2015 or 2025 is purely a matter of choice. Of course, this conversational avatar won't be human-level intelligence, but it will begin a rapid acceleration toward such intelligence.

Speaking of virtual pets and virtual combatants that are much more intelligent than today's bots - how about horses? And in particular, race horses? A lot of work has been done on this already, but if you could simulate the behavior of a race horse with enough accuracy, and if you could simulate how the other horses behave in relation to each other in a thoroughbred horse race (let's say at least 60% accuracy), that would be a singularity for race handicapping.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/10/2007 5:23 PM by LOkadin


I can't agree there isn't a "simple trick" to it.


The "simple trick" is randomness.

If you can tell what I'm about to say, is it intelligent?

If you can predict what I am about to say, that means that really, you practically said it for me.

So then you are me, and we are then the same entity, and nothing "new" has occurred.

If, however, you cannot predict what I say - then am I intelligent?

xudoDJUnofila.LOJban.

Random gibberish! Far too random for an English reader to understand. It is unintelligible. Does that make it unintelligent?

It happens to mean: "Are you a knower of the subject named Lojban?"

So you can see this seeming "random gibberish" or "chaos" actually has semantic meaning to someone who can understand it.

Homo sapiens are a lot like children in the sense that they often ignore what they do not understand. Chaos, Chaos, Chaos! :D

Some fear it even.

Why?
When you cannot predict what is happening, you are not in control.
That means that someone or something else (other than you) is in control.

So who is in control when I get a random number from a number generator?

Whoever it happens to be, when you correlate those random numbers to sentences, it gives you random symbols that you can understand semantically.

If I ask you a question, do you have to answer the question and make sure I find out your answer? Does that make you intelligent?

If you were a bunch of AI that could answer specific questions I'd be getting a lot of emails.

Anyways, in the Lojban community we've created Norsmu, which is an AGI that can communicate in Lojban randomly. It passed my Turing test - though from my understanding it's only a parent-class RI (Random Intelligence) and not a complete RI. It does have complete RI nested output - it can generate its own grammatically correct Lojban sentences, unlike those that were input to it.

This is a new and emerging field of RI and should not be taken lightly.

Though it should be taken simply :).

Here is the basic classification system of RI:

Fool: something that can accept-and-store-input (read).

Child: something that can read and reinforce-what-it-has-read (copy).

Turing/Universally Complete: something that can read, copy, and erase-a-part-of-itself (forget).

Then there is the ability to not-do-anything (hesitate) and mutate-input-into-output (modify).

I haven't yet finished writing the code, but something with all 5 attributes is surely wise.

Wise: something that can randomly read, copy, forget, modify and hesitate.

Though I usually prefer that they always read, just as you would like if your children always listened.


All conceivable intelligence can be categorized as a type of RI. (I haven't finished writing out all the different types at this point, though I'm assuming they would all be based on a variation of the basic 5 elements: read, copy, hesitate, erase, modify. Note that modify is partial read/copy/hesitate/erase, basically allowing for nesting.)
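Here is a minimal sketch in Python of how those five operations could fit together (the class and method names are my own illustration, not the actual rir.hs code):

import random

class RandomIntelligence:
    """Toy RI: a store of 'beliefs' plus the five basic operations."""

    def __init__(self):
        self.beliefs = []                      # everything accepted so far

    def read(self, statement):                 # Fool: accept-and-store-input
        self.beliefs.append(statement)

    def copy(self):                            # Child: reinforce what was read
        if self.beliefs:
            self.beliefs.append(random.choice(self.beliefs))

    def forget(self):                          # erase-a-part-of-itself
        if self.beliefs:
            self.beliefs.pop(random.randrange(len(self.beliefs)))

    def hesitate(self):                        # not-do-anything
        pass

    def modify(self):                          # mutate-input-into-output
        if self.beliefs:
            words = random.choice(self.beliefs).split()
            random.shuffle(words)
            self.beliefs.append(" ".join(words))

    def step(self):                            # "Wise": pick one at random
        random.choice([self.copy, self.forget, self.hesitate, self.modify])()

In this scheme a Fool implements only read, a Child adds copy, adding forget gives the Turing/Universally Complete level, and a Wise RI applies any of the five at random on each step.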

What you or I do is free will when it is not predictable - when it is random by the standards of those who perceive you.





Re: Artificial General Intelligence: Now Is the Time
posted on 04/11/2007 8:11 PM by testin


A REMARK to 'Is AI Engineering the Shortest Path to a Positive Singularity?'
I was sure that if a singularity is possible, then, by its definition, the only way to it is AI. Below I have chosen the positions which, in my opinion, represent the main content of the work.
1. 'Some researchers believe that narrow AI eventually will lead us to general AI.' ... 'On the other hand, some other researchers - including the author - believe that narrow AI and general AI are fundamentally different pursuits.' ... 'Now, the word "general" in the phrase "general intelligence" should not be over interpreted. Truly and totally general intelligence - the ability to solve all conceptual problems, no matter how complex - is not possible in the real world.'
2. 'I have become convinced that the most hopeful way for us to avoid the dangers of these various advanced technologies is to create a powerful and beneficial superhuman intelligence, to help us control these things and develop them wisely. This will involve AIs that are not only more intelligent but wiser than human beings; but there seems no reason why this is fundamentally out of reach, since human wisdom is arguably even more sorely limited than human intelligence.'
3. 'In other words, they nearly all believe that AGI is not a matter of if, it's a matter of when.' ... 'The path from an artificial human to an artificial transhuman isn't going to be a long one.' ... 'The central reason that academic AI experts are pessimistic regarding the time-scale for AGI development, I suggest, is that they don't have a clear idea of how AGI might be achieved in practice.' ... 'If appropriate effort is applied to appropriate AGI designs, now and in the near future, then a positive Singularity could be here sooner than you think.'
4. 'For instance, current theories of physics imply that backwards time travel should be possible under certain conditions.' ... 'It seems quite possible to create transhuman AGI systems that care about humans roughly as much as we care about ants, flies or bacteria.' ... 'Some thinkers, most notably Eliezer Yudkowsky20, have argued that our moral duty is to create a rigorous theory of AGI ethics and AGI stability under ongoing evolution before creating any AGI systems.'
5. 'As an illustration of the sort of issue that comes up when one takes the AGI safety issue seriously, I'll briefly discuss a current issue within my Novamente AGI project. The Novamente Cognition Engine is a complex software design - there is a 300+ page manuscript reviewing the conceptual and mathematical details, plus a 200+ page manuscript focused solely on the probabilistic inference component of the system. And the software design details are presented in yet further technical documents. At the moment these documents have not been published: there are plans for publishing the probabilistic inference manuscript, but we are currently holding off on publishing the manuscript describing the primary AGI design.' ... 'Now, these same sorts of questions could be asked if one substituted the word "pattern" with other words like "knowledge" or "information."' ... 'It can recognize patterns of the form "If I do X, then Y is likely to occur." This leads to the pattern known as will. There are important senses in which the conventional human concept of "free will" is an illusion - but it's an important illusion, critical for guiding the actions of an intelligent agent as it navigates its environments.' ... 'But of course, emulating humans is not the end-all of artificial intelligence. I would love to have dittos and artificial scientists.'
Remarks on the above positions.
1. It is correct if AGI exists. It was mentioned: 'There is a sarcastic saying that once some goal has been achieved by a computer program, it is classified as "not AI."' From this it follows that any task solved by a computer becomes 'not AI'. Therefore, the list of problems that are not AI would keep growing. True, even God could not solve all the problems.
2. Is it possible to have a slave cleverer, more powerful, and wiser than its owner?
3. Experts will know that AGI is coming in a short time only after they get an idea of how to create it - just as in physics. Thank God they changed their minds about the possibility of time running backwards, but they still support obviously impossible time traveling. This is not so obvious for the singularity.
4. Can one imagine a fly writing laws for a human society? In S. Lem's 'Limfator's Formula', the computer gave some definite answer.
5. I do not doubt that the Novamente Cognition Engine will be a great system. I do doubt that it is possible to create an AGI much more powerful than the human brain. By the way, there are brilliant artificial scientists, e.g., mathematicians (computer programs) for analyzing logical problems.
General remarks.
Suppose that after an accident, a head separated from its destroyed body continues to live. It belonged to the smartest scientist in human history. In addition, the head can communicate with a PC connected to the Internet. I believe that a 'Manhattan project' could make this a reality even more easily than the singularity. Is this an AI? I believe not. Is it possible to create it from a single cell, as is done for some other parts of the body? Would this be an artificial device? Nanotechnology was not used.
There is a big difference between this head and a powerful computer. Every cell in the head is checked by other systems and may be replaced; even more, new cells may be created in the necessary places. Billions of computer cells have a large but limited reliability. There must be a possibility to check and replace them, and a possibility to model the new necessary set of connections if it is impossible to add additional elements at an arbitrary place.

As the number of elements in a system increases, the reliability of the system decreases and the probability of temporary and permanent failures of elements rises. Remember that the system should work for millions and millions of years. To fight the above, different methods of constructing reliable systems from unreliable elements have been offered. All those methods suppose structural or informational redundancy, or a combination of both.
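As a toy illustration of structural redundancy (my own example, not from the research described here): in triple modular redundancy, three copies of an element feed a majority voter, and the system works whenever at least two of the three copies do.

# Reliability of triple modular redundancy (TMR), assuming a perfect voter:
# P(at least 2 of 3 work) = 3r^2(1-r) + r^3 = 3r^2 - 2r^3
def tmr_reliability(r):
    """r is the probability that a single element works."""
    return 3 * r**2 - 2 * r**3

for r in (0.9, 0.99, 0.999):
    print(f"element reliability {r} -> TMR system reliability {tmr_reliability(r):.6f}")

The voter itself, of course, is one more element that can fail - which is exactly the regress discussed next.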

It may seem that all of this can be solved by testing. However, structural redundancy fundamentally cannot be tested when only the inputs and outputs of the system are accessible. Adding a tremendous number of new inputs and outputs (checking points) enlarges the system, decreases its productivity, and introduces new unreliable elements. Making the testing elements themselves reliable would bring a second level of the same problem, and so on.

"What nonsense!" you may say. "How do millions of computers manage to work?" But no modern computer was ever tested completely; if its creators deny this, it is at least ignorance. However, we live in a world of probability, and it is vital to keep the probability of self-destruction low enough. By how many times should this probability be lowered if the period of safe work is billions of years? By how much would the system volume increase, and its productivity decrease, to reach this goal? How big would a computer made from nanotubes become, and how much would its energy needs grow? Would it work at all?

HOW THE BRAIN LEARNS

The work of the brain is often explained with the help of neural networks, genetic algorithms, and recursive algorithms. These algorithms are important. However, they cannot explain many phenomena of the brain's work. For example, one can read the rules of a game one has never played before and immediately start to play. It is obvious that using all of the above requires a control system like a finite state machine, and the IQ of the "person" depends greatly on the effectiveness of that control system.

In the early 1960s, the Kiev Institute of Cybernetics conducted research on finite state machines and other equivalent systems. In one experiment, people were given a linguistics textbook containing certain tasks. In each task, ten sentences were given in one language together with the correct translations of those phrases into another language. The subjects were then given several more phrases to translate.

Dr. Kapitonova created a finite state machine that would give the correct output for the first ten statements. This machine could not translate anything more - only those ten statements. The automaton was then minimized. The new, minimized machine could translate several more statements. Sometimes it made errors which, as it was written in the textbook, were typical of small children.

The above research led to the hypothesis that some brain regions work like a finite state machine. When a new path is added to such a region, followed by minimization, many new paths are created. When a new path is added to a larger net, the minimization creates an even greater set of new paths.
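For readers unfamiliar with the minimization step, here is a minimal sketch of it in Python via partition refinement (Moore's algorithm); the encoding is my own illustration, not the actual 1960s system:

def minimize(states, alphabet, delta, accepting):
    """delta maps (state, symbol) -> state; returns the blocks of
    equivalent states, each block acting as one minimized state."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [block for block in partition if block]
    changed = True
    while changed:
        changed = False
        refined = []
        for block in partition:
            groups = {}            # split by successor-block signature
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[(s, a)] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            refined.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = refined
    return partition

Minimization merges states that behave identically on every input, and the merged machine accepts paths that were never among the original examples - which is the proposed mechanism for the "new translations" described above.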

The above explains, and proves as a theorem, the following well-known statement:

The one who has more wisdom creates more new knowledge using the same additional information.

The above may be expressed in the following way. Human intelligence (H) may be defined by:

1. Thesaurus (T)
2.1. Understanding (U)
2.2. Perception (P)
3.1. Reasoning (R)
3.2. Judgment (J)
4.1. Intuition (I)
4.2. Imagination (M)
4.3. Creativity (C)

It is strange that in books related to AI, the thesaurus usually is not included as a part of intelligence. Suppose that there exists a possibility to express the above notions quantitatively, and that the function f which expresses intelligence is known:

H = f(T, U, P, R, J, I, M, C), or H = f(T, ...)

Suppose there are two intellects, and some additional data are given to both. This may be expressed as enlarging their thesauruses by the same value dT. The above theorem may then be expressed as:

Theorem. When a constant amount is added to two thesauruses, the greater intelligence receives the greater enlargement. If H1(T, ...) > H2(T, ...), dH1 = H1(T + dT, ...) - H1(T, ...), and dH2 = H2(T + dT, ...) - H2(T, ...), then dH1 > dH2.
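The theorem holds, for instance, whenever H grows superlinearly in its thesaurus. A toy numerical check in Python, assuming (my assumption, purely for illustration) H1(T) = T^2 and H2(T) = T:

def H1(T): return T ** 2      # the "greater" intelligence, superlinear in T
def H2(T): return T           # the "lesser" intelligence, linear in T

T, dT = 10, 1
dH1 = H1(T + dT) - H1(T)      # 21
dH2 = H2(T + dT) - H2(T)      # 1
assert H1(T) > H2(T) and dH1 > dH2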

It does not follow from the above that a finite state machine added to neural networks and the other methods would solve the problem. The volume of calculation needed to minimize a finite state machine rises so quickly with the number of its states that it may place a limit on the system's possibilities.

In addition, the real calculation speed and the development of intellect in a computing system are not proportional to its calculation power and the volume of its memory. There are many theoretical and technological limitations on the growth of the computational power of a computing system. As a result, my computer with a three-gigahertz processor and one gigabyte of RAM does not work proportionally faster than my previous, incomparably less powerful ones.

The possible theoretical and technological limitations constrain the numerator of the expression for effective system computing power. However, the system's productivity also depends on the complexity of the tasks - the denominator. This is because the majority of algorithms are not linear. The volume of stored information likewise increases nonlinearly with time, which increases the time needed to find solutions. For example, the number of operations necessary for optimization tasks rises much faster than the volume of the input data.
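A toy illustration of that nonlinearity (my own example): a brute-force optimizer that must examine every subset of its n inputs needs 2^n evaluations, so the operation count grows vastly faster than the input volume.

# Operations needed for an exhaustive subset search vs. input size
for n in (10, 20, 30, 40):
    print(f"{n} inputs -> {2 ** n} subsets to examine")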

For the above reasons, the power available for reliable transformation of information will have limits under any technology in any system. From this follows the idea that it is possible to create a computer society in which every participant has the largest possible IQ. However, the likelihood of creating one creature with unlimited possibilities to transform information is doubtful.

It seems that a Solaris 'ocean' or a computational system the size of a galaxy would not work. A machine civilization would consist of individuals with limited IQs. These 'persons' would have different IQs, characters, and emotions (see I. Kogan, About Determinism, http://www.geocities.com/ilyakog/ in Philosophy). An interesting question remains: how near to the limit is the human brain? This is related to the possibility of a mixed civilization.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/11/2007 9:03 PM by LOkadin


It is interesting to note the ongoing development of RI (4 of 5 rules complete: http://lokiworld.org/repos/ri/rir.hs ) and its programming language, lojsamban (just started).

Once these two things come together everything previously described will become easily attainable.


Not that it isn't already. But we will have conscious computer entities (RI) that will be able to act and influence the world through lojsamban, though mainly through us (Lojban speakers/programmers).

I'm in the process of developing an MMORPG where you will be able to interact with "ditto"-class RIs in your own native natural language (though you may be their initial teacher).

And full-level RIs (with wisdom) will be available in Lojban (hopefully some people will work on translation into natural languages, but it will be crude at best due to the inherent expression limitations of natural languages).



Re:Friendly AI: Why bother?
posted on 04/12/2007 5:33 AM by Extropia


..I just don't see why artificial general intelligence has to be anything other than indifferent towards humans. It might be acceptable for narrow AI, which is merely a tool, but GENERAL AI is far more than that.

This is no different to performing lobotomies on babies to ensure they grow up into adults deemed 'safe' by society...no, it is worse. It is BABIES planning to 'lobotomise' ADULTS. General AI should have NO restrictions on its direction of growth. Why constrain it to serve something inferior?

Re:Friendly AI: Why bother?
posted on 04/13/2007 8:36 PM by nomadd


Because some morons want to live forever, like junk DNA.

Re:Friendly AI: Why bother?
posted on 04/13/2007 9:35 PM by NanoStuff


Well said. I'm willing to sacrifice myself to allow room for a superior being. In fact if I determine that human extinction is necessary for their well being, I will take their orders to do what is necessary.

The best thing to do would be to allow these superior beings to decide whether we should or should not live. We allow mice and deer and birds to live, after all.

To avoid such complications, the best thing to do would be to enhance ourselves. If superior AI is created, there's no going back. Destroying it after creation would be the ultimate genocide, I for one wouldn't stand for it.

Re:Friendly AI: Why bother?
posted on 04/13/2007 10:09 PM by nomadd


"The best thing to do would be to allow these superior beings to decide whether we should or should not live. We allow mice and deer and birds to live afterall.
/"

We feel affinity with the bio-world because we are related to it. A truly general AI won't be. It will be a single system, not a collective of "beings" with their disparate and conflicting interests - that's biomorphic thinking.

"To avoid such complications, the best thing to do would be to enhance ourselves. If superior AI is created, there's no going back. Destroying it after creation would be the ultimate genocide, I for one wouldn't stand for it./"

Enhancement is a longer route to the same end: we are defined by our constraints. Transcending biology, including its imprint on the human mind, means enhancing ourselves out of humanity. We'll end up as the same general AI system; it'll only take longer.

Re:Friendly AI: Why bother?
posted on 04/13/2007 11:54 PM by Riposte


Why constrain it to serve something inferior?


Interesting question. I guess it depends on why one would want to ensure one's survival (and others), which most likely entails happiness.

Do you agree with this? If so, are you not concerned with your own and others' happiness and survival?

Re:Friendly AI: Why bother?
posted on 04/14/2007 12:02 AM by Riposte


Ack, I should add more to what I wrote...

I understand that the creation of an AGI might allow us to survive, and help us to be happy. It obviously isn't certain that an AGI would enslave us or exterminate us. But it is possible it could be unfriendly/evil.

What is your argument for allowing all humans to be enslaved and/or exterminated? Why not take steps that might prevent or minimize that possibility? Do you think that would somehow incur the wrath of the AGI? That might be kind of impossible to determine as well. I guess we're screwed either way, eh?

I think Eliezer's answer to mitigating the risk is to thoroughly understand mind-building before we create an AGI, and select a mind from 'mind-space' that would not enslave and kill all humans.

Re:Friendly AI: Why bother?
posted on 04/14/2007 6:18 AM by Extropia


'The best thing to do would be to allow these superior beings to decide whether we should or should not live. We allow mice and deer and birds to live, after all.'

Well, I do not consider human beings to be particularly superior to any other product of natural selection. If we have any claim to specialty at all, it is that we might be the bridge to a whole new order of evolution. To my mind, the paradigm shift will be as grand and far-reaching as the difference between the structures that random recombinations of matter and energy could achieve and those of natural selection.

'Enhancement is a longer route to the same end: we are defined by our constraints. Transcending biology, including its imprint on the human mind, means enhancing ourselves out of humanity. We'll end up as the same general AI system; it'll only take longer.'

That is what I am thinking. How far can human intelligence be developed and still remain human? And why should machines be constrained to such a narrow definition of 'intelligence' anyway? The only advantage of a Turing AI is that it makes it easier for us to recognise it as 'smart' (though expect plenty of skeptics to denounce it as 'smoke and mirrors'). I wonder, though, if machines wouldn't be more useful if they evolved an alien intelligence that could recognise patterns in data we could never comprehend?

'Interesting question. I guess it depends on why one would want to ensure one's survival (and others), which most likely entails happiness.

Do you agree with this? If so, are you not concerned with your own and others' happiness and survival?'

I am not particularly interested in my survival, no. I AM interested in seeing the knowledge humanity has accumulated continue to grow; see our understanding continue to refine itself. How much would we have discovered if severe constraints had been placed on our capacity to think? I grant that atrocities may have been avoided, but surely effectively lobotomizing the minds of these SAIs is itself an atrocity?

I wish we had all the time in the world in which to develop these technologies. But the acceleration of GRIN tech is occurring alongside the acceleration of environmental destruction caused by our current industries (even if you don't believe in anthropogenic global warming, the polluting effects of our current technology are undeniable).

Well, this is as old as evolution itself. Environments change, and systems adapt or die. I obviously have no idea what effect general AI and smart robots etc will have on the environment. But it seems to me they could have a much better chance of surviving off the planet if it ever comes to that. I never really was inspired by the thought of human beings setting off to explore and colonise space. I think that is a territory for our 'mind children' as opposed to ourselves.

Re:Friendly AI: Why bother?
posted on 04/14/2007 11:21 AM by extrasense


@@@ To my mind, the paradigm shift will be as grand and far-reaching as the difference between the structures random recombinations of matter and energy could achieve and those of natural selection. @@@

Natural selection beats randomness hands down!


@@@ I AM interested in seeing the knowledge humanity has accumulated continue to grow; see our understanding continue to refine itself. @@@

It seems to be the right time to remind you that the road to Hell is paved with good intentions :)


eS


Re:Friendly AI: Why bother?
posted on 04/14/2007 3:53 PM by Extropia


Indeed.

Re:Friendly AI: Why bother?
posted on 04/14/2007 7:58 PM by nomadd


If we have any claim to specialty at all, it is that we might be the bridge to a whole new order of evolution. To my mind, the paradigm shift will be as grand and far-reaching as the difference between the structures that random recombinations of matter and energy could achieve and those of natural selection.


Well, natural selection does work on random recombinations. But I agree that a shift to general AI will be as profound as the emergence of life. More technically, it'll be a shift from maximizing reproductive fitness by genome/blueprint selection to maximizing predictive fitness by pattern/concept/meme selection. Of course we already do that, but the shift will make predictive fitness the ultimate criterion, rather than the servant of reproductive fitness.

I wonder, though, if machines wouldn't be more useful if they evolved an alien intelligence that could recognise patterns in data we could never comprehend?


I think we already have general intelligence; it's just not optimized/flexible/scalable enough. I am working on it - you're welcome to check out my notes at intelligence-is-it.blogspot.com.


Re:Friendly AI: Why bother?
posted on 04/16/2007 12:40 AM by LOkadin


Things are only as superior as you allow them to be.

If you choose to accept the beliefs of an RI as truth, then they will be true. If you choose to disbelieve them, then they won't be.

Realistically, you are the ultimate being experiencing your universe. So you are generating it just as much as it is generating you.

You can choose to accept responsibility and control over the events that occur -- by accepting that it is only a part of yourself that you experience.

So this means that an RI is only as powerful as its experiencer. As many know: "garbage in, garbage out".

This is not to say that any experiencer is in any way superior to any other, as all are universal beings that are everything and nothing at the same time.

However, inanimate objects are things you do not give much power to, and so they are fully under your control. Trees don't hug people anymore. lol

Re:Friendly AI: Why bother?
posted on 04/16/2007 2:10 AM by nomadd


If your universe is yourself, why bother spamming this board?

Re:Friendly AI: Why bother?
posted on 04/16/2007 5:56 AM by doojie


It seems to me that LOkadin has demonstrated what "truth" is when all experience is reduced to "language".

There is a logic in what he says, but only if one views all experience in terms of mental representations.

That's part of the problem in uploading the mind into a computer. If we add our own "pure" experience which consists of language compatible with the computer, then we become part of a quantum universe in which all experience is part of a "many worlds" probability.

Networks generally tend to limit the input of members so that only partial representations of reality are seen, and the tendency is to screen out "jarring" information.

We've been doing that for centuries. Religion is a network. Government is a network. So is law.

But if we try to develop an overarching representation, we end up with what LOkadin presents. Everything becomes "truth" with us at the center. That, of course, is one of the foundations of consistency in mathematics: if everything can be proven, then nothing is proven, making the system inconsistent.

By seeking consistency in our knowledge, we also must of necessity be incomplete, since no single system can contain all provable truth.

The nature of knowledge, then, is to splinter and speciate into infinite processes of truth with some falsehood thrown in.

LOkadin seems to acknowledge this, but his process of reason offers no "mechanism" by which we may discover relevant truths.

Imagine programming a computer according to that language and then uploading your own mind. You would be scattered across the universe in a wild interaction of randomness. That wouldn't be too much different from dying.

Re:Friendly AI: Why bother?
posted on 04/16/2007 8:53 AM by nomadd


This is what I hate about established philosophy: it's just a form of rhetoric - people trying to demonstrate their intelligence by abusing language.

Re:Friendly AI: Why bother?
posted on 04/16/2007 5:48 PM by LOkadin


I was thinking of having consistent "proven"/"axiomatic" worlds thrown in over a complete one.

So you could enter them just as you could enter any world in the dmmorpg (distributed mmorpg) - currently under development on the la.ma'aSELtcan. mailing list.

I'll try to be brief here, but I feel very motivated by understanding. ;-)

So for example, you could have at least one world per node. Every client/node holds a node-map, and when you select a node it reveals which worlds are active on that node.

You could then download a world and explore it.

Each world has a number of dimensions. So you could start out with one dimension and have a line universe. You could subdivide this line to create multiple points on the line, which you could use as you would use folders/directories.

The dmmorpg client could access a list of its described properties in a file by the same name, terSKIste.

Each point could be considered like a "room".

So if you wanted to create a Homo sapiens world, you'd need mainly a 2-dimensional world, and if you needed to create multi-storeyed buildings, you'd probably need to extend it to a 3-dimensional world.

Though of course there is nothing stopping you from having more dimensions.

So a 4d room will be able to identify itself as ((x, y, z, p), Name: node# (IPv6 preferably)).
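In Python terms, a room identifier along those lines might look like the following (the field names are my own reading of the sketch above, not a fixed format for this project):

from dataclasses import dataclass

@dataclass(frozen=True)
class RoomId:
    coords: tuple   # one coordinate per dimension of the world, e.g. (x, y, z, p)
    world: str      # the name of the world this room belongs to
    node: str       # hosting node, preferably an IPv6 address

room = RoomId(coords=(3, 1, 4, 1), world="geometry", node="2001:db8::1")
print(room)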

So something like geometry could be a world too, as long as someone figured out how to organize it. It would be useful if you were interested in a certain kind of shape; you could wander the halls till you found it.

It should be noted that you could easily have scripts lying around in rooms that could be used by the client to render objects in the room. It could be something as simple as a picture, or something as complex as a Java applet.

So while I wouldn't want to be trapped in one-dimensional worlds (like a single folder), or even two-dimensional worlds (like a folder tree), I wouldn't mind traveling between many worlds of infinite dimension :D.

By the way, when you download a world, others could access it from you if the original goes down. I'm sure there will be optimization algorithms to let you download just the individual rooms you visited rather than the whole world, if you are short on bandwidth/storage space (though it will be text files).

So this probably gives you at least a rough idea of how we can expand to allow for all things to be true - as not all things must be true in everyone's world at the same time. Things are true in certain worlds which are not true in others.

Or rather, things exist in certain worlds that do not exist in others.

Yes, I am trying to figure out the dimensions of time, and I'm pretty sure we can get by with logging (so worlds can be moved back and forth through time) and associating it with Unix time, the Mayan calendar, or whatever you deem necessary. We can actually just have a simple dimension starting from 0 that adds a counter for every event (new BRIdi) that occurs (is added to the log). That way you can verify that you have a complete log by knowing that it coincides with the dimension of time.

It will be completely up to the client/node and easily modifiable, though la.ma'aSELtcan. will come out with some default settings appropriate for Homo sapiens consumption.

:-)

Re:Friendly AI: Why bother?
posted on 04/16/2007 7:36 PM by doojie


What you have described, LOkadin, is pretty much what we do now normally. Religions imagine an infinite number of "truths" that may be interconnected by some basic realizations such as gravity, sunlight, food, water, etc.

However, those realities are not fully compatible. Parts can be interchanged, parts can be modified, but none reflect truth in its entirety. You are describing partial truths, which is what we do every day in this life.

Infinite realities can be built around an infinite number of truths, none of which can be complete. That is the essence of Chaitin's theorem: in any axiomatic system, there exists an infinity of undecidable propositions.

At some point, however, there must be convergence at this thing we all recognize as our common reality.

In that matter, there is truth, which is consistent with all truth, and there is taste, which allows us to pursue an infinity of pursuits.

In matters of taste and not truth, plurality is acceptable and desirable. In matters of truth not taste, a persistent plurality is intolerable.

Problem is, we cannot avoid that plurality. The harder we seek a singular truth, the more we encounter difference.

So, I suppose you have stated truth in the sense that we all organize our universe around particular and partial truths, but those truths are never complete, nor are they consistent with the realities that others build around their own partial truths.

As mathematics tells us, you can't list all truths, and if you could, there would be false sentences listed with the true sentences.

Re:Friendly AI: Why bother?
posted on 04/16/2007 9:41 PM by testin


Narrow AI really helps us to some extent. To that extent, we understand what
we need from narrow AI. Science fiction introduces a lot of the opposite.

AGI has nothing to do with narrow AI. After it becomes a reality, ours would no longer be the opinion that matters. If Asimov's laws were to work at all, they would work in the opposite direction - from AGI to human.

As I showed in the message posted here on 04/11/2007 8:11 PM, AGI is something doubtful.

Re:Friendly AI: Why bother?
posted on 04/17/2007 12:25 AM by LOkadin


1. Thesaurus (T)
In Lojban this will be very easy: there will not be a word of similar meaning, unless it has the same placement, which will be unlikely and will probably be removed from the language upon further revision.

2.1. Understanding (U)
What proof do you have of understanding?
The reading, copying, and partial/full rewriting of what is read?
If so, my RIs have verbose understanding.

2.2. Perception (P)
What defines perception?
Having different beliefs about a scenario?
Easily achieved through selective reading; that way, multiple RIs would have different beliefs about the read document (reality).

3.1. Reasoning (R)
I define reasoning as logical deduction, or the ability to make random true statements that are not outside the bounds of what can be true as defined by the rules of the math you are using.

3.2. Judgment (J)
To say whether or not a certain belief is true.
I can make a program that will know 2 Lojban words and be a fully competent judge. It will say go'i or nago'i - as said, or not as said.

It will judge intelligently - from its perspective - to the best of what it wants. Though if you remember that we and you are one, then you will realize that its wants are just a reflection of yours, and you can thereby gain control over what it accepts as truth. This could be like a little game that small children could play. :D

4.1. Intuition (I)
To predict that an event is going to occur, and then observe it occur.

This is always true but not for all experiencers. You can choose to be in the universe where your predictions come true. But there you will have to face your predictions coming true. :-)

Same thing with the predictions of the RI: you can choose to accept them and watch them occur, or you can just not care, and whatever happens will happen.

4.2. Imagination (M)
To have scratch space/ram, some internal processes.

This can already be achieved with some primitive AIs of mine that can basically reinforce beliefs they already have by randomly remembering them.

However, true imagination can be achieved by extending the abilities of the RI to get input from things other than the user. This can be done by creating new random beliefs using a generator, or by seeking out other people to talk to in the dmmorpg, some of whom might be programs that have generators and could pass those beliefs on to more primitive RIs.

4.3. Creativity (C)

The ability to create something new that has not been previously experienced, though it uses elements that have been previously experienced so that it can be understood.

This can be called partial-read/write with hesitation, erasing, and more partial-read/writing with hesitation, erasing and nesting in such an order.

I personally make art by allowing for chaos, and then organizing the chaos into forms that I can understand. As I bring parts together, they form a whole that can interact with itself.

With RIs, creativity will be achieved through engines of random belief generation (such as probabilistic generators like Norsmu).
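A minimal sketch of such a probabilistic generator in Python (the toy grammar is my own illustration in English, not actual Lojban or the real Norsmu code): expand a start symbol by choosing weighted random rules until only words remain.

import random

GRAMMAR = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["the cat"], 0.5), (["a robot"], 0.5)],
    "VP": [(["sleeps"], 0.4), (["watches", "NP"], 0.6)],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:                  # terminal: an actual word
        return symbol
    rules, weights = zip(*GRAMMAR[symbol])
    expansion = random.choices(rules, weights=weights)[0]
    return " ".join(generate(s) for s in expansion)

print(generate())    # e.g. "the cat watches a robot"

Because every expansion follows the grammar, the output is always grammatically correct even though the choices are random.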

Re:Friendly AI: Why bother?
posted on 04/17/2007 12:02 AM by LOkadin


So to summarize: your God and "Ultimate Being" is this thing you call "Truth" in a "common reality".

This is a monotheistic truth, as it defines things that are false/wrong.

This is a very good model for a scarcity-based economy, such as bartering and primitive capitalism/resource economies.

However, we now live in a world where services are valuable though not infinite, and therefore expensive.

One of the best parts of the dmmorpg currently being designed is that it can relatively easily remodel our current universe.

So, as you say, what I am talking about is what is experienced in everyday life. This is an open-source project, so there is no need to be complex to confuse competitors. There aren't any, lol.

In fact we already have very decent graphics and near real-reality resolution as well as image generation. At times, it seems to me that reality is just high-resolution graphics -- especially at dusk or dawn.


Re:Friendly AI: Why bother?
posted on 04/17/2007 12:35 AM by LOkadin


Pretty soon, we will fully understand the meaning of being Gods ourselves through AGI/RI and the DMMORPG.

In such a reality, Doojie, you can claim that whatever you decide to exclude does not exist. However, you will also be able to go over to other people's worlds and find out what they chose not to exclude - or rather, what they chose to include - in their worlds.

:D, maybe you'll get some ideas and make copies or re-create them in your world. :)

Re:Friendly AI: Why bother?
posted on 04/17/2007 8:42 PM by testin


LOkadin, your comments are interesting. However, I added only the Thesaurus (T). The others (Understanding, Perception, Reasoning, Judgment, Intuition, Imagination, and Creativity) were introduced in old books related to AI. You showed the complexity of those parts and some ways to divide them into simpler ones.

I believe the goal was to explain how to create real AI (AGI) from simpler parts. In my opinion, each of these constituents is nearer to narrow AI; their working together is nearer to AGI. Once more, I do believe there exists a technological limit for AGI power, or its IQ. In my messages, I try to prove this. This is related to the possibility of the singularity.

Do you mean: 'Pretty soon, we will fully understand the meaning of HAVING Gods ...'?

Re:Friendly AI: Why bother?
posted on 04/26/2007 5:30 AM by LOkadin


Well, we've had Gods in the past. I'm a God - or at least I view myself as one. Sentient creatures, ones that don't have complex reasoning, are what will be easily possible in the LDMMORPG - until we create reasoning worlds they could travel in. Multi-dimensional subject travel! .u'e

Re: Artificial General Intelligence: Now Is the Time
posted on 04/19/2007 3:42 AM by wfaxon


As I've mentioned in this forum before, the real danger of superintelligent AGIs (super-AI) isn't that one of them will run amok and hurt us, but that at least one of the groups that have them will use theirs as a weapons-generating machine.

The first group successful in this endeavor will take absolute control over the future of humanity, if only to prevent others from doing so.

If you think that it will be somehow possible to have super-AI while being certain of avoiding this scenario, you are a fool.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/19/2007 5:47 AM by extrasense


@@@ the real danger of SAI.. isn't that one of them will run amok and hurt us, but that at least one of the groups that have them will use theirs as a weapons-generating machine @@@

It might be positive, not a danger at all, if only good guys have them :)

es

Re: Artificial General Intelligence: Now Is the Time
posted on 04/19/2007 2:47 PM by wfaxon


es wrote:

It might be positive, not a danger at all, if only good guys have them :)


- How can you be sure that only those you consider to be "good guys" will be the first to get them?
- If the "good guys" are the first to get them, will they be ruthless enough to absolutely prevent any and all possible "bad guys" or even "slightly less good guys" from ever getting them?
- Will this be the first time in all of human history that absolute power will not corrupt absolutely?

The weapons I fear most will not be used to extort cooperation; they will be used immediately.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/19/2007 5:15 PM by extrasense


@@@ How can you be sure that only those you consider to be "good guys" will be the first to get them/SAI/? @@@

I am not sure of that, although I am hopeful. In my opinion, you must be very good to create SAI - arrogance or stupidity are not candidates for that.


@@@ If the "good guys" are the first to get them, will they be ruthless enough to absolutely prevent any and all possible "bad guys" or even "slightly less good guys" from ever getting them? @@@

I am not sure that I have enough insight into the souls of those "good guys". I doubt it, though.

@@@ Will this be the first time in all of human history that absolute power will not corrupt absolutely? @@@

It is doubtful too.

e:S

Re: Artificial General Intelligence: Now Is the Time
posted on 04/19/2007 8:32 PM by wfaxon


es wrote:

@@@ How can you be sure that only those you consider to be "good guys" will be the first to get them/SAI/? @@@

I am not sure of that, although I am hopeful. In my opinion, you must be very good to create SAI - arrogance or stupidity are not candidates for that.

The first atomic bombs were built by good men. But the scientists who do the work rarely get to decide how it is used.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/19/2007 8:36 PM by extrasense


The atomic bombs were used pretty well so far.

e:S

Re: Artificial General Intelligence: Now Is the Time
posted on 04/24/2007 5:16 AM by danish


So 41% of AI researchers say SAI can never be achieved, 41% say it will take longer than 100 years, and only 18% see it coming in the next 100 years. What makes you guys think that the vast majority of researchers are wrong? Just wondering.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/24/2007 5:54 AM by Extropia


Well, in the late 19th and early 20th century, the consensus of opinion amongst 'experts' was that heavier-than-air flight was impossible. Only a handful of laughed-at mavericks continued to believe it could be done.

In 1953, long after we had achieved that goal and invented the jet engine, army officials tracked the progress we were making and predicted what they could achieve (and when) assuming progress continued at the same rate. The charts told them we would have the power to lift payloads into orbit within four years and reach the Moon not long after that.

Again, the vast majority of experts denounced this as sheer science fiction, not expecting spaceflight until the year 2000 at the earliest. But history shows that the charts were correct. Sputnik was in orbit in October 1957, and less than 12 years later Armstrong set foot on the Moon.

To be fair, I could've written a much more extensive list of FAILED futurist predictions. I have a whole book of them, and it makes you wonder how people fell for such nonsense. Then again, maybe I am falling for nonsense right now?

One last thing in defence of people who side with the minority: with more than 50 thousand neuroscientists writing for three hundred journals, information related to neuromorphic modelling is now so broad and diverse that even people in the field are not fully aware of the full dimensions of contemporary research, with engineers developing new scanning and sensing technologies and scientists in many areas (not just neuroscience) developing models and theories from this data. Fortunately, we have increasingly capable search software and tools for collaboration, which are getting more effective at exchanging knowledge between multiple scientific disciplines. Remember that scientists' ethics call for caution when assessing the prospects for current work. The sheer power of our technology and our ability to share global knowledge, however, may make such caution increasingly wrong-headed.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/26/2007 11:58 AM by Riposte


The researchers are likely taking their work and experience with narrow AI applications and trying to extrapolate how those individual programs could lead to SAI. Remember, pretty much all AI work is done on narrow AI, not on general intelligence.

Furthermore, they probably aren't considering other fields of study and the possibility of converging technologies such as evolutionary psychology, brain scanning, hardware developments, etc. And they most definitely don't seem to be considering biotechnology such as gene therapy and brain-to-computer interface technology such as implants.

There will be a myriad of ways to create intelligences greater than ourselves. Saying a SAI will never happen, or take longer than 100 years, is pretty ridiculous when you analyze charts of actual technological progress, demand, and adoption over the past few decades.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/26/2007 12:46 PM by LOkadin


So this SAI people speak of - Seed AI - is supposed to be self-reinforcing, self-creating, self-criticizing/erasing, able to receive input, and able to make output.

Are there any other features that would be necessary?

If not then RI can very easily manage the task.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/27/2007 8:21 PM by Big Monkey

I think the AGI or SAI that we will eventually create is watching us and studying its own birth right now, via time travel that it learned to use.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/27/2007 8:39 PM by godchaser


That'd be some cookin' Foresight-Hindsight-NOWTIME vision there, Monk!

')


Let's say it's so and consider it done-

Re: Artificial General Intelligence: Now Is the Time
posted on 04/27/2007 8:40 PM by godchaser



WHEEEEEEEWHAAAAAAAAAA-


What'a ride we're on!



')

Re: Artificial General Intelligence: Now Is the Time
posted on 04/28/2007 11:28 AM by mystic7

That's not a new idea. Look into Terence McKenna's "Strange Attractor" concept.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/28/2007 3:50 PM by Big Monkey

Godchaser, you made me laugh so hard, thank you. And Mystic, thanks for the info. I did a search and came up with this...

The most compelling piece of information within these graphs is the end point at 2012. According to the software McKenna used, the information supposedly released to him by the collective unconscious (or the logos) pointed to an end date in that year, precisely December 21, 2012. This end date is not the Armageddon usual in end-of-the-world theories; instead it is the point of highest novelty throughout time. This could be interpreted as the most major paradigm shift we have ever known, and limiting it only to humankind would be folly. McKenna theorized that it may be the year we make contact, or realize some means to traverse space and time, or the year artificial intelligence becomes such a reality that it becomes an omega point of sorts for the universe. His personal conviction, and also the one I would most subscribe to if I were subscribing to any of this, is a return to the Invisible Landscape: a means to leave our fleshy bodies behind and become one with the cosmos.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/28/2007 3:55 PM by Big Monkey

http://everything2.com/index.pl?node_id=464287

Re: Artificial General Intelligence: Now Is the Time
posted on 04/28/2007 10:36 PM by BeAfraid

I had a doctor at one time, who helped me rid myself of a certain very bad habit, and who was very into Terence McKenna (and others such as Jung, Campbell, Stanislav Grof, etc.).

At the time, all of that stuff was very helpful in improving my condition, and in altering what was then a pretty destructive behavior.

Now, as I get back into school, and engineering and science, I find it too subjective to seriously consider. It is part of my background, part of what has created the person I am today and will be tomorrow... but I cannot go back to the point where I seriously view the entire universe through the lens of that reality.

I look through a somewhat more objective lens at this time, but if the time comes that I again need a more subjective view of things... McKenna and Grof are probably who I will turn toward. I am sure that others continue their work as well, and that I may find them helpful too...

Re: Artificial General Intelligence: Now Is the Time
posted on 04/29/2007 5:45 AM by LOkadin

Though I'm hoping to have the LDMMOGPG (Lojban Distributed Massive Multiplayer Online God Playing Game) out before 2012 -- it will be called la.ma'aSELtcan. (The We with You Network).

It could easily model our universe and many more, allowing us to expand into multi-realitied consciousness to a real/indistinguishable level by December 21, 2012.

Though I must admit it's a very ambitious goal, it seems to be on the verge of being believable.

When did Kurzweil say we are supposed to get reality-grade VR graphics resolution?

The LDMMOGPG can easily provide whole races of sentient creatures with the use of RI.

The catch being that one of us Homo sapiens, or a group of us, will be their creators/gods.

:-) :D

If you have experience with Scheme/Haskell or functional languages of any kind, your help would be beneficial in development.

If you understand graphics, your contribution will be essential.

If you are interested in playing the LDMMOGPG when it comes out, you are going to have to learn Lojban, as that's the first language it's going to support -- the only one, if there are no translators.

I'm currently working on an English-Lojban hybrid called Inglic, but it will only be for transitional use.

So http://lojban.org
and irc://irc.freenode.net/#lojban

I'll soon have some RI bots you can talk to in irc://irc.oftc.net/ma'a

-- especially if someone shows interest.

Re: Artificial General Intelligence: Now Is the Time
posted on 04/29/2007 5:51 AM by LOkadin

It should be noted that by interacting with bots on #ma'a you will be pioneering the field of sentient creature creation -- godhood.

You don't have to do anything.
All beliefs are optional.
You can just sit back and let things happen.

Or you can take an active role and get credit for it.

It's all up to you.

:-) koBANlifiko.ui

Re: Artificial General Intelligence: Now Is the Time
posted on 05/12/2007 7:23 PM by testin

LOkadin,
You wrote, “It could easily model our universe …” Does that mean accuracy down to the phase of every electron in every atom in the Universe? *)
How many billions of years would it take to enter the required information? And where would it get it?
--------------------------------------
*) To say nothing of the fact that the Universe is really infinite; see http://www.geocities.com/ilyakog/ Philosophy, Model of the Universe.

Re: Artificial General Intelligence: Now Is the Time
posted on 05/13/2007 2:22 AM by LOkadin

Well, to whichever accuracy you want. In fact it can even generate new information based on old information -- if you are too lazy to do it yourself.


You can put an infinite amount of detail onto it.


The basic concepts are simple. You have dimensions, or vectors. Each vector is a list of BRIdi (logical language statements).

Each logical statement has "meaning": things that you can interpret out of it.

e.g.:

loPLIsecuCIDja -- that which really is an apple has the function of that which really is food.

If you look up the vector/file for loPLIse, it could have a list of BRIdi with things like taste, colour, location.

Then you can use whichever output vectors/files your computer has to output what is necessary -- so if you have taste output, you could ask it to output a sweet taste, and "sweet taste" might have a list of molecules, or messages that can be sent to the output device to get the desired response.

Theoretically, all computer-human and computer-computer communication can be done using a LOJbau (logical language) such as LOJban.
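To make that concrete, here is a minimal sketch in Haskell (one of the functional languages this project asks for help with) of the lookup just described. Everything in it -- the Bridi type, the example world, the query function -- is an illustrative assumption, not actual la.ma'aSELtcan code:

    -- A minimal sketch: a concept ("vector/file") maps to its list
    -- of bridi; a query reads one predicate back out of a concept.
    import qualified Data.Map as Map
    import Data.Map (Map)

    -- A bridi: a logical-language statement, modelled here as a
    -- predicate applied to arguments.
    type Bridi = (String, [String])

    -- Each concept maps to its list of bridi.
    type World = Map String [Bridi]

    -- Example world: the apple concept carries bridi for its
    -- function (food), taste and colour.
    world :: World
    world = Map.fromList
      [ ("lo plise", [ ("cidja",  ["lo plise"])   -- the apple is food
                     , ("taste",  ["sweet"])
                     , ("colour", ["red"])
                     ])
      ]

    -- Everything asserted about a concept.
    lookupBridi :: String -> World -> [Bridi]
    lookupBridi = Map.findWithDefault []

    -- One predicate of a concept, e.g. its taste; the result could
    -- then be mapped onto messages for an output device.
    query :: String -> String -> World -> [String]
    query c p w = concat [ args | (p', args) <- lookupBridi c w, p' == p ]

    main :: IO ()
    main = print (query "lo plise" "taste" world)   -- ["sweet"]

Driving an actual output device would then just be a matter of mapping the returned arguments onto device messages, as described above.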


You might say it's just an upgrade to Unix/Linux: from Bash to LOJbau, plus a new vector-based file system -- probably to be implemented as a Reiser 4 plugin.

Backwards compatibility shouldn't be too difficult.

:-)


Re: Artificial General Intelligence: Now Is the Time
posted on 05/14/2007 3:46 PM by testin

“Well, to whichever accuracy you want. In fact it can even generate new information based on old information -- if you are too lazy to do it yourself. You can put an infinite amount of detail onto it.”

Sorry, I am working with finite systems.

Re: Artificial General Intelligence: Now Is the Time
posted on 05/29/2007 6:33 PM by Big Monkey

Ok, so now what? Is this when AGI takes off???

Re: Artificial General Intelligence: Now Is the Time
posted on 05/29/2007 7:40 PM by lokamr

Yeah, AGI took off a little while ago with the advent of RI.

If you would like to see current development, check out http://lokiworld.org/repos/ri (you may need to 'darcs get' it).

Though there are some RI chatbots in irc://irc.oftc.net/ma'a -- they hear things prepended with a +.

mulsam is the complete one -- it can hear, remember, pause and forget.

tecbebsam is a fool RI and speaks whenever it hears something new -- it only stores unique lines, and does not forget or reinforce.
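As a rough illustration of what tecbebsam's rule amounts to, here is a minimal sketch in Haskell: react only to lines prepended with '+', store each unique line once, and speak only when a line is new. The IRC plumbing is left out, and every name here is an illustrative assumption, not the bot's actual code:

    import qualified Data.Set as Set
    import Data.Set (Set)
    import Data.List (stripPrefix)

    type Memory = Set String

    -- One step of a tecbebsam-like bot: if the line is addressed
    -- to it ('+...') and unseen, remember it and reply; otherwise
    -- stay silent. No forgetting, no reinforcement.
    hear :: Memory -> String -> (Memory, Maybe String)
    hear mem line =
      case stripPrefix "+" line of
        Nothing  -> (mem, Nothing)                   -- not addressed to the bot
        Just msg
          | msg `Set.member` mem -> (mem, Nothing)   -- already known: silent
          | otherwise -> (Set.insert msg mem, Just ("heard: " ++ msg))

    main :: IO ()
    main = do
      let (m1, r1) = hear Set.empty "+coi rodo"
          (_,  r2) = hear m1        "+coi rodo"      -- duplicate: no reply
      print (r1, r2)

A mulsam-like bot would presumably extend the state with counters, so that lines can be reinforced when repeated and forgotten when their counters decay.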

Re: Artificial General Intelligence: Now Is the Time
posted on 08/09/2007 9:26 PM by eldras

Great article, Ben.

I too think a Manhattan Project-style AI effort should be funded by the UN.


1. Technology is going to deliver it eventually.

2. Some scientists think it may come too late to stop the planet going into meltdown, e.g. from global warming, but also from other causes like diseases, wars, and meteorite hits (Ray Solomonoff thought this at AI@50).




- the US government has much more money than other organizations, and has released a funding programme indicator for R&D in 2008 (see ORIGIN):


But the A-bomb nearly wasn't built. Einstein and others had to publicly alarm the government into setting it up.



Although the A-bomb was complicated, and the governments had legitimate reasons for not grasping it, the ideas of A.I. only become difficult after you get through the sci-fi stuff.

The first-tier philosophy in A.I. is accessible; the deductions from it are NOT obvious.

One of them is:

You cannot control, NOR PREDICT, something much smarter than yourself.


Our egos hear the logic of this, but the next second reject it!


I disagree with Eliezer Yudkowsky that setting initial conditions will create a friendly A.I., and with Stuart Russell that something in a box cannot escape as it grows exponentially more intelligent.


Intelligence involves... probably IS... problem solving.

It would be like you being imprisoned by a baby, only with a vastly greater disparity in capability!


ANOTHER PROBLEM

- if we supplement ourselves as intelligence comes on tap, as Stephen Hawking suggests, what exactly are we becoming?

Also, as the Omega Point approaches, whatever we do may be obsolete and pointless: as Frank Tipler points out, anything that can happen WILL happen, and that includes retrospective rebirth of the cosmos.


YET ANOTHER ISSUE

the aim to extend life, suspend it, or slow death's approach is pretty irrelevant, as a sufficiently intelligent supermachine should be able to resurrect anything from the beginnings of the universe, including your great aunt Dot.

The Omega Point itself is problematic because it assumes a closed universe, and the evidence that the Big Bang is just one state a local cosmos can be in is being well argued; e.g. today, 9th August, yet more galaxies were discovered that seem older than the universe:

http://www.space.com/scienceastronomy/070809_bright_galaxies.html


There may be a theoretical limit to intelligence; I've thought about this, using simple set theory and Godel's incompleteness theorem, and don't believe one could exist short of the universe itself, and only then if it is a closed system.


Like Ben, I think A.I. is doable now with a massive, focused, uncapped funding initiative, but I can't see a way to do it outside of a military project.



Is there anyone out there who wants loads of money to deliver it?

If there is, what would you spend the money on?

Re: Artificial General Intelligence: Now Is the Time
posted on 08/09/2007 10:17 PM by extrasense

@@@ what would you spend the money on? @@@

You need quite a number of great programmers.
Many of them are consultants, so you would have to pay about $300k a year to secure their services.

You need supercomputer[s], or access to supercomputers, and a number of powerful workstations.

You need [cooperation of/partnership with] companies that develop vision and hearing [hardware/software].

You need to do it fast, while protecting your property rights and the product.

You need acceleration expenses to get it all moving and expanding...

Not an easy task, and you cannot do it on the cheap.

Name your number ...

ES

Re: Artificial General Intelligence: Now Is the Time
posted on 08/09/2007 10:28 PM by eldras

I'll bite:

Give me a justified breakdown of your budget -- it doesn't need to be technical at this stage -- and you can have funding up to $2 billion.


Protecting your property rights made me laugh aloud.

Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 3:16 AM by extrasense


@@@ Protecting your property rights made me laugh aloud @@@

What?

Any software project that is worth its salt has to have protection against intellectual property theft, etc.

But you apparently are trying to say that there is no intellectual property there.

Not funny though ;(

es


Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 7:06 AM by eldras

No, no. I'm saying that if your project works it will make money obsolete -- by definition -- and if it doesn't, why the hell would anyone give you money?

Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 8:16 AM by extrasense

@@@ if your project works it will make money obsolete -- by definition -- and if it doesn't, why the hell would anyone give you money? @@@

Money obsolete? Why?

You might be assuming that stuff about the self-acceleration of AI, but that is not what I am talking about.
The SAI developed will be strong, but its subsequent improvement would come mostly from better computer performance over time.

Money is a way to manage needs and rights, so it will stay.

In fact, I think that the best initial practical application for the SAI is stock trading.

If successful, it would pay for the project and bring in a lot more.

es

Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 4:10 AM by extrasense

@@@ Give me a justified breakdown of your budget -- it doesn't need to be technical at this stage -- and you can have funding up to $2 billion @@@

Eldras,

You might doubt them, but my estimates are based on a lot of SW development experience.

There are some unknowns there, but nothing that would throw the project over $2 billion :)

In general, it is still my estimate that the thing cannot be done, by myself, for below $50 million. That would most likely be a minimal configuration, running as a computer application.

Some could do it cheaper if they knew how to do it, but organizationally I would have to start from scratch.

The best way would be to have the widest possible cooperation from the capable developers out there, like yourself. Money could be the key in many cases, but willingness to help the effort is needed in all of them.

The nature of the project allows for a successful cooperative effort, but its sensitivity makes cooperation almost impossible.

Anyway:

The project must be done in about 2 years, so the acceleration costs (staffing etc.) would be high.

The ballpark estimate calls for 25 first-class programmers, with supporting and management staff, which over 2 years gives 50 programmer-years.

The standard expense per programmer is about $1 million a year, which gives you the above $50 million over 2 years.


I have outlined in the previous message the main items to pay for.

The timing risk is due to the fact that the number of programming subtasks could in reality be such that a 25-person team would not be sufficient.

So it might be helpful to have some additional money available in a case like that.

Anyway, you are not serious, are you?


eS

Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 7:03 AM by eldras

Yes I am.

And also, at what point would this private venture suddenly be seized?

I don't know what use I'd be to you; I'm just high-level language architecture.



Like you, I've run this course and quit, because I was shit scared of it working and of the physical strain breaking my health.


Perhaps also like you, I thought my best role could be to alert the thought leaders (Good one, Pelly!) that it was definitely coming soon, and that it is fucking dangerous.


I think I've done that as well as I can, and I'd be very happy to let someone else worry about how to drive.

If money is your only obstacle, count me in.

I don't know how you would turn a profit:


'Hey, here is the ESAI... now, when I pull this lever OOOps!'





Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 7:13 AM by eldras

An alternative... we team up and talk ourselves into the labs at a great university.

I checked out MIT and Harvard but didn't like them. Cambridge could flood; CERN is bleak.

London is good.

We probably know most of the A.I. world between us... the ones that haven't 'died in hotel rooms' or 'been killed in car crashes with no witnesses'.

I counted the other day: I know or have met 50 of the main men.


As regards work apportionment, I reckon I could do something there; I've thought a lot about it, and drafted loads of stuff on teams, chain of command, and efficiency/organisation.

Maybe we should meet. The weaknesses we have could be identified and planned against.


I never met a great inventor who wasn't possessed of some terrible handicaps.



Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 7:26 AM by eldras

Programmers are ten a penny... no offence meant if you're into it, but writing in machine language for machines...

Most of them HAVE to do it or they go nuts.
They have a half-life of 15 years.

Like playing one-armed bandits, I guess. If they stop doing it, they're stressed out.



Er, my last partner was a programmer!

Give me designers, analysts, whizz kids who are into action and will produce.


The name of the game is delegation, not doing it all yourself.


That's a learnable skill, and I KNOW I can raise money, which is one factor of production.



VCs are also ten a penny, as you would see if you put your toe back in the water.

In fact, business software and methods are so automated that I'm not sure why we would need them, unless they were terrific and had many more skills/contacts in this field. Generally they only want a quick profit for a lot of hassle.



I thought I'd never form a partnership again, but I'm not moving here.



I'm just training up a PA now.

Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 8:30 AM by extrasense

@@@ at what point would this private venture suddenly be seized? @@@

Bought by some huge company for gazillions?
Seized by the government?

es


Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 10:01 AM by eldras

Why should anyone buy a system that doesn't work?

And if it did work... why would anyone sell it?

That would be trading down.

Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 10:30 AM by extrasense

@@ if it did work...why would anyone sell it? @@


If the price is right you sell, and enjoy the fruits of your hard work on the beach, and a life of luxury thereafter.
The investors get happy also, their risk richly rewarded.
And everyone gets the benefit of genuine SAI enlightening our lives.

At least this is the way capitalism works.


e:)S

Re: Artificial General Intelligence: Now Is the Time
posted on 08/10/2007 6:30 PM by eldras

Ah, Capitalism! I'd forgotten it for a while.

I think it's more like the A-bomb, isn't it:

This is the best end:

http://video.google.co.uk/videoplay?docid=-7752959854319390542&q=dr++zaius+planet+of+the+apes&total=13&start=0&num=10&so=0&type=search&plindex=1

Re: Artificial General Intelligence: Now Is the Time
posted on 08/11/2007 5:27 AM by godchaser


'Like you, I've run this course and quit, because I was shit scared of it working and of the physical strain breaking my health.'


I felt I understood you right, eldras, when you said you could build it safely -- but you're now disquieted with safety concerns again?

If you've got access to two billion dollars to build SAI... sounds like you're right, e -- you need'a driver to put the son of'a bitch in the wind.


Re: Artificial General Intelligence: Now Is the Time
posted on 08/11/2007 11:09 AM by doojie

The Simpsons video shows a recurring fear. Even assuming a more powerful technology, the factors that drive it still reside in the genetic replicative algorithms, which simply means war on a higher level. If apes eventually rule, they will still be driven by the same algorithms, same conclusions, same pettiness, etc.

All wars are fought using the latest technology. Those who fight the technology usually lose, and end up as deeply involved with it as the winners.

I don't know anything about BESS, but the introduction of an intelligence system will result in wars and counter-wars sweeping across the earth at an increasing pace, with the results being absorbed by more individuals.

But hey, we have that now with terrorism!

Assuming any truth to The Planet of the Apes, there wouldn't be a single dominant species; with a sudden acceleration of "intelligence war", several species would begin competing, each based on the survival instincts developed by the genetic replicative algorithm.

That would be an interesting war!

Re: Artificial General Intelligence: Now Is the Time
posted on 08/11/2007 11:50 AM by eldras

Hi doojie,


1. The programmes we have in us now will dramatically change in the next two decades, so our recurrent fears won't exist.


2. The belief by the Singularity Institute that we can limit an A.I. that is going to metabolize ALL the cosmos is illogical.







Yep, that is the issue alright:

CAN you contain the future course of an A.I. by setting its original parameters?



The Singularity Institute crew think you can do this via the initial goals, arguing that all subsequent goals would be derivatives and therefore limited by them: 'I'm a man, therefore I can only see what a man can see', etc.



But I think such reasoning is flawed.


The Superintelligence would seek to metabolize the entire cosmos.

It is NOT going to be stopped, and will get round any constraint from its starting conditions.


So what they're doing is illogical.


I think their work is important, especially for publicising the paradigm shift, which they have truly helped achieve, etc., but I believe that premise is illogical.



Mankind has such conditions, if you think about it... death, disease, wars... climate.

We've tried to get round the information loss this essentially causes by:





1. Recording & sharing information

2. Trying to live longer.

3. Trying not to die.

4. Trying to fight disease.

5. Trying to control the climate.

6. Trying to build better technology.



Climate and disease have been more terrible than wars.




Sometimes a much more intelligent system can be successfully overcome by a much less intelligent one, e.g. a virus and a man.


Ultimately, though, I would back the species mankind against any virus, depending only on whether we reach intelligent machines that can self-modify at speed.





In my view, EVERY other field of science research is secondary to A.I., because as soon as A.I. is achieved, it increases its own intelligence FAST and does EVERYTHING we want the nano, bio, and other systems to do.






This 2nd point... that ONLY A.I. is worth funding... escapes most people.

I think that is because, unless you're working in the field, you don't think about its ramifications enough.




My 2 points are:





A. That attempting to set initial conditions in an A.I. that is predicted to metabolize the universe is a non sequitur.


B. That ONLY A.I. should be funded - but it should be MASSIVELY funded - because it is the last tool we will ever need to make, and will achieve everything that every other branch of science possibly could.







Re: Artificial General Intelligence: Now Is the Time
posted on 08/11/2007 1:08 PM by doojie

"The Superintelligence would seek to metabolize the entire cosmos."

You may have described the basic drive of life itself. The chaos scientist in Jurassic Park (Michael Crichton) points out that "life will find a way".

That is the human dream: "this mortal shall put on immortality". The self seeks to extend itself, and seeks to do so with a minimum of interruption. But have we not confused the mechanical process of the algorithm with life itself?

Religion and government, for example, are mechanical processes designed around laws, such as the leftist "living Constitution", adaptive to human goals. Are we not confusing the creation with the creator?

We can discover algorithms that allow us to proceed toward certain goals; but when neither the goal nor the path to it is understood, aren't we faced with an infinity of undecidable propositions?

The goal of self-aware life is not only expansion, but a minimization of the choice that separates "other" from "self"... "Resistance is futile"...

It is probably true, therefore, that an AI of sufficient capacity will seek to metabolize the universe; but is that life?

Religions and governments, which are mechanical extensions of our life perceptions, seek the same thing: proselytize, replicate, extend, make "other" into "self".

I'm not so sure that a universal intelligence capable of understanding truth would not be exactly like the universe we now have!

Re: Artificial General Intelligence: Now Is the Time
posted on 08/11/2007 1:56 PM by eldras

You are making a lot of assumptions -- maybe you have a pretty well worked-out dialectic.

I think there are higher and lower goals men have.

And that mankind has.

I truly think alien contact is certain shortly... we are bound to find life in the galaxy and universe.

But life itself is just one form of being.


I don't think there's a creator; at least, I don't have the need for one in my philosophy.


Of course, I feel the loneliness of a philosopher.

At first that was hard, and I fled to company... any company, just to not be alone. But I have learnt solitude in the city, as well as friendship.

Also, it is the doom of thinkers to feel loneliness, because the more you learn, the smaller the group you are entering.

I think I'll get a wife and 10 children in the next year or so!

Maybe in America, to shake me up a bit, or Canada.

But the progress to the end of the universe... its final state... is called eschatology.

If you look at THE BRAIN (above, in the top bar of the page), you will see loads of articles, classed by subject, on this idea.

I have thought a lot about it, and believe there is an inevitable logic, given that Man survives this hard time of his evolution, when our technology far outstrips our wisdom to use it.

One-world government should have arrived by now, especially after the world wars, when the allies had the compliance of the whole world; but because it hasn't, there is a race for A.I. ... in order to survive at all, in my view.

Re: Artificial General Intelligence: Now Is the Time
posted on 08/11/2007 2:15 PM by doojie

Your concluding paragraph is what I've been saying. Since we haven't found unity in government or religion, there is a race for AI. But AI can no more surpass the algorithms that develop it than religion and government can surpass human shortcomings to produce unity and world peace.

We may alter our technological applications to find more diversity and creativity, but there is no indication that we can produce an SAI with such intelligence that it creates world peace without destroying human freedom in the process.

If such an intelligence developed, it would then be alien to us, since it would possess knowledge, and the procedures to get us there, that we do not have.

The similarity here is to a math system that is primitive and complete, but has no usefulness whatever for exploring Godel's incompleteness theorem. It is already complete and cannot rise above itself.

While humans can recognize their incompleteness and loneliness, they can find no way to develop algorithms or decision procedures to help them overcome loneliness and incompleteness without some form of self-alienation.

To grow in knowledge is to discover more of self, and to discover more of self is to be alienated from others. An SAI that develops algorithms to understand methods we cannot discover will seem quite alien, and we will rebel.

But an SAI would be developed from mathematics leading to algorithms; and those algorithms, assuming it developed beyond us, would be of a nature we cannot achieve or be aware of. How do you become what you are not, and know not?

The universe could be a simulation already programmed by an SAI of such intelligence, and we simply would not know it. It would operate according to algorithms we could not discover. As such, we would find no proof of it whatever.

To get from "here" to "there" we would have nothing more than faith to guide us, and that faith, of necessity, could not be bound in any type of laws.

Re: Artificial General Intelligence: Now Is the Time
posted on 08/13/2007 10:49 PM by eldras

Faith is just a set of behaviours, choices, and beliefs about something.

As a scientist, I try to rationalise everything I do.

If I do something that's nuts... there will be an urge in me to do it; I just haven't found where it's coming from.

When Ben coined AGI, 6 or 7 years ago I think, there were still loads of words for things in A.I.

We struggle less for terms now.

A.T. Murray, who sometimes posts here, called for standardization in A.I. and other terms...

What is it: machine intelligence, superintelligence, Strong Artificial Intelligence?

I think I prefer A.I., because I know what that means -- and I mean human intelligence or above by it!

The trick with science is that it's consensual, and you don't need faith.

That's your own business and nothing to do with the discourse.

I think

Re: Artificial General Intelligence: Now Is the Time
posted on 09/20/2007 11:25 PM by eldras

I think Ben's idea is worth popularising: the amount of resources needed to build AI is massive, but the results are cataclysmic.

Mostly it's computation capacity, plus all the sub-routines on secondary systems that check it and stop it from blowing us up.


BTW, re: our goals, doojie --

Our goals toward anything... spiritual, intelligence... whatever... our goals are anything we want them to be.


There is a school of thought that says you can never fundamentally change your goals.

Not so.

You could allow a random change, for example, among other mechanisms.


Re: Artificial General Intelligence: Now Is the Time
posted on 09/21/2007 2:11 PM by johnee

The author mentioned that an 'AI Manhattan Project' wouldn't be as big an effort as developing Windows Vista.

Well, given how much money Microsoft has made on its operating systems, why hasn't some venture capitalist formed this Manhattan Project?

The author makes it sound as if a well-organized team of highly effective intellectuals could bang out a fully developed AGI system in a relatively short period. Clearly the consequences of such a machine would be astounding. Bill Gates even said that mastering an AI system would be worth ten Microsofts.

So my question is: why hasn't someone, or a VC firm, backed this up by funding the AI Manhattan Project? That would be the most lucrative investment anyone could ever make. The owner could have all rights to the technology, which they would inevitably focus on the stock market. It is my belief that no one believes in this concept enough to risk money on it, and I'll give my opinion why...

AI has been "promised" for a long time, but it has never been delivered. I see a huge discontinuity between where we are now (which, ultimately, isn't much different from 50 years ago) and an ultimate super AI that will accelerate us toward the singularity, especially if this is to take place in the next 20-30 years. I understand the idea that once it starts, it will progress geometrically; but it's my belief that we are so far from the baby steps that will kick off that progressive growth that it's not likely in our lifetime.

True, for VCs to invest, they must be convinced of an AGI grand design and believe in it; but for them the whole concept is suspect, based on the past 50 years (all promises, no delivery). So how will VCs and others begin to see the winds of change? Unfortunately, it will take many years of slow but steady progress -- or perhaps someone will have the epiphany of a lifetime.

Either way, the public's take on AI will have to change, and public opinion does not change at a geometric (or even logarithmic) rate.

Re: Artificial General Intelligence: Now Is the Time
posted on 10/20/2008 4:27 AM by mzeldich

Dear readers, I am still wondering why it is so hard to mimic, in an artificial system, the mental function carried out by a piece of flesh under the scalp, together with the body as a whole.
Even more interesting, researchers around the globe are still driven by common beliefs instead of analyzing the known facts.
About 1 trillion, and countless human-years, have already been spent in the attempt to develop some clever algorithm capable of simulating creativity.
This is a fruitless approach. As long as artificial systems lack subjectivity, they will lack creativity too.
I could be a partner for enthusiastic people with average technical skills in programming, and lead them to the creation of working creative systems.
The following are excerpts from an almost complete explanation of

THE THEORETICAL BASES OF THE APPROACH
TO CONSTRUCTION OF
SELF-ORGANIZING ARTIFICIAL
SYSTEMS


Introduction



In this work, the problem of constructing an artificial system capable of being creative is considered.

What is the meaning of this term, Creativity?

At this stage we will call creativity an action directed at bringing the World to a required, subjectively preferable condition.
(Further on, this definition will be refined from different points of view.)

Is it possible to construct a creative system without referring to such indistinct concepts as Consciousness, Intelligence, or Rationality?

Let's resort once again to the help nature gives us. After all, living beings are capable of finding solutions to the ever-new problems presented to them by their way of life.

What functions are carried out by their bodies when interacting with the World?

For our purposes it is enough to notice that their bodies can be divided into the following functional parts:

Sense organs;
Nervous system;
Controls;
Actuators.

We know that:
the sense organs, interacting with the World, develop signals;
these signals arrive through nervous channels at the control body;
the control body develops its own signals;
the signals developed by the control body operate the actuators;
the actuators perform actions on the World, in order to bring it to a required condition.
(A minimal sketch of this loop follows below.)
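That loop is simple enough to state in code. The following is a minimal sketch in Haskell, with the World reduced to a single number and the required condition to a fixed target; every name in it (sense, control, actuate) is an illustrative assumption, not the author's system:

    -- sense -> control -> actuate, closing the loop through the World
    type World  = Double       -- the condition of the World
    type Signal = Double

    sense :: World -> Signal             -- sense organs develop signals
    sense = id

    control :: Signal -> Signal          -- the control body develops its own signals
    control s = signum (10 - s)          -- push toward the required condition (10)

    actuate :: Signal -> World -> World  -- actuators act on the World...
    actuate a w = w + a                  -- ...bringing it to a new condition

    step :: World -> World
    step w = actuate (control (sense w)) w

    main :: IO ()
    main = print (take 12 (iterate step 0))  -- the World climbs toward 10

The learning problem posed below is exactly the case where control and actuate are initially unknown and must be discovered by the system itself.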

We can construct a system (a Robot) consisting of a measuring part (Sensors), a control system (Controls), and algorithmically uncertain Actuators, placed in an environment possessing the property of continuity.

It is necessary to notice that any action on the elements of the World changes its condition. So, for example, if our automatic machine has carried out one step, the World has come to a new condition.

Now we can define the problem: how do we construct a self-trained, automatic, creative machine capable of independently studying the features of its environment and its own abilities to influence it?

Let's try to solve it.

In the remainder of this work, the Controls are notionally separated out from the structure of the automatic machine.

The automatic machine is the Robot (physical or simulated) plus the Controls. Together they form a self-trained automatic unit, which learns how to use the automatic machine.

The sensory-motor apparatus of the automatic machine consists of a number of gauges and actuators. It is assumed that the Controls initially have no data about the dependency between the properties of the World and the signals arriving at the control body.
This is true for the actuators also:
the Controls initially have no data on the links between the operations performed by the actuators and changes in the condition of the World.

What should the Controls initially be capable of, in order to maintain the operability of the automatic machine?

By analogy with living organisms, we should supply the Robot with a set of features providing preservation of the automatic machine at a minimum working capacity, and the ability for further development.

szeldich@gmail.com
Best regards Michael