Accelerated Living
In this article written for PC Magazine, Ray Kurzweil explores how advancing technologies will impact our personal lives.
Originally published September 4, 2001 in PC Magazine. Published on KurzweilAI.net September 24, 2001.
Imagine a Web, circa 2030, that will offer a panoply of virtual environments incorporating all of our senses, and in which there will be no clear distinction between real and simulated people. Consider the day when miniaturized displays on our eyeglasses will provide speech-to-speech translation so we can understand a foreign language in real time--kind of like subtitles on the world. Then, think of a time when noninvasive nanobots will work with the neurotransmitters in our brains to vastly extend our mental capabilities.
These scenarios may seem too futuristic to be plausible by 2030. They require us to consider capabilities never previously encountered, just as people in the nineteenth century had to do when confronted with the concept of the telephone--essentially auditory virtual reality. It would be the first time in history people could "be" with another person hundreds of miles away.
When most people think of the future, they underestimate the long-term power of technological advances--and the speed with which they occur. People assume that the current rate of progress will continue, as will the social repercussions that follow. I call this the intuitive linear view.
However, because the rate of change itself is accelerating, the future is more surprising than we anticipate. In fact, serious assessment of history shows that technological change is exponential. In other words, we won't experience 100 years of progress in the twenty-first century, but rather, we'll witness on the order of 20,000 years of progress (at today's rate of progress, that is).
Exponential growth is a feature of any evolutionary process. And we find it in all aspects of technology: miniaturization, communication, genomic scanning, brain scanning, and many other areas. Indeed, we also find double exponential growth, meaning that the rate of exponential growth itself is growing exponentially.
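To make the contrast concrete, here is a minimal sketch comparing the linear, exponential, and double-exponential views of a century of progress. The ten-year doubling time and the yearly shrink factor are illustrative assumptions, not fitted historical parameters.

def linear_progress(years, rate=1.0):
    # Progress if today's rate of change simply continues unchanged.
    return rate * years

def exponential_progress(years, doubling_time=10.0):
    # Progress if the rate of change doubles every `doubling_time` years.
    rate, total = 1.0, 0.0
    for _ in range(int(years)):
        total += rate
        rate *= 2 ** (1.0 / doubling_time)
    return total

def double_exponential_progress(years, doubling_time=10.0, shrink=0.99):
    # As above, except the doubling time itself shrinks each year, so the
    # exponential growth rate is itself growing.
    rate, total, dt = 1.0, 0.0, doubling_time
    for _ in range(int(years)):
        total += rate
        rate *= 2 ** (1.0 / dt)
        dt *= shrink  # the doubling time gets a little shorter every year
    return total

for span in (25, 50, 100):
    print(span,
          round(linear_progress(span)),
          round(exponential_progress(span)),
          round(double_exponential_progress(span)))

With these assumed numbers, a century of exponentially accelerating progress adds up to roughly 14,000 years' worth of progress at the starting rate, which is the flavor of the calculation behind the figure above.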
For example, critics of the early genome project suggested that at the rate with which we could scan DNA base pairs, it would take 10,000 years to finish the project. Yet the project was completed ahead of schedule, because DNA scanning technology grew at a double exponential rate. Another example is the Web explosion of the mid-1990s.
Over the past 25 years, I've created mathematical models for how technology develops. Predictions that I made using these models in the 1980s about the 1990s and the early years of this decade regarding computing power and its impact--automated medical diagnosis, the use of intelligent weapons, investment programs based on pattern recognition, and others--have been relatively accurate.
These models can provide a clear window into the future and form the foundation on which I build my own scenarios for what life will be like in the next 30 years.
Computing Gets Personal
The broad trend in computing has always moved toward making computers more intimate. The first computers were large, remote machines stored behind glass walls. The PC made computing accessible to everyone. In its next phase, computing will become profoundly personal.
By 2010, computation will be everywhere, yet it will appear to disappear as it becomes embedded in everything from our clothing and eyeglasses to our bodies and brains. And underlying it all will be always-on, very-high-bandwidth connections to the Internet.
Medical diagnosis will routinely use computerized devices that travel in our bodies. And neural implants, which are already used today to counteract tremors from neurological disorders, will be used for a much wider range of conditions, including providing vision to people who have recently lost their sight.
As for interaction with computers, very-high-resolution images will be written directly to our retinas from our eyeglasses and contact lenses. This will spur the next paradigm shift: highly realistic, 3-D, visual-auditory virtual reality. Retinal projection systems will provide full-immersion, virtual environments that can either overlay "real" reality or replace it. People will navigate these environments through manual and verbal commands, as well as with body movement. Visiting a Web site will often mean entering virtual-reality environments, such as forests, beaches, and conference rooms.
In contrast to today's crude videoconferencing systems, virtual reality in 2010 will look and sound like being together in "real" reality. You'll be able to establish eye contact, look around your partner, and otherwise have the sense of being together. The sensors and computers in our clothing will track all of our movements and project a 3-D image of ourselves into the virtual world. This will introduce the opportunity to be someone else. The tactile aspect will still be limited, though.
We'll also interact with simulated people--lifelike avatars that engage in flexible, natural-language dialogs--who will be a primary interface with machine intelligence. We will use them to request information, negotiate e-commerce transactions, and make reservations.
Personal avatars will guide us to desired locations (using GPS) and even augment our visual field of view, via our eyeglass displays, with as much background information as desired.
The virtual personalities won't pass the Turing test by 2010, though, meaning we won't be fooled into thinking that they're really human. But by 2030, it won't be feasible to differentiate between real and simulated people.
Another technology that will greatly enhance the realism of virtual reality is nanobots: miniature robots the size of blood cells that travel through the capillaries of our brains and communicate with biological neurons. These nanobots might be injected or even swallowed.
Scientists at the Max Planck Institute have already demonstrated electronic-based neuron transistors that can control the movement of a live leech from a computer. They can detect the firing of a nearby neuron, cause it to fire, or suppress a neuron from firing--all of which amounts to two-way communication between neurons and neuron transistors.
Today, our brains are relatively fixed in design. Although we do add patterns of interneuronal connections and neurotransmitter concentrations as a normal part of the learning process, the capacity of the human brain is highly constrained--and restricted to a mere hundred trillion connections. But because the nanobots will communicate with each other--over a wireless LAN--they could create any set of new neural connections, break existing connections (by suppressing neural firing), or create hybrid biological/nonbiological networks.
Using nanobots as brain extenders will be a significant improvement over today's surgically installed neural implants. And brain implants based on massively distributed intelligent nanobots will ultimately expand our memories by adding trillions of new connections, thereby vastly improving all of our sensory, pattern recognition, and cognitive abilities.
Nanobots will also incorporate all of the senses by taking up positions in close physical proximity to the interneuronal connections coming from all of our sensory inputs (eyes, ears, skin). The nanobots will be programmable through software downloaded from the Web and will be able to change their configurations. They can be directed to leave, so the process is easily reversible.
In addition, these new virtual shared environments could include emotional overlays, since the nanobots will be able to trigger the neurological correlates of emotions, sexual pleasure, and other sensory experiences and reactions.
When we want to experience "real" reality, the nanobots just stay in position (in our capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from the real senses and replace them with signals appropriate for the virtual environment. Our brains could decide to cause our muscles and limbs to move normally, but the nanobots would intercept the inter-neuronal signals to keep our real limbs from moving and instead cause our virtual limbs to move appropriately.
Another scenario enabled by nanobots is the "experience beamer." By 2030, people will beam their entire flow of sensory experiences and, if desired, their emotions, the same way that people broadcast their lives today using Webcams. We'll be able to plug into a Web site and experience other people's lives, the same way characters did in the movie Being John Malkovich. Particularly interesting experiences can be archived and relived at any time.
The ongoing acceleration of computation, communication, and miniaturization, combined with our understanding of the human brain (derived from human-brain reverse engineering), provides the basis for these nanobot-based scenarios.
A Double-Edged Sword
Technology may bring us longer and healthier lives, freedom from physical and mental drudgery, and countless creative possibilities, but it also introduces new and salient dangers. In the twenty-first century, we will see the same intertwined potentials--only greatly amplified.
Consider unrestrained nanobot replication, which requires billions or trillions of such intelligent devices to be useful. The most cost-effective way to scale nanobots up to that level is through self-replication--essentially the same approach seen in the biological world. But just as biological self-replication can go awry (cancer), a defect in the mechanism that curtails nanobot self-replication could also endanger all physical entities--biological or otherwise.
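A quick bit of arithmetic shows why self-replication is the natural route to such numbers (the target figure is illustrative):

import math

# How many doubling generations does one self-replicating nanobot need
# to exceed a trillion copies? Back-of-the-envelope illustration only.
target = 1_000_000_000_000
generations = math.ceil(math.log2(target))
print(generations)  # prints 40: forty doublings already exceed a trillion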
And who will control the nanobots? Organizations (governments or extremist groups) or just a clever individual could put trillions of undetectable nanobots in the water or food supply of an entire population. These "spy" nanobots could then monitor, influence, and even take over our thoughts and actions. Nanobots could also fall prey to software viruses and hacks.
If we described the dangers that exist today to people who lived a couple hundred years ago, they would think it mad to take such risks. But how many people living in 2001 would want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the human race struggled through? We may romanticize the past, but until fairly recently, most of humanity lived extremely fragile lives. Substantial portions of our species still live in this precarious way, which is at least one reason to continue technological progress and the economic enhancement that accompanies it.
My own expectation is that the creative and constructive applications of this technology will dominate, as I believe they do now. And there will be a valuable (and increasingly vocal) role for a constructive Luddite movement.
When examining the impact of future technology, people often go through three stages: awe and wonderment at the potential; dread about the new set of grave dangers; and finally (hopefully), the realization that the only viable path is to set a careful, responsible course that realizes the promise while managing the peril.
Originally published by Ziff Davis Media Inc. at www.pcmag.com.
Mind·X Discussion About This Article:
Re: Life in the 2030s...
Personally, I think they'll pass a Turing test long before that.
Ray's predictions tend to be overly conservative. His only real "hit" has been concerning chess (one year off) - he's been late on everything else. Hopelessly late when it comes to music (see: "Artwork of the Singularity" - post #15).
Under a conservative reading of Moore's Law (doubling every 24 months rather than 18), machines (the supercomputers, anyway) will reach human density by 2010; PCs by roughly 2020. Using net-based arrays, speed and complexity equaling human capacity can be achieved today. If such an array existed now, I'd wager it would pass a Turing test within months. Most would disagree, of course, but...
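For what it's worth, the doubling arithmetic behind projections like this is easy to sketch. Every figure below is an assumption (a brain estimate of about 2 x 10^16 calculations per second, roughly Kurzweil's published number, and a nominal 10^13 ops/sec supercomputer in 2001), and the resulting crossover year is very sensitive to those inputs.

import math

# Sketch of the doubling arithmetic behind such projections. All figures are
# assumptions: a brain estimate of ~2e16 calc/sec and a nominal 1e13 ops/sec
# supercomputer in 2001. The crossover year is very sensitive to both.
def crossover_year(start_ops, brain_ops=2e16, start_year=2001, doubling_months=24):
    # Number of doublings needed, times the doubling period in years.
    doublings = math.log2(brain_ops / start_ops)
    return start_year + doublings * doubling_months / 12.0

print(round(crossover_year(1e13, doubling_months=24)))  # conservative 24-month doubling
print(round(crossover_year(1e13, doubling_months=18)))  # classic 18-month doubling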
The idea that "big advances in biology seem more probable than in computer AI or nanobots" is somewhat skewed, since those advances would be a direct result of computing technology. What this has to do with tree trunks escapes me, though.
Blonde chicks.
Interesting site, by the way. I think I'll start with a short story before leaping into "Five Smooth Stones".
--
David M. McLean
Skinny Devil Music Lab
Re: Life in the 2030s...
Hi,
I guess I don't agree that "complexity" in a computer chip is any guarantee of AI. A computer is basically an abacus with flip-flop memory. Making it faster does not make a mind. Windows 95 is a POS, and even if the CPU can run at infinity it's still a POS that will crash all the time. Nor does stuffing any other program into the flip flops AI make. A chip is a dumb thing, and a dumb thing is just that. Pouring intelligence into a dumb thing does not make it smart, any more than pouring voltaic energy into dead body parts makes a living man. You can't pour yourself (your mind's structure) into the chip, and that's the rub. AI will be possible only when a completely new device is invented, working on a totally different principle than the "stored program computer". Gate arrays are a minuscule step in the right direction, granted. An intelligent silicon chip, if such a thing is possible, which I doubt, would not be an abacus and an array of flip flops. It would not even be programmable. You might be able to teach it to program computers, but if you "killed" it and tried to x-ray its internal state, you'd never figure out what it was doing. Call it Winslow's Law: if you can program it, it's not AI. Corollary: if you can examine its internal state, it's not AI.
The reference to making love to barky boles of trees was to Kurzweil. Whether it's barky boles or silicon chips, Big K seems to think he can make it live, or at least think. Reminds me of the days of Elisha, Elijah and the worshippers of Baal in the groves. They kept claiming their groves could think, predict the future, do magic, etc., and you know what happened to them. Sure, they got off their rocks on them, but that's what threw them. Lemma to W's Law (above): If you can get off your horn in its barky boles, it may not be AI but Kurzweil has probably been there first.
Ciao,
T.L. Winslow, Fiction Author
www.tlwinslow.com
Re: Life in the 2030s...
Well, Ray doesn't need me to defend him, but I think he refers to dumb chips in much the same way as he does dumb neurons. Granted, an abacus with flip-flop memory is only part of how a human brain works (Ray granted that, though, in his last book when he discussed analog and [binary] digital processing), so my claim that a machine of sufficient complexity could pass a Turing test in months could be just a case of my ignorance showing. You have to grant, however, that even a dumb machine that possessed human brain density (speed and complexity) - with its perfect memory - might actually be able to pass such a test.
Of course, I never thought the Turing Test was a decent indicator of self-awareness, but it's a helluva benchmark.
I'll also grant that you can't "pour yourself into the chip", but then I don't think Ray quite said that. That statement is a bit of an oversimplification of his position - wouldn't you agree?
--
David M. McLean
Skinny Devil Music Lab
Re: Life in the 2030s...
Only part of how a human brain works? I don't think even a part of it works that way. It's damn difficult for a human brain to become an abacus, so hard you usually have to be an idiot savant, which is somebody who devotes his entire mental effort to it, and is indeed less than a man. It's the word complexity that is in question here. Making chips with more packing density, and smaller circuit paths which let electrons travel between nodes faster, is not the same as increasing the complexity of the processing architecture. All the chips hide their circuit complexity behind the same old processing architecture so they can run Windoze and make Bill Gates richer. And it's not even that. They all stick to the stored-program architecture a la Von Neumann. AI can't even begin to be possible until the chips have a way to vary their architecture in response to their inputs.
It's clear that the human brain isn't an information-processing machine, but it is capable of setting up such machines inside itself, and modifying them. I don't think there's anything resembling programming of flip flops going on. It would be more like programmable gate arrays, changing the connections between and, or, and not gates to minimize the number of gate delays between the inputs and the outputs. In a computer all those gates are found in the CPU, and their design is set in concrete. I think of this process, which must go on in a human brain, as more akin to taking a picture, using neurons instead of film, only the picture, after being developed, is compared back to the image and continually optimized by minimizing the number of gate delays. The "program", then, is captured in the connections in a giant logic circuit, not in a pattern of flip flops, or what we now call software. How a human brain does this is the biggest mystery of all. Anybody who has tried to minimize a Karnaugh map of only a modest number of variables knows it's a virtually insoluble mathematical problem. Yet the human brain does it by some non-sequential organic process, using the variable connections in the mass of neurons to find a solution by massive parallel techniques or something.
Anyway, making the same architecture run faster might satisfy Moore's Law, but it is getting AI nowhere nearer the Holy Grail. I divine that the basic principle behind the ability of a mind to think is startlingly simple, albeit well hidden by nature so far. It's a matter of a simple principle being repeated innumerable times, and adding up to a mind. Of course, being the world's greatest genius myself, I could probably figure it out and patent it, then become richer than Bill Gates with a mean monopoly and run the world into the ground until I brought the world to the brink, but right now I'll pass since I prefer writing fiction :) In the meantime, the secret is safe with such thinkers as Big K going around in circles and not labeling his fiction as such.
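For a sense of the combinatorial wall being described, here is a tiny sketch. Exact two-level minimization (what a Karnaugh map does by hand) has to contend with a space of Boolean functions that grows as 2^(2^n).

# The number of distinct Boolean functions of n inputs is 2**(2**n), which is
# one way to see why exhaustive logic minimization becomes hopeless so quickly.
for n in range(1, 7):
    print(n, 2 ** (2 ** n))
# n=6 already gives 2**64, about 1.8e19 distinct functions.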
Ciao,
T.L. Winslow, Fiction Author
www.tlwinslow.com
Re: Life in the 2030s...
>>"Only part of how a human brain works? I don't think even part of it works that way."<<
Less to do, I'm sure, with Ray's position and more to do with my ability to clearly state his position. I didn't mean a human brain works like an abacus, I meant (re: flip-flop) the brain uses both digital (that is, in discrete steps) and analog (as in, non-stepped continuity) "computation". This was discussed in several of Ray's works, including "Age of Spiritual Machines" (which I gather you didn't like or didn't read).
>>"It's the word complexity that is in question here."<<
Again, my fault. That was my term, not Ray's. By "density", I was referring to both the SPEED of computation (like "fast") and the COMPLEXITY (i.e. - number of different tasks processing simultaneously, difficulty of tasks, type of tasks [play chess, read/write a story, etc.], and the like). Complex in contrast to "simple" (one-task) programs - like the program that constantly kicks my ass in chess, but it can't make a cup of coffee or switch CDs. Again, Ray addresses this in much more detail (and with much better articulation) than I am capable of.
>>"...is not the same as increasing the complexity of the processing architecture."<<
Slam dunk. You nailed me there.
Query: How is a standard CPU chip (like in a Gateway computer) different from a specialized DSP chip (like the Motorola 56000-series), architecturally speaking?
>>"The \"program\", then , is captured in the connections in a giant logic circuit, not in a pattern of flip-flops, or what we now call software."<<
Pardon, but do you mean IN the connections (the space), or in the CONNECTIONS (the physical pattern)?
So, am I to understand you to mean that reverse-engineering a human brain to build a neural-net is as futile as running lightning into Frankenstein's (dead) Monster?
>>"...Karnaugh map..."<<
Sorry - they said "Boolean" and I thought they said "Brew-ski". Woke up a week later in Florida.
>>"...but is getting AI nowhere nearer the Holy Grail."<<
There you have it. My question, then, is "What Holy Grail?". Intelligence? Self-awareness? I guess it depends on how you want to define the terms (or if you want to add another). At the moment, my K-2000 is plenty intelligent enough - but I wonder if it'll collaborate with me 5 years from now?
BTW: You keep smackin' on Kurzweil like that, I'll be convinced you have a crush on him.
--
David M. McLean
Skinny Devil Music Lab
Re: Life in the 2030s...
The question, yes, is what is meant by complexity. A program that plays chess, for instance, is a widely touted but misunderstood example. A commercial chess-playing program, designed for a single CPU, probably has a lot of work put into its heuristics, that is, the way it evaluates a board position, giving it a numerical score that allows it to be compared with other board positions in the attempt to find the "best". Actually, a really fast CPU with massive parallelism doesn't want or need any heuristics! It will just try every position and use pure brute force. There will be a tree-pruning algorithm descended from the alpha-beta minimax algorithm, but that's a side issue. The size of the tree means the number of board positions looked at to make a decision, not how each board position is scored. A board position where the opponent's king is captured will be a "plus infinity", and one where one's own king is captured will be a "minus infinity", and maybe it would be unwise to have any other heuristic! (If the computer can only look 50 turns ahead, even counting the number of pieces on each side and adding that into the board evaluation score could turn out wrong, if there is a super sacrifice that results in forced checkmate but takes 51 turns to work out :) Anyway, there is no intelligence whatsoever in a computer playing chess! It's just a crank and a machine trying combinatorial possibilities and settling on one when it runs out of time. A single mistake in the long chain of steps throws it all off; luckily modern CPUs hardly ever make a mistake. The CPU itself is just going "add a to b. store in c." DUH!
A human mind, on the other hand, or even a rat mind, is doing far more complex things, such as "seeing", "feeling", "smelling", and making decisions about whether to eat, flee, fight or fuck, or maybe jack off. This is because there is no CPU inside a living mind! There is a giant collection of neurons forming three-dimensional logic circuits, collections of and-or-not gates, with inputs feeding into them and outputs going somewhere (motor functions). When a mind "thinks", it isn't adding instructions to a list, such as "add d to c". It's reconnecting the and-or-not gates according to some input-output relation we don't understand. I'm not sure there's even a "clock" that synchronizes "steps"; no, I'm sure there isn't, although there is an awareness of time built into every cell, a so-called biological clock. Is a mind digital or analog or both? Even this is hard to determine now. Are there even and-or-not gates in the mind? Is there 2-level (binary) logic, or more levels, or so many levels it amounts to analog? You tell me.
If you unplugged a computer and x-rayed its contents, you would find a fixed logic circuit inside its CPU, plus a bunch of settings for the giant array of flip flops called memory. This is why one computer can simulate another, and also why computers can simulate reality, or an approximation to it with numbers. But so what? A computer can touch the feathers of the bird, but never the soul. (sorry, scratch that) If you killed a mind and x-rayed its contents, you would find a super-complex logic circuit based on unknown principles but certainly no flip flops. Probably, the brain will turn to mush and even lose the circuit connections before you can x-ray it. But nowhere will you find a CPU circuit in the neurons, nor a program that plays chess :)
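For reference, the pruning algorithm named here can be sketched generically in a few lines. The children and score callables below are hypothetical stand-ins for a real game's move generator and board evaluation; this is just the pruning idea, not a chess engine.

import math

# Minimal, generic alpha-beta minimax sketch. `children` and `score` are
# hypothetical stand-ins for a real game's move generator and terminal scoring
# (for chess: plus infinity when the opponent's king falls, minus infinity
# when one's own does).
def alphabeta(state, depth, alpha, beta, maximizing, children, score):
    kids = children(state)
    if depth == 0 or not kids:
        return score(state)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, score))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the minimizing side will never allow this line
        return best
    best = math.inf
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, score))
        beta = min(beta, best)
        if alpha >= beta:
            break  # prune
    return best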
I have read enough of Big K to have a feeling that he just plain doesn't understand this, sorry. He wants to pose as a guru and prophet of the future of computers, and is overselling them like snake oil, IMHO. I don't care if it's five hundred years from now, but computers will always be dumb. By then, if the human race is still here, the true principle of mind will be discovered, and a whole new type of technology developed. Even so, computers will always be with us. Maybe one day intelligent artificial minds will program them for us while we live like Plato's Stepchildren in Trek. But it's not a matter of Moore's Law giving us intelligent Pentiums. Nobody knows how to design a circuit that thinks, regardless of the packing density and speed. Maybe a circuit cannot think, because the massive parallelism needed requires a chemical or even quantum mechanism, not an electromagnetic one.
And no I don't have a crush on Big K. And no, you're not my straight man to help me bash him. I have no axe to grind other than that I'm just a little jealous of his undeserved fame and recognition by a public that doesn't really understand the issues, while I languish in obscurity... :) Big K is the Bill Gates of AI? :)
Ciao,
T.L. Winslow, Fiction Author
www.tlwinslow.com
Re: Life in the 2030s...musical perspective
"...I don't care if it's 500 years from now, computers will always be dumb...." and "...it's not a matter of Moore's law giving us intelligent Pentiums..."
I guess the hangup is in words like "dumb" and "intelligent" and "conscious" and such. I understand your gripe when it comes to "pour(ing) yourself into a chip": If you do that, and Ray is wrong, you are dead and we have a perfect simulation of you claiming to be conscious. Those selling immortality will get rich...until they die, of course. Ray gets rich whether he's selling the snake oil or just books about it.
On the other hand, the simulation of intelligence (on a practical level) is another thing. As machines begin to talk more to us, and as their ability to meaningfully interact increases, it doesn't much matter whether they are self-aware or not. Intelligence becomes an ill-defined concept. That's why I mentioned my K-2000 (a keyboard/synth/sampler made by one of Ray's early companies - Kurzweil Music Systems). Computer composition tools (using CPU chips) and specialized instruments (using DSP chips) are becoming increasingly interactive (with both humans and each other). Soon, and very soon, such tools will make composition, performance, and recording of music so simple that music created by non-musicians will be on par with music by those once considered masters.
I will grant, of course, that these machines are programmed by humans standing on the shoulders of Bach or Hendrix or whoever it is who fits your musical tastes. But that doesn't really invalidate the results. That's what evolution of form is all about: Standing on the shoulders of giants (or even non-giants) to climb a little higher.
Personally, I think this "evolutionary" pressure on actual musicians (trained or not) will be a good thing - weeding out those who can't step up a notch, and forcing those who can out of slow gear. As this happens, machines will reach a point where human "instigation" is less required - perhaps not required at all. Before you know it, HAL is writing musical etudes and haiku.
But I digress.
The point is obvious: Machines will match the results of humans in both creative and non-creative endeavors, which necessarily increases the velocity of change. In the process, all the things that mimic intelligence (like talking and interacting and using humor and wit and voice tone) will become more pervasive. Mimicry or not, machine intelligence (as defined by pragmatic applications) will exist.
--
David M. McLean
Skinny Devil Music Lab
Re: Life in the 2030s...musical perspective
The earth has a body of several billion human minds bumping heads. A small fraction of them have "careers" where they try to use their minds for "higher pursuits", such as designing music. Funny how computers are touted as tools for this function, for designing something, when no computer can even understand what simple things like a mother, father, food, or a room are. As long as computers are limited to logic and reasoning aids, they are not going to be AI. Did you hear about that project to type into a computer every fact known? I guess when enough facts are typed into it, it will suddenly live and plot to take over the world :) Real AI will do its own learning, define its own facts. There is a huge gulf here. My main point is that there will be no AI until a new device is invented that requires no programming and has no fixed logic circuits, CPUs, ALUs, RAMs, etc. If this device is entirely electromagnetic, it will have to have a 3-D structure and have a way for connections between primitive and-or-not logic gates to change in a massively parallel fashion, or it has no chance of working.
I suggest that the way to get a clue from nature is to try to unravel the code in DNA, using, ahem, computers to do the rote searching, sorting, storing, and calculation work for the human researchers. In one of my novels I envisaged a great computer program that simulates what happens when a single cell uses DNA to duplicate into two cells, each with a copy of the DNA but with different "program pointers". I envisage DNA as not one but a host of programs for cells, with a master sequence of development. The initial cell has a program pointer that says "I am the initial cell". When it divides, one cell's PP says "I am the left root cell", the other's says "I am the right root cell". The initial cell has manufactured two cells out of itself, neither an exact duplicate of the initial cell, starting the development of an embryo. As the cells divide and multiply exponentially, each new cell is constructed based on where in the master program it fits, for instance, a nerve cell, a bone cell, etc.
I believe great progress can be made by having one team of researchers trying to divine the way such a program must work, and programming it into a simulation, while a second team studies the DNA to find evidence of such a program and feeds info back to the first team to modify their programming based on what is actually there. Perhaps even the fastest computers cannot handle a simulation of even a protozoan, but I'm not aware of this line of research even being tried. Have you heard anything? I feel this is the next step now that the HGP is finished and the HGDP has been killed by PC forces. Maybe the way a brain is constructed will emerge from this research. Probably the complete unravelling of the human embryo development will be far harder than any other type. But this may be my species pride :)
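As a toy illustration of the "program pointer" scheme described above: every cell carries the same genome (here a rule table), and each division hands the daughters new pointers into it. The rules and cell labels are invented for illustration and have nothing to do with real DNA.

# Toy sketch of the "program pointer" idea: a shared rule table (the "genome"),
# with each division assigning the daughter cells different pointers into it.
GENOME = {
    "initial": ("left root", "right root"),
    "left root": ("nerve", "nerve"),
    "right root": ("bone", "bone"),
}

def develop(pointer="initial", depth=2):
    # Recursively divide a cell `depth` times, following the genome's rules.
    if depth == 0 or pointer not in GENOME:
        return pointer                      # terminal cell type
    left, right = GENOME[pointer]           # daughters get different pointers
    return [develop(left, depth - 1), develop(right, depth - 1)]

print(develop())   # [['nerve', 'nerve'], ['bone', 'bone']]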
Clearly the human brain has structures in it that lower animals don't possess, and that is just what we want to understand the most. This subject is getting stale. Drop it?
T.L. Winslow
www.tlwinslow.com