Communal Intelligence

Seth Lloyd [10.28.19]

We haven't talked about the socialization of intelligence very much. We talked a lot about intelligence as being individual human things, yet the thing that distinguishes humans from other animals is our possession of human language, which allows us both to think and communicate in ways that other animals don’t appear to be able to. This gives us a cooperative power as a global organism, which is causing lots of trouble. If I were another species, I’d be pretty damn pissed off right now. What makes human beings effective is not their individual intelligences, though there are many very intelligent people in this room, but their communal intelligence.

SETH LLOYD is a theoretical physicist at MIT; Nam P. Suh Professor in the Department of Mechanical Engineering; external professor at the Santa Fe Institute; and author of Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos.

COMMUNAL INTELLIGENCE

SETH LLOYD: I’m a bit embarrassed because I’ve benefited so much by going close to last in this meeting. I’ve heard so many wonderful things and so many great ideas, which I will shamelessly parrot while trying to ascribe them to the people who mentioned them. This has been a fantastic meeting.

When John first talked about doing something like the Macy Conferences, I didn’t know what they were, so I went back and started to look at that. It was remarkable how prescient the ideas seemed to be. I couldn’t understand that, because why was it that all of a sudden we’re now extremely worried and interested in AI and devices that mimic neural networks? People were worried about it back then, and yet for decades it didn’t seem like people were that worried about this.

Rod Brooks made the point that what happened was the digital revolution took off. Moore’s law went ahead full steam, and anything that wasn’t a von Neumann architecture just wasn’t worth doing because you would soon have a von Neumann machine that would be able to do anything that you could do. People just rather stopped worrying about this for a while.

Now, however, we’re in quite a different era. I do have some things I’d like to say about artificial intelligence and even about quantum machine learning, but I’d like to give a little perspective about Moore’s law. This is from someone who's trying to build computers where you store bits of information on individual atoms and on superconducting quantum computers, and who also works with people who are trying to extend Moore’s law further and further.

We’re not at the end of Moore’s law right now, but various aspects of it ended long ago. Most noticeably, the processor speed, which had been doubling every few years, crapped out at about three gigahertz around fifteen years ago—around 2003 or something like that—simply so the devices wouldn’t melt. This led to the development of multi-core systems, which are a primitive form of parallelism compared with Danny Hillis’s Connection Machine but, nonetheless, a form of parallelism.

Now, as people are trying to press down to make the field effect transistor smaller and smaller, quantum mechanical tunneling effects are coming into play, and leakage current grows when you start to make these transistors smaller than five nanometers or so. At that scale, statistical fluctuations in the number of electrons on the transistor come into play, the amount of noise in the system grows, and the wiring problem gets worse. It’s clear that you can’t just have more of the same of Moore’s law. Just making von Neumann-like Intel processors is not going to keep going for that much longer.

What's happening is not that Moore’s law is ending, but it’s fragmenting into a variety of different kinds of systems. People are already using GPUs to do lots of these neural network systems. Field-programmable gate arrays are extremely useful for fast control systems. Neuromorphic computation is being explored, where you make systems that are more analog.

I have to say a little bit about analog versus digital here, even though it’s a false dichotomy. When John said he’s going to make us all vote for analog or digital, Danny said, "But that’s so digital of you." At bottom, these systems are quantum mechanical, as Freeman Dyson pointed out, and quantum mechanics is both analog and digital. Once you operate at this very small scale, the digital nature of the universe is extremely important.

The kind of information processing that Caroline Jones was talking about, information processing that’s going on in the gut, suggests a new set of apps to enlist your gut to compute for you or to enlist your gut to give you the gut feeling of whether this is a spiral galaxy or an elliptical galaxy.

It’s important to note, and Neil Gershenfeld pointed this out, that by far the largest amount of information processing going on in the human body is not in the brain; it’s digital-chemical information processing that’s going on at the level of DNA and RNA, which is the ultimate digital form for information, because quantum mechanics makes nature digital. It gives you only a certain number of types of elementary particles, which are combined to make only a certain number of types of atoms, which combine to make a large but countable number of molecules. They can be in different places. Somewhere, billions of years ago, living systems figured out how to harness this very microscopic digital nature of nature, encoding genetic information in DNA and RNA and in the receptors and receptor dynamics of cells—all cells rely on receptor dynamics in their metabolism.

As Neil pointed out, if you look at what’s going on in the genetic reproduction in a cell, it takes about a second to bring in something, but there are 10^18 operations per second. Whereas the brain has roughly 10^11 neurons, 10^15 synapses, and is going at 100 hertz—that’s only 10^17 operations per second. These are very large numbers. This has been going on for billions of years. Neurons haven’t been around for billions of years, but by god cells have been, and they've been processing information very effectively in a way that combines analog and digital methods.
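
To make the brain-side arithmetic explicit (the calculation is added here, using the figures Lloyd quotes):

\[
10^{15}\ \text{synapses} \times 10^{2}\ \text{Hz} \;\approx\; 10^{17}\ \text{synaptic operations per second.}
\]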

A wonderful insight for what happened came from Frank Wilczek’s talk. I agree that there is no singularity that’s going to be taking place anytime soon. Moreover, it is a pity that there aren’t more West Coast people here, because when I go out there I find that a large number of Silicon Valley billionaires seem to believe that the singularity is near and that they themselves will be uploading their consciousness into a computer sometime in the near future.

Moreover, John was talking about what happens if you don’t read some well-known books. I suspect that if you uploaded yourself to the cloud, even if it were entirely successful and you found yourself as yourself in the cloud but unable to go out for a cappuccino, you might feel that you’d struck a Faustian bargain by definition. There are plenty of stories about people who desire to live forever and the technologies they use. I don't ever remember any one that worked out well, unless maybe you count the New Testament, and I’m not sure we should count that.

ALISON GOPNIK: I had a conversation with a young man at Google at one point who was very keen on the singularity, and I said, "One of the ways that we achieve immortality is by having close relationships with other people—by getting married, by having children." He said that was too much trouble, even having a girlfriend. He’d much rather upload himself into the cloud than actually have a girlfriend. That was a much easier process.

LLOYD: This reminds me of my course at MIT. I write the problems on the board (they’re not posted online). If you want the problems, you either have to go to class or you have to make a friend. I said, "For you MIT students, you’ll have to decide which is harder to do." Let me just say that class attendance is very good.

My mother just died. It was very sad, and I’m still trying to understand that. Of course that’s the kind of immortality that’s worth going for and not the immortality of writing wonderful books or doing great science, even though that’s also a good kind of immortality to strive for. As you say, what's important are the parts of yourself that you leave with the ones whom you love and who are important to you, the parts that propagate in good ways.

This is what I loved about what Frank was saying. If you just look at the numbers, we are going to be building beautiful, huge new devices that have vast amounts of information processing power—devices that, in the not-so-distant future, will match this roughly 10^17 ops per second on something like 10^15 bits. That’s something that is likely to happen in the next half century or so, though it’s not going to be by a von Neumann architecture. It’s going to have to be by a variety of different methods.

As discussed by David Chalmers in his talks about consciousness, and emphasized by Rod and Danny and others, people already treat the artificial intelligences in their life as very important companions that they would never be without. We are becoming accustomed to treating these artificial intelligences as though they're alive, even if they might not meet the criteria for being able to perceive a gestalt—which, as I mentioned before, is one of the main issues that was brought up back in the early Macy Conferences. Can an artificial intelligence have a gestalt?

Even if we have something that we know for sure is not conscious, doesn’t have a gestalt, and is a very simple circuit, we still feel for it and don’t want to cause it pain. We haven't talked about the socialization of intelligence very much. We talked a lot about intelligence as being individual human things, yet the thing that distinguishes humans from other animals is our possession of human language, which allows us both to think and communicate in ways that other animals don’t appear to be able to. This gives us a cooperative power as a global organism, which is causing lots of trouble. If I were another species, I’d be pretty damn pissed off right now. What makes human beings effective is not their individual intelligences, though there are many very intelligent people in this room, but their communal intelligence.

My prediction would be that there’s not going to be a singularity. But we are going to have devices that are more and more intelligent. We’ll gradually incorporate them in our lives. We already are. And we will learn about ways to help each other. I suspect that this is going to be pretty good. It’s already the case that when new information processing technologies are developed, you can start using your mind for different things. When writing was developed—the original digital technology—that put Homer and other people who memorized gigantic long poems out of a job. When printing was developed and texts were widely available, people complained that the skills they had for memorizing large amounts of things and poetry—which is still a wonderful thing to do—deteriorated.

There’s plenty of evidence that the way people use their memory, given that they have immediate access to Internet search, changes a lot. For myself, I’ll just say that I no longer remember what it was, I just remember what I did to get it. Where did I go? What were the search terms I used to find this? Then I can find it again. Let’s not even mention the fact that nobody knows where the heck they’re going in their head any longer because they just have somebody saying, "Turn left at the next intersection."

This is going to be very interesting. If we think of artificial intelligence as part of the human communal development, then this is going to be very empowering for us and for these artificial intelligences. There are a lot of bad things out there. Given that the largest amounts of artificial intelligence out there are being used by large corporations to sell us crap we don’t need, I sometimes question their intelligence. I’ve had both my hips replaced, and I frequently get these ads saying, "Dear Seth, you have this artificial hip. Perhaps you’d like to try this other one. Oh, and by the way, here’s a Swiss army knife for you to do it yourself." What are they thinking? I don’t get it.

Moreover, the question is what they could do with that information should they choose. If Google were more like the government of China, or if Google reenters China and the government of China asks it to do things for the government of China, then we are in something that’s much worse than 1984 at some level. That's stuff to worry about. This notion that was popular with Stephen Hawking and Elon Musk, that we’ll create a malign artificial intelligence that will take over society—it just seems silly. First of all, we’re far away from having such an artificial intelligence. We’ll have, I would say, centuries before such a thing might exist, and we have plenty of time to make sure that if such a thing exists we'll be okay.

Reading is helpful for this. We know that if you create an artificial being who is more intelligent, stronger, and more ethical than you, as Mary Shelley pointed out, you better not treat it as if it’s subhuman. If you do, then it will behave in a psychotic fashion. If we simply choose to be kind to the artificial intelligences that we create, we’ll be going a long way in the right direction. We should also be very careful about the companies that are spying on us and are using artificial intelligence primarily to sell us useless crap over the Internet.

Amongst these technologies that are likely to be useful, these novel technologies of information processing, are quantum computers, which have not yet done anything that a classical computer couldn’t do. However, despite the fact that they’re still piddling and tiny—they now have fifty quantum bits—quantum computers with hundreds or thousands of quantum bits are likely to show up soon. These are going to be just one of these information processing tools. They’re now at the stage where they can process information for specialized problems, like simulating other physical systems—an application proposed by Richard Feynman—that they can do better than classical supercomputers. That’s going to keep on going.

About six or seven years ago, my postdocs and I began looking at applying quantum information processing to do machine learning. The simple intuition is that quantum systems can generate statistics that cannot be generated by any classical computer equipped with a random number generator. They can generate strange and counterintuitive phenomena. This has been known for more than a century. We also know from the example of things like deep neural networks, or Boltzmann machines, or deep learning that if you build a device that can generate certain kinds of statistics, it can often be used to recognize similar kinds of patterns. So, if quantum systems can generate patterns that cannot be generated classically, perhaps they can also recognize and categorize patterns that can’t be categorized or recognized by a classical system. Moreover, these might go beyond the familiar weirdness like the EPR effect and stuff like that. It might also be that they can find patterns in nature that you could never find on a classical computer.

For example, what we first started out doing is exactly these k-means—quantum k-means—and quantum support vector machines, and then moving on to just bread-and-butter things like regression, principal component analysis, and matrix completion (the Netflix algorithm). These are methods that involve linear algebra: a lot of learning techniques just involve taking gigantic vectors of data and multiplying them by humongous matrices and applying some kind of nonlinear transformation, and then you do it again and you try to train the system to work. Well, quantum mechanics is about humongous vectors in gigantic vector spaces and multiplying them by gigantic matrices, and then doing nonlinear things like measuring and seeing what happens. If you do encode data in a quantum mechanical state, you can kick serious machine-learning ass. Even with Google’s 50-qubit superconducting quantum computer, you could in principle diagonalize a 10^12 by 10^12 matrix, something which would take Avogadro’s number of operations ordinarily, and you’re not going to do that classically for quite a while.
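
As a purely classical point of comparison—not the quantum algorithms Lloyd describes—here is a minimal sketch of the "giant vectors times giant matrices" view of learning: principal component analysis by diagonalizing a covariance matrix. The data sizes are invented for illustration.

```python
# Classical sketch of "big vectors times big matrices" learning:
# principal component analysis via diagonalization of the covariance matrix.
# (Illustrative only; the quantum versions encode these vectors in quantum
# states, where a 50-qubit register can hold a 2^50-dimensional vector.)
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))       # 1,000 data vectors with 50 features
X -= X.mean(axis=0)                   # center the data

C = X.T @ X / len(X)                  # 50 x 50 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # diagonalize it (eigenvalues ascending)

top2 = eigvecs[:, -2:]                # two leading principal components
projected = X @ top2                  # project the data onto them
print(projected.shape)                # (1000, 2)
```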

* * * *

W. DANIEL HILLIS: You touched on something that I went back and read because you had mentioned it in an earlier conversation. In the early Macy Conferences, in Ashby’s discussion of the chess-playing computer, he talks about an algorithmic chess player, but in his formulation, besides a general purpose machine, he also includes a Geiger counter. He seems to think somehow that this is important. Going back to Alison’s point, Bigelow says, "I agree, it’s different with that, but why don’t we just throw that away, and it’ll all work just as well." Which is in fact what happened, and that was the truth. They were correct that the machine with a true element of randomness was different than a classical machine; it just wasn’t different in a way that was helpful.

LLOYD: That’s an interesting point. Since you mentioned that, I also thought about that some more, about where randomness plays a role. Well, neurons and synapses are noisy because there are small numbers of chemicals. So, neural functioning is quite noisy. The kind of digital cellular level information processing in terms of genetic reproduction is very precise. Nine out of ten of the offspring of an E. coli have exactly the same DNA as the original E. coli, but of course we know that it’s useful to have stochastic processes. In fact, if you stress the E. coli by putting in a bit of alcohol or something in their petri dish, then they start making more mistakes because they’re in a bad genetic place.

This is related to what Neil was saying about state-of-the-art machine learning algorithms. In game theory, what is a Nash equilibrium? Nash’s beautiful theorem says that if you have a game, then there are these equilibria where neither player can change what they’re doing without making things worse for themselves. But in order to achieve that, you need a probabilistic strategy. In order to apply the Kakutani fixed-point theorem, you need a continuous space of strategies so that you can say, "If I change my strategy, it’s not going to work." The best strategies then are these probabilistic strategies. Plenty of times this is a very good thing to do.
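
The standard textbook illustration (an added example, not from the talk) is matching pennies, where the row player wins a point when the two coins match and loses a point when they differ. If the column player shows heads with probability q, the row player is indifferent between heads and tails only when

\[
q(+1) + (1-q)(-1) \;=\; q(-1) + (1-q)(+1) \quad\Longrightarrow\quad q = \tfrac{1}{2},
\]

so the only Nash equilibrium has both players randomizing 50/50—a probabilistic strategy, exactly as Lloyd describes; the game has no equilibrium at all in pure strategies.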

HILLIS: But it doesn’t require true randomness. Pseudorandomness works just fine.

FRANK WILCZEK: Although, there have been scientific applications where pseudorandom numbers ran into trouble.

LLOYD: Right. Pseudorandomness can be problematic. It’s expensive computationally and, by definition, it is not random. So, if you happen to hit one of those non-randomnesses at the wrong time, it could cause you trouble.

NEIL GERSHENFELD: What’s your take on the power of partially coherent quantum computers? So, quantum computers, the real true ones are maximally coherent, which means they can be completely entangled, and a lot of the things called quantum computers that have huge numbers of bits are only a little bit coherent, and there’s a big debate about how useful they are.

LLOYD: D-Wave is not a full-blown quantum computer; it's a quantum annealer. You encode the answer to a hard problem in the ground state of a system. If you can find the lowest energy state, then you’ve solved the problem—there’s a classical method for doing this as well. As a result, these machines are much more immune to noise, despite the fact that they’re rather incoherent.

The lowest state is the answer. There’s a classical form of this called simulated annealing, where you set up the logical constraints of your problem so that the energy is the number of violated logical constraints. So, the ground state by definition has the lowest energy because none of the constraints are violated. So, it’s a solution. And then you cool it down to try to find the answer.
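
A minimal sketch of that classical procedure, assuming nothing beyond what Lloyd describes—the energy counts violated constraints, and the temperature is lowered slowly. The toy constraints (adjacent bits on a ring must differ) are invented for illustration:

```python
# Simulated annealing on a toy constraint problem.
# Energy = number of violated constraints; a ground state (energy 0) is a solution.
import math
import random

random.seed(0)
N = 8
# Toy constraints: each pair (i, j) is satisfied when bits i and j differ.
constraints = [(i, (i + 1) % N) for i in range(N)]

def energy(bits):
    return sum(1 for i, j in constraints if bits[i] == bits[j])

bits = [random.randint(0, 1) for _ in range(N)]
T = 2.0
while T > 0.01:
    i = random.randrange(N)
    before = energy(bits)
    bits[i] ^= 1                                  # propose flipping one bit
    delta = energy(bits) - before
    if delta > 0 and random.random() > math.exp(-delta / T):
        bits[i] ^= 1                              # reject the uphill move
    T *= 0.999                                    # cool down slowly

print(bits, "violated constraints:", energy(bits))
```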

GERSHENFELD: Another way to say it is you put it in the answer, but you change the question. If you put it in the answer to an easy problem, you then deform it to asking a hard problem, and if you change it slowly enough it stays in the answer.

LLOYD: Quantum annealing is based on what Neil just said: You start with something very easy—say, all the spins in your computer should be pointing this way—and then you gradually turn on the energy function whose lowest energy state you wish to find. There’s a theorem called the adiabatic theorem that says if you do this slowly enough you’ll get there.
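
In standard notation (added here, not quoted from the talk), the annealer interpolates between an easy Hamiltonian H_0, whose ground state is simple to prepare, and the problem Hamiltonian H_P, whose ground state encodes the answer:

\[
H(s) \;=\; (1-s)\,H_{0} \;+\; s\,H_{P}, \qquad s: 0 \to 1.
\]

The adiabatic theorem then says that if s is swept slowly compared with the inverse square of the minimum energy gap along the way, the system stays in the instantaneous ground state and finishes in the solution.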

This notion of doing quantum computation this way was developed at MIT, and the design for the D-Wave system was developed by my graduate student Bill Kaminsky and me in 2002. We failed to patent it because we did a little calculation, and we said, "Well, after you’ve entangled about 50 quantum bits, then even under the absolute most optimistic assumptions, that is not going to work. The energy will be too high." Then D-Wave spent $100 million building this, from which I conclude that you should always patent things even if you’re absolutely sure that they’re not going to work.

The D-Wave system is partially coherent. It does solve hard problems. In fact, you can show that having a bunch of noise in the middle can very well be helpful for it. There are plenty of kinds of computation, including things like Shannon’s and von Neumann’s stochastic computing, which were developed back in the ‘40s and ‘50s but not adopted because of the rapidly increasing power of digital computers.

Once you start pressing Moore’s law, your systems are going to be noisy. They are going to be stochastic. They’re going to be quantum mechanical—but only semi-quantum mechanical, semicoherent. This is a wonderful opportunity to develop a theory and practice of these kinds of computers, which will be the most powerful computers that you could build, where you have to deal with noise and you have to deal with quantum mechanics.

DAVID CHALMERS: At the point at which machines achieve human-level capacities in a wide range of areas, one of the areas where they'll be at human-level capacity is creating artificial intelligences. The moment they get a little bit beyond human-level capacities, they’ll be a little bit beyond human-level capacities at creating AI; therefore, they’ll be able to create AI systems a bit better than those that we can create. Therefore, they’ll be able to create AI systems a bit better than themselves. Iterate until superintelligence. That’s always struck me as a very promising argument. Do you think there’s something wrong with that?

WILCZEK: Things can increase and saturate a bound, or they can take off, or they can do something else—they can slowly increase. There’s nothing inevitable about a singularity. The structure of hard problems, P versus NP, suggests that there are going to be problems where progress will be very slow.

CHALMERS: Why does it have to be inevitable to be interesting? This happens a lot in arguments about this. You don’t know that’s going to happen. Even if there’s a 10 percent chance it’s going to happen, that’s interesting.

HILLIS: There’s a flaw in the description, which is that it suggests that intelligence is this uni-dimensional thing. Something can be incredibly smart and not have the ability to make a remotely smart machine. You’re assuming a particular dimension of intelligence could go off in that direction, but it would be a very narrow dimension.

CHALMERS: Once you have correlations between capacities, if one dimension goes off, then the things that correlate with it will tend to go off. If one of the things which goes off to infinity is the ability to create AI, then at the very least we get this offshoot line.

LLOYD: First of all, can we just do some numbers again? It’s not going to go off to infinity. Computation is a physical process—indeed, as a number of people in this room are fond of claiming, all of physical dynamics can be thought of as a computation, as information processing—and there’s only a certain amount of information processing you can do. Now, those amounts are large if you’re willing to turn things into black-hole density and compute using black holes or something, but that’s unlikely to happen. If you say we’re going to compute using things that have electrons and ordinary materials that are held together by covalent bonds, then you’re going to have ops operating at the level of an electron volt or so, and that’s where nature is doing it already.
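
For scale, a standard bound (the Margolus–Levitin theorem, not quoted in the talk) limits the rate of operations to about 2E/πℏ for energy E, so a degree of freedom carrying roughly one electron volt can perform on the order of

\[
\frac{2E}{\pi\hbar} \;\approx\; \frac{2 \times 1.6\times10^{-19}\,\mathrm{J}}{\pi \times 1.05\times10^{-34}\,\mathrm{J\,s}} \;\approx\; 10^{15}\ \text{operations per second,}
\]

which is the electron-volt-scale rate Lloyd is pointing to for ordinary covalently bonded matter.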

GOPNIK: It’s curious because if you think about it, we already do that. We do know that the current intelligence that we have, one of its characteristics is that it creates intelligences that are superior to it on a regular basis, which in turn create intelligences that are superior to those intelligences. It doesn’t seem to bother us very much, presumably because we die before we get to our great-grandchildren, but that process is taking place. It doesn’t strike anyone as being particularly malign that we’re creating generations that are capable of doing things that we’re not capable of doing.

CHALMERS: Every PhD advisor is trying to create an intelligence greater than theirs.

GOPNIK: In fact, literally succeeding. Right? That’s the whole plan of how human intelligence works, and it is interesting that it strikes us as being hopeful rather than striking us as being malign.

GERSHENFELD: I find the problem to be Ray Kurzweil's followers, not him. A lot of what Ray does is project data, and he himself does a good job of it. If you just look at the data he projects, it’s an interesting moment—the data project in an interesting way. Never mind the singularity, but do look at Ray’s data. The data are interesting.

LLOYD: There has been this old projection. It’s been noted for at least fifty years that the human population is growing super-exponentially. The rate of growth of the population is, of course, proportional to the number of people there are, but there’s another positive term that’s proportional to the square of the number of people, which is the number of possible interactions you can have.

The way I make sense of this is exactly because we do have this funky universal human language, and because our intelligence is a communal intelligence, that our capacity comes not just from how many people there are but from how many interactions there are between people, and this gives you the term proportional to the square. If you integrate that, you find that the population becomes infinite, and if you extrapolate from historical population figures, it becomes infinite at something like 2070. It becomes infinite in half a century or something like that. Luckily, it has slowed down recently. There are these trends toward singularity.
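
In symbols (notation added here; this is the standard finite-time-singularity fit, in the spirit of von Foerster's 1960 "doomsday" calculation), when the quadratic interaction term dominates,

\[
\frac{dN}{dt} \;=\; aN + bN^{2} \;\approx\; bN^{2}
\quad\Longrightarrow\quad
N(t) \;\approx\; \frac{1}{b\,(t_{c}-t)},
\]

which diverges at a finite time t_c; fitting b and t_c to historical population data is what yields a projected blow-up date in the second half of this century.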

CAROLINE JONES: People get stupider, too. On the many axes of intelligence, there are many axes right now where people are extinctifying themselves. That’s stupid. That’s a massive failure of intelligence.

LLOYD: We overemphasize. As artificial intelligences get closer to the capacities of human beings, they are already exhibiting behaviors that are very human-like, messing up in weird and inscrutable ways that we don’t understand. Artificial intelligence often leads to real stupidity, and that’s one of the signs that it’s intelligent. Human beings operate in a self-contradictory fashion. We don’t do things rationally, and by god we shouldn’t do things rationally, as you’re arguing. Computers are going to do that as well. Deep neural networks are already being used to design the next generation of programming systems. This is not some science fiction. This is happening already.

RODNEY BROOKS: Programming?

LLOYD: Maybe there’s this distinction that’s come up a bunch of times about what’s the difference between a neural net that’s been trained and a program that’s been written into memory.

CHALMERS: I remember back in 1978 when I was a computer hobbyist at twelve years old, there was a program that was released called "The Last One," and it was going to be the program that wrote programs. Once you got the program to write programs, we’re never going to need another one. It didn't quite work out.

STEPHEN WOLFRAM: So, as you realize, the main problem is you have to specify what the thing is going to do. With respect to this question about ever-increasing intelligence and so on, it would be nice to hear from people what they imagine the definition of intelligence from some physics or mathematics point of view might be, because I think it’s all nonsense. In the end you realize that intelligence is just computation, and you realize that computation happens in lots of kinds of systems. It happens in lots of systems in the universe. So when you say we’re going to have this ever-increasing intelligence, this doesn’t make any sense. The universe is already computing in a very efficient, effective way in all kinds of different places. The question is whether this computation is aligned with something that we think of as being human-like intelligent behavior, and that’s a completely different question and one that is quite separate from all these singularity discussions.

CHALMERS: The cash value is doing things that we care about. Right? Like solving problems, curing diseases, winning wars.

LLOYD: That's a very good point. As you know, Steve and I have both written books claiming the universe is a giant computer and that we should understand everything in terms of computation. What’s going on is that when we’re building computers, particularly when we’re building quantum computers, we’re hacking into the computation that’s already going on and having more of it be computation that we’d like to have.

The real issues are not about the use of flops but about the use of joules, about the energy that we’re using. Those are the really hard ones. Then it’s going to be okay. If we pay attention to the computers we’re building, if we socialize them and treat them nicely, they then are part of our human intelligence and not separate from it, in the same way that books are not separate from our intelligence.

ROBERT AXELROD: I'm going to take your example of advertisements for hip replacement, which you labeled as stupid, and give an account of why it’s intelligent. You know a lot more people that have had or will have hip replacements or are on the verge of having them than I do. You are a social collector of people who are relevant to hip advertisers. Even though you won’t have need of one, you might find that the one advertised is better than the one you got. 

JONES: But he doesn’t want to be a node in capitalism’s network of purchasing customers.

AXELROD: I’m just saying the capitalist system that’s advertising hips to him is not stupid. Where’s the intelligence that discovers that you’re a hip replacement node? The answer might be that it’s an automated system already, one that tests a lot of different ways of focusing ads and finds that people who have purchased something should still be shown ads for the same thing, even though, as in your case, you know you’re not going to need another one. The system might have discovered that without anybody designing it to discover that, because they try a whole bunch of stuff and some of it gets good feedback in terms of selling hips or cars or whatever it is. So, it’s a combination. In this case, the intelligence could be accounted for as you’re doing some of the work by collecting hip-relevant people and talking to them when you learn something about hips. The advertising system is also learning that that works, so it’s a combination of human social intelligence and the automated system. It’s a good example of what we’ve been talking about—how those are going to merge and complement each other.

WILCZEK: It’s poetic that we’re close to the end and bringing together so many themes in terms of hip replacement, but it does illustrate opacity. It illustrates looking at extreme cases.

JOHN BROCKMAN: It gets better. The reason I was energized to do this project was because I went to get a cortisone shot, nothing major, but it was for a pain in my neck, which means they have to do it in a hospital setting. So, I make an appointment at the Hospital for Special Surgery at 3pm, get a cup of coffee, come back, hit my e-mail. First email: New England Burial Society. I get a second e-mail: New England Crematorium dot com. Third email: Casket dot com: "Keep your remains intact for a thousand years." This is very sophisticated because I knew that something was happening, and that something had to be deep learning. I immediately thought of Demis because I know this is beyond Larry Page. Why? Because I made the appointment from my farm in Connecticut, and who knew that I don’t do the boroughs? So, I’m not going to the Brooklyn crematorium. Because that’s where they are. They’re in the Bronx. They’re in Brooklyn.

WILCZEK: But it also illustrates what’s lacking. So, it has opacity. It has looking at extreme cases. What it doesn’t have is ...

LLOYD: Tact.

WILCZEK: It doesn’t have a sense of decency. What we need is somehow to widen the circle of empathy on both sides.

LLOYD: Tact comes from the word to be silent. It’s something we could use. Herb Simon said the world that is information-rich is by necessity attention-poor. He said this in 1956 or something like that. That anticipated our current era. What we need to do as human beings is to protect our time and our attention, to pay attention to the things that are important such as other human beings and the odd, sexy AI.

BROCKMAN: Catherine Bateson asked, "Why can’t we have an AI with humility?" Why can’t we have an AI that asks the question and then says, "Maybe I better sleep on it"?